A fighter plane flying at 1000 km/h fires a few bullets, which leave the barrel at 2000 km/h; the plane then accelerates to 2000 km/h and catches up with the bullets. Is this statement true or false?

If this question is asked of a primary school student who sees only the facts in the statement, the answer is likely a no. But a student who has understood physics and the dynamics in play will bring air resistance, terminal velocity and acceleration into the equation and say it is possible. The reason the plane can catch the bullets: a bullet is subjected to air resistance, loses its forward velocity and falls towards the ground; while doing so, without any external force, it settles at terminal velocity because of the denser air towards the ground. The plane, with its thrust, can dive faster than that terminal velocity and will eventually overtake the bullets, given this happens at a sufficient altitude. It has happened in real life, so it is not just a theoretical possibility; check this link.
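The argument can be sanity-checked with a rough numeric sketch. All the constants below are illustrative assumptions (not real ballistics data), drag is treated as quadratic, and forward deceleration is modelled separately from the fall, which is a simplification:

```python
import math

# Rough sketch of a bullet decelerating under quadratic air drag.
# Every constant here is an illustrative assumption, not ballistics data.
m = 0.01        # bullet mass, kg (assumed)
rho = 1.2       # air density near the ground, kg/m^3
Cd = 0.3        # drag coefficient (assumed)
A = 3e-5        # frontal area, m^2 (assumed)
g = 9.81
k = 0.5 * rho * Cd * A   # drag factor: F_drag = k * v^2

# Terminal velocity in free fall: m*g = k*v^2  =>  v = sqrt(m*g / k)
v_term = math.sqrt(m * g / k)

# Forward speed decays as dv/dt = -(k/m) * v^2 (simple Euler integration)
v = 2000 / 3.6   # initial speed relative to the air, ~2000 km/h in m/s
dt = 0.01
t = 0.0
while v > v_term:
    v -= (k / m) * v * v * dt
    t += dt

print(f"terminal velocity ~ {v_term * 3.6:.0f} km/h")
print(f"bullet slows to terminal velocity in ~ {t:.1f} s")
```

With these assumed numbers the bullet's speed collapses to a few hundred km/h within seconds, far below the plane's 2000 km/h, which is what makes the catch plausible.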

Many of the professional managers and executives in software development I have observed hold a static understanding of agility. They are well versed in and equipped with a lot of terms and concepts which, in their minds, are non-negotiable and have to be followed to the dot. Examples: estimations using Fibonacci numbers only, planning only for repeatable velocity iteration over iteration, enforcing either solo or pair programming exclusively, locking main branches and allowing changes only through gated pull-request reviews. The list goes on without addressing the nimbleness needed.


The management's mindset is the opposite of the agile philosophy: they treat the development team as resources to be utilised and deal with them as numbers. The best software teams are the ones that understand the business and have measurable results to back their decisions, which helps them continuously modify the software to facilitate business goals. If a business idea takes a month or more to implement and test, there isn't much agility; people have merely convinced themselves that they are doing the so-called AGILE PROCESS. If we have the right ways of working (not process), then barring major platform work, most idea-to-production cycles should be on a timescale of days or weeks, not months.

What gives agility? Before we get to that, we should answer what kills agility. My take is that two things kill agility: (1) professional management and (2) certifications/processes/frameworks claiming to be AGILE. Professional management is conventionally equipped to deal with industrial and manufacturing settings; it always starts from there, an efficiency-oriented management style, and must adapt to the effectiveness-oriented style that software needs.

When people deal with software development as if it were a static system, things appear black and white, easy to understand and predict. That is hardly the reality: software development is a complex dynamic system which inherently has to be driven by the people on the ground rather than by management. The focus for management is to be a facilitator: ensure the staffing is good, the right tools are in place and the work environment is not toxic, and remove impediments to business knowledge and communication. As much as possible, avoid the cargo-cult way (a.k.a. some major agile framework) of managing software projects; understand the dynamics at play and plan based on that.

Some things that have worked well for me:

  • Very good individual contributors who excelled at their work make better leaders and managers than professional managers from reputed schools
  • Long-running, small and stable team compositions achieve big results compared to frequently churned or large teams. They may start slow but accelerate to a velocity that is sometimes hard for the business to keep up with
  • Certifications and frameworks in the AGILE business do a lot more damage than good
  • There is no substitute for XP practices, continuous delivery and engineering rigour
  • Developers who understand the business contribute multi-fold more than business people who understand tech

LLMs are trained on public data; the limitation to growth is not hardware or parameter count but the quality of content available to further train the models. With these new tools, new content will be generated at a pace that humans cannot easily consume, and we may need to use the tools again to summarise it and create action items for us to follow. A new risk has emerged: we may get stuck in a content whirlpool where these tools create more content based on the content they have already created.


This is similar to what many algorithmic feeds are already doing to us. You get thrown content similar to what you have watched, songs you have heard and articles you have read. Those algorithms can be worked around by going into private mode, at least for that window, so that the serendipity factor increases. LLMs and similar tools, in a way, work from the same public data set, making them behave like coupled systems: the more self-generated data they feed into the public domain, the more synced they become. This is very similar to what the Kuramoto model describes. When the underlying foundation is the same and more generated content keeps being added to that same foundation from different models, the models may begin to converge on what they can generate.
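For intuition, here is a minimal plain-Python sketch of the Kuramoto model with illustrative parameters: each oscillator has its own natural frequency, but above a critical coupling strength they phase-lock, which is the analogy for models converging once they share a foundation.

```python
import math
import random

# Minimal Kuramoto model: N coupled phase oscillators, Euler integration.
# d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
# The order parameter r in [0, 1] measures how synchronised they are.
random.seed(0)
N, dt, steps = 50, 0.05, 2000
omega = [random.gauss(0.0, 0.5) for _ in range(N)]      # natural frequencies
theta0 = [random.uniform(0, 2 * math.pi) for _ in range(N)]  # initial phases

def order_parameter(theta):
    re = sum(math.cos(t) for t in theta) / len(theta)
    im = sum(math.sin(t) for t in theta) / len(theta)
    return math.hypot(re, im)

def simulate(K, theta):
    theta = list(theta)
    for _ in range(steps):
        coupling = [
            (K / N) * sum(math.sin(tj - ti) for tj in theta)
            for ti in theta
        ]
        theta = [t + (w + c) * dt for t, w, c in zip(theta, omega, coupling)]
    return order_parameter(theta)

r_weak = simulate(K=0.1, theta=theta0)    # weak coupling: stays incoherent
r_strong = simulate(K=2.0, theta=theta0)  # strong coupling: phase-locks
print(f"r (weak coupling)   ~ {r_weak:.2f}")
print(f"r (strong coupling) ~ {r_strong:.2f}")
```

The parameters (N, the frequency spread, the two K values) are arbitrary choices to show the two regimes; the point is only that identical coupling through a shared medium drives otherwise different oscillators into sync.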

There are possible paths by which this convergence does not happen. One is to start getting into personal and proprietary data, which is currently out of bounds for the models. Who gets access to what will start to matter a lot; our personal data is a goldmine and will be monetised well. The other is to advance the technology to reason well from first principles and heuristics; this would require orders of magnitude less data but may be years away. So the first option, getting into private data, is the more likely one. Entities with fiduciary responsibility for data will be tempted to pursue legal ways of monetising it, which can involve loopholes that may not be in the best interest of a layperson.

Before more reasoning capabilities are built, it is better to live in the content whirlpool than to feed our private lives to the insane computing power of these models.

I come across a lot of people who proudly claim, “I am a process person,” and say processes are nothing but practices that have been standardised. With more and more people coming into software management after a three-day course that lets them claim to be a master of managing software development, cargo-cult processes increasingly dominate the industry.

Processes are necessary; they bring a predictable output for a given set of static conditions. They are useful for predictable work at scale: processing visa applications, pasteurising milk, preserving food, approving loans and so on. Practices, on the other hand, deal with dynamic systems; they are like race drivers doing a track sighting before the race, or chefs using only simple English to talk to their cooks. Practices are negotiable; they are backed by a value and an intent and require discipline to follow, while processes are non-negotiable and usually enforced through a compliance mechanism.


Processes are helpful as an abstraction: the person following the process does not worry about what they are doing as long as they are compliant. A good example from the cooking space is the difference between a cook in a fast food chain's kitchen and a cook in a fine dining restaurant. The fast food cook will always set the oil to the exact specified temperature, fry the pre-cut potatoes (cut to certain specifications) for the exact number of seconds prescribed and put them out on the plate. This can be done by cooks with zero knowledge of cooking (easy to train and staff), but if something goes wrong or any step in the process is missed, you have a mess. If the potatoes are of a different cultivar, production stops. There is no resilience, but it is incredibly efficient.

On the contrary, a cook in the fine dining restaurant may have a set of practices: waiting for the oil to come near the smoke point, test-frying a piece of potato to taste it, then adjusting the cut sizes if required to perfect the fry before serving the meal. There is so much resilience, but it is inefficient compared to the fast food chain. You also need very capable and knowledgeable people.

Both processes and practices have their respective places; the trouble starts when people use the terms interchangeably in software development. The majority of development-related tasks need sound practices backed by value and intent. If a practice is not feasible, replace it with another which will help realise the same value. If clean code has to be ensured, a process person will put a detailed code review process in place with a hierarchy of responsibilities and will often keep the code under lock and key. A person who is keen on the intent and value of code reviews will come up with many different practices: integrating lint and static checks in the IDE, pre-commit and pre-push hooks to catch obvious guideline deviations early, occasional mob reviews, spot refactoring and so on. Each of these practices is negotiable and interchangeable as long as the value and intent remain the same.
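As one illustration of such a negotiable practice, here is a sketch of a pre-commit hook. The linter command (flake8) and the Python-only filter are assumptions; swap in whatever tool realises the value your team cares about.

```python
#!/usr/bin/env python3
# Sketch of a git pre-commit hook (saved as .git/hooks/pre-commit) that
# lints staged files before they ever reach review. "flake8" is an assumed
# linter; substitute your team's tool of choice.
import subprocess

def python_only(names):
    # Pure helper: keep only Python source files.
    return [n for n in names if n.endswith(".py")]

def staged_files():
    # Ask git which files are staged (added/copied/modified) in this commit.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.splitlines()

def lint_staged():
    files = python_only(staged_files())
    if not files:
        return 0  # nothing to lint, allow the commit
    return subprocess.run(["flake8", *files]).returncode

# A real hook would end with:
#   import sys; sys.exit(lint_staged())
# so that a non-zero lint result blocks the commit.
```

Note how negotiable this is: the same value (clean code, caught early) could instead be realised with IDE checks, pre-push hooks or an occasional mob review, with no compliance machinery at all.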

The fast food chain approach does not work for all aspects of software development. Processes expect many things to be static and assume developers need not know the big picture. Processes require work that can be broken down, carried out without knowledge of other parts, and assembled later. In reality there is so much interaction between the broken-down pieces that you need developers who are well skilled in both communication and technology, and who hold the big picture, to get the job done in a dynamic landscape. Practices are not processes.