Many countries have experienced “technology leapfrogging,” where populations moved directly from having no phones to widespread mobile phone usage—skipping the era of landlines entirely. For end consumers, this was a clear leap. However, for service providers, the shift was less revolutionary. While providers avoided the costly task of wiring every household, the core work of enabling large-scale communication didn’t disappear; in fact, networks had to be more robust and scalable to handle the surge in data and voice traffic. Significant effort went into strengthening foundational technologies so that the infrastructure could support this growth.


Lately, I’ve been part of conversations where organisations urge a “leapfrog” with AI technology, mirroring the mobile phone revolution. While the enthusiasm is understandable, many underestimate the critical value of foundational IT systems. For mid-size to large organisations, adopting AI isn’t like mobile leapfrogging, where consumers moved straight to modern technology. Skipping essential architectural elements, like solid API design, security frameworks, and enterprise integration, is akin to skipping the main course and jumping straight to dessert.

Building a scalable, secure, and maintainable AI-enabled system still requires strong foundations. Effective AI integration demands robust data pipelines, secure access controls, and clear interoperability standards. Ignoring these will lead to challenges in scalability, security vulnerabilities, and fragmented systems.

AI adoption is transformative but must be layered on a strong technological foundation. Just as mobile networks demanded fortified infrastructure behind the scenes, AI initiatives need reliable architecture to truly deliver on their promise without risking systemic issues.

When the entry barrier to creating new things is lowered, there is an explosion of people producing low-effort, poor-quality outputs, something I discussed in my previous writing as inverse vandalism. Gen AI arrived and pushed those outputs into slop territory. Every new technology is useful and makes life easier, but channelling the effort so as not to produce sloppy work is the key.


How do we know that we are not producing sloppy work? My approach is to stay away from people who claim to know exactly what needs to be done. Collective intelligence and learning have always been superior to individual intelligence and learning. Many people are of the opinion that with the new-age AI tools they can reduce their dependence on humans (statements like “we do not need programmers, AI will write everything”), while they are just moving the workload from a deterministic abstraction to a non-deterministic one (at least for a few years). This means your plain English is a program: it will require linting to remove sarcasm, language analysis to remove ambiguity, and a way to differentiate between idiomatic and literal expressions. I have only just started; the list will go on, because you have to bring everything else that applied to programming here.
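As a toy illustration of that point, here is a sketch of what a “prompt linter” might look like. The function name, word lists, and rules below are invented for this example; a real linter for natural language would need far more than substring checks:

```python
# Toy illustration: treating a natural-language prompt as "source code"
# that needs linting. Word lists and rules are invented for this sketch.

AMBIGUOUS_TERMS = {"some", "a few", "soon", "fast"}   # vague quantity/quality words
IDIOMS = {"low-hanging fruit", "piece of cake"}       # phrases that are not literal

def lint_prompt(prompt: str) -> list[str]:
    """Return warnings for constructs a deterministic language would reject."""
    text = prompt.lower()
    warnings = []
    for term in sorted(AMBIGUOUS_TERMS):              # sorted for stable output
        if term in text:
            warnings.append(f"ambiguous term '{term}': quantify it")
    for idiom in sorted(IDIOMS):
        if idiom in text:
            warnings.append(f"idiom '{idiom}': state the literal intent")
    return warnings

print(lint_prompt("Make it fast and grab the low-hanging fruit soon."))
```

Even this trivial sketch shows the shift: the ambiguity a compiler would reject up front now has to be policed by tooling around the prompt.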

It is collaboration, not blind automation, that will transform how we work with the latest AI tools. Treating these tools solely as automation risks producing sloppy, unreliable results.

I was connecting with an executive who wanted a certification program for lead developers to become architects. I wanted to understand the intent behind this and asked why they are not able to grow into their roles without heavyweight training. His answer was, “If a nurse assists a surgeon in 100 surgeries, will the nurse be eligible to become a surgeon without being trained and certified as a surgeon?”

It was a nice try, but in reality, how many nurses have become surgeons? All architects are developers; they always grow from being a developer into an architect. It is never a fair comparison. This also raises the question: is an architect a developer even if the architect does not write code? Yes; to start with, a solution with no custom code is the best code ever written. Having said that, code is the blueprint of an application, and anyone impacting the blueprint is an architect or a developer.


The software industry is plagued by under-training developers in architecting skills, especially softer skills like communication, time management, and negotiation. Instead, almost all focus goes to developing their coding skills so they act as a replaceable coding unit that unquestioningly picks up an assigned task and takes it to completion irrespective of the impact it will create. This defocus keeps developers from being aware of the domain and from working with business/product to suggest alternatives.

Whenever I onboard trainees or juniors into my teams, I spend the extra effort to push them to see themselves as architects. It looks like a formidable uphill task to them at first, but most are quick to adapt, catapult their careers, and have a fulfilling experience. I too benefitted from this attitude of my lead very early in my days. I was told to ask for measurable business results for whatever task I was about to pick up as a developer; that one question acted as a guiding light.

Every developer is an architect, and every architect is a developer. Only the experience varies; it is not akin to nurses becoming surgeons when they gain experience.