When I joined Thoughtworks, one of the first things I enjoyed was how the teams learnt and solved problems together. Until then, I was used to tasks being assigned to me and taking them to completion alone. Asking for help was seen as a weakness, and knowledge sharing was considered detrimental to your career because it made you easy to replace. I was wrong: learning and sharing together, which I came to know as ensemble learning, made my work life very rewarding.

Photo by Agung Pandit Wiguna on Pexels.com

Simple things that ensured we learnt as a group:

  1. Information radiators on the wall, omnipresent across the entire office. We even used to complain that we did not have enough walls.
    • Gotchas
    • Skill/knowledge matrix of who knows what and how much
    • Pair rotation matrix, to ensure that no silos form
    • Story wall and release plan, to know what the upcoming tasks are in the near term
  2. Learning sessions
    • Collective code review and refactor sessions.
    • Deliberate KT based on Skill/Knowledge matrix
  3. Huddles
    • No questions asked, cry for help when stuck. This meant that if someone was stuck for more than 30 minutes, they had to raise an alarm, and the entire team stopped their work and jumped in to unblock them.

Seems like a simple list, but it had a profound impact and kept working life stress-free and productive. In a remote-first environment, the radiators can be managed with pinned messages on group chat. Learning sessions and huddles should still happen, as far as possible just as they would in a physical environment.

Many countries have experienced “technology leapfrogging,” where populations moved directly from having no phones to widespread mobile phone usage—skipping the era of landlines entirely. For end consumers, this was a clear leap. However, for service providers, the shift was less revolutionary. While providers avoided the costly task of wiring every household, the core work of enabling large-scale communication didn’t disappear; in fact, networks had to be more robust and scalable to handle the surge in data and voice traffic. Significant effort went into strengthening foundational technologies so that the infrastructure could support this growth.

Photo by Emilio Sánchez Hernández on Pexels.com

Lately, I’ve been part of conversations where organisations are urged to “leapfrog” with AI technology, mirroring the mobile phone revolution. While the enthusiasm is understandable, many underestimate the critical value of foundational IT systems. For mid-size to large organisations, adopting AI isn’t like mobile leapfrogging, where consumers moved straight to modern technology. Skipping essential architectural elements, such as solid API design, security frameworks, and enterprise integration, is akin to skipping the main course and jumping straight to dessert.

Building a scalable, secure, and maintainable AI-enabled system still requires strong foundations. Effective AI integration demands robust data pipelines, secure access controls, and clear interoperability standards. Ignoring them leads to scalability problems, security vulnerabilities, and fragmented systems.
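As a small illustration of why those foundations still matter, here is a hedged sketch (every name, role, and permission here is hypothetical) of an AI feature passing through the same role-based access-control check any other enterprise service call would get, before a model is ever invoked:

```python
# Hypothetical sketch: an AI tool call still goes through ordinary
# enterprise plumbing (authorisation first, model second).
# Roles, actions, and the output string are illustrative only.

ROLE_PERMISSIONS = {
    "analyst": {"summarise_report"},
    "admin": {"summarise_report", "export_customer_data"},
}

def authorise(role: str, action: str) -> bool:
    """Classic role-based access control check, unchanged by AI."""
    return action in ROLE_PERMISSIONS.get(role, set())

def ai_tool_call(role: str, action: str, payload: str) -> str:
    """Gate the model behind the same check as any other service."""
    if not authorise(role, action):
        raise PermissionError(f"role '{role}' may not perform '{action}'")
    # Only now would the payload be handed to a model or data pipeline.
    return f"[model output for {action} on {len(payload)} chars]"

print(ai_tool_call("analyst", "summarise_report", "Q3 revenue notes"))
```

The point of the sketch is that the AI-specific part is the last line of `ai_tool_call`; everything before it is the foundational work that leapfrogging cannot skip.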

AI adoption is transformative but must be layered on a strong technological foundation. Just as mobile networks demanded fortified infrastructure behind the scenes, AI initiatives need reliable architecture to truly deliver on their promise without risking systemic issues.

When the entry barrier to trying and creating new things is lowered, there will be an explosion of people producing low-effort, poor-quality outputs, something I discussed in my previous writings as Inverse vandalism. Gen AI arrived and pushed those outputs into slop territory. Every new technology is useful and makes life easier, but channelling the effort so that we do not produce sloppy work is the key.

Photo by Google DeepMind on Pexels.com

How do we know that we are not producing sloppy work? My idea is to stay away from people who claim to know exactly what needs to be done. Collective intelligence and learning have always been superior to individual learning and intelligence. Many people are of the opinion that with new-age AI tools they can reduce their dependence on humans (statements like “we do not need programmers, AI will write everything”), while they are just moving the workload from a deterministic abstraction to a non-deterministic one (at least for a few years). This means your plain English is a program: it will require linting to remove sarcasm, language analysis to remove ambiguity, and checks to differentiate idiomatic expressions from literal ones. And that is just the start; the list goes on, because everything else that applied to programming has to be brought here too.
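To make the “plain English is a program” idea concrete, here is a toy sketch of what a prompt linter might look like. The word lists and rules are purely illustrative assumptions, not an established tool:

```python
import re

# Toy "prompt linter": if plain English is the new program, it needs
# static checks the way code does. The term lists below are hypothetical
# examples of ambiguity and idiom detection, not a real linter.

AMBIGUOUS_TERMS = ["soon", "a few", "some", "stuff", "asap"]
IDIOMS = ["piece of cake", "once in a blue moon", "in the same boat"]

def lint_prompt(prompt: str) -> list[str]:
    """Return warnings for vague or idiomatic phrasing in a prompt."""
    lowered = prompt.lower()
    warnings = []
    for term in AMBIGUOUS_TERMS:
        if re.search(rf"\b{re.escape(term)}\b", lowered):
            warnings.append(f"ambiguous: '{term}' has no precise meaning")
    for idiom in IDIOMS:
        if idiom in lowered:
            warnings.append(f"idiom: '{idiom}' may be read literally")
    return warnings

print(lint_prompt("Fix the login bug soon, it should be a piece of cake"))
```

Even this crude version flags two problems in a one-line “program”; real ambiguity and sarcasm detection would need the full weight of language analysis, which is exactly the workload that does not disappear.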

It is collaboration, not blind automation, that will transform how we work with the latest AI tools. Treating these tools solely as automation risks producing sloppy, unreliable results.