
Stop throwing AI at developers and hoping for magic

Engineering leaders reveal a stark gap in AI adoption.
March 16, 2026



Key takeaways:

  • AI adoption is fragmented and widening performance gaps: some teams report dramatic productivity gains, others struggle with basic prompting. 
  • Tools alone don’t drive productivity – methods and training do: simply rolling out AI tools doesn’t guarantee results.
  • Treat AI adoption like a scientific experiment: leading teams don’t just measure AI usage, they measure outcomes.

Behind the headlines, AI adoption in large organizations is messy, uneven, and often ineffective. Engineering leaders typically struggle to bridge the gap between early adopters seeing massive productivity gains and teams still figuring out basic prompts.

For the past year, we’ve been speaking almost weekly with platform and engineering leaders from large organizations who are members of the HangarDX community. Every conversation revolved around the same topic: AI adoption. 

We hear public claims like “60% of code at company X is written by AI,” but the fine print usually reveals that autocomplete suggestions are counted as “AI-assisted code.” We hear stories of teams building in hours what would have taken days – though in most cases, the team was building some new internal tool from scratch.

As most developers know, software typically doesn’t start from a clean slate. It requires working with complex dependencies, technical debt, and years of accumulated logic. In real environments, the AI adoption picture is varied and a lot messier than the headlines suggest.

Fragmented AI adoption and lost knowledge

One problem large engineering organizations face is that AI adoption is often fragmented. Some engineers report 10x productivity gains, while others are still figuring out prompts. Engineers spend weeks determining the best way to prompt Claude Code and give context to agents, but they work in isolated workflows and knowledge silos.

Yegor Denisov-Blanch from Stanford University has researched how AI affects developer productivity in more than 600 companies and noticed that pattern.

“Typically within an enterprise, there would be teams that move really fast and teams that move really slow,” he says. “Some of that difference in speed is explainable by their differences in team size, product, or codebase maturity, but often a lot of the productivity gains come from people experimenting and figuring out ways of doing things. There’s no playbook; there’s no consensus around anything.” 

Since there are no frameworks for sharing AI knowledge in engineering organizations, firms risk losing that expertise when those engineers leave.

On the other end of the spectrum are companies throwing AI at developers and expecting them to figure it out.

Rahib Amin, senior technical product manager at Thoughtworks, warns of the shiny new prototype case. “On the surface, it solves the problem. However, behind the curtain are thousands of lines of spaghetti code, forcing the developers who inherit these code bases to make unfortunate, difficult decisions,” he says.

Some teams are even replacing working solutions with untested, unregulated black-box alternatives simply because they’re “AI,” he adds.


Using AI is like driving an F1 car

Large organizations are making the same mistakes with AI that they’ve made with every platform before it. They invest in the tool but not in the people, according to Bryan Finster, a DevOps and continuous delivery advocate with over 25 years of experience.

“We’ve seen the same with the rise of infrastructure platforms and developer platforms, where the tools are deployed without the training and, at best, things don’t improve,” he says. “At worst, costs skyrocket, and quality and security are compromised.”

He cites a recent study that found that even experienced developers see an initial slowdown in productivity when they start using AI tools.

“Of course they are!” Finster says. “They are demanding that people use these expensive tools without providing them with any training on how to use them effectively. It requires a higher level of engineering discipline than most companies are accustomed to in order to get the best from these platforms. They should be investing in their people before investing in tools.”

Experts agree that AI amplifies whatever already exists; strong engineering practices become stronger, but weak practices are amplified too. Amin advocates for frameworks that enable experimentation across organizations and a return to disciplined approaches like test-driven development.

Using AI to assist you with writing code is like driving a Formula 1 car, Finster says.

“It takes the right kind of skill, and mistakes happen quickly if you get careless,” Finster adds. “However, in the right hands and with the right engineering rigor, you win.”

New tools require new methods

Engineering leaders should look beyond tools that are readily available as part of their suite, says Punit Lad, lead consultant for platform engineering at Thoughtworks.

Meanwhile, Finster stresses that the biggest misconception companies have is that productivity gains will appear just by giving engineers AI tools.

“New tools will always require new methods,” he says. “Give someone a nail gun with no context, and all they have is a less ergonomic hammer. It’s heavy, and you can pound nails with it, but that’s not how it should be used.”

Some people will fear for their jobs. Some will struggle. Organizations need to understand why people struggle and invest accordingly, Finster adds.

Most importantly, engineering leaders should stop thinking (and saying) that AI will replace the engineering workforce, says Amin. There is also the trust component, Lad points out. A tool that developers don’t trust is a tool they won’t use.

“Make sure that AI tools are really helping and solving the problems that your organization has,” Lad says. “AI can be wrong and can make mistakes, and if the people using it can’t trust it, your organization’s culture won’t adopt it and will ultimately push back.”

Empower small teams and run experiments

Since there are no playbooks or established models, and ways of working change every couple of months, the pragmatic approach is internal knowledge sharing, says Denisov-Blanch.

“I encourage leaders to look inside their organizations and see what teams are more inclined to learn and experiment. Extrapolate those learnings and cross-pollinate them onto other parts of the organization,” he says.

Large organizations are, in practice, just networks of small teams, Finster says. “The approach to meaningful AI adoption is defining features as business test scenarios. Use those tests to drive development, working in small batches to control the quality of generated code, and delivering for feedback. These are the same behaviors high-performing teams use for implementing continuous delivery.”
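Finster’s “features as business test scenarios” approach can be sketched roughly as follows. This is an illustrative example, not his actual workflow: the human writes the business scenario as an executable test first, and any code – AI-generated or not – is accepted in small batches only once the scenario passes. All names here (`apply_discount`, the discount rules) are invented for the sketch.

```python
def apply_discount(order_total, customer_tier):
    """The piece an AI assistant might draft.

    The human-authored scenarios below are what actually gate the merge.
    """
    if customer_tier == "gold" and order_total >= 100:
        return round(order_total * 0.90, 2)
    return order_total


# Business scenarios, written by a human BEFORE the implementation exists.
# Review happens in small batches: one scenario, one generated change,
# one feedback cycle.
def test_gold_customers_get_10_percent_off_large_orders():
    assert apply_discount(200.0, "gold") == 180.0


def test_small_orders_are_never_discounted():
    assert apply_discount(50.0, "gold") == 50.0


test_gold_customers_get_10_percent_off_large_orders()
test_small_orders_are_never_discounted()
```

The point of the sketch is the ordering: the test encodes the business intent up front, so generated code is judged against an outcome rather than eyeballed line by line.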

Building software with AI agents isn’t a solo sport, says Ankit Jain, co-founder of Aviator. This is particularly true in large organizations where projects touch multiple repositories and services. Prompts, feedback cycles, and agent decisions must be shared, reviewed, and stored. Without workflow structure, teams lose context, repeat work, and struggle to scale results.

“Teams have always been practicing some kind of knowledge sharing, whether code reviews or pair programming,” Jain says. “However, if we look at patterns of AI-assisted coding, we do everything the opposite way: work on the prompts, provide feedback to the coding agents back and forth, generate the code, submit the code for review, and then throw away the prompts.”

These prompts are the context; these prompts are the tribal knowledge. It’s time we start preserving these prompts, Jain adds.

Don’t treat AI as a solution searching for a problem

Simply saying that your team uses AI for a certain percentage of code generation, or runs a certain number of queries per week or month, is meaningless, says Denisov-Blanch.

The teams that excel with AI tools, according to his research, don’t just track usage but also measure the result of that usage.

“They treat AI like a scientific experiment,” he adds. “They have hypotheses, which they test and measure, and they are disciplined about using AI for certain things and not using AI for other things.”
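Treating AI adoption as an experiment means comparing a delivery outcome between AI-assisted work and comparable control work, not counting queries. A minimal sketch, with invented numbers and metric names purely for illustration:

```python
def mean(xs):
    return sum(xs) / len(xs)


def outcome_delta(with_ai, without_ai):
    """Relative change in a delivery metric (e.g. cycle time in hours)."""
    return (mean(with_ai) - mean(without_ai)) / mean(without_ai)


# Hypothesis: AI assistance reduces review cycle time for this task type.
# These values are made up; real teams would pull them from delivery data.
cycle_time_ai = [10.0, 12.0, 9.0]     # hours, AI-assisted tasks
cycle_time_ctrl = [14.0, 16.0, 15.0]  # hours, comparable control tasks

delta = outcome_delta(cycle_time_ai, cycle_time_ctrl)  # negative = faster
```

The discipline Denisov-Blanch describes lives in the setup, not the arithmetic: a stated hypothesis, a comparable control group, and a decision in advance about which tasks get AI and which don’t.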


Look beyond code generation

Most organizations start with code generation because it’s visible, measurable, and easy. The broader opportunity spans the entire software development lifecycle.

Lad points to other domains like automated compliance and deployment updates, improving Site Reliability Engineering (SRE) workflows through pattern detection, generating and tuning observability monitors, or security tooling that proactively adapts to new threats.

Operations is an obvious win – using agents to monitor events in production and predict failure, reducing the mean time to repair, or using AI to suggest improvements to the quality gates in pipelines to improve feedback, Finster says.

Code migrations seem like the obvious candidate, says Jain. They’re messy, time-consuming, demand precision, and usually the “harder job” that no engineer is eager to do. AI-assisted code maintenance could work well with framework upgrades, internal Application Programming Interface (API) deprecations, and security fixes.

It’s not realistic to expect that Large Language Models (LLMs) will undertake end-to-end code migrations, Jain adds. “Realistic expectations, a spec-driven development approach with structured workflows, and a ‘human-in-the-loop’ strategy could help to offload the bulk of migration work to AI while keeping humans in control.”
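The human-in-the-loop migration strategy Jain describes can be sketched as a simple loop: an agent proposes a change per file, automated checks run, and anything the checks or the reviewer won’t vouch for is escalated back to a human. `propose_patch` and `run_checks` are placeholders standing in for a real agent call and a real test suite; none of this is a specific product’s API.

```python
def propose_patch(path):
    # Placeholder for an AI-generated rewrite of one file.
    return f"patched:{path}"


def run_checks(patch):
    # Placeholder for tests / linters / type checks on the proposed patch.
    return patch.startswith("patched:")


def migrate(paths, approve):
    """Apply a migration file by file; `approve` is the human decision."""
    applied, escalated = [], []
    for path in paths:
        patch = propose_patch(path)
        if run_checks(patch) and approve(path, patch):
            applied.append(path)
        else:
            escalated.append(path)  # back to a human for manual work
    return applied, escalated
```

The structure keeps the bulk of the mechanical work with the agent while every merged change has passed both automated checks and an explicit human approval.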

The ‘rich getting richer’ effect

In his research, Denisov-Blanch found that the gap between the teams who have figured out how to harness AI tools and those who haven’t is widening. He calls it the “rich getting richer” effect:

“We see in the data that there is a divergence between the top performing teams and the bottom performing teams. Those teams that know how to use AI continue to compound their gains, whereas laggards are staying behind.” 

Organizations will need to invest time and effort in identifying why teams are struggling with the tools, Finster concludes. “My experience so far is that most organizations don’t do that. They will be outclassed by competitors who do.”