The tech industry has reached an inflection point, where the initial AI-everything fever is giving way to a more complex, systemic reality.
AI is no longer at the periphery of our work. According to the 2025 DORA report, 90% of respondents have integrated it into their day-to-day, with 80% reporting increases in individual productivity.
For leadership, the focus has shifted from wondering if AI works, to figuring out how to manage “uneven progress” at scale. True cross-company use cases remain elusive, as both data and experimentation silos persist.
The most significant lesson of this past year, as fractional CxO Paul Johnston pointed out, is that, while AI tools provide “very solid support for humans,” they cannot yet replace them for most tasks.
We’re now stuck in a productivity paradox: code is written faster than ever, but shipping it is still stalled by the same traditional hurdles. As Niklas Gustavsson, VP of engineering at Spotify, observed, “AI on its own doesn’t change much,” and the real gains come from taking a systemic view.
Atlassian CTO Rajeev Rajan predicts the near future will see AI agents integrated into every stage of a developer’s workflow, from initial planning and design, to production and incident management.
This shift promises to empower developers to be more creative and self-sufficient, but it also requires engineering managers, CTOs, and founders to lead with a new level of strategic clarity. To succeed in the coming year, leaders must move past “AI at any cost” to a focus on the durability, security, and reliability of their underlying systems and team structures.
When speed becomes the bottleneck
As much of developers’ work shifts from writing code to validating it, the review queue extends beyond working hours.
“We have gone from zero to product in less than three months thanks to AI coding,” said Fergus Kidd, CTO of FieldPal.ai, a new AI tool for field operations. That speed has brought its own challenges around code review and around creating and running tests. “AI coding tends to make things messy and unmanageable too fast,” Kidd said, so “where AI code is generated, we are mandating it is reviewed by a human, but this creates a bottleneck where code is working but can’t be deployed, with a backlog of thousands of lines.”
This startup isn’t unique in trying to keep up with the pace of AI-driven software development.
“AI has absolutely made us faster, but it has also introduced new bottlenecks in places we didn’t expect,” said Subho Halder, CEO of scale-up Appknox. “The work moves faster at the start of the lifecycle, but the pressure has simply relocated to QA, validation, and oversight,” he continued. “We’re also seeing higher cognitive load when developers rely on AI-generated code they didn’t fully write, which can slow debugging and long-term maintenance.”
Plus, because Appknox is a security company, the extra governance needed around tool usage and data handling adds to operational overhead, he said.
Enterprise challenges
Enterprises are feeling these bumps in the road to production too.
With 90% of Spotify developers using AI every day, Gustavsson said the company saw a 30% increase in code changes per developer. As at just about every organization we spoke to, this increase in AI-driven speed also brought more quality concerns and longer code review times.
Spotify’s engineering leadership took a step back to examine AI as a systems design problem, not a tooling problem. They ended up deploying background coding agents to manage Spotify’s software fleet and applying guardrails around test automation and verification to enable LLMs to catch failures.
“The lesson from 2025 is that if you want durable productivity gains from AI, invest as much in reliability, review, and developer experience as you do in the tools themselves,” Gustavsson said.
The biggest AI gains come with embracing discomfort, argued Laura Tacho, CTO of DX. “Success or failure with AI is largely dependent on existing systems, processes, and engineering practices,” she said. “Orgs that have already been investing in a great developer experience are seeing success with AI. But companies that view AI as a magic fix are looking around at even bigger problems now.”
Measurable success with AI comes when you focus on people and processes, not just the shiny new tech.
The change management mindset
Hiring AI-native developers is not about hiring for experience with AI – in most cases that’s impossible given how nascent the technology is. Instead, you need to hire for and cultivate a learning mindset.
“The greatest returns on AI investment come from a strategic focus on the underlying organizational system,” said Nathen Harvey, DORA lead at Google Cloud. “Without this foundation, AI creates localized pockets of productivity that are often lost to downstream chaos.”
We know that the success or failure of AI rests on collaboration. If 95% of AI pilots failed in 2025, as MIT asserted, the likely culprits are a lack of contextual learning and experiments stuck inside data silos.
If you want a positive return on your likely massive AI investment, leadership must facilitate cross-organizational collaboration.
“People learn from each other – from hearing each other’s experiences, sharing their own and taking what they’ve learnt and applying it to their own situations,” said Melinda Seckington, leadership coach and trainer at Learn Build Share and an organizer of the London LeadDev meetup.
Managers also need to look across departments to amplify cross-learning in this time of extreme change.
“As folks grow in their careers, they’re less likely to find those peers within their own teams – they’re more likely to be the only one in their roles, with no one in their org doing exactly what they are doing,” Seckington said. “It makes it all the more important for people to find those peers, and build up those support networks.”
This is also the best way to unlock the cross-organizational data that will start to open up AI success cases, such as 360-degree views of your customers.
And, while hiring has slowed over the last couple of years, you have to future-proof your org for when people inevitably leave.
“My biggest lesson this year was that a company scales exactly as fast as its foundations allow. AI, talented engineers, new initiatives, none of it matters if the underlying system isn’t resilient,” reflected Appknox’s Halder. “When key people left, we had a moment of truth: were we built for heroics or for durability? 2025 forced us to choose durability, and it’s been the healthiest shift we’ve made.”
Leading with calm and trust
While there is more demand than ever for formal AI usage policies, leadership strategy tends to be grounded in trusting your developers to figure it out and fostering open communication around AI wins and losses.
Honeycomb CTO Charity Majors said the observability tooling company’s internal policy was inspired by Intercom CTO Darragh Curran and his famous 2x memo from last year, which made doubling productivity with AI an explicit priority and goal.
“Some folks were already all-in on AI, others hadn’t touched it. We wanted folks to be thinking creatively from the ground up about how AI might – or might not – be able to improve their productivity,” Majors said.
This tactic isn’t about forcing developers to do what they don’t want to, or about punishing folks who don’t double their productivity. It’s about getting staff excited to experiment and learn.
“Mostly, I would say [AI] lets us run more experiments and try more things from a product perspective,” Majors said. “But it’s different all over the company.”
AI wins need to be shared across organizations. But AI losses need to be amplified even more.
“The biggest lesson we learned is that you need to move with urgency and be ready to step away when something isn’t working,” said Helen Greul, VP of engineering at Multiverse.io. “AI projects in particular reward fast learning cycles. The sooner you see the real impact – or the lack of it – the sooner you can double down or pivot. Knowing when to let it go is very important.”
Seckington recommended looking at proactiveness as a signal. “They’re the ones who pay attention, spot problems early, ask the questions that are needed, and take action without waiting to be asked,” she said of proactive engineers. “It means less time is wasted building the wrong thing or having to fix things later down the line.”
The most successful AI policy comes down to trusting your engineers, encouraging them to experiment safely, and sharing lessons learned.
João Freitas, general manager and VP of engineering for AI and automation at PagerDuty, said that, in 2025, AI became a default part of the engineer’s daily workflow, across coding, testing, documentation, migrations, and analysis. He claimed that the use of AI in multiple projects has already allowed PagerDuty to unlock hundreds of work hours.
“The gap between leaders who understand AI’s implications and those who don’t is widening – and that’s going to create organizational turbulence,” Greul said.
“It’s easy to get swept up in the excitement of AI, but the biggest differentiator in 2026 will still be leadership that models calm, clarity, and structure. Technology is accelerating; people need steadiness. The teams that thrive will be the ones whose leaders remember that.”