The holiday season is an apt time to take stock and consider what worked and what didn’t in the previous 12 months.
For engineering managers, that reflection is likely to focus on the impact of AI, as well as personnel decisions they’ve made over the course of the previous year. But it also requires looking forwards.
Making predictions is a fool’s errand – but here are five uncomfortable realities that those in charge of engineering teams might have to contend with in 2026.
1. Your bosses will ask for evidence of AI’s impact
The cat is out of the bag when it comes to AI. Breathless news coverage about the benefits and boons of using AI for coding tasks has made its way to the C-suite – which puts engineering bosses overseeing a team of coders in a difficult position.
They’re damned if they do use AI, facing increased pressure to show its impact, and damned if they don’t, with those in charge asking why they’re ignoring a tool that could supercharge productivity and performance. “If you had a tool to reduce the time it took you to do things by 50%, you would use it,” says Heather Meeker, an attorney at Tech Law Partners.
But that’s not necessarily what people are seeing on the ground. Most developers believe AI is helping them, but hard evidence remains thin. In Stack Overflow’s 2025 survey, around 70% of AI agent users said the tools reduced the time spent on specific tasks and 69% agreed they had increased their productivity, yet only 17% thought they had improved team-level collaboration. And as LeadDev covered at the time, these tools might not be making developers as productive as they think.
It’s also likely that organizations simply don’t yet have the concrete evidence to decide whether pilot projects using AI tools should be extended for the longer term. While we’re now entering the fourth year of the post-ChatGPT era, it has taken time for companies to formally adopt generative AI tools.
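If the demand for evidence does land on your desk, one pragmatic starting point is to compare delivery metrics between AI-assisted and unassisted work. The sketch below is a minimal illustration, not a rigorous methodology: it assumes you can export pull request records with a hypothetical ai_assisted label and pre-computed cycle times.

```python
from statistics import median

# Hypothetical export of pull request data. In practice this would
# come from your Git host's API or a CSV dump; the ai_assisted flag
# assumes your team labels PRs where AI tooling was used.
prs = [
    {"id": 101, "cycle_time_hours": 18.0, "ai_assisted": True},
    {"id": 102, "cycle_time_hours": 30.5, "ai_assisted": False},
    {"id": 103, "cycle_time_hours": 12.0, "ai_assisted": True},
    {"id": 104, "cycle_time_hours": 26.0, "ai_assisted": False},
    {"id": 105, "cycle_time_hours": 22.5, "ai_assisted": True},
]

def median_cycle_time(records: list[dict], assisted: bool) -> float:
    """Median cycle time (in hours) for one cohort of pull requests."""
    times = [r["cycle_time_hours"] for r in records
             if r["ai_assisted"] is assisted]
    return median(times)

print(f"AI-assisted median cycle time: {median_cycle_time(prs, True):.1f}h")
print(f"Unassisted median cycle time:  {median_cycle_time(prs, False):.1f}h")
```

Cycle time alone proves little – rework rates, defect counts, and review load matter at least as much – but even a crude baseline beats anecdote when the C-suite comes asking.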
2. At least one major incident will be traced back to an AI coding tool
AI has already entered the software development life cycle, but as it becomes ever more deeply embedded, the risk of vibe-coded errors causing major incidents will rise.
AI-generated code is responsible for around one in five security breaches discovered, according to research by Aikido Security. Other research suggests that AI coding assistants can be up to four times faster than humans alone, but ship code that is 10 times riskier than hand-written work. There is a real risk that untested AI-generated code becomes a disaster waiting to happen.
It will be ever more incumbent on the leaders of software teams to ensure that their organization isn’t the one shipping unsafe code into production – or at least to mitigate the risks until the likelihood is as close to zero as possible.
How can they do that? “The first step is educating the developers,” says David A Wheeler, director of open source supply chain security at the Open Source Security Foundation (OpenSSF).
Wheeler admits that there are many unknowns in the early adoption of AI, but says the industry already holds plenty of knowledge about how to use it. “The problem is that the software developers don’t know, because no one taught them,” he says. The OpenSSF has developed a free course, called Secure AI/ML-Driven Software Development (LFEL1012), specifically to teach people how to add AI into the software development life cycle in a secure way.
Getting ahead of the issue before it arises is key, says Wheeler. “Auditing later is nice, but there’s no point in auditing a broken process – at that point you already know it’s broken,” he says. “Your audits need to verify that you’re doing the right things, and that requires that people know what the right things are.”
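One concrete way to act on Wheeler’s advice to verify “that you’re doing the right things” is a pre-merge gate that refuses to ship unscanned code, whoever – or whatever – wrote it. The snippet below is only a sketch: it assumes the Semgrep CLI is installed, and any static analysis tool your team already trusts could slot in the same way.

```python
import subprocess
import sys

def run_security_scan(path: str = ".") -> int:
    """Run a static analysis scan and return its exit code.

    Assumes the Semgrep CLI is available (e.g. `pip install semgrep`).
    The --error flag makes Semgrep exit non-zero when it reports
    findings, which is what lets a CI job block the merge.
    """
    result = subprocess.run(
        ["semgrep", "scan", "--config", "auto", "--error", path],
        check=False,
    )
    return result.returncode

if __name__ == "__main__":
    # A non-zero exit fails the CI job, so flagged code - whether
    # human- or AI-authored - never reaches production unexamined.
    sys.exit(run_security_scan())
```

The point is not the specific tool but the sequencing: the check runs before merge, when fixing is cheap, rather than in a post-hoc audit of an already broken process.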
3. Regulators and auditors will start asking awkward engineering questions
There’s also a real likelihood that the use of AI in engineering will start to be more closely scrutinized by those in power, who will ask why and how AI needs to be used at all. “Things will greatly vary depending on the kind of software being developed,” Wheeler explains.
Wheeler points out that “in most cases, the impact will depend on risk to individuals and to society.” He says that “it’s reasonable to expect more regulation focused on AI use in [already highly-regulated] fields.” One example of the potential direction of travel, Wheeler says, is the EU’s AI Act, which designates a category of “high risk” AI systems that carry additional requirements.
Leaders also ought to look out for the EU’s Cyber Resilience Act (CRA), which, while not specifically aimed at AI, does cover the development of secure software – making it possible that the use of AI could be dragged into its regulatory net. As a result, managers overseeing staff who use AI tools will need to guarantee there’s an audit trail of what happened, when, and by whom, in case anything goes wrong and regulators come knocking.
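What that audit trail looks like will vary by organization, but it need not be heavyweight. One lightweight option – purely a sketch, assuming your team adopts a convention of recording tool use in an Assisted-by git trailer on each commit, and a git version recent enough to support the %(trailers) placeholder – makes the history queryable when questions arrive:

```python
import subprocess

def ai_assisted_commits(repo: str = ".") -> list[str]:
    """List commits that carry an 'Assisted-by:' trailer.

    Assumes a team convention of ending AI-assisted commit messages
    with a trailer line such as 'Assisted-by: <tool name>'.
    """
    fmt = "%h|%an|%ad|%(trailers:key=Assisted-by,valueonly)"
    log = subprocess.run(
        ["git", "-C", repo, "log", f"--pretty=format:{fmt}", "--date=short"],
        capture_output=True, text=True, check=True,
    ).stdout
    report = []
    for line in log.splitlines():
        parts = line.split("|", 3)
        # Keep only commits where the trailer value is non-empty.
        if len(parts) == 4 and parts[3].strip():
            commit, author, date, tool = parts
            report.append(f"{date} {commit} {author}: {tool.strip()}")
    return report

if __name__ == "__main__":
    for row in ai_assisted_commits():
        print(row)
```

When a regulator or auditor asks which changes involved AI tooling, and who made them, the answer becomes a one-line query rather than an archaeology project.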
4. Junior hiring will get harder – but internal upskilling will finally be taken seriously
LeadDev’s AI Impact Report 2025 uncovered a worrying development: 18% of respondents expected their organizations to hire fewer junior developers in the coming year, while 54% felt that junior dev hiring would drop in the long term as a direct result of the rise of AI coding tools.
It all means tough times are ahead for those entering the industry, but it also puts the onus on managers to think about how to retain and upskill the staff they do have.
Keeping skills up to date is vital because the industry is moving so quickly – and staff need to be trusted to keep on top of the challenges they face. “Today’s digital complexity demands a fundamental shift: embedding quality from the start, redesigning processes and building confidence that your software is reliable, resilient, and truly solves business problems,” says Andrew Power, head of UK and Ireland at software testing firm Tricentis. Power says that is “the only way to unlock real value in the AI era.”
That means internal upskilling is more important than ever, including person-to-person connection with managers to help staff understand where their value lies, and how they can continue to develop their skills in the face of ever more AI use.
“Emotional intelligence is key,” says Cary Cooper, a management professor at the University of Manchester. “AI is likely to take over more of the technical bits we do in jobs, but it can’t match the ability to properly develop staff.”
5. The best engineers will be the ones who know what not to automate
Tools, apps, and operating systems with built-in AI assistance are becoming the norm – with the companies behind them betting on the idea that if AI is just a click or a tap away, then people will start to use it more often.
But the best engineers, who are invariably the ones businesses want to hire and retain, are those who recognize the potential AI offers but also know when not to adopt it. More than that, they know which AI models to use and when. “You need to make sure that you’re using big professional models, and it’s best to use the ones under the company accounts instead of the personal accounts,” says Meeker.
It’s also about deciding when not to use AI tools at all. With increasing distrust of the quality of code these tools produce, outsourcing work to automated AI systems can often be a false economy – in large part because workers may spend more time undoing the resulting errors than they would have spent hand-coding it correctly the first time.

So as 2026 becomes a year in which AI adoption increases further and bosses look for more evidence of how it can improve working practices, the boldest and best engineers may be the ones who know when to put the AI down as much as when to pick it up.