AI-assisted coding and unsanctioned tools headline 2026’s biggest security risks

Predicting the biggest security threats in 2026.
December 30, 2025

With AI generating code and staff using unapproved tools, organizations risk hidden vulnerabilities and unintended data exposure.

The past two years have seen rapid AI adoption across engineering teams, but 2026 is shaping up to be the year when attackers and defenders collide at scale. 

Autonomous malware, silent supply-chain backdoors, unpredictable LLM behavior, and the surge of shadow AI inside workplaces are converging into what security leaders warn will be one of the most challenging years yet for developers.

AI-driven attacks accelerate

While some organizations push AI deeper into workflows, others are quietly reassessing.

Cybersecurity expert Jake Williams warns that enterprises are confronting the fragility of AI outputs. “The LLMs that underpin most agents and gen-AI solutions do not create consistent output, leading to unpredictable risk,” he said. “Enterprises value repeatability, yet most LLM-enabled applications are, at best, close to correct most of the time.”

He expects “more organizations [to] roll back their adoption of AI initiatives as they realize they can’t effectively mitigate risks, particularly those that introduce regulatory exposure.” Some will re-scope projects; others will abandon them entirely.

Attackers, meanwhile, are accelerating. Dmitry Volkov, CEO at Group-IB, says autonomous malware is no longer theoretical. “Autonomous AI agents will increasingly be capable of managing the entire kill chain: vulnerability discovery, exploitation, lateral movement, and orchestration at scale,” he said.

AI-driven worms and ransomware agents reshape the threat landscape

Volkov believes 2026 may see the first large-scale, AI-powered worm. “With the integration of AI, a concerning category of self-propagating malware is fast emerging,” he said. 

Future variants may “emulate worm-like behavior” and turn “every compromised device into an infection spreader.” Historic outbreaks like WannaCry and NotPetya demonstrated the impact of automation, but AI-powered versions will “spread faster, become adaptive to select targets, exploit targeted weaknesses, and evade detection better.”

Ransomware will also evolve. “Ransomware groups will gain an additional boost as they begin adopting AI agents to accelerate their attacks,” Volkov said. These agents will enter Ransomware-as-a-Service (RaaS) offerings, giving even low-skilled criminals – those who rent ransomware tools rather than build them – the ability to deploy rapid encryption, backup destruction, and automated lateral movement, reducing defenders’ reaction time even further.

Shadow AI becomes the everyday risk

Some of 2026’s most damaging breaches may begin not with elite attack groups, but with employees pasting sensitive data into tools security teams never approved. Dr. Darren Williams, founder and CEO of BlackFog, warns that “the explosive growth in AI usage represents the single greatest operational threat to organizations.”

A global KPMG and University of Melbourne survey found “48% of employees admitted uploading company data into public AI tools, and only 47% received formal AI training.” Williams also highlights the rise of “micro-AI extensions and plugins that can quietly extract or transmit data.”

As ransomware groups shift toward data theft, AI-enhanced reconnaissance will sharpen targeting. “The rapid evolution of AI provides powerful new tools for attackers to identify and exploit specific organizations and individuals,” he said.
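
Defensively, the simplest control for shadow AI is a sanctioned path: route employee LLM traffic through a gateway that masks obvious secrets before anything leaves the network. Below is a minimal sketch of such a redaction step in Python; the patterns are illustrative placeholders, not a complete DLP policy.

```python
import re

# Illustrative patterns only; real deployments need broader coverage
# (DLP classifiers, entity recognition, customer-specific identifiers).
SECRET_PATTERNS = [
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "[AWS_ACCESS_KEY]"),  # AWS key ID shape
    (re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----"),
     "[PRIVATE_KEY]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),            # US SSN shape
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),
]

def redact(text: str) -> str:
    """Replace known secret shapes with placeholders before text leaves the network."""
    for pattern, placeholder in SECRET_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    prompt = "Debug this: connect(key='AKIAABCDEFGHIJKLMNOP', owner='jane@example.com')"
    print(redact(prompt))  # the key ID and email are masked before any external call
```

Pattern matching only catches well-known secret shapes, which is why the survey’s point about formal AI training matters as much as tooling.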

Identity threats evolve: AI-in-the-middle attacks

Authentication systems will face new pressure in 2026 as AI is used to automate attacks that hijack user sessions after login. While today’s Adversary-in-the-Middle (AiTM) attacks still require criminals to actively run phishing infrastructure and capture credentials in real time, Volkov expects AI to take over much of that work.

“Attackers will embed AI into these frameworks to automate session hijacking and credential harvesting at scale,” he said, warning that AI-driven AiTM attacks could move faster than today’s MFA systems are designed to handle.
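
A common countermeasure is to bind each session to the context observed at login and revoke it when a replayed token arrives from somewhere else. The sketch below is deliberately simplified – real deployments bind to device-held credentials such as passkeys rather than IP and User-Agent, which can be spoofed or shared:

```python
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)   # per-deployment secret
sessions: dict[str, str] = {}          # session_id -> login-context fingerprint

def _fingerprint(ip: str, user_agent: str) -> str:
    # Deliberately simplistic: production systems bind to device-held
    # credentials (passkeys, token binding), not spoofable headers.
    return hmac.new(SERVER_KEY, f"{ip}|{user_agent}".encode(), hashlib.sha256).hexdigest()

def create_session(ip: str, user_agent: str) -> str:
    session_id = secrets.token_urlsafe(32)
    sessions[session_id] = _fingerprint(ip, user_agent)
    return session_id

def validate(session_id: str, ip: str, user_agent: str) -> bool:
    expected = sessions.get(session_id)
    if expected is None:
        return False
    if not hmac.compare_digest(expected, _fingerprint(ip, user_agent)):
        # A token replayed from attacker infrastructure arrives with a
        # different context, so revoke the session rather than trust it.
        del sessions[session_id]
        return False
    return True

sid = create_session("203.0.113.7", "Mozilla/5.0")
print(validate(sid, "203.0.113.7", "Mozilla/5.0"))  # True: same context as login
print(validate(sid, "198.51.100.9", "curl/8.0"))    # False: replayed elsewhere, revoked
```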

Misconfigured AI agents become a new class of vulnerability

The risks aren’t limited to attackers’ use of AI. Many organizations are misconfiguring their own deployments, said Melissa Ruzzi, director of AI at AppOmni. “The focus on functionality clouds proper cybersecurity due diligence,” she said. Misconfigurations may grant “too much power to one only AI, creating a major single point of failure,” she added. A single misconfigured AI could end up acting like a super-admin across multiple systems: if it fails or is compromised, attackers inherit access to everything it touches at once.

She expects the problem to worsen. In 2026, risk will “heighten even more, stemming from excessive permissions granted to AI and a lack of instructions provided to it about how to choose and use tools.”
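
Ruzzi’s warning translates into a concrete design pattern: give each agent an explicit allowlist of tools and deny everything else, so no single agent accumulates super-admin reach. Here is a minimal sketch of that gateway pattern – the tool names and registry are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical tool registry; names are illustrative, not a real API.
TOOLS = {
    "search_tickets": lambda **kw: f"searched: {kw}",
    "update_ticket":  lambda **kw: f"updated: {kw}",
    "delete_user":    lambda **kw: f"deleted: {kw}",
}

@dataclass(frozen=True)
class AgentPolicy:
    """Explicit allowlist: anything not granted is denied by default."""
    name: str
    allowed_tools: frozenset

class ToolGateway:
    """Every tool call an agent makes passes through this checkpoint."""
    def __init__(self, policy: AgentPolicy):
        self.policy = policy

    def call(self, tool: str, **kwargs):
        if tool not in self.policy.allowed_tools:
            # Fail closed, so no single agent quietly becomes a
            # super-admin across every system it can reach.
            raise PermissionError(f"{self.policy.name} may not use {tool}")
        return TOOLS[tool](**kwargs)

support = ToolGateway(AgentPolicy("support-agent",
                                  frozenset({"search_tickets", "update_ticket"})))
print(support.call("search_tickets", query="refund"))  # allowed by policy
try:
    support.call("delete_user", user_id=42)            # denied: not on the allowlist
except PermissionError as err:
    print(err)
```

The point of failing closed is containment: compromising one narrowly scoped agent yields only that agent’s tools, not everything the organization has connected.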

Backdoored code enters the supply chain through AI tools

Developers are increasingly trusting AI-generated code, sometimes too much. Volkov warns that this has “heightened the risks of supply-chain attacks, where adversaries insert hard-to-detect backdoors into legitimate software and popular libraries.” This was seen in 2025 with backdoored packages on npm and PyPI, typosquatted dependencies that slip into builds unnoticed, and insecure code patterns copied wholesale from public repositories.
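
Mechanical checks in CI can blunt some of this: pinning dependencies to known hashes, and flagging names that sit one edit away from popular packages. The following sketch illustrates the typosquat check; in practice the reference list would come from a maintained dataset of top registry packages rather than the hardcoded handful shown here.

```python
import difflib

# Stand-in for a maintained list of top registry packages.
POPULAR = {"requests", "numpy", "pandas", "urllib3", "cryptography"}

def typosquat_warnings(dependencies: list[str]) -> list[str]:
    """Flag dependency names suspiciously close to, but not equal to, popular ones."""
    warnings = []
    for dep in dependencies:
        if dep in POPULAR:
            continue  # exact matches are the legitimate packages
        close = difflib.get_close_matches(dep, POPULAR, n=1, cutoff=0.85)
        if close:
            warnings.append(f"{dep!r} looks like a typosquat of {close[0]!r}")
    return warnings

print(typosquat_warnings(["requets", "numpy", "crpytography", "leftpad"]))
# flags 'requets' (requests) and 'crpytography' (cryptography); 'numpy' is exact
```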

With the rise of AI coding tools, nation-state actors may attempt “to influence or manipulate AI code-writing assistance to embed backdoors and vulnerabilities at scale.” While there is no public evidence of a nation-state successfully subverting a major AI coding assistant, the tactic mirrors well-established supply-chain attacks and training-data poisoning techniques.

Preparing for 2026

The most serious cybersecurity risks of 2026 won’t hinge on futuristic artificial intelligence breakthroughs, but on everyday development realities: unreviewed AI-generated code, unsanctioned AI tools, misconfigured agents, and attackers automating everything humans used to slow down.

Developers will be central to this shift: securing systems built with AI, reviewing code written by AI, and defending against AI-powered attackers. In 2026, the speed of software development and the speed of cyberattacks may finally meet.