
Axios hack exposes AI-coding’s dependency problem

AI’s overreliance on dependencies is risky business.
April 02, 2026



Key takeaways:

  • AI-coding increases hidden risk. Developers inherit complex, dependency-heavy code they don’t fully understand.
  • Supply chain attacks are spreading: one compromised package can impact thousands of projects and leak sensitive data.
  • Defenses lag behind. Without stronger guardrails and scrutiny, breaches are likely to recur.

It’s been a bad week for AI-coding tools. Hackers compromised the popular JavaScript library Axios by breaching its npm account and injecting malicious code into a new release that was downloaded millions of times before it was pulled.

It comes just days after a similar incident involving LiteLLM’s PyPI package, which delivered a credential stealer into any project that used it.

The Axios attack gave intruders the ability to harvest sensitive developer data and potentially access downstream systems that relied on the package. 

Axios is one of the most widely used open-source modules in the JavaScript and AI development ecosystems. The breach affected thousands of projects, opening up debate about the fragility of modern software supply chains, particularly as AI-coding tools detach developers from what they’re shipping.

It’s easy to lay the blame at the door of the project, but it would be incorrect to do so, said Justin Cappos, professor at New York University’s Tandon School of Engineering.

“This is an extremely sophisticated attack put on by nation-state-level actors that seems to have a financial motivation,” he said. Cappos pointed out that “this is the kind of issue most projects could have fallen for. It’s very difficult to protect against.”

What’s going wrong?

The breach underscores how deeply embedded open-source dependencies have become in the AI development process – and how much risk that creates.

“Before AI, the developer role was clearly defined,” said Bob Huber, chief security officer at Tenable. “With the advent of AI, everyone is a developer, or vibe coder, and they aren’t trained in proper development, security standards, and best practices, which exposes organizations to greater risk.”

That worries Huber. “We are accelerating development velocity while simultaneously losing visibility into our software supply chain,” he said.

Part of the problem is the ‘kitchen sink’ tendency of AI-coding tools to overengineer solutions, along with their reliance on dependencies that run smoothly when they work, but open you up to issues when they don’t.

“When you install and configure AI components, it installs all kinds of required dependencies, and the vibe coders in your organization have no idea what those things are, what they do, or how to secure them,” said Huber.
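The scale of that hidden surface is easy to see in a lockfile. The sketch below walks an npm v3-style lockfile object and separates the dependencies a developer actually declared from the ones that arrived with them. The sample data is hypothetical (a trimmed, hand-written object, not a real lockfile), though it mirrors the real shape: a `packages` map keyed by `node_modules` path, with `""` for the project itself.

```javascript
// Sketch: separate declared dependencies from transitive ones in an
// npm v3-style lockfile. The sample object is hypothetical but follows
// the real lockfile shape.
const lockfile = {
  packages: {
    "": { dependencies: { axios: "^1.6.0" } }, // the project itself
    "node_modules/axios": { version: "1.6.0" },
    "node_modules/follow-redirects": { version: "1.15.6" },
    "node_modules/form-data": { version: "4.0.0" },
  },
};

// Names the developer actually asked for.
const direct = new Set(Object.keys(lockfile.packages[""].dependencies ?? {}));

// Everything that ended up on disk.
const installed = Object.keys(lockfile.packages)
  .filter((p) => p.startsWith("node_modules/"))
  .map((p) => p.slice("node_modules/".length));

// Packages nobody reviewed before they arrived.
const transitive = installed.filter((name) => !direct.has(name));

console.log(`direct: ${direct.size}, transitive: ${transitive.length}`);
// → direct: 1, transitive: 2
```

Run the same walk over a real `package-lock.json` and the ratio is rarely this flattering: one declared package routinely pulls in dozens of packages no one on the team has ever looked at.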

AI-coding’s major dependency problem

“AI does seem to add dependencies that aren’t needed and dependencies are at risk,” said Cappos. It seems to do so for convenience’s sake, but that convenience can come at a cost, as this case shows.

Huber doesn’t forswear dependencies altogether, nor does he begrudge AI-coding tools’ reliance on them. “I don’t discourage using npm packages or open-source software because they’re invaluable tools that drive innovation,” he said.

However, there needs to be a rethink in how AI-coding tools are used, and how dependency-heavy ecosystems like npm are integrated into day-to-day production code.

“To maintain trust in these systems, we must exercise caution and implement rigorous security guardrails to prevent the installation of malicious code,” Huber said.
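In the npm ecosystem, some of those guardrails are a one-file change. The `.npmrc` fragment below is a minimal sketch, assuming npm 8 or later; `ignore-scripts` and `save-exact` are real npm config options, though which settings fit depends on your build.

```ini
; .npmrc — minimal install-time guardrails (a sketch, not a full policy)

; Refuse to run package lifecycle scripts (preinstall/postinstall),
; the delivery mechanism for many credential stealers.
ignore-scripts=true

; Record exact versions instead of ^ranges, so a compromised new
; release isn't silently picked up on the next install.
save-exact=true
```

Pairing this with `npm ci` in CI helps: it installs only what the committed lockfile specifies and fails outright if `package.json` and the lockfile disagree.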

That’s because, on the other side of the equation, bad actors have recognized the power they can wield by compromising widely used packages on registries like npm.

“I think attackers have wised up to the fact that if they can get into some of these important packages, you get hundreds of millions of downloads, potentially, of a piece of software,” said Cappos.

More work is needed to fund the projects that are used for free by millions of people, as well as to train devs to check twice and deploy once. As Cappos explained: “There’s nothing fundamentally that we’re doing as defenders to prevent this happening again.”