As agentic IDEs and AI coding assistants become normalized, the practice of parallel coding – simultaneously programming alongside multiple AI agents – is gaining ground.
Anthropic’s blog post announcing a web version of its Claude Code platform touted “parallel development work” as a key selling point, and others are getting in on the action.
Despite lingering skepticism from some industry veterans, multiple engineering leaders tell LeadDev that parallel coding is catching on. Some suggest that the rise of IDEs that incorporate AI-enabled features, such as JetBrains IDEs and Eclipse, might even redefine the discipline of software development altogether, challenging the fundamentals of an entire professional identity.
‘Acceleration with oversight’
It can be difficult to distinguish real momentum from hype, especially when it comes to AI adoption. But Sonu Kapoor, a nearly 25-year veteran developer and senior Angular consultant based in the Toronto area, is adamant that AI-assisted parallel coding is “absolutely” becoming part of how a growing proportion of developers do their jobs.
While he believes that the perceived productivity gains tend to outpace measurable ones, “both are real” – an observation that may be supported by a new Faros AI study, which found that AI coding assistants boosted developers’ output, but not company productivity. The productivity boost from AI-assisted coding quickly fades when code reviews pile up, tests break easily, or release pipelines can’t keep pace with the faster development cycle, according to the authors of the report.
For engineering teams, Kapoor says the bigger adjustment isn’t in using the tools themselves but getting used to how the tools change their workflows.
Instead of the traditional ticket-to-PR model, parallel coding requires a more dynamic approach where AI agents generate multiple parallel drafts of tests, migrations, documentation, and UI variations, while human engineers take on the role of orchestrators who curate, integrate, and validate the strongest outcomes. “The result is less about automation and more about acceleration with oversight,” Kapoor says. It’s a reality where “agents draft and humans decide.”
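The “agents draft and humans decide” pattern Kapoor describes can be sketched in a few lines. This is a minimal illustration, not any vendor’s API: `generate_draft` is a hypothetical stub standing in for a real agent call, and the human review gate is reduced to a placeholder function.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a coding agent; a real version would call
# an LLM API with the task prompt and return its proposed change.
def generate_draft(task: str, variant: int) -> str:
    return f"draft {variant} for: {task}"

def draft_in_parallel(task: str, n_variants: int = 3) -> list[str]:
    """Fan one task out to several agent sessions at once."""
    with ThreadPoolExecutor(max_workers=n_variants) as pool:
        futures = [pool.submit(generate_draft, task, i) for i in range(n_variants)]
        return [f.result() for f in futures]

def human_selects(drafts: list[str]) -> str:
    """Placeholder for the human orchestrator who curates, integrates,
    and validates. Here we naively take the first draft; a real reviewer
    would read all of them."""
    return drafts[0]

drafts = draft_in_parallel("write a migration for the users table")
chosen = human_selects(drafts)
```

The structural point is the inversion of the ticket-to-PR model: generation is cheap and parallel, while the scarce, serial resource is the human decision at the end.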
Moderne founder and CTO Olga Kundzich argues that the term “parallel coding” is a misnomer, as AI-assisted coding is never truly autonomous in the first place. “Working with coding agents requires developers to spend time upfront designing the solution together with the agent – planning – and then periodically answering questions from the agents to guide them forward, as well as examining their work at the end,” says Kundzich, whose company offers a platform for automated code refactoring and analysis. “A developer can manage a couple of parallel sessions at most.”
Though Kundzich acknowledges that AI coding assistants can help lower the “activation energy” it takes to get particularly tedious software projects off the ground, she contends that they are no match for human creativity and focus. “If a task requires you to take a pen and paper and really clearly think about the solution, it’s unlikely that a model or agent will be able to help,” Kundzich says. “It will get confused, generate a lot of code, and come to a standstill.”
It may be telling that the Faros AI report affirming the sped-up output of AI-assisted teams found that, for those teams, PR review time increased by 91%. In other words, AI oversight introduces a new software-development bottleneck while taking over a function – writing code – that was never really a bottleneck in the first place.
AI oversight becomes even more of a potential bottleneck when a developer is running multiple agents at once. “The challenge with parallel coding is that you have to make sure all the agents are in sync, working together like a team, not trying to build on their own,” says Ravitez Dondeti, an engineering manager at Crestron Electronics based in Plano, Texas.
But Dondeti posits that this challenge is easily addressed with the right agent management system. On the side, he has been working on an open-source prompt-engineering framework that would facilitate sub-agent coordination according to a team’s specifications, comparable to automated workflow tools such as LangChain, CrewAI, and Autogen.
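Keeping agents “in sync, working together like a team” usually comes down to a shared task graph with explicit dependencies, so no sub-agent builds on work that doesn’t exist yet. The sketch below illustrates that idea only; none of these names come from Dondeti’s framework or from LangChain, CrewAI, or Autogen, and the `echo_agent` is a stub for an LLM-backed sub-agent.

```python
from collections import deque

class Orchestrator:
    """Toy coordinator: dispatches a task only once its dependencies
    have produced results, so sub-agents stay in sync."""

    def __init__(self):
        self.tasks = deque()
        self.results = {}

    def add_task(self, name, depends_on=()):
        self.tasks.append((name, tuple(depends_on)))

    def run(self, agent):
        # Assumes the dependency graph is acyclic; a real framework
        # would detect cycles and time out stalled tasks.
        while self.tasks:
            name, deps = self.tasks.popleft()
            if all(d in self.results for d in deps):
                self.results[name] = agent(name, [self.results[d] for d in deps])
            else:
                self.tasks.append((name, deps))  # not ready yet; retry later
        return self.results

def echo_agent(task, context):
    # Stand-in for a sub-agent call that receives its dependencies' output.
    return f"done:{task}"

orch = Orchestrator()
orch.add_task("schema")
orch.add_task("api", depends_on=["schema"])
orch.add_task("tests", depends_on=["api"])
results = orch.run(echo_agent)
```

Even in this toy form, the orchestrator, not the individual agent, owns the plan; that is the shift Dondeti expects to become the central component of the workflow.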
Dondeti believes that multi-agent orchestration will soon unseat coding as the central component of software engineering workflows. However, he is confident that the creativity and problem-solving acumen that make for a strong developer will still hold.
The difference is that, in this new paradigm, creativity means knowing how to command and fine-tune AI, not just writing elegant syntax. “It’s about how you partner with the AI tools, how you can orchestrate them, and how you can command them and fine-tune them to what you’re looking for,” Dondeti says.
The managerial paradigm shift
It stands to reason that the evolution from the archetypal developer writing code in their private digital sandbox to an orchestrator of AI agents will call for a shift in leadership approaches.
“Managers need to consider how to enable developers to work effectively with AI agents and facilitate knowledge-sharing as agents, and the techniques for working with them, evolve,” says Kundzich.
At Moderne, Kundzich’s developer teams were gradually onboarded to Claude Code and given best-practices guidelines on what kinds of tasks to assign to agents and what potential hiccups to watch out for. For example, Moderne’s engineers are encouraged to use AI tools to generate unit tests, which means also reviewing and refining the output until it’s right. Although the need to correct and iterate from the AI’s work ultimately adds time to the overall process, it reduces the mental friction that often keeps tests from getting done.
Managers also need to introduce practices that preserve reasoning and accountability in their teams. Kapoor encourages leaders and their teams to incorporate what he calls AI design notes: short write-ups for documenting problems, prompts, and the rationale for chosen solutions. Kapoor also sees value in appointing specific team members to serve as “agent wranglers” tasked with maintaining prompt quality and monitoring regressions.
Although software engineering processes, team members’ roles, and tooling may evolve, the job’s basic objectives are the same as ever. “AI makes it easy to appear busy, but true impact still comes from shipping reliable, thoughtful software,” Kapoor says. With ballooning AI hype and correspondingly high stakes, leaders must take care not to lose sight of the big picture.