
AI has us asking, does (team) size still matter?

From two-pizza to two-person teams.
April 09, 2026



As AI-powered agents enter the workforce, the shape and size of engineering teams are going to have to change.

Key takeaways:

  • The bottleneck is shifting from writing code to making decisions. Teams must unlearn processes designed for expensive, slow development and adapt to AI-driven speed.
  • Rigid hierarchies are being replaced by small “swarms” – often one developer and one PM – where humans focus on architectural judgment and business intent, while agents handle delivery.

What would your engineering team look like if you had unlimited Claude Code tokens? Should you invite your agents to the daily stand up? Would fewer human engineers be left standing? These are just some of the questions engineering leaders are faced with as agentic coding tools become a permanent part of their teams. 

After all, agile software development, DevOps, Team Topologies, site reliability engineering (SRE), test-driven development (TDD), and platform engineering were all created with human-led software development in mind. Suddenly the tech industry is not limited by human capacity, but by the coordination capabilities of these humans-plus-AI squads.

“The hard part of AI transformation isn’t the tools. It’s everything you have to examine, unlearn, and redesign before the tools land,” said Corey Latislaw, head of groceries and new verticals at Just Eat Takeaway.com. “Test-driven development, sprint ceremonies, risk-averse team structures – these are all rational responses to a world where code was expensive, experimentation was slow, and getting things wrong had high recovery cost. The danger isn’t adopting AI. It’s adopting AI and running it inside processes designed for a different economic reality.”

When we can build almost anything fast and cheap, technical decisions will come down to what’s best for business.

“AI done well can reduce the depth of skills needed to build products but it does not create added value or differentiation without knowledge-based, exploratory, and quick human decisions,” said Team Topologies coauthor Manuel Pais.

Creation is now limited only by the cost of compute, not capacity, the authors of the recently published Outcome Engineering Manifesto assert. “No more passively moving tasks onto backlogs rather than truly exploring the merits.” 

The shape of work is in rapid evolution, but the direction it’s headed is clear: human-AI hybrid teams are expected to deliver more business value faster than ever.

Team size evolves

No one knows what programming will look like in two years. But rigid teams will not be fast enough to respond to those changes.

The Jeff Bezos-coined two-pizza team is looking increasingly outdated as new ways of working take shape. People, processes, and systems will need to be rearchitected to keep these human/agentic teams moving in the right direction.

Conway’s Law – the observation that systems mirror the communication structure of the organization that builds them – doesn’t consider what happens when machines are not only writing software, but making autonomous decisions across the stack at a scale we’ve yet to see. Reverse Conway’s Law, or the Inverse Conway Maneuver, flips that by intentionally structuring teams and communication to match the desired software architecture – usually loosely coupled components like microservices. Team Topologies embraced this reversal at scale.

“Once enterprises ingrain the use of AI in a governed and enabling way, we should expect to see not the miracle two-person team that can do everything but rather a broader range of product team sizes and composition based on what differentiates the product(s) they are growing. Is it outstanding usability? Or its simplicity and reliability? Or something else?” Pais said.

Of course, the question then becomes how to reach that well-governed, enabling use of AI.

“The enabling and platform patterns from Team Topologies are still fundamental, although the nature of the work can change. Enabling becomes more direction and governance – like guardrails for agentic squads – embedded in platforms that reduce the drift that ungoverned AI can lead to,” he continued.

The next step in agentic topologies will likely include software being autonomously built and managed by agents. If so, then what is the human developer’s role in this future?

Teams of teams will change too

It might not be about the team at all. Just as the power lies not in one AI agent but in a fleet of them, success may come down to how you apply systems thinking to coordinate a complex mix of differently sized, shaped, and purposed AI-empowered teams.

The Godfather of DevOps, Patrick Debois, has spent the last couple of years postulating what’s going to happen to the software team – or teams.

“Smaller teams, a few people with some AI helping us to do that job, and then maybe we bring all the small teams together again in one big team,” he said. “All the smaller things, they need an identity, they need the group, and they need to share.”

Those smaller teams will almost certainly work at a different pace too. “When we have those smaller teams and the AIs helping us do faster things, we see maybe people have three-week sprints, maybe it’s two-week sprints, maybe it’s one-day, because [they] just asked the AI to work overnight and gather the results,” Debois said at AI for the Rest of Us. “There’s a pressure of getting things there faster. I don’t know what the new process will be, how we put all things together. I do know, from experience, if you release too many features to customers, they get overwhelmed.”

For Just Eat’s Latislaw, her teams have “radically re-paired” into one product manager + one developer + agent swarms. “The PM holds intent and outcomes. The developer holds architecture and judgment. The agents do the delivery. A team configured this way can move faster – because the constraint has moved. You don’t need ten people to build, you need the right people to decide and direct.”

That is just one team formation that brings developers closer to users. 

Right now, “a lot of development teams don’t really speak to their users very much because the product manager does that on their behalf,” said Hannah Foxwell, co-founder at BIMP, at QCon London. “For a development team, they have a lot of feedback loops, but it’s usually about how the system is working. It’s not [about] a user expressing their pain.”

In most organizations, PMs have to filter down the many, many user needs. In her time working on product, that had Foxwell prioritizing ruthlessly, “because developer productivity requires focus,” and organizing teams so they could work on one thing at a time.

Now agentic coding raises the prospect of 10x teams delivering 10x or even 100x the new features.

“There isn’t enough work in the funnel for some of these teams to keep those engineers who have unlocked this new velocity busy,” she said. But the alternative of a PM approving all requests leads to a worse product. 

AI is already good at rapid prototyping. Foxwell advocates for two PM + one dev trios. This way you “get this feedback loop between your users, product, and development, so that everything that goes into the dev team is well-qualified work, well-understood, you have tested it with your users. You haven’t just put it into your product and hoped for the best.”

A different team shape with a similar user-oriented focus is the increasingly popular forward-deployed engineer: AI tools are now so portable and ready to prototype that devs embed with their customers, clarifying needs up front within each organization’s own context and data.

“This is not necessarily an implementation or a professional services engineer. This is an empowered engineer that’s going to shorten that feedback loop as much as possible,” Foxwell said. “You end up with a roadmap of long-term features and a very, very short feedback loop of small, tactical changes that are going to delight your customers.”

On the other hand, she continued, some companies are simply hiring their core users or customers, in-sourcing that crucial product empathy.

What’s got to change

Despite the recent claims of some CEOs, the current crop of agents is not a like-for-like replacement for engineering talent.

For now, agents like Claude and Cursor are best served tackling well-defined, confined tasks, while human-led work is needed to address the overarching complexity of competing business and technical demands. 

Multitudes asked engineering leaders if they’ve made any changes so that people and/or codebases work better with AI tooling. About half had done so. Not surprisingly, considering how many orgs went all-in on generated code, 67% cited changes related to the code, while relatively few cited changes to people or processes. 

The most common process change cited by survey respondents was the need for some form of platform engineering. “People mentioned spinning up AI platform teams – or having AI engineers to lead practices,” said Multitudes founder and CEO Lauren Peate.

When changes centered on engineers, she continued, comments most often referenced the merging of roles, illustrated by one response: “We have bumped the developers up to product engineers, giving them more end-to-end responsibilities. With AI managing boilerplate, refactoring, and code review, they have more time to deliver features, and it makes sense to give them the ownership and tools to do so.”

Engineering leadership will certainly be reconsidering not only how teams work with AI, but how those roles change in this coordination and collaboration with AI.

You could even try putting AI into your hierarchy, suggested Martin Reynolds, field CTO at Harness.

“What’s emerging is a new operating model where AI sits alongside teams, not outside them. When you formalize that, you gain something incredibly valuable: a structured system of permissions, controls, and escalation that mirrors how we already manage people,” he said. 

“That’s the breakthrough – governance isn’t something you layer on top of AI, it’s something you design into its role from Day One.”
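Reynolds’s idea – treating an AI agent as a role in the org chart, with permissions and escalation designed in from day one – can be sketched in a few lines of code. This is a purely illustrative assumption of what such a model might look like; the role names, actions, and thresholds are hypothetical, not any vendor’s actual API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an AI agent modeled like any other team member in a
# permissions hierarchy. Allowed actions run autonomously and are audited;
# everything else escalates to a named human, just as a junior report would.

@dataclass
class AgentRole:
    name: str
    allowed_actions: set[str]   # actions the agent may take on its own
    escalates_to: str           # the human who approves everything else
    audit_log: list[str] = field(default_factory=list)

    def request(self, action: str) -> str:
        """Execute an allowed action, or escalate it for human approval."""
        if action in self.allowed_actions:
            self.audit_log.append(f"AUTO: {action}")
            return "executed"
        self.audit_log.append(f"ESCALATED to {self.escalates_to}: {action}")
        return "pending-approval"

# Illustrative agent: may open PRs and run tests, but not touch production.
refactor_bot = AgentRole(
    name="refactor-bot",
    allowed_actions={"open-pr", "run-tests"},
    escalates_to="staff-engineer",
)

assert refactor_bot.request("run-tests") == "executed"
assert refactor_bot.request("deploy-to-prod") == "pending-approval"
```

The design point is that governance lives in the role definition itself – the allow-list, the escalation path, the audit trail – rather than being bolted on around the agent afterwards.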

Limitations of communication

In the face of near limitless software, what is currently holding us back? The ability to make business decisions at AI-speed, for one.

“If software is getting built very fast, and the bottleneck just becomes that ability to make decisions, do we need to change decision structures? Do we need as many middle managers in the system?” Thoughtworks CTO Rachel Laycock asks. “If we can leverage AI to comprehend what’s going on, tell us what’s going on, do actions for us, then that comes back to that problem of ending up with managers that are completely overloaded with decisions.” 

This is emphasized in the Outcome Engineering Manifesto (O16g for short), which states that agentic AI should not be an excuse to replace jobs. It should be the impetus to finally shift engineers’ focus from output, like code, to outcomes grounded in measurable business results.

“Here’s the fundamental problem: if you put people in a room but they don’t know what the system is, they are unlikely to reach any meaningful decision – or they’ll reach the wrong one,” said Tudor Girba, co-founder of Moldable Development, another movement, which argues that the biggest cost in software development is spent trying to figure out complex, distributed systems.

High-performing organizations also look at the communication between teams, but the missing piece, he says, is always the conversation between teams and systems – to the point that he doesn’t trust an engineer’s interpretation or guess of how many services are running at a given time.

“We have a lot of beliefs, like architectural diagrams, [which are] rarely generated directly from the system and they’re normally something that somebody created because, at a moment in time, it was their belief of what the system is,” added Simon Wardley, the other co-founder and creator of Wardley Mapping. This has left developers reading 100 million lines of code to try to understand systems – a task that will only grow more insurmountable in the face of AI.

Tech must remain about its (human) creators

Organizations have to stop looking at where they can fit AI into everyone’s work. Instead they should start figuring out how to center their people in their AI strategy.

If you ask Zero Vector founder Erika Flowers, AI won’t change the product pipeline problem. “Now the question is not how to build a better multi-stage rocket, not how to optimize the handoffs, improve the sprint cadence, or produce cleaner translation documents between design and development,” she argues, as an ex-NASA AI innovation lead. “Those are all answers to a question that stopped mattering. The question is: why are you still launching from the ground when orbit is available?”

AI acts as a high-efficiency propellant that renders traditional roles in traditional hierarchies performing traditional handoffs moot, Flowers says, allowing creators to move directly from intent to execution. No matter how you shape the teams or processes, everything should be focused on enhancing problem-solving skills and understanding user needs.

Former distinguished engineer at Google, Kelsey Hightower, put it best: “I don’t care about LLMs or AI. I care about people. AI is just a tool to me. Tools enable people to help or harm at a scale beyond their natural abilities. I care about how people treat each other using these tools.”
