When something doesn’t give your team the results you expected, it’s easy to blame the technology.
However, when it comes to AI-powered developer tools, the problem might not be the AI itself, but your team’s communication structure.
Here’s what’s actually happening: AI amplifies whatever organizational dynamics already exist. If collaboration is already strong, it gets stronger and velocity goes up. But without clear ownership of the process, AI just creates more confusion about who should act and who should monitor the results.
Having helped scale engineering teams from zero to hundreds of engineers at the startups Xendit and Spenmo, I’ve learned a few things about setting engineering teams up for success, and about how AI changes everything.
Find the real blockers
It’s all too common to face well-meaning execs who suggest adding AI into workflows without much planning. The hype around AI tools creates pressure to find quick paths to adoption, but it pays off long-term to ask where AI would truly move the needle. Most of the time, you can find the answer by looking at where your teams get stuck most.
Consider a few possible scenarios. Your QA team might often feel overwhelmed by endless regression tests during every sprint. AI could help generate edge cases and find anomalies. However, this works best when QA and engineering teams share both communication channels and tools. If these teams are separated, useful insights may be lost in a Jira ticket that nobody checks.
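To make the QA scenario concrete, here is a minimal sketch of what "shared tools" can look like in practice: AI-proposed edge cases land directly in the regression suite both teams run, rather than in a ticket queue. All names here are illustrative, and the propose_edge_cases helper is a placeholder for whatever model or tooling your team actually uses.

```python
# Minimal sketch (all names hypothetical): AI-proposed edge cases land in the
# shared pytest suite that QA and engineering both run, not in a ticket queue.
import pytest


def parse_invoice_amount(raw: str) -> float | None:
    """Toy parser standing in for real product code owned by the team."""
    try:
        value = float(raw.replace(",", ""))
        return value if value >= 0 else None
    except ValueError:
        return None


def propose_edge_cases() -> list[str]:
    # Placeholder for an LLM call that suggests boundary inputs.
    return ["", "-100.00", "1,000.50", "9" * 20, "USD 10", "0"]


@pytest.mark.parametrize("raw_amount", propose_edge_cases())
def test_parser_survives_ai_suggested_edge_cases(raw_amount):
    # Both teams see the same failure in the same pipeline, instead of an
    # anomaly report buried in a ticket nobody checks.
    result = parse_invoice_amount(raw_amount)
    assert result is None or result >= 0
```

The point is less the test itself than where it lives: in the pipeline both teams already watch.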
Alternatively, you might find yourself in a situation where engineers are now coding faster thanks to assistants like Copilot or Cursor. But who’s making sure the AI-generated code isn’t flawed? Who’s reviewing it? Who fixes it when the AI ultimately breaks something?
If the communication process between the QA team, product managers, and engineers hasn’t been established, AI may simply amplify everyone’s output, generating tests for irrelevant user journeys and distracting teams further from what truly matters to customers.
Conway’s Law and the Reverse Conway Method
Conway’s Law says your product ends up looking like your org chart. If your teams don’t talk to each other and work separately, you’ll build systems that are just as fragmented.
The reverse Conway method turns this around. Start by deciding how the product should work as a whole and what kind of system you actually want to build. Then shape your teams around product domains (e.g. invoicing, billing, payments) instead of around engineering functions (backend vs. frontend vs. DevOps).
Here’s where AI comes in. In a reverse Conway setup, team ownership and product domain boundaries are defined upfront. When the scope, interfaces, and “definition of done” (e.g. “automate invoice matching with 99% accuracy”) are explicit, tools like Cursor agents or Claude skills can operate inside a bounded system prompt, without heavy context engineering.
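As a rough illustration, a bounded system prompt can simply be assembled from the domain definition the team already agreed on. The field names and prompt wording below are assumptions for the sketch, not any vendor’s API:

```python
# Hypothetical sketch: build a bounded system prompt for a coding agent from a
# team's domain charter. Field names and wording are assumptions, not a vendor API.
from dataclasses import dataclass


@dataclass
class DomainCharter:
    domain: str
    owns: list[str]            # services/interfaces the team owns
    does_not_touch: list[str]  # explicit out-of-scope boundaries
    definition_of_done: str


invoicing = DomainCharter(
    domain="Invoicing",
    owns=["invoice-api", "invoice-matching-worker"],
    does_not_touch=["payments-ledger", "billing-api"],
    definition_of_done="Automate invoice matching with 99% accuracy on the regression set.",
)


def build_system_prompt(charter: DomainCharter) -> str:
    # The agent inherits the same boundaries the team already agreed on,
    # so no heavy per-task context engineering is needed.
    return (
        f"You work only inside the {charter.domain} domain.\n"
        f"You may modify: {', '.join(charter.owns)}.\n"
        f"Never modify: {', '.join(charter.does_not_touch)}.\n"
        f"Definition of done: {charter.definition_of_done}"
    )


print(build_system_prompt(invoicing))
```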
An MIT Sloan study found that highly skilled workers saw roughly 40% productivity gains when AI was applied to well-defined tasks. Reverse Conway creates exactly those conditions: clear ownership, clear interfaces, and clear outcomes. AI compounds that clarity into velocity.
Organize around ownership, not org charts
Using the reverse Conway method, organize teams around what they’re shipping, not what their job titles say. Separate teams for data engineering, backend, and product mean everyone chases different goals. Add a non-deterministic AI agent to that mix and nobody knows who actually owns the outcome of a product domain.
Say, for example, a data engineering team has no ownership of the invoice checkout domain and ends up modifying fields without understanding the downstream impact. Adding AI to the mix here compounds the issues rather than solving them.
Instead, build cross-functional teams organized around specific product outcomes. For example, if you’re working on your invoice checkout flow, create a dedicated team with backend developers, ML engineers, and product managers all focused on the same goal: conversion rates. They share the same metrics dashboard, join the same standups, and rotate through the same on-call schedule. When conversion rates tank, they all jump in to fix it. When rates climb, they all celebrate. And if the product evolves, reorganize the teams around the new outcomes.
That’s how you get real ownership instead of playing the blame-game when things break.
Clarify the roles
AI needs a clear role in your organization. If you haven’t explicitly defined what AI handles versus what humans handle, you’ll get inconsistent outputs and wasted effort. One engineer ships AI-generated code as-is, while another rewrites everything from scratch.
Here is what clear role definition looks like in practice:
- Customer support teams may use AI to write the first reply, while humans handle difficult cases. The team sets accuracy thresholds: if AI accuracy drops below 85% on refund requests, those go back to humans (see the sketch after this list).
- QA teams may use AI to spot unusual user behavior. The team decides together where AI can auto-file bugs versus just alerting humans.
- Development teams may use AI to write basic code. Humans make design decisions and review the code. The tech lead defines boundaries: AI can scaffold endpoints but shouldn’t touch payment processing.
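For the support example above, a minimal sketch of the threshold rule might look like this. The 85% figure, category names, and function are illustrative, not a prescribed implementation:

```python
# Minimal sketch of the support-team rule: if the AI's measured accuracy on a
# category drops below the agreed threshold, that category goes back to humans.
# Names and numbers are illustrative.
ACCURACY_THRESHOLDS = {"refund_request": 0.85, "shipping_question": 0.80}


def owner_for(category: str, recent_accuracy: dict[str, float]) -> str:
    """Decide who writes the first reply for a ticket category."""
    threshold = ACCURACY_THRESHOLDS.get(category, 1.0)  # unknown categories stay human
    return "ai_draft" if recent_accuracy.get(category, 0.0) >= threshold else "human"


# Example: last week's measured accuracy per category.
recent = {"refund_request": 0.82, "shipping_question": 0.93}
assert owner_for("refund_request", recent) == "human"       # below 85%, back to humans
assert owner_for("shipping_question", recent) == "ai_draft"
```

The value is that the rule is written down and agreed on, so nobody has to relitigate trust ticket by ticket.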
When roles are explicit, teams stop debating whether to trust AI and start focusing on outcomes. This setup lets engineers focus on work that needs human judgment.
Building culture around iteration
Integrating AI tools into your organization and processes won’t change a company’s culture overnight. Things start to shift once people find new ways to work together. Once the process is clear and the outcome of each product domain is well defined, AI can amplify the results while humans keep monitoring the outcome.

Sometimes it’s better to automate repetitive, slow tasks, since that frees up time and energy for more creative thinking. When engineers study and fix the system’s mistakes, they grow more confident and sharpen their judgment. Often, it’s curiosity, not rule-following, that leads to technical excellence.
Final thoughts
AI won’t fix broken team structures. It will expose them. Before blaming the technology, look at how your teams communicate, who owns what, and whether your organization is built around outcomes or org charts. Get those fundamentals right, and AI becomes the accelerator you hoped it would be.