Those getting the most from AI coding tools were top performers all along, according to a new piece of research from GitClear.
AI coding is now commonplace. In a study with Accenture, GitHub reports that 90% of developers say they've committed AI-suggested code.
But are all developers more productive with AI? Productivity gains remain elusive, with one study from METR finding that perceptions of gains from AI coding often outpace reality.
A new GitClear AI productivity report adds another side to the story. It strongly implies that power AI users were already contributing at a higher level than their peers the year prior. In other words, AI tools don't create talent so much as amplify it.
This framing aligns with Google's 2025 DORA Report, subtitled the State of AI-assisted Software Development, which states that "AI's primary role in software development is that of an amplifier."
GitClear’s research, released in partnership with GitKraken, analyzed data from popular coding agent APIs including Claude Code, GitHub Copilot, and Cursor. The study spans roughly 30,000 datapoints of real-world developer activity across more than 2,000 developers, covering late 2024 through late 2025.
The analysis grouped developers into AI usage cohorts, ranging from non-users to power users, and compared their code output over time.
More AI use = more code output
The study found that power AI users make the most commits, code changes, and pull requests. They’re pulling the weight of senior engineers, whether or not they’re recognized as such.
“We see very stark differences in metrics across the board relative to how much developers are interacting with AI,” says Bill Harding, CEO of Amplenote and GitClear. “The magnitude of that effect was far beyond what I expected to see.”
Across the board, metrics rise with heavier AI use. The most notable jumps are in Diff Delta – GitClear's measure of meaningful lines changed (also referred to as durable code changes) – and commit counts for regular and power users, which run roughly four to ten times higher than those of non-users of AI tools.
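GitClear doesn't publish the exact Diff Delta algorithm, but the intuition is that a "meaningful lines changed" metric discounts noise like whitespace edits and lines that merely moved. A toy sketch of that idea (my own simplification, not GitClear's implementation):

```python
import difflib

def meaningful_lines_changed(old: str, new: str) -> int:
    """Toy approximation of a 'meaningful lines changed' metric:
    count added and removed lines, ignoring blank/whitespace-only
    changes and discounting lines that merely moved within the file.
    This is an illustrative sketch, not GitClear's Diff Delta."""
    old_lines = [line.strip() for line in old.splitlines()]
    new_lines = [line.strip() for line in new.splitlines()]
    added, removed = [], []
    for entry in difflib.ndiff(old_lines, new_lines):
        body = entry[2:]
        if not body:                    # blank or whitespace-only line
            continue
        if entry.startswith("+ "):
            added.append(body)
        elif entry.startswith("- "):
            removed.append(body)
    # A line present in both lists just moved; don't count it as a change.
    moved = set(added) & set(removed)
    return sum(1 for line in added + removed if line not in moved)
```

Under this sketch, reordering two lines and adding one new line scores 1, not 3 – only the genuinely new line counts.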

The ‘rich get richer’ effect
The weighted data from GitClear’s report, which is aggregated across thousands of qualifying developer-weeks, shows extreme separation between AI usage cohorts. Looking at that in isolation, one might assume AI alone makes coders massively productive.
The year-on-year data tells a different story. Because GitClear tracks the same developers over time, it shows that those who moved into heavier AI usage were already among the highest performers in the prior year, based on key metrics like Diff Delta and commit count.
To reach that conclusion, GitClear compared year-over-year output for individual developers across the same metrics. Median Diff Delta rose 12% for regular AI users and 81% for power AI users. Meanwhile, the developers who didn’t use AI registered a statistically insignificant change, with their output falling by around 2%.
So, while the regular daily AI users registered about four times more measurable output than the non-users, their productivity increased by only about 12% compared to their own output a year earlier. This points to selection bias: top performers increasingly concentrated in AI cohorts over the past year, inflating AI cohort averages while pulling down the non-AI baseline.
“This substantiates that the currently AI-engaged developers were already contributing significantly more durable code than the non-AI users, even before they adopted the latest AI tooling,” explains Harding. “But AI is exacerbating that difference.”
In other words, it's not that AI users suddenly became exponentially better; it's that the developers who were already the most productive are now using AI tools the most effectively.
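The arithmetic behind the selection-bias argument can be sketched with illustrative numbers loosely matching the article's figures (these baselines are my assumptions, not GitClear's raw data): suppose adopters were already producing well over three times their peers' output before adopting AI, then gained 12% year over year while non-adopters stayed flat.

```python
# Illustrative numbers only: baselines are assumptions chosen so that a
# cross-cohort comparison shows roughly the ~4x gap the article reports,
# while same-developer tracking shows only a ~12% year-over-year lift.
nonuser_baseline = 100       # units of durable output per year
adopter_baseline = 360       # already top performers before adopting AI

adopter_now = adopter_baseline * 1.12   # +12% year over year
nonuser_now = nonuser_baseline * 0.98   # ~-2%, statistically flat

cohort_ratio = adopter_now / nonuser_now           # what a cohort study sees
same_dev_lift = adopter_now / adopter_baseline - 1 # what YoY tracking sees

print(f"cohort comparison: AI users look {cohort_ratio:.1f}x more productive")
print(f"same-developer comparison: AI added only {same_dev_lift:.0%}")
```

The cohort comparison shows a ~4x gap, but most of it was there before AI entered the picture – only the year-over-year view isolates AI's contribution.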
AI multiplies senior engineers’ abilities
These findings align with a familiar understanding: senior engineers tend to produce more meaningful code. They’re also better positioned to extract value from AI tools.
Experienced developers tend to see the bigger picture, know what good code looks like, and are likely to use that experience to prompt more effectively. “The most senior developers have been the most voracious adopters,” says Harding.
In that sense, AI reinforces the old 10x engineer dynamic: the developers who were already the most productive are seeing the biggest gains from the latest and greatest tools.
Side effects of increased AI use
GitClear’s data shows a productivity chasm between non-AI users and frequent AI users. But higher commit quantity doesn’t necessarily translate into higher end value.
AI-generated code can introduce vulnerabilities and cause issues down the line. CodeRabbit’s State of AI vs Human Code Generation Report found that, on average, AI-generated PRs contain 1.7x more issues than human-written PRs. Logic correctness issues, including business logic errors, misconfigurations, and unsafe control flow, rise by 75% when using AI coding tools, and performance issues, including excessive external calls, rise eightfold.
GitClear similarly unearthed some unsavory side effects. Increased AI code output increases code duplication, review time, and churn, reinforcing concerns about long-term code quality and technical debt.
One silver lining is that these secondary effects rise in a linear fashion, even as overall code output grows much faster. The second-highest AI user cohort has a comparable increase for both ‘code output’ and ‘team member code review time.’ Interestingly, AI power users actually require marginally less review time, relative to the volume of durable code they are delivering.
“It seems that the daily AI users have, at worst, created additional code review burden in proportion to how much code they’re generating,” says Harding. He chalks it up to more experienced developers generating more code, revising it more frequently, and as a consequence, inducing less code review time among teammates.
Takeaways for engineering leaders
As adoption grows and developers see such a big output boost from AI agents, the use of AI is likely to widen performance gaps rather than close them.
GitClear’s data shows that AI coding doesn’t magically erase churn or duplication. So as more code is generated faster, code review will need more investment to keep pace.
Another takeaway is around measurement. Developer productivity has always been hard to quantify, and it’s just as easy to come to false conclusions when studying AI’s impact. The GitClear study shows that if you measure AI productivity by comparing broad cohorts, like AI users to non-users, you are not measuring AI’s impact per se – you’re mostly measuring who your most effective developers are.
It’s best not to assume AI is creating new heroes out of thin air. AI fluency increasingly looks like a senior skill, and it’s best to assume it’s amplifying pre-existing talent.

With that in mind, leaders should not use general AI adoption as a proxy for performance. Rather, look to the traces it leaves to discover internal developer heroes, and understand what’s working well with them.