
The rise – and looming fall – of acceptance rate

It has become the de facto metric for measuring the effectiveness of AI coding assistants, but is it fit for purpose?
July 10, 2025

Ever since AI coding assistants entered the picture, acceptance rate has sat atop many engineering leaders’ AI code-tracking dashboards.

But the metric – which indicates how often a developer implements a code suggestion given by tools like GitHub Copilot, Cursor, Windsurf, and Claude Code – may be on its way out, or at least becoming significantly less top-of-mind. 

“Early on, we were really trying to get a sense of whether or not these tools were able to solve real problems,” said Laura Tacho, CTO at developer intelligence platform DX. “Acceptance rate gives us a signal on that: ‘Are developers actually accepting the suggestions of code assistants?’ If not, we know the tool might not be very useful. Now that we’ve established that these tools are useful, acceptance rate is less important.”

The rise of acceptance rate

The release of GitHub Copilot in 2021 unlocked developer appetite for AI coding assistants. By 2024, they were a widespread norm, reshaping developer workflows and even how companies hire developers. That sharp shift naturally drove interest in metrics to understand the impact, with engineering leaders first and foremost wanting to know whether developers were actually using the tools.

“It’s a measure of engagement. And engagement is something you really want to know. You’re paying money for these tools. So how many people engage with it?” said Sabrina Farmer, CTO of GitLab. 

Yonatan Arbel, developer advocate at software supply chain company JFrog, said traditional metrics like lines of code, velocity, and PR count don’t capture the nuance of AI assistance. Acceptance rate, however, is an intuitive and accessible proxy for how often AI-generated suggestions are deemed useful by developers.

AI coding assistants suggest lines of code, and acceptance rate shows, at a basic level, whether that code – and thus the tool – is being used. Put simply, acceptance rate emerged because it was a simple signal, and often the best one available.

“It’s easy to instrument, and in early adoption phases, any signal is better than none,” he said.
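To make that concrete: at its simplest, the metric is just accepted suggestions divided by suggestions shown. The sketch below uses a hypothetical event schema – every assistant exposes its own telemetry, and none of the field names here are from a real tool – but the arithmetic is essentially the whole metric.

```python
# Minimal sketch of acceptance-rate instrumentation.
# The event schema is hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class SuggestionEvent:
    developer: str
    tool: str        # e.g. "copilot", "cursor" (illustrative labels)
    accepted: bool   # did the developer take the suggestion?

def acceptance_rate(events: list[SuggestionEvent]) -> float:
    """Share of shown suggestions that were accepted."""
    if not events:
        return 0.0
    return sum(e.accepted for e in events) / len(events)

events = [
    SuggestionEvent("ana", "copilot", True),
    SuggestionEvent("ana", "copilot", False),
    SuggestionEvent("ben", "cursor", True),
]
print(f"acceptance rate: {acceptance_rate(events):.0%}")  # 67%
```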

With so many AI coding assistants now on the market, Farmer said that acceptance rate also provided a common yardstick for evaluating and comparing different offerings – the idea being that if code from one tool gets accepted significantly more often, it may be a better tool, or at least better for that specific use.

“You look for a measure that is comparable. Apples to apples as opposed to apples to oranges. And acceptance rate today is that measure,” she said. This doesn’t come without issues, however: the comparison rests on developers’ individual choices, which are inherently subjective and don’t always reflect the effectiveness of the tool.

Blind spots and drawbacks

The issues with acceptance rate as a metric stretch far beyond potential biases when comparing tools. Despite its fast rise and widespread use, many engineering leaders and developers have serious qualms about the metric.

First, there’s the issue of quality versus quantity. Accepting code doesn’t mean it’s good code, and generating more code faster isn’t a benefit in itself – it can create larger problems later on if the code contains errors or causes issues with deployments. Acceptance rate also can’t distinguish a suggestion rejected because it wasn’t helpful from one that helped a developer think through the code but was never applied directly, Arbel said.

“Without context around why something was accepted, how it affected the outcome, or how it changed the developer’s thought process, it risks being more about tracking usage than impact,” he said, pointing to how this becomes an issue as usage of the tools matures.

For example, Farmer said acceptance rate is only the entry point into your codebase; if you don’t monitor how that code evolves over time and what ends up in production, you’re not really getting a clear picture of what matters. Overall, usage doesn’t say much about impact. This is why some engineering leaders have called acceptance rate a “vanity metric” and approached it with caution, even while keeping a close eye on it.

With some engineering leaders tracking – and even setting mandates around – developer usage of AI (and sometimes acceptance rate specifically), emphasis on the metric also creates the risk that developers accept suggestions just to keep the numbers high.

“If emphasized too much, it could turn coding into a game of compliance: ‘accept more to show I’m productive’, which can erode trust,” said Arbel. “If developers feel pressured to accept more AI suggestions just to look efficient, they might stop exploring alternate solutions or optimizing for elegance, which will erode creativity. We need to remember that coding is still a craft, and doing that can atrophy the developer’s thinking in the long run.”

Acceptance rate takes a back seat

Acceptance rate may have caught on quickly, but that doesn’t mean it’s here to stay. Tacho said that because acceptance rate offers limited insight into the capability of these tools and “extremely limited insight into their value,” she strongly advises against using it as a key productivity signal.

As engineering leaders get a better picture of adoption and the use of AI coding tools becomes further solidified in developers’ workflows, measurement needs and preferences will shift.

“As these tools have matured, so have the metrics used to measure their impact. Acceptance rate shouldn’t be front and center anymore,” Tacho said. 

JFrog’s Arbel echoed the sentiment, comparing acceptance rate to step count in fitness tracking: helpful, but not the full picture. “I do believe that over time, it will lose its prominence,” he said.

A fuller picture

GitLab’s Farmer doesn’t think acceptance rate will go away entirely, but expects more numbers to sit alongside it. Already, engineering leaders are working out how to build a much fuller picture beyond acceptance rate.

This includes metrics specific to AI coding tools – such as how long AI-generated code stays in the codebase and whether it contributes to specific features – combined with traditional throughput metrics like on-time delivery, plus insights gathered through qualitative self-assessments.
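As an illustration of what such a longevity metric might look like, the sketch below computes the share of AI-authored lines still present after a 30-day window. It assumes lines can already be tagged as AI-generated (via tool telemetry or commit metadata) – in practice that attribution is the hard part – and the data shape here is purely hypothetical.

```python
# Hedged sketch of a code-retention metric: the share of AI-generated
# lines, old enough to judge, that still survive in the codebase.
from datetime import datetime, timedelta

def retention_rate(ai_lines: list[dict], now: datetime,
                   window: timedelta = timedelta(days=30)) -> float:
    """Fraction of AI-authored lines past the window that remain."""
    eligible = [l for l in ai_lines if now - l["accepted_at"] >= window]
    if not eligible:
        return 0.0
    return sum(l["still_present"] for l in eligible) / len(eligible)

now = datetime(2025, 7, 10)
lines = [
    {"accepted_at": datetime(2025, 5, 1), "still_present": True},
    {"accepted_at": datetime(2025, 5, 1), "still_present": False},
    {"accepted_at": datetime(2025, 7, 1), "still_present": True},  # too recent to judge
]
print(f"30-day retention: {retention_rate(lines, now):.0%}")  # 50%
```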

Tacho said previous software metrics absolutely still apply to code authored with AI assistance, “and they’re more important than ever before.”

“A well-balanced metrics framework will help you see mid- and longer-term impact of AI, which is critical to avoid tunnel vision,” she said. “On top of these metrics as a strong foundation, it’s good to have some AI-specific metrics, like time saved per developer. That makes it possible to see a more complete picture and to really understand the specifics around AI usage so you can improve the support and processes around it.”

Tacho recently co-authored an AI Measurement Framework to give organizations clear guidance on what to measure. She suggests measurements across three dimensions: utilization, impact, and cost. Recommended metrics include weekly active users, time saved per developer per week, developer satisfaction with AI tools, and total AI budget. Acceptance rate is not included in the framework.
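As a rough illustration of how those dimensions might be organized, the sketch below groups sample metrics under utilization, impact, and cost. The dimension names come from the framework as described here; the field names and values are invented placeholders, not DX’s actual schema.

```python
# Illustrative grouping of metrics along the framework's three
# dimensions. All field names and values are placeholders.
report = {
    "utilization": {
        "weekly_active_users": 142,
    },
    "impact": {
        "time_saved_per_dev_per_week_hours": 3.5,
        "developer_satisfaction_score": 4.1,  # e.g. a 1-5 survey scale
    },
    "cost": {
        "total_ai_budget_usd_per_month": 25_000,
    },
}
# Note: no slot for acceptance rate, mirroring its omission
# from the framework.
for dimension, metrics in report.items():
    print(dimension, metrics)
```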

Arbel, for his part, recommends tracking time to complete tasks, error rate or rework frequency post-acceptance, code review feedback (did reviewers notice more bugs, or better structure?), and task size versus time spent (are small changes taking less effort?).
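One of those – rework frequency post-acceptance – is straightforward to sketch: the share of accepted suggestions that get edited or reverted within a follow-up window. The record format below is hypothetical; a real pipeline would derive it from review and commit history.

```python
# Sketch of rework frequency post-acceptance, on hypothetical records.
def rework_rate(accepted: list[dict], window_days: int = 14) -> float:
    """Share of accepted suggestions reworked within the window."""
    if not accepted:
        return 0.0
    reworked = sum(
        1 for s in accepted
        if s["reworked_after_days"] is not None
        and s["reworked_after_days"] <= window_days
    )
    return reworked / len(accepted)

samples = [
    {"reworked_after_days": 3},     # fixed soon after merge
    {"reworked_after_days": None},  # never touched again
    {"reworked_after_days": 30},    # outside the window
]
print(f"rework rate within 14 days: {rework_rate(samples):.0%}")  # 33%
```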

The other important factor is not to lose sight of the bigger picture, particularly the business value and end goals you’re trying to achieve. For this reason, it’s vital to think about the team as a whole. 

“You want to actually improve the function of the team as opposed to the individual. I think sometimes when people say, ‘I’ve saved this many hours,’ like two hours of a developer’s time a day, that’s great. But you don’t know what that individual is going to choose to do at that time,” Farmer said. “It’s far more valuable to think about the team as a whole.”