Are you paying the AI competence penalty?

Mitigating bias against engineers who use AI coding assistants
August 7, 2025

Devs who use AI are rated 9% less competent on average than their peers, a new study suggests. And the competence penalty is harsher for women.

AI use in coding fields is near-ubiquitous: 98% of respondents to LeadDev’s AI Impact Report 2025 say they have either adopted or are exploring the use of AI tools, and 59% of them say AI has increased their productivity.

But that AI adoption comes at a cost among your peers, according to a new study published in Harvard Business Review. The experiment gave more than 1,000 engineers identical snippets of Python code to evaluate; participants were told the code was written either by another engineer alone, or by another engineer with AI assistance.

While the code didn’t differ, people’s perception of its quality did, depending on that context. Code purportedly written with the help of AI was rated 9% worse on average. When told the gender of the coder, engineers rated colleagues who used AI as 6% less competent if they were male, and 13% less competent if they were female. The penalty was harshest among male engineers who didn’t use AI themselves, who rated female engineers who did as 26% less competent.

The researchers have dubbed this the AI competence penalty: using tools designed to help is regularly viewed by colleagues as a shortcut and evidence of a lack of competence, especially when the engineer using them is a woman.

The results aren’t “unreasonable” to Tamay Besiroglu, co-founder and CEO of Mechanize. “If an engineer uses AI, there’s a greater risk they don’t fully understand the code and might miss basic issues,” he says. “Models aren’t great yet, especially for large codebases or features where precision matters.”

Besiroglu points out that he would rate a colleague more highly if they produced an elegant proof for a maths problem themselves, rather than with Gemini 2.5 Pro or another AI model – “since almost anyone can do that,” he says.

Inherent biases

“The extra competence penalty for women who used AI is striking,” says Amy Diehl, chief information officer at Wilson College, and author of Glass Walls. “It is another case of women being perceived as ‘never quite right’. If women use AI, they are perceived to have lower competence. If they choose not to use AI, they are considered to not be competent.”

One major issue with the research, identified by David Krueger, assistant professor in AI at the University of Montreal, is that the team behind it – who did not respond to LeadDev’s interview request – see AI as an unvarnished good, and are thus surprised by the results.

“It’s interesting that they seem to assume adoption is good and not to consider legitimate reservations engineers may have had about using AI,” says Krueger. 

There could be a number of different reasons why an engineer might not want to use AI. “Maybe the tool just isn’t actually that good; engineers may be worried about low-probability high-impact mistakes. They may be concerned that their skills will atrophy. They may anticipate that automation may threaten their livelihood or be pushed to a point where meaningful human oversight is lost,” says Krueger.

Avoiding the AI competence penalty

Fixing the issue requires awareness of the problem and deliberate thinking from managers.

The team behind the research put forward three proposals for those in charge to consider: 

  1. Audit teams to understand where the competence penalty might hit hardest, then actively encourage them to adopt tools they might otherwise not try for fear of being criticised.
  2. Improve AI adoption across your organisation. Those most likely to look down on people who used AI were those who weren’t using it themselves. Knowing the tech, and having influencers within teams who use it, builds psychological safety for others.
  3. Remove the label of ‘AI-assisted’. The code the researchers asked participants to evaluate didn’t change; perceptions did.

“To mitigate the competence penalty for women, use the ‘flip it to test it’ approach,” says Diehl. “Ask yourself, if a man provided the same work, how would it affect perceptions of his competence? For example, if John used AI to write his code, would you perceive his skills to be inadequate? And be sure to train everyone on the concept of the competence penalty and how to mitigate it.”