
3 data-backed ideas to deter AI code quality problems

Learn what 250 million+ lines of code reveal about AI's impact on code quality and how to address it.

Speaker: Bill Harding

October 30, 2025

Drawing on recent research across 250 million+ lines of code: three risks of AI-authored code, each paired with an idea or policy to mitigate it.

As AI continues to reduce the proportion of code guided by human experience, what is lost? This talk examines industry-scale code-line data (including repos from Google, Microsoft, Meta, and 1,000+ others) to enumerate three main areas where AI-authored code tends to incubate tech debt.

While the benefits of AI accumulate in the daylight of well-funded industry reports, its maintainability challenges fester in dark corners. To illuminate how AI-authored code differs from its human counterpart, we take five years of “code operation” trends and combine them with newly released AI usage APIs.

The story emerging from the data suggests that historic “best practices” (e.g., DRY code) have loosened, and the magnitude of this change appears unprecedented. Effective policies can mitigate the downside risk, and engineering leaders can sharpen their intuition for building smart policies by leveraging recently available data.
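As a hypothetical illustration of the kind of DRY erosion described above (the function names and logic here are invented for the example, not drawn from the research), AI assistants often emit near-duplicate blocks where a single shared helper would do:

```python
# Duplicated style: two functions repeating the same clamping logic.
# A bug fix or rule change must now be made in two places.
def normalize_cpu_percent(value):
    if value < 0:
        return 0
    if value > 100:
        return 100
    return value

def normalize_memory_percent(value):
    if value < 0:
        return 0
    if value > 100:
        return 100
    return value

# DRY style: one helper captures the rule, so a change lands in one place.
def clamp(value, low=0, high=100):
    return max(low, min(high, value))
```

Both styles behave identically today; the cost of the duplicated version only surfaces later, when the two copies silently drift apart.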

Key takeaways

  • The scale of code quality changes is measurable and unprecedented
  • There is learnable consistency in how AI can poison code quality
  • Despite widely reported increases in code velocity, the incentives of the companies building the LLMs remain undocumented and likely imperfect
  • Higher velocity carries new costs: it is better to understand them than to spike adoption without regard for long-term detriment