
What to do when there’s too much code to review

AI coding tools and agents are pushing organizations to produce more code than ever, but with mostly the same number of humans to review. Something’s got to give.

Moderated by Dalia Havens

Speakers: Pete Hodgson, Hasit Mistry, James Garrett

May 12, 2026

On demand video


More code, more problems. While AI coding tools and agents have gotten very good at producing code, teams are sticking to the same review patterns, even as those reviews get bigger and noisier. That’s one hell of a bottleneck.

LeadDev’s own research recently found that 57% of organizations still rely on “human-in-the-loop” review for every line of AI-generated code, and 29% are spending more time on code review than before.

Can AI itself help here? And if so, how do you work with LLMs in a way tailored to your organization’s specific processes and risk appetite?

Watch this expert panel, hosted with our partner CodeRabbit, where we discuss:

  • How code review is evolving in the age of AI
  • Where to set guardrails when there is too much code to review
  • Tactics for maintaining code quality, security, and organizational standards as AI code volumes increase

Promoted Partner Content