London

June 2–3, 2026

New York

September 15–16, 2026

Berlin

November 9–10, 2026

Code review isn’t getting any easier

AI coding tools and agents are pushing organizations to produce more code than ever, but with mostly the same number of humans to review. Something’s got to give.

James Garrett and Dalia Havens

Date & time

17:00

Register for the panel discussion


More code, more problems. While AI coding tools and agents have gotten very good at producing code, teams are sticking to the same review patterns, even as those reviews get bigger and noisier. That’s one hell of a bottleneck.

LeadDev’s own research recently found that 57% of organizations still rely on “human-in-the-loop” review for every line of AI-generated code, and 29% are spending more time on code review than before.

Can AI itself help here? And if so, how do you work with LLMs in a way tailored to your organization’s specific processes and risk appetite?

Join this expert panel along with our partner CodeRabbit, where we’ll discuss: 

  • How code review is evolving in the age of AI
  • Where to set guardrails when there is too much code to review
  • Tactics for maintaining code quality, security, and organizational standards as AI code volumes increase

Panelist:

James Garrett

Tilt
Staff Engineer

Moderator:

Dalia Havens

Luciq
SVP of Engineering