
Date & time
17:00
Register for the panel discussion
Code output is scaling faster than delivery capacity, yet software teams are keeping the same deployment and review patterns and are then surprised when release sizes creep up, reviews get noisier, and coordination costs climb.
In our recently published State of AI-Driven Software Releases report, produced in association with Harness, we found a clear divide: organizations clinging to their existing processes, guardrails, and tools to release AI-assisted and AI-generated code, versus those who recognize the need to change and experiment before new bottlenecks emerge.
Join this online panel session to work out which camp you sit in, and the steps you can take to modernize your release processes to match AI speed. You’ll learn:
- How to rethink code review in a world of more and larger software releases
- Why 57% still use “human-in-the-loop” review for every line of AI-generated code
- The new generation of guardrails, and why only 49% have them in place

