
Date & time: 17:00
Register for the panel discussion
The results are in for our first-ever Engineering Performance Report, produced in partnership with Honeycomb, and we want to talk about them. We asked which metrics you use to measure system performance, how you quantify the impact of AI and LLMs, and where observability practices need to change to keep up.
We found that while it’s relatively easy to secure budget for AI initiatives aimed at improving performance, the majority of organizations still don’t monitor the performance of those AI models. We also found that tried-and-tested practices like code refactoring and automated testing are still seen as the most impactful ways to move the needle.
Join us as our expert panel discusses these findings and more, including:
- How observability practices need to adapt to keep pace
- Why measuring system performance is getting more complex
- The impact AI and LLMs are having on system performance