AI coding assistants have significantly lowered the barrier to entry for creating new code. But this is only half of the picture.
To truly get a return on investment (ROI) from AI in software development, the rest of the software development lifecycle (SDLC) must be accelerated, too.
“AI came for code generation first because it was the easiest problem to solve,” says Charity Majors, CTO at the observability platform Honeycomb. However, there are many other DevOps-related problems, more challenging and meaningful to solve, that can benefit too. “We have a lot of toil in engineering, and AI can help with that,” she says.
DevOps, the cultural movement integrating development and operations, once dominated engineering discourse but now feels less front-and-center. It encompasses the tasks involved in moving code from development to production, spanning software lifecycle activities like code reviews, testing, and continuous integration and delivery (CI/CD). To date, these operations that interface with production environments have been an underserved area for AI.
The big takeaway, says Majors, is that it’s time to focus on production. “The combination of AI and production – there’s the real potential.” But what does this mean in practice? The techniques include implementing quicker feedback loops on testing production code, reinforcing code ownership, and not letting fundamental coding best practices go by the wayside.
AI and waning code understandability
As AI-generated code increases, so do risks to long-term maintainability. Recent industry studies indicate that AI-generated code leads to code bloat, security risks, and increased time and effort to debug. Majors is unsurprised. Having AI write code is the easiest part, she says – the real work is maintaining that code once it’s in production, and making it understandable and discoverable for other engineers.
The way most platform engineering teams operate, says Majors, is that when something breaks, the alarm goes off, and they track down the person with intimate knowledge of the code. But with AI having written the code, what happens when no one is an expert and the origin is unclear? Decisive action is under threat if developers know less and less about the code they create.
In observability practices, tracing down errors (like outages, bugs, or vulnerabilities) to their root causes was already tricky. Now this issue is compounded because unless you have a mature observability practice and very fine-grained tools, you can’t actually dissect and understand code of unknown origin in production.
According to Majors, the rush of opaque AI-generated code exacerbates this need to constantly diagnose errors. “So much of what we’re seeing with AI stuff is our sins coming home to roost.” In other words, the easy wins of AI today may result in harder-to-maintain code tomorrow.
Tight feedback loops aid code ownership
The reality of instant AI code generation is also affecting code ownership. Today, code authorship is becoming less important, and code ownership is becoming more critical. “You need to own your own code,” says Majors. “This is more important than ever.”
The best way to foster that ownership is by keeping feedback loops tight. The quicker developers can test their code in production settings, the more productive they stay. “You’ll never know more about the code than while you’re writing it,” she adds.
But AI’s acceleration of engineering processes is bleeding into areas like code maintenance and deployment, which are now happening on the fly. “There’s less forethought going on, so things are happening in production more,” Majors says. “This is the tide – everything is moving in this direction. But without preparing, you’ll be in firefighting mode constantly.”
To keep pace with this shift, teams need to rethink their release workflows. Canary deployments, feature flags, and rollbacks let teams release iteratively and validate AI-driven changes in smaller, safer increments.
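As a concrete illustration of that incremental-release idea, here is a minimal, vendor-neutral sketch of percentage-based canary bucketing behind a feature flag. The function name, flag name, and user ID are all hypothetical; real teams would typically reach for a feature-flag service rather than hand-rolling this.

```python
import hashlib

def in_canary(user_id: str, flag: str, rollout_pct: int) -> bool:
    """Deterministically bucket a user into a percentage rollout.

    Hashing (flag + user_id) gives each user a stable bucket per flag,
    so the same user always sees the same variant while the flag ramps
    from a small percentage toward 100%.
    """
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_pct

# Ramp an AI-assisted change to 5% of traffic first; rolling back is
# just setting the percentage back to 0 if error rates spike.
if in_canary("user-42", "new-checkout-flow", rollout_pct=5):
    pass  # serve the new code path
else:
    pass  # serve the stable code path
```

Because the bucketing is deterministic, ramping the percentage up only ever adds users to the canary; no one flips back and forth between variants mid-rollout.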
Keeping skills sharp
Majors sees AI coding assistants as having the potential to upskill junior developers in areas beyond simply writing code. This includes using AI for system health checks, code analysis, refactoring, rewriting APIs, or JavaScript migrations. By spending time with AI, junior engineers can gather more context and craft better prompts over time, she says.
Yet, she quickly acknowledges the downsides. “When you overly rely on AI tools, when you supervise rather than doing, your own expertise decays rather rapidly… AI shouldn’t make you weaker. Workflows should make you better at your job over the long haul.”
As such, developers shouldn’t forgo fundamental coding best practices just because they have this newfound agility. Majors implies that new workflows (like interactions with advanced auto-complete or AI agents, and the tighter feedback loops) should reinforce continuous learning for the long haul.
“We’re starting to develop workflows that make you better at your job the longer you interact with AI.” She adds that at Honeycomb, they’re investigating ways to not just get the AI to do it all for you, but to coach you, like a checklist.
Proving the value of AI investments
For AI to truly facilitate the entire SDLC, you must monitor its impact. But, how do you gauge the value of new AI investments from a real-world cost perspective? What indicators are valuable to track? According to Majors, subjective responses, like gauging an engineer’s trust in AI tools, are sometimes the best signals you’re going to get.
Another critical value to prioritize is system resilience. Talking from her history with site reliability in distributed systems, Majors acknowledges that all systems eventually fail. “It’s not a question of if, it’s when and how.” This is why setting and maintaining service-level objectives (SLOs) is so important. Reading these signs, along with regularly stress-testing systems, will encourage more resilient postures.
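To make the SLO idea concrete, here is a small sketch of an error-budget calculation, the mechanism behind most SLO practices. The function and figures are illustrative, not tied to any particular monitoring tool: a 99.9% SLO leaves a 0.1% error budget, and tracking how much of that budget a window has consumed is what turns "all systems eventually fail" into an actionable signal.

```python
def error_budget_remaining(slo_target: float,
                           total_requests: int,
                           failed_requests: int) -> float:
    """Fraction of the error budget still unspent for a window.

    Returns 1.0 when no budget is spent, 0.0 when it is exhausted,
    and a negative value when the SLO has been overspent.
    """
    allowed_failures = (1.0 - slo_target) * total_requests
    if allowed_failures == 0:
        return 0.0 if failed_requests else 1.0
    return 1.0 - (failed_requests / allowed_failures)

# A 99.9% SLO over 1,000,000 requests allows 1,000 failures;
# 250 observed failures leave 75% of the budget for the window.
remaining = error_budget_remaining(0.999, 1_000_000, 250)
```

Teams commonly gate risky activity, such as ramping an AI-generated change to more traffic, on whether the budget for the current window still has headroom.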
In the twilight of DevOps
Majors acknowledges the operational constraints most enterprises face today. “The majority of software engineers don’t have access to a world-class site reliability engineer.” It’s largely accepted that engineers own their code in production, and few organizations are spinning up new developer and operations teams simultaneously.
“I feel like we’re in the twilight of the DevOps movement,” she says – not because it’s no longer relevant, but because its core cultural debates have been settled. “It does not mean DevOps has been achieved everywhere by any means.”
“This is why I love the platform engineering concept,” she adds. While DevOps may no longer dominate the conversation, Majors sees new energy around platform engineering and AI-assisted operations as teams adapt to faster, more autonomous software delivery. “This means a sea change in how we think of the whole career and role of software engineers.” In other words, platform engineering shifts the responsibility to engineers to own production systems like never before, rather than strictly separating operational maintenance into DevOps or SRE silos.
As responsibilities shift and engineers are expected to own their production code, platform engineering could be a helpful method to support success amid AI-driven code development, since these internal platforms can help scale operational best practices across the SDLC.
The future: AIOps for the lifecycle
As opposed to creating more bloat with AI-generated code, Majors believes we should lean on it more for AIOps – using AI tools or agents to augment operational work like debugging, observability, resiliency testing, infrastructure troubleshooting, and more.
While AI tooling is still maturing in the operational side of software development, the potential is clear: real ROI won’t come from generating more code faster – it will come from streamlining the lifecycle around it.