Key takeaways:
- Execution is no longer the bottleneck – verification is: AI agents shift staff+ engineers’ value from writing code to validating system behavior.
- The spec becomes the product.
- Staff+ engineers move from influence to activation.
The role of the staff+ engineer is shifting. A few weeks ago, I was pulled into a project I knew almost nothing about. A different domain, systems I’d never touched, and people I had never worked with. The kind of situation where you normally spend your first month just reading docs and asking questions.
Instead, I threw a pile of our internal docs at an AI-coding agent. “Read all of this. Then go look at the actual code across these 20-something repos and tell me where the docs are accurate.” That was basically the prompt.
It worked! It read Confluence pages, traced the data flows into the repos, called out where the docs were accurate and where they had drifted, and helped me build a new data strategy. I took that output and drafted a steel-thread proposal.
Then I sent it over to the principal engineers who actually own those systems, and we spent a couple of hours poking holes in it together. After that, I fed this feedback into the agent and tightened things up.
The whole process – understanding the space, building a strategy, and getting alignment – took maybe 2–3 days. This would normally take a month, maybe more.
That experience is why I think we need to talk about what’s changing for staff+ engineers right now. Not in the abstract “someday AI will…” way. The Software Development Life Cycle (SDLC) is being refactored in real time – especially with the rise of co-working agents that can read codebases, run commands, open pull requests (PRs), and iterate.
The SDLC is no longer linear
Some teams have already figured this out and are running multiple agents in parallel on real work, shipping things that would’ve taken weeks.
However, many companies are still operating in the old paradigm: weeks in discovery, endless planning meetings, debates over approach, alignment – before a single line of code is written.
Execution is not the scarce resource
The new SDLC is agentic and parallel. Planning still matters – honestly, it might matter more – but the timeline compresses from weeks to hours. You describe intent at a high level. Agents decompose and explore multiple paths. You steer with targeted feedback, and verification becomes the primary human bottleneck. It’s asynchronous, parallel execution.
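A toy sketch of that shape – the agent and the verifier here are hypothetical stubs, not a real API; the point is the structure (parallel fan-out, then a human-owned verification gate):

```python
# Toy fan-out/verify loop. run_agent and verify are invented stand-ins,
# not a real agent API: the shape is what matters.
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    # Stand-in for a coding agent producing a change for one task.
    return f"diff-for:{task}"

def verify(result: str) -> bool:
    # The human bottleneck: tests, review, and guardrails live here.
    return result.startswith("diff-for:")

tasks = ["add retry", "fix flaky test", "update docs"]
with ThreadPoolExecutor() as pool:
    # Agents work the tasks in parallel; you collect the results.
    results = list(pool.map(run_agent, tasks))

# Nothing ships until it passes verification.
approved = [r for r in results if verify(r)]
```

In practice the "tasks" come from decomposing your spec, and `verify` is the expensive part – which is exactly why verification, not execution, becomes the bottleneck.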
There’s an anecdote from inside Anthropic that blew up online because it demonstrated this clearly. An engineer wrote a product spec, pointed Claude at an Asana board, and went home for the weekend. Claude broke the spec into tickets, spun up agents for each one, and they just… started building independently. When one got stuck, it ran git blame, found the right person, and pinged them on Slack. By Monday, the feature was done.
Now – that’s Anthropic. The company that builds the Large Language Models (LLMs). Your average enterprise isn’t doing this yet, but the direction is unmistakable.
The spec becomes the whole game
“I shipped code I didn’t read” is starting to sound reasonable. For strong engineers, anyway. Only if – and this is a big if – the scaffolding around that code is genuinely solid. Tests that would actually catch something, guardrails with teeth, and observability that tells you what happened and not just that something happened.
When the sheer volume of generated code outpaces what any person can read line by line, trying to read it all just makes you the bottleneck.
So, when agents can grind through implementation over a weekend, what separates a good engineer from a great one? Increasingly, it’s one thing: how good is your spec?
We staff+ engineers have spent years learning to hold messy, contradictory systems in our heads and bring clarity to ambiguity. That skill suddenly has a different outlet: the spec for an AI agent. A good spec is more like a constraint system. You’re spelling out what must not change, what has to stay true even when things break, and what’s off-limits for privacy or compliance reasons.
You’re also being painfully explicit about what “done” means once it’s running in production – the monitoring, the alerting, the rollback plan, the Service-Level Objectives (SLOs).
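A sketch of what such a constraint-style spec excerpt might look like – the service, flag name, and numbers here are all invented for illustration:

```yaml
# Hypothetical spec excerpt for an agent task (all values invented)
task: "Add retry with backoff to the payments client"
invariants:
  - "No duplicate charges: retries must be idempotent"
  - "PII never leaves the payments VPC"          # compliance constraint
must_not_change:
  - "Public API of PaymentsClient"
  - "Existing retry semantics for refunds"
done_means:
  slo: "p99 latency under 300ms"
  alerting: "Page on retry storm: >5% of requests retried for 10m"
  rollback: "Feature flag payments_retry_v2 can disable it instantly"
```

Note how little of this is about the implementation: it is almost entirely constraints and a production-grade definition of done.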
You can finally move the things that used to be stuck
You spot a system that needs work. Maybe it’s a non-functional fix, like hardening a flaky retry mechanism. Maybe it’s a small feature that would unblock three other teams. You understand the problem. You might even know the fix, but you don’t own that codebase.
So you write up the story, drop it on that team’s backlog, and… wait. It sits there behind product priorities for weeks, sometimes months. Meanwhile, the architectural improvement that would make the whole platform better just rots in a queue.
Agentic coding changes that equation. By agentic coding, I mean using AI-coding agents that can take a scoped engineering task and generate a working implementation that you review and refine.
Instead of lobbying for prioritization, you point a coding agent at the task, give it the context and constraints, and let it build the implementation. You review the output yourself – catch the edge cases, tighten up the error handling – and then walk over to the owning team with a fully working PR instead of a backlog ticket.
The conversation shifts from “can you prioritize this sometime next quarter?” to “here’s the solution, does this look right to you?”
That’s a fundamentally different dynamic. The owning team still reviews, approves, and owns the code going forward, but the activation energy drops dramatically. You’re not asking them to find capacity – you’re asking them to evaluate a finished piece of work.
Teams are way more willing to review a working PR than to schedule an undefined story. The prioritization conversation gets easier because you’ve already absorbed the cost of implementation.
Staff+ engineers have always been expected to improve systems across org boundaries, but the main tool was influence – writing proposals, making the case, and hoping it lands on a roadmap.
Now you can pair that influence with a working prototype. You still need the relationship and the trust – nobody wants a drive-by PR from someone who doesn’t understand the system’s history. However, when you combine deep context with agentic execution, you can move things that used to be stuck.
You become the person who decides what to trust
Here’s what I think the bigger picture looks like. We used to talk about headcount as the constraint: how many engineers can we throw at this problem? That framing is already starting to feel outdated.
What matters more now is whether your org has the infrastructure to let agents work safely at scale. That comes down to things staff+ engineers tend to own:
- How clean are the interfaces?
- How mature is the platform?
- How good are the automated checks?
- Can you actually catch a subtle regression before it reaches users?
The rhythm of the work changes too. Less time spent in long alignment meetings, more time in tight loops – write a spec, let agents explore in parallel, pull the threads together, verify the output. Rinse, repeat. The staff+ engineer’s job is making that loop safe.
Your value stops being about how fast you can implement something and starts being about whether you can look at what got implemented and say “yeah, I trust this.” How do you get there? You build the test strategy. You design the eval harness. You set up the guardrails and the release safety plan.
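An eval harness can start very small. Here is a toy sketch – the checks and path allow-list are invented, and a real harness would parse diffs properly – of the "fail closed before a human even looks" idea:

```python
# Minimal eval-harness sketch (all checks hypothetical): run named
# checks over an agent-produced change, and fail closed on any failure.
from typing import Callable

Check = Callable[[str], bool]

def no_todo_markers(diff: str) -> bool:
    # Agents love leaving TODOs; refuse to consider unfinished work.
    return "TODO" not in diff

def touches_only_allowed_paths(diff: str) -> bool:
    # Toy allow-list over touched file paths.
    return all(line.split("/")[0] in {"src", "tests"}
               for line in diff.splitlines() if line)

CHECKS: dict[str, Check] = {
    "no_todo_markers": no_todo_markers,
    "allowed_paths": touches_only_allowed_paths,
}

def evaluate(diff: str) -> dict[str, bool]:
    return {name: check(diff) for name, check in CHECKS.items()}

def approve(diff: str) -> bool:
    # Fail closed: every check must pass before human review begins.
    return all(evaluate(diff).values())
```

The real versions of these checks are tests, linters, policy rules, and observability queries – but the shape is the same: cheap automated gates first, expensive human judgment last.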

Where does this leave staff+ engineers?
None of what I’ve described means engineering is getting simpler or that we’ll need fewer engineers. My experience has been the opposite – there’s more to think about, not less. The thinking just happens in different places.
Less time writing implementations line by line. More time wrestling with what “correct” really means for a given system, writing constraints that don’t fall apart under load, and building verification that works when you’ve got more code flowing through your pipelines than any one person can review.
The staff+ engineers who get comfortable in that space first are going to have an outsized impact – changing what their teams believe they can take on.