
OpenAI says “there are easily 1,000x engineers now”

Why not a million?
March 19, 2026



Key takeaways:

  • AI agents like GPT-5.3-Codex have evolved from “autocomplete” to autonomous task owners, moving the engineer’s role from writing manual code to high-level judgment, delegation, and “system orchestration.”
  • OpenAI leaders suggest agents enable “1,000x engineers” by automating the “inner loop” of coding, testing, and debugging, with Codex even generating over 90% of its own application code.
  • As code generation becomes cheap, the focus shifts to the “outer loop” – validating user value, scaling security reviews, and maintaining human oversight in hybrid human-agent teams.

But if coding agents like Codex can write 90% of their own code, what is left for engineers to do?

OpenAI’s coding agent, Codex, has made a huge leap forward in capability, transforming not just the way engineers work at OpenAI, but the very role they play.

Organizations have been chasing the mythical ‘10x engineer’ for decades, but thanks to agents such as Codex, Venkat Venkataramani, VP of application infrastructure at OpenAI, believes “there are easily 1,000x engineers now. I don’t even know if that’s the limit. There may be 1,000,000x engineers coming.”

He sees the concept of a ‘10x engineer’ shifting from an individual who codes 10 times faster than their peers, to a “system orchestrator” who uses AI agents to achieve 10 times the impact.

He sees this happening through the effective direction and channeling of a vast number of software engineering agents. While many of the frameworks and components needed to do this efficiently have yet to be developed, these are ultimately software tools that help build the required scaffolding.

Now, what can be learned from how the coding agent has changed the way software is built within OpenAI? And what are the implications for any organization outsourcing coding tasks to an AI agent? We spoke to three members of the OpenAI team to find out.

What is GPT-5.3-Codex?

Codex works by understanding the codebase and executing tasks like writing, testing, and debugging code, while interacting with tools and systems like a developer. 

In February 2026, OpenAI shipped GPT-5.3-Codex, days after launching the Codex desktop app. This is a coding‑trained GPT model that plans tasks, takes action, and reviews its own progress. The model gathers repo context, plans, executes real actions like editing files and running tests in a sandbox, and iterates until the task is done. 
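The gather-plan-execute-review cycle described above can be sketched as a simple loop. This is purely illustrative: the function names, `Task` structure, and stubbed steps are assumptions, not the actual Codex API.

```python
# Minimal sketch of an agentic plan-act-review loop.
# All names and behaviors here are hypothetical stand-ins,
# not OpenAI's real implementation.

from dataclasses import dataclass, field


@dataclass
class Task:
    goal: str
    steps: list[str] = field(default_factory=list)
    done: bool = False


def plan(task: Task) -> list[str]:
    # A real agent would ask the model to break the goal into steps
    # using repository context; here we hard-code a plausible plan.
    return ["read relevant files", "edit code", "run tests"]


def execute(step: str) -> str:
    # A real agent would edit files or run tests in a sandbox.
    return f"completed: {step}"


def review(results: list[str]) -> bool:
    # A real agent would inspect test output before declaring success.
    return all(r.startswith("completed") for r in results)


def run_agent(task: Task, max_iterations: int = 3) -> Task:
    # Iterate until the self-review passes or the budget is exhausted.
    for _ in range(max_iterations):
        task.steps = plan(task)
        results = [execute(s) for s in task.steps]
        if review(results):
            task.done = True
            break
    return task


task = run_agent(Task(goal="fix failing unit test"))
print(task.done)  # True in this stubbed sketch
```

The key design point is that the model reviews its own output and iterates, rather than emitting a single completion, which is what separates scoped task ownership from autocomplete.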

The real innovation is orchestration – tool use, state management, and context control – shifting it from glorified autocomplete to scoped task ownership, with humans reviewing the result rather than micromanaging through tabbed completion.

“The big change is that, instead of working on one thing at a time, developers are now working on multiple things simultaneously. Much of the work now focuses on judgment, delegation, and parallelization, rather than being extremely single-track as before,” said Sulman Choudhry, head of engineering for ChatGPT. 

Codex builds itself 

When OpenAI released GPT-5.3-Codex, the technical documentation claimed: “GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations.”

An OpenAI spokesperson says the team estimates that Codex generated more than 90% of the code in its own app.

“The engineer’s role is very much changing from writing very artisanal code, to just having really good judgment on the quality of the output,” Choudhry explained. “Engineers are still in the loop, but are operating at a much higher level of abstraction than before.”

It’s helpful to think about software development as operating in two loops: the inner loop, where engineers write, test, and ship code, and the outer loop, where the team validates whether that work actually delivers value to users.

With tools like Codex, the inner loop has become so fast that it’s no longer the bottleneck, Choudhry explained.

“Codex greatly boosts engineers’ productivity by making code generation extremely cheap. This lowers the cost of experimenting, allowing engineers to test ideas directly rather than only reasoning abstractly. As a result, both the speed and quality of decision-making and engineering outcomes are improving,” Choudhry said. 

The real challenge now is the outer loop: making sure all the efficiency gains in coding translate into meaningful impact, solving the right problems, and driving outcomes that matter, he added.

“[Engineers] can go after a really complex problem and solve it. They might need two other people to help them do this, but they can own it rather than just being a member of a big team that only plays a small role,” Andy Glover, a member of the technical staff at OpenAI, added. 

Hybrid theory

All three engineers were aligned that the goal of these tools isn’t to replace humans, but to have both working together in a hybrid approach.

“Every team at OpenAI — it’s not a human software team. It’s a hybrid. Humans and agents working together. So now how many humans to agents do you need to be effective? We haven’t found that sweet spot yet,” Venkataramani admitted.

It’s not just the engineer’s role that is evolving. While some in the industry argue that the skillset has to change, it’s an engineer’s attitude toward AI that will truly set them apart.

OpenAI is looking for engineers who are AI-forward or AI-native – people who lean into AI as a core part of how they work. These engineers don’t just use AI occasionally; they integrate it deeply into their workflow, using it creatively to tackle complex problems and move faster.

“This is because the role of an engineer is evolving from being an engineer to acting more like a builder,” Choudhry explained.

AI tools like Codex are becoming more important, but the fundamentals still matter: engineers need strong coding skills, an understanding of algorithms, and systems thinking.

The new bottlenecks

“You have to fundamentally embrace the fact that, prior to generative AI, humans were a bottleneck in writing code. We spent most of our time reading code,” Glover said. “Now code can go into production immediately, so you have to start looking at code review and security review, and how we roll this out safely. We still don’t want things to break.”

Choudhry echoed Glover’s sentiments, noting that at OpenAI, engineers face their own bottlenecks in scaling CI/CD, code reviews, and tests. AI accelerates code creation, but validation processes still lag behind.

Choudhry explained that AI can help reduce bottlenecks by identifying which parts of the code a change actually affects, allowing the team to run only relevant tests and review less code, rather than checking the entire codebase.
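The selection technique described here is often called test-impact analysis: map each source module to the tests that exercise it, then run only the tests reachable from a change. A minimal sketch, with a hypothetical dependency map and file names:

```python
# Hedged sketch of test-impact selection. The mapping from source
# files to tests is illustrative; in practice it would be derived
# from coverage data or import graphs, possibly with AI assistance.

DEPENDENCY_MAP: dict[str, list[str]] = {
    "billing.py": ["test_billing.py", "test_invoices.py"],
    "auth.py": ["test_auth.py"],
    "ui/render.py": ["test_render.py"],
}


def affected_tests(changed_files: list[str]) -> set[str]:
    """Return the set of tests exercising any of the changed files."""
    tests: set[str] = set()
    for path in changed_files:
        tests.update(DEPENDENCY_MAP.get(path, []))
    return tests


print(sorted(affected_tests(["billing.py"])))
# ['test_billing.py', 'test_invoices.py']
```

A change to `billing.py` triggers two tests instead of the full suite, which is the efficiency gain the quote describes.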

At OpenAI, engineers experiment independently, share successful solutions with the team, and formalize the best into official tools.

“One piece of advice is to use AI and build tooling. The cost of building tooling is so low that you can build specific, bespoke solutions to solve many of your problems,” Choudhry said.
