Nobody knows what programming will look like in two years

We've been here before.
February 18, 2026

“Is it time to leave the industry?” Over the past two years, I’ve heard this question repeatedly while mentoring developers and speaking at universities and conferences. It’s something I never encountered in my previous 30 years in tech.

More senior developers tell me they “feel like a dinosaur”. Some have admitted to me that they just don’t find augmented coding interesting. Others are grappling with the ethics: generative AI is built on large-scale theft, has many other ethical issues at its core, and carries a heavy environmental cost.

A lot of the people I’m talking to feel isolated, assuming they’re the only ones struggling with these concerns, or with difficult job searches, or suddenly feeling uncomfortable in their current roles. The truth is, it is hard out there for everyone, and whilst generative AI isn’t the whole reason, it is a major part of it.

Professionally I’m a programmer, a writer and a composer. What connects all of those is the joy of making things. I love the intellectual challenge of taking something that only exists in my head and making it real. I’m thrilled when someone tells me a piece of music I wrote helped them through a difficult time, that an article or book I’ve written helped them grasp an idea, or that something I’ve built with code made their job a little less frustrating.

I feel as if that joy of making things is being taken away. At the AI for the Rest of Us conference in London last summer, I had conversations with people who told me they were grieving. Yet some software engineers and thinkers I deeply respect – Kent Beck, Gene Kim, and Adrian Cockcroft, for example – seem to have embraced the new tools with genuine enthusiasm.

From punch cards to AI: a personal history of technology transitions

This isn’t the first time such a shift has happened in our industry.

My mother-in-law, Lesley Finne, was employed as a programmer for ICT (International Computers and Tabulators, which later became ICL) on punch-card systems in Kenya in the 1960s.

“I never really worked directly on the punch cards,” she told me. “We had a whole lot of dedicated staff in the punch room to do that, but I knew how to do it – slowly – so that if there were just one or two cards to do I could do it myself. I wrote the code on specially printed forms and then handed that to the punch room. I worked on the ICT 1500 mainframe, mainly in FAS (fifteen hundred assembly system). I did do a bit of COBOL as well but didn’t like it very much – too long-winded!”

Time on the mainframe was a major constraint. All the regular weekly or monthly jobs, like payroll or stock control, would take priority. “If any of us needed to run a test programme we had to book time when there was a gap,” she said.

It was interesting, highly skilled work, but by the mid-1980s computer terminals with screens and keyboards had made punch cards obsolete. Engineers were able to type their COBOL (or FORTRAN, or whatever) commands directly into the computer’s memory.

Petter, her husband, brought an early home computer back from the States in the early 1980s with great excitement. “He expected me to show him how to work it, but I didn’t have a clue. It was totally different to anything I’d ever worked on, so he just sat down and taught himself.”

That shift is where my generation came in. As a teenager in the 1980s, I learnt BASIC and assembly programming on home computers (the BBC Micro and the Commodore 64). Although I’ve never seen a punch card outside of a museum, I understood how to handle register allocation and memory. By the time I started coding professionally, C had made those skills redundant, and Java removed the need for manual memory management. What remained valuable were the principles I’d learnt – my understanding of what a computer actually did. Ubiquitous access to hardware, however, meant the new constraint was people who knew how to write code, and I did.

With this latest shift, we all need to work out which of our current skills still have economic value if we want to stay in the field. However, as Kent Beck, creator of Extreme Programming and pioneer of Test-Driven Development, observed on stage at YOW! in Sydney in December, no-one knows yet. “Even getting to ‘it depends’ would be progress,” he told attendees, “because we don’t know what it depends on yet, and we all need to explore this space together in order to find out.”

A rough shape is starting to emerge, however.

We’re all explorers now

Beck’s 3x model, which I think of as a variation on Simon Wardley’s Pioneers, Settlers, and Town Planners, suggests product development moves through three phases: 

  1. From exploration – the risky search for a viable return for a viable investment;
  2. To expansion – the elimination of bottlenecks to growth;
  3. To extraction – where profitable growth continues.

One of the lessons of 3x is that people have a natural home in one of the phases. As a discipline, programming has been in extraction for some time. The discomfort many of us feel comes down to having been picked up from extraction and dumped in exploration.

“Programming hasn’t really advanced since Smalltalk-80,” Beck said. “The workflows, tools and languages that we use are all small tweaks to a foundation that was laid down in the late 1970s and early 1980s. So the act of programming has lived in extract for 45 years and we’re used to that,” he said.

Then the genie of generative AI coding assistants escaped from the bottle, “and all of those certainties have been thrown out of the window,” he said.

Exploration doesn’t look very much like engineering from the books. “It’s about cutting corners to get answers, throwing away what you’ve done, starting over, being creative, sniffing out opportunities,” Beck said.

The genie is good at exploration. So good, in fact, that Atsign engineer Chris Swan wrote that “getting from idea to code is cheaper than it’s ever been.” But good ideas are a new constraint. “The set of people who have ideas for apps worth building is not everybody. It might be a larger set than the people currently making apps, but it’s finite, and probably not much larger,” Swan suggests.

However, while AI tools can tackle very complex coding problems, they are less effective at the expansion and extraction phases. They make too many mistakes, and those mistakes can be subtle and hard to detect.

At its core, this is a validation problem. The challenge, however, is that validation can’t be done by other humans reading AI-generated code any more than it can be done by all your tests turning green.

I expect to see a lot of work go into building automated validation tools that work well. Formal methods might be a way to achieve this, but I think it is more likely we’ll build on ideas around observability and testing in production that have become mainstream over the last decade.
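As a minimal sketch of what that could look like (my own illustration of a testing-in-production idea, not something Beck or anyone quoted here prescribes), you can run a trusted implementation and an AI-generated candidate side by side, always serve the trusted answer, and push any disagreement into your logs or metrics for a human to investigate. The `legacy_price_total` and `ai_generated_price_total` names in the usage comment are hypothetical.

```python
import logging
import time

logger = logging.getLogger("shadow_validation")

def shadow_compare(trusted_fn, candidate_fn, *args, **kwargs):
    """Run a trusted implementation and an AI-generated candidate side by side.

    The caller always gets the trusted result; any disagreement, or a crash in
    the candidate, is logged so your observability tooling can surface it.
    """
    trusted_result = trusted_fn(*args, **kwargs)
    try:
        start = time.perf_counter()
        candidate_result = candidate_fn(*args, **kwargs)
        elapsed = time.perf_counter() - start
        if candidate_result != trusted_result:
            logger.warning(
                "candidate disagreed with trusted implementation: %r != %r (args=%r)",
                candidate_result, trusted_result, args,
            )
        else:
            logger.info("candidate agreed with trusted implementation in %.4fs", elapsed)
    except Exception:
        logger.exception("candidate raised while the trusted implementation succeeded")
    return trusted_result

# Hypothetical usage: keep serving the battle-tested code path while the
# AI-generated rewrite earns trust against real production traffic.
# total = shadow_compare(legacy_price_total, ai_generated_price_total, basket)
```

It is the same parallel-run idea behind refactoring tools like GitHub’s Scientist library, applied to machine-written code; formal methods would attack the problem from the other end, by proving properties instead of observing behaviour.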

Can GenAI help us increase optionality?

For Beck, another interesting question is whether augmented development gives us new tools to increase optionality as we add features to our products.

At the beginning of a project, when the code base is small, you have a huge range of options, because adding a new feature is straightforward. The codebase still feels flexible.

Each time you add a feature you reduce the number of options. “If we look at this over time, we start out with lots of options and no features, and as soon as we build a feature we remove some of our optionality… What I’ve been observing is the natural tendency of people writing software to jump to the next feature, burning more and more of the optionality until eventually we get down to no more options.”

In Tidy First?, Beck explores software design as a way of adding options over time. Tools can also increase optionality. For example, automated testing and deployment mean you can go from idea to production in hours or days instead of weeks or months. Smaller, incremental changes are easier to test and roll back if needed, and when experiments fail (as they often do), you haven’t invested months of work.

Beck believes generative AI could be another tool to increase optionality. If writing the code is almost instant, he suggests, we can take time between features to refactor and make improvements. “I can think about everything that might increase the optionality and add it in before I build the next feature,” he said.

What actually matters now?

It is of course possible that the economics of generative AI never work out. But I’m not convinced that’s a good bet, and it will never be easier or cheaper to learn how to use these tools effectively than it is now, while the domain is still small and the companies are all running at a loss. So assuming augmented coding is here to stay, what certainties do we already have?

A lot of things haven’t changed. Knowledge of programming language syntax, and typing speed, matter less than they did, but they weren’t that important anyway.

Soft skills like communication, critical thinking, documentation and networking remain as important as they ever were.

Operational excellence remains a durable differentiator, so observability expertise and excellent SREs are likely to remain in high demand.

If you are at the exploration phase, AI significantly shortens the time junior developers need to become effective. Contrary to the popular LinkedIn narrative, there has never been a better time to hire juniors.

However, many of the skills that become more relevant as you move from exploration to expansion matter as much as ever. They will likely increase in value if fewer of the people who grow up with augmented coding acquire them:

  • Knowing what a computer actually does. If you don’t understand memory, I/O, concurrency, or why some operations are cheap and others aren’t, you can’t tell when the AI has generated something that will fall over in production.
  • Reading code critically. Manual verification isn’t enough, but you do need to spot when generated code is subtly wrong. Does it handle edge cases? Will it perform badly at scale? Does it introduce security holes? Is it using a deprecated library? (There is a contrived sketch of this kind of bug after this list.)
  • Testing and validation. Knowing what tests to write, what cases matter, and how to verify behaviour. AI can generate tests, but it can’t tell you if they’re the right tests or if they actually cover the risky parts.
  • Domain knowledge. Understanding the actual problem you’re solving. AI doesn’t know your business rules, your users’ workflows, or why certain constraints exist. It will confidently generate code that does the wrong thing correctly.
  • System design and architecture. AI is decent at implementing a function you describe. It’s terrible at figuring out how the pieces should fit together, or what the boundaries between services should be. It also isn’t good at designing something that won’t become unmaintainable in six months. The latter may be less important if code becomes ephemeral, but I think we’re quite some way off that.
  • Debugging and diagnosis. AI can certainly be useful as a debugging tool, but I suspect only in tandem with other techniques and knowledge. You’ll still need to know how to read stack traces, use a debugger, check logs, and reason about what went wrong.
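To make “subtly wrong” concrete, here is a contrived sketch of my own (not taken from the article, or from any particular assistant). The batching helper below looks plausible and passes a happy-path test, but it silently drops the final partial batch; the test worth writing is the one that pins down the invariant rather than one convenient case.

```python
# A plausible-looking generated helper: split records into batches of at most
# `size` items. It is correct whenever len(records) is an exact multiple of
# `size`, and silently drops the leftover items when it isn't.
def batch(records, size):
    return [records[i:i + size]
            for i in range(0, (len(records) // size) * size, size)]

def test_batch_happy_path():
    # 9 items in batches of 3: everything lines up, so this passes.
    assert batch(list(range(9)), 3) == [[0, 1, 2], [3, 4, 5], [6, 7, 8]]

def test_batch_keeps_every_record():
    # The invariant that actually matters: no record may be lost.
    records = list(range(10))
    batches = batch(records, size=3)
    assert sum(len(b) for b in batches) == len(records)  # fails: the tenth record is dropped
```

The fix is a small change to the range bound (`range(0, len(records), size)`), which is exactly the sort of thing that critical reading, or the right test, catches while a green happy-path suite does not.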

Finally, being able to identify good ideas that can be solved with software is likely to be at a premium.

Final thoughts

If you are feeling anxious, keep in mind that nobody has this all figured out yet. 

Kent Beck doesn’t know what programming will look like in two years, and he’s been thinking about this longer than most of us. But to me it seems pure fantasy to imagine that generative AI will make everyone a programmer. As with other tools – Visual Basic, say, or Microsoft Access – the number of people who can write prototypes in code effectively may well increase, but probably not dramatically. Like SREs, programmers will stay in high demand, even if the tools we use are changing.

The developers who made it through the shift from punch cards to terminals, or from assembly to high-level languages, weren’t the ones who pivoted fastest. They were the ones who took time to figure out what actually mattered.

If you’re uncomfortable right now, or grieving, or just deeply unsure whether any of this is worth it, you’re not falling behind. You’re paying attention. And that careful, sceptical attention (the refusal to just accept the new thing because it’s new) might be the skill that matters most of all.