
Create space to experiment with AI in your team

Without impacting immediate velocity
February 26, 2026



We’re far enough into the AI software era for it to be clear that all teams need to embrace AI and figure out how to adapt it into their workflows – as organizations, product teams, and individuals.

Understanding how best to use AI requires some space and experimentation, but how do you create that space without impacting your immediate velocity?

I’ve already spent hours on wrong paths, prompting my way to an inept and disappointing outcome. This wasn’t ideal, but the learning from that experience was vital in forming a much better understanding of using AI as part of my daily workflow as a software engineer.

Focus on experimental projects and prototypes

My gateway into using AI was a project last year that needed some experimental prototypes built to test some fairly chunky UI changes in our product. We agreed that the risk of building these prototypes into our main codebase was high, because users might absolutely hate the changes. Instead, we built fake versions of our app as a standalone website. Now we had a great opportunity to apply AI because:

  1. The risk was low: we were only building throwaway prototypes; the quality of the code was not the primary concern, and we were not shipping the changes to users.
  2. We could divide the work: we needed multiple prototypes, so we split them across a few engineers and purposefully used a few different tools. This allowed us to compare notes and test different approaches.

We divided up the prototypes amongst ourselves and agreed on different approaches that we would take:

  • Some people stuck to building mostly by hand, but with an AI to help augment their workflow with auto completion and other tools.
  • Others used an AI-first editor like Cursor and let AI lead the way from start to finish. One person even tried making no edits by hand at all to see what the experience was like.
  • Because of my comfort in the terminal, I experimented with CLI tools like Claude Code and Gemini CLI, letting the AI build my prototype while occasionally diving into the generated code to make tweaks myself. I also leant into planning mode to build a specification for the prototype before asking the AI to implement it.

At the end of this work we were able to get together and reflect on our experiences:

  1. What worked well? We generally found that the people who spent more time planning the prototype implementation with the AI were more successful. Additionally, showing the AI a few screenshots of the desired UI was impactful. This showed us that some upfront effort to build a specification file and an implementation plan was a net time-saver overall.
  2. What didn’t work well? We had a few instances where asking the AI to fix a bug was unsuccessful and it ended up trying multiple incorrect solutions. Often these bugs were particularly nuanced or required complex setup (small width screen, multiple user interactions). We realized for some scenarios we either needed to connect a browser-based MCP server such as Chrome DevTools MCP, or accept that the bugs were best solved manually.
  3. Which tools worked well? We asked different people to try different tools. Some used VSCode with an extension, others stuck to CLI tools, and some tried specific AI IDEs like Cursor. This was helpful for understanding which tool(s) we should invest in for the longer term. In the end, we found that this was a fairly subjective choice, and we didn’t feel one particular tool stood out.
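On the browser-based MCP point: wiring up something like the Chrome DevTools MCP server is typically a small config change rather than an engineering project. A minimal sketch, assuming your tool reads MCP servers from a JSON config file (the file name, location, and exact schema vary between tools, so check your tool’s documentation):

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["-y", "chrome-devtools-mcp@latest"]
    }
  }
}
```

With a setup along these lines, the agent can drive a real browser to reproduce the kind of nuanced, interaction-heavy bugs we struggled with, instead of guessing at fixes blind.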

Although this work didn’t give us experience using AI on our primary and most important codebases, the prototypes were complex enough that they gave people the confidence to think about how they could use AI in their mainstream work.

Pick an AI tool and give it to everyone

The next step is to make sure that your team has easy access to the tool(s) of your choice. This means that you provide them with subscriptions to AI tools – or an amount they can expense on tools – so everyone has access. 

I recommend having a chosen tool that each engineer is given a subscription to, because you gain so much more knowledge when everyone is having the same experience. Pick a tool, make it available, and invest in becoming expert users of that tool. 

We know that every model is different, and prompts that work for one might not work as effectively for another, so your shared repository of knowledge is much more impactful if it’s being applied to the same model by every engineer. 

If you’re not sure which model to choose, don’t be afraid to have a period of time where the team uses multiple. One week of prototyping is not going to be enough to land on a favoured model, and most tools let you easily switch the model that backs them.

This is a choice you should regularly check in on, too; when new models get released, you should evaluate them by having one engineer use the newest version, reading the consensus from the community, and considering whether a change is worth it for you. Cost is also a factor – I know folks who are still on older models because they are capable of the required tasks and cost considerably less than the cutting edge.

Have an “AI Hack week” with no expectations

One concern I and other colleagues had when being asked to experiment with and adopt this new technology was the short-term velocity impact. To give people confidence and space, we ran an “AI Hack week” with the following expectations:

  1. People should continue to work on their main projects, but there is no expectation to make significant progress on them this week.
  2. This was shared across the entire organization so everyone was on the same page. It also encouraged folks outside of engineering to experiment with AI. We saw a proliferation of custom dashboards for metric tracking, small command-line tools to automate common tasks, and plenty of data analysis backed by complex SQL queries generated by LLMs.
  3. This mandate of “use AI and don’t be afraid to experiment” allowed people to explore freely without the pressure of having to deliver.
  4. At the end of the week, people were asked to share their successes and failures. We had a round of lightning talks where people could present the most interesting takeaways, and from that we also wrote up a document listing the biggest learnings – both positive and negative – which we could take forward into our daily work.

Share knowledge, prompts, and skills

We wanted to make sure that engineers didn’t work on their AI setup in a silo. As you work with an AI more, on a particular codebase, your AGENTS.md file (or equivalent for your tooling) grows, your set of MCP servers expands, and you might even build custom agent skills for particular workflows. 

We felt very strongly that these should be contributed into a shared repository for all to improve and benefit from.

  1. Decide where to store these. We decided that the best place was within the codebase itself. In our case we created a top-level `agents` folder to store it all, but you can pick any location. The key is that it’s easy to access and easy to contribute to.
  2. Decide on the structure. After some discussion, we found that the top-level `AGENTS.md` file is quite personal – depending on what your work is, you likely change it fairly regularly. We didn’t want to add this file to version control, and instead let each engineer control their own. All AI tools let you reference other files from your main file. By storing a prompt in `agents/prompts/example.md`, a developer could reference it in their main `AGENTS.md` file. This lets us build a shared set of prompts without controlling each individual’s main file.
  3. Set up a space for questions. Our final step was to make a new chat room called “AI – no stupid questions!” where we shared our experiences and asked for help. This became a really valuable space, which we made sure felt safe for people to ask questions in. It allowed the folks on the team with more experience to help those who were just getting started, and encouraged people to get involved rather than making them feel self-conscious.
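Put together, the steps above produce a layout roughly like the following (folder and file names beyond `AGENTS.md` and `agents/prompts/example.md` are illustrative):

```
repo/
├── AGENTS.md              # personal, kept out of version control
└── agents/
    └── prompts/
        └── example.md     # shared prompt, committed for everyone
```

An engineer’s personal `AGENTS.md` then pulls in shared material with a plain-language reference, for example a line like “When refactoring components, follow the guidance in agents/prompts/example.md.” The shared files evolve through normal code review while each person keeps full control of their own top-level file.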

The best teams adapt

We ended our internal “hack week” with some real success stories that drove the team forward:

  1. Multiple commits to create and edit a shared repository of prompts relevant to our codebase.
  2. Improvements to our `package.json` scripts and common commands (such as test execution) to simplify the interface and make it easier for an AI Agent to run (we found it would regularly get the command slightly wrong and fail to execute tests).
  3. People have continued the mantra of sharing successes, and have several times used an AI to fix a bug almost entirely on its own. They’ve shared the prompts and conversations they used in those bug reports, enabling all of us to learn the approaches that work best.
  4. We’ve updated (both manually and with AI) many internal documents and `README.md` files in our codebase that were outdated and misleading AI.
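The script simplification in point 2 can be as small as giving the agent one obvious, argument-free entry point per task. A hypothetical `package.json` sketch (script names and flags are illustrative, not our actual setup):

```json
{
  "scripts": {
    "test": "npm run test:unit",
    "test:unit": "jest --runInBand",
    "test:e2e": "playwright test"
  }
}
```

With a stable `npm test` entry point, the agent only has to remember one short command instead of reconstructing a long invocation with flags each time – which is exactly where ours kept getting things slightly wrong.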

It is clear that software engineering as we know it is changing. Whilst the end state is unknown, it is fast becoming evident that ignoring what AI can offer is a risky strategy that will leave teams behind.


Each individual and team will have their own stance on the right amount of AI usage, but without the space to explore, you will never find the optimal balance to enhance your team’s productivity and the quality of your product.