The challenges of introducing product experimentation

Moving your org from building first to testing first
September 09, 2020

Building an experimental culture to reach your goals faster

It was a few weeks before Christmas when the management team at Library Inc organised a company-wide meeting for an announcement. After two years and millions of dollars invested, they had decided to stop the development of Library Plus, the new product that was meant to be the future of the business. Whether anyone realised it or not, Library Plus had been one big experiment. The most concerning issue was that nobody was clear on why it had failed and what they had learned from it.

Unfortunately, this happens all too often. In fact, 95% of all new products fail, overwhelmingly because they never reach product-market fit: there is simply not enough demand for the product. Very often, companies like Library Inc take a “build first” approach, spending many months building a product before showing it to potential users. This is a very risky and expensive bet to place, with too many variables at play.

We were initially hired by Library Inc as product managers to work on Library Plus, but when they made the announcement we decided to focus on developing a new product opportunity. To avoid the costly mistakes of Library Plus, we set out to use the scientific method: testing hypotheses in small increments, while fostering a learning mindset. We thought that the most challenging part would be developing the new product and finding a market for it, but we didn’t anticipate the cultural challenges we would face. 

This is our story of trying to build an experimental culture in Library Inc, and what we learned along the way. 

Models for experiments

After watching the failure of the Library Plus product, we realised our first challenge would be identifying the next most important thing to learn, and then working out how to share that learning with others.

We started by defining what we needed to achieve. For our product to be considered a success, we needed to generate a minimum of £5m in annual revenue within three years. Using a traction model, we broke this target down into smaller individual goals.

Our first goal was to get 50 customers to pay for our product. Using a customer factory model, we identified customer acquisition as the biggest risk. We then used this model to help us identify the experiments we should run to reduce this risk and start acquiring customers.
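To make this concrete, here is a minimal sketch in Python of the kind of arithmetic these two models capture. The price point and conversion rates are hypothetical placeholders, not Library Inc's actual figures.

```python
# A minimal sketch of a traction model and customer factory model.
# All numbers here (price, conversion rates) are hypothetical placeholders.

ANNUAL_REVENUE_TARGET = 5_000_000   # £5m in annual revenue within three years
PRICE_PER_CUSTOMER = 2_000          # assumed annual contract value (£)

# Traction model: how many paying customers does the revenue target imply?
customers_needed = ANNUAL_REVENUE_TARGET / PRICE_PER_CUSTOMER
print(f"Customers needed at target: {customers_needed:.0f}")

# Customer factory: work backwards through the funnel to see what the
# first goal (50 paying customers) demands of the acquisition stage.
conversion = {
    "visitor_to_signup": 0.05,      # landing-page conversion (assumed)
    "signup_to_paying": 0.10,       # trial-to-paid conversion (assumed)
}

first_goal_customers = 50
signups_needed = first_goal_customers / conversion["signup_to_paying"]
visitors_needed = signups_needed / conversion["visitor_to_signup"]

print(f"Sign-ups needed for first goal: {signups_needed:.0f}")
print(f"Visitors needed for first goal: {visitors_needed:.0f}")
```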

Throughout this process, we made our work visible. We covered the walls of the office with these models and the data we collected. They showed the journey we’d been on, including the failed experiments and the wrong turns that helped us learn. Our transparent approach was aimed at changing the culture and encouraging learning.

What we learned 

Developing a simple statistical model is an excellent way to establish what poses the highest risk or reward to your product. You can then develop highly-targeted experiments that are focused on specific areas.

After running an experiment, you can return to the model and update the data based on the learning you gained. This is a great way to share the results. It helps people understand how a small change can propagate through a system, creating a larger impact. This approach ensures you are always clear as to why a product change succeeds or fails.
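As a rough illustration of that propagation, the sketch below (with made-up numbers) shows how a one-point lift in a single funnel stage flows through to the headline revenue projection.

```python
# A rough sketch of how updating one number in the model after an experiment
# propagates through to the headline figure. Figures are illustrative only.

def projected_revenue(visitors, visit_to_signup, signup_to_paying, price):
    """Multiply the funnel stages together to get projected annual revenue."""
    return visitors * visit_to_signup * signup_to_paying * price

baseline = projected_revenue(100_000, 0.05, 0.10, 2_000)

# Suppose a landing-page experiment lifts visitor-to-signup from 5% to 6%.
after_experiment = projected_revenue(100_000, 0.06, 0.10, 2_000)

uplift = after_experiment / baseline - 1
print(f"Baseline projection: £{baseline:,.0f}")
print(f"After experiment:    £{after_experiment:,.0f} ({uplift:.0%} uplift)")
# A one-point improvement at the top of the funnel becomes a 20% uplift in revenue.
```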

The challenge with data

Once we had modelled our product, we needed some data to form a baseline expectation for the performance of our experiments. Although we were creating a new product, we wanted to use existing customer acquisition data from other products in the portfolio. If our experiments could outperform these products, it would be a strong positive signal.

We assumed that it would be easy for us to find this data. However, it turned out to be more complicated than we thought. Library Inc was collecting data but not consistently or reliably. This resulted in a lot of unplanned work to remove duplicates and fill in gaps before we had clean data to work from.

We were then able to launch and get some results. This is when we hit another unexpected roadblock. Even though our experiments were outperforming the targets we had set, the results weren’t received positively by all our stakeholders. In the absence of data, they had anchored their expectations too high. This left them in the uncomfortable position of having to either continue with their gut-feeling predictions, or revise their targets based on the new data we had found.  
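For context on what “outperforming” meant in practice, the sketch below shows one simple way to check that an experiment's acquisition rate beats a baseline by more than chance, using a one-sided two-proportion z-test. The counts are hypothetical, not our real results.

```python
# A minimal sketch of checking whether an experiment genuinely outperforms
# the baseline rather than just looking better by chance.
# The counts below are hypothetical.
from math import erf, sqrt

def two_proportion_z_test(conv_base, n_base, conv_exp, n_exp):
    """One-sided z-test: is the experiment's conversion rate higher than the baseline's?"""
    p_base, p_exp = conv_base / n_base, conv_exp / n_exp
    pooled = (conv_base + conv_exp) / (n_base + n_exp)
    se = sqrt(pooled * (1 - pooled) * (1 / n_base + 1 / n_exp))
    z = (p_exp - p_base) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))   # P(Z >= z)
    return z, p_value

# Baseline product: 120 sign-ups from 4,000 visitors; experiment: 75 from 1,500.
z, p = two_proportion_z_test(120, 4000, 75, 1500)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```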

What we learned 

Validating the quality of the data you have access to before you begin allows you to better prioritise the work you need to do before starting your experiments. Depending on your data maturity, you may need to invest in tasks such as cleaning, back-filling, or indeed collecting completely new data sets.
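As an example of what that clean-up can look like, here is a small sketch using pandas, assuming it is available; the column names and values are hypothetical.

```python
# A sketch of the kind of clean-up work needed before historical
# acquisition data can be used. Column names and values are hypothetical.
import pandas as pd

leads = pd.DataFrame({
    "email": ["a@example.com", "a@example.com", "b@example.com", "c@example.com"],
    "signed_up": ["2019-03-01", "2019-03-01", None, "2019-05-12"],
    "source": ["adwords", "adwords", "organic", None],
})

# Remove duplicate records for the same prospect.
leads = leads.drop_duplicates(subset="email", keep="first")

# Make the date column a real datetime, and label gaps rather than guessing.
leads["signed_up"] = pd.to_datetime(leads["signed_up"], errors="coerce")
leads["source"] = leads["source"].fillna("unknown")

print(leads)
```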

Bear in mind that just because you have data, it doesn’t mean people will use it to make decisions. If the data has historically been unreliable, it will take time for people to trust it. Additionally, if people are not used to making data-driven decisions, you will need to demonstrate the benefits before data-driven decision-making becomes mainstream practice.

Your competition might not be what you think

As we progressed with our experimentation and gradually refined our product proposition, we discovered something new about our competition that took us by surprise. We expected our competition to be external but soon realised that it could also come from within. This made it difficult to get the budget, resources, and approvals we needed to continue. 

Up to this point, we had run a series of lean experiments using techniques such as AdWords campaigns, landing pages, and Wizard of Oz tests. These are small, cheap, and disposable. However, it was challenging to develop fast feedback loops. Library Inc had an established place in the market, which led them to adopt a risk-averse position by default. This translated into a reluctance to talk to customers and lengthy approval processes for launching every experiment.

In April, our team had to apply for more funding. We were taking the lowest-risk approach to building products, using the smallest amount of money as efficiently as possible. Ironically, this made it harder to get investment. We were competing against products that were already established and had a more predictable return on investment. In addition, the budgeting process was not designed for the small, incremental funding we were asking for.

What we learned

Understand how your organisation feels about launching experiments and start the approval process early to reduce your experiment cycle time. You can reduce the risk to your organisation by getting creative with brand names to protect its reputation. Once experimentation becomes a more established practice, experiments will become faster and easier to run.

Know your competition, and that includes looking inside your organisation. Understand the budget process and how to present your investment case. Asking for more than you need can sometimes be the better way to start: you can then treat this as your seed investment and use it to incrementally fund your experiments as you progress through your discovery.

The risk for stakeholders

Many months had passed, and we’d achieved a lot. We had discovered and validated a new market opportunity, developed a product proposition which was gaining traction with customers, and built a team with a robust culture of experimentation. We felt we were on the path to success, but then it all came to an abrupt end. Our funding dried up, and the business decided that there was a bigger prize available by funding an exciting new AI product.

What we learned 

If you are operating within an organisation that doesn’t yet have a data-driven culture at its core, it is crucial to have the support of a senior stakeholder who understands the value and cost of lean experimentation. 

We believed that the results of our experiments would speak for themselves. However, if your stakeholders are not used to analysing data and seeing incremental improvements, they can find it hard to have confidence in this approach. It can also be risky for their career to lay out data so transparently because it creates the possibility of failing in public.

If you want to take an experimental approach to product development, you need to have the right stakeholder support for the process, not just the results. Depending on your role, you could take different approaches here. Most of us will need to build relationships to support our experimentation aspirations, but you may also be in the position to protect your team and help establish the process.

Conclusion

Every organisation is unique, but hopefully, from reading our experience, you can avoid some of the challenges we faced.

There is great value in building products through experimentation. The continual optimisation, systematic testing, and incremental validation can help you avoid costly mistakes and enable you to reach your goals faster.

However, as our experience demonstrates, without the right organisational culture, you will constantly be facing challenges from within. If you want to make product experimentation a core capability, you must establish a learning culture. Only then will you be able to focus purely on the work that has the most significant impact.
