The fact is that they have not been careless; they have intuitively made smart, business-driven tradeoffs.
Allow me to explain this statement. Figuring out your testing strategy is not a simple matter. There are excellent resources out there that provide guidelines, but unfortunately, these are usually presented as the approach everybody should take. In my experience – assisting 45+ products of all sizes over the last five years – there is no silver bullet or playbook. It all depends on your company's or product's current phase.
'As your product evolves, so should your testing strategy.'
Each company or product goes through different stages. Each stage has different goals and challenges, and the tools and techniques to consider vary accordingly.
A product starts in the Explore stage (sometimes called Validation) where a team tests different ideas as fast and economically as possible.
The market determines which idea is a hit, and then the product is transitioned to the Expand stage.
This second step opens a short window where the team needs to seize the opportunity and capture the market while establishing the product or service value and quality.
If successful in the previous stage, the product moves to the Extract phase. Time moves at a regular pace, and the game is economies of scale. Cost reductions, optimizations, deduplication, and other previously dismissed goals take the team's focus.
Tip: If you work in a large company with many products, focus on your own product. If you're a product owner and one dev in a 10,000-employee company, you still function as a startup.
Now, take a minute and reflect. What phase, or stage, is your product at?
Let's divide the testing practices and tools into two groups:
- Focused on business and user
- Focused on performance, security, legal, etc.
In this article, we will focus on preserving functionality and features (business/user concerns) across time and code changes. We’ll go over each technique, understanding when it should be applied, and then outline useful tips, challenges you may face, and a strategy for execution.
Manual regression and exploratory testing
In this scenario, you'll have an actual human being using the actual product, exercising different use cases under different conditions.
• Leverage the dev tools in your browser to simulate network conditions and CPU throttling.
• Use real devices if possible.
• Use virtual machines with different browser versions.
Running a manual regression with every code change is expensive, and it does not scale.
• Explore. At this point, different proofs of concept are being created weekly, focused on getting market feedback. Manual testing by developers, business people, and other users is a handy tool to combine with automated tests covering the essential flows and features of the product.
• Expand. Manual testing becomes expensive as products grow and more changes are introduced. At this stage, automation should help cover more substantial ground. Instead of having scripted, parameterized manual test cases, the tester explores and researches the application.
• Extract. This is the same as Expand. You should also consider property and model-based testing to scale some of the exploratory aspects in an automated fashion. Even then, I would still keep a percentage of exploratory manual testing.
Automated end-to-end (E2E) in non-prod
Validate the system: the app should work as expected when we stitch all layers and parts together. To accomplish this, use a tool that simulates a user's behavior, i.e., clicking and tapping on different parts of the product.
- Cover all essential flows first and think about your product's roadmap or marketing page. If you’re Amazon, the first thing you want to test is for a user to choose and pay for a product. For a smartwatch you’d look at feature specs: recording heartbeats, recording exercise reps, viewing who is calling, telling the time, etc.
- Simulate the production environment as closely as possible:
  • Leverage containers
  • If feasible, provision a prod-like database instance with production-like data
  • No mocking whatsoever
  • Leverage analytics and a defined browser/device matrix to determine the platforms to run your tests across
  • If testing mobile, prefer real devices over simulators
E2E tests are expensive to run. Feedback time can be high, as it takes a long time to execute across all use cases, browser versions, operating systems, devices, resolutions, network conditions, CPU profiles, etc. We can address this by running multiple machines, containers, or VMs in parallel, but then we trade execution time for infrastructure costs. To mitigate this you could try:
• Breaking your tests into suites so that you can run the most critical tests first and get feedback within 10 mins of committing the change. If that passes, the CI should trigger secondary test suites.
• Run your first test suite simulating your median user (browser/OS/resolution/network/CPU). Subsequent runs can cover permutations.
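The suite-splitting tip above can be sketched as a simple partition of specs into a critical-first suite and a secondary suite the CI only triggers afterward; the spec names and tags here are invented for illustration:

```typescript
// Hypothetical sketch: split E2E specs so critical ones give feedback first.
// The spec names and tags are invented for illustration.
type Spec = { name: string; tag: "critical" | "secondary" };

const specs: Spec[] = [
  { name: "search-and-filter", tag: "secondary" },
  { name: "checkout", tag: "critical" },
  { name: "profile-settings", tag: "secondary" },
  { name: "login", tag: "critical" },
];

// Run the critical suite first; only trigger the rest if it passes.
function splitSuites(all: Spec[]): { first: Spec[]; second: Spec[] } {
  return {
    first: all.filter((s) => s.tag === "critical"),
    second: all.filter((s) => s.tag !== "critical"),
  };
}

const { first, second } = splitSuites(specs);
console.log(first.map((s) => s.name)); // critical specs run first
```

In a real pipeline, the same idea is usually expressed as separate CI jobs or test-runner tags rather than an in-process split.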
• Option 1. Cypress + Cypress Testing Library • Option 2. TestCafe + TestCafe Testing Library
• Explore. Cover critical flows across critical platforms/conditions. Do not cover edge cases or variations of specific components.
• Expand. Cover at least the critical flows across critical platforms/conditions based on defined support matrix and analytics. Do not cover edge cases or variations of specific components.
• Extract. Cover at least critical flows across critical platforms/conditions based on defined support matrix and analytics. Do not cover edge cases or variations of specific components.
Note: "Critical" can be defined using the 80:20 rule: pick the 20% of use cases representing 80% of your user base.
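The 80:20 rule can be applied mechanically if you have usage analytics: sort flows by traffic and keep picking until you cover the threshold. A minimal sketch, with invented flow names and session counts:

```typescript
// Hypothetical sketch: pick the smallest set of flows covering ~80% of sessions.
// The flow names and session counts are invented for illustration.
const usage: Record<string, number> = {
  "search product": 5000,
  "checkout": 3500,
  "track order": 1000,
  "edit profile": 400,
  "export invoices": 100,
};

function criticalFlows(counts: Record<string, number>, threshold = 0.8): string[] {
  const total = Object.values(counts).reduce((a, b) => a + b, 0);
  const sorted = Object.entries(counts).sort((a, b) => b[1] - a[1]);
  const picked: string[] = [];
  let covered = 0;
  for (const [flow, n] of sorted) {
    if (covered / total >= threshold) break; // already at 80% coverage
    picked.push(flow);
    covered += n;
  }
  return picked;
}

console.log(criticalFlows(usage)); // → ["search product", "checkout"]
```

Here two of the five flows cover 85% of sessions, so they form the critical suite that runs on every change.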
Automated end-to-end (E2E) in prod
Automated end to end in production uses the same principle as end-to-end in non-prod but applies it to the prod environment. While we try to simulate the prod environment as much as we can, there is one element we can't simulate: our users – hundreds, thousands, millions – using our app all at the same time.
• The challenges here are the same as the costs of E2E non-prod tests.
• Changes might need to be made to the backend layer.
Imagine you are Amazon: you'll want to simulate a user selecting and paying for a product. If we run our E2E tests in production, what happens with our transactions? Do we use a fake product? Do we use a fake payment method? Do we need an offline process that runs afterward and cleans up our environment so we don't impact analytics? These considerations will force us to introduce changes in underlying systems. While I don't like changing the implementation because of a test, I see a high ROI in this case.
• Option 1. Cypress + Cypress Testing Library
• Option 2. TestCafe + TestCafe Testing Library
• Explore. Cover only critical flows across critical platforms/conditions. No edge cases or variations.
• Expand. Cover only critical flows across critical platforms/conditions. No edge cases or variations.
• Extract. Cover only critical flows across critical platforms/conditions. No edge cases or variations.
Note: Once again, "critical" can be defined using the 80:20 rule: pick the 20% of use cases representing 80% of your user base.
View integration testing
Here, you’re validating a full page or group of elements representing valuable, releasable, feature work.
• Test your components by simulating the user. Avoid leveraging implementation details such as the component's API – for example, updating state by calling the component instance's setState or invoking props like onChange directly.
• Do not mock your HTTP client's responses and don't mock window.fetch. Instead, simulate the server.
• Do not waste time on code reviews asking team members to remove tests if you suspect duplication or overlap of scope between unit, integration, and E2E tests. Somebody already invested the time, and the scope lines are always blurry and a matter of interpretation. Move forward.
• Like unit tests, the question is: what is the scope of these tests? Should we represent all possible scenarios? Should we include all permutations of our components' variations? Are we duplicating unit tests or end-to-end validations?
• Mocked HTTP responses can get out of sync with the actual API response.
There are two strategies that I would consider:
Option 1. Leverage the unit test's toolbelt and add something to simulate our API servers.
• Jest (test-runner)
• Testing Library adapter (ReactTestingLibrary, SvelteTestingLibrary, VueTestingLibrary, etc.)
• MSW if you are only concerned about testing.
• API Virtualization (Hoverfly, Mountebank) if you also would like to reuse this mocked server for the development environment as well.
Option 2. Leverage the E2E test's toolbelt and add something to simulate our API servers, with Cypress as our user simulation library.
• API Virtualization (Hoverfly, Mountebank) if you would also like to reuse this mocked server for the development environment as well. These tools also have a record/replay option so you can keep the responses in sync.
• This option has the advantage that it can reuse the same tests for both View Integration and end-to-end testing by turning API simulation off/on.
• Explore. At this stage, ideas are being released as fast as possible and as such, I would not worry about view integration testing. I would focus on E2E tests instead. Additionally, if you go with option two above, then you don't need to worry – you can reuse the same E2E test with the mocked server.
• Expand and Extract. Focus on covering variations of a group of components or a page which are not covered in E2E.
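The "simulate the server" idea can be sketched with Node's built-in http module standing in for tools like MSW or an API virtualization layer; the /api/user endpoint and payload are invented for illustration:

```typescript
import * as http from "node:http";

// Pure request handler: the fake API's behavior lives here so it is easy
// to reason about. The endpoint and payload are invented for illustration.
function handleRequest(url: string): { status: number; body: string } {
  if (url === "/api/user") {
    return { status: 200, body: JSON.stringify({ id: 1, name: "Ada" }) };
  }
  return { status: 404, body: "" };
}

// Minimal fake API server: stands in for the real backend so view
// integration tests exercise the real HTTP client without mocking fetch.
function startFakeApi(port: number): http.Server {
  return http
    .createServer((req, res) => {
      const { status, body } = handleRequest(req.url ?? "");
      res.writeHead(status, { "Content-Type": "application/json" });
      res.end(body);
    })
    .listen(port);
}

// A test would point the app's API base URL at http://localhost:<port>,
// render the view, and assert on what the user sees.
```

Dedicated tools add record/replay and request matching on top of this basic shape, which is what keeps mocked responses from drifting out of sync with the real API.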
Unit testing
In this context, you're validating that an isolated part works as expected.
Test your components by simulating the user. Avoid leveraging implementation details such as the component's API – for example, updating state by calling the component instance's setState or invoking props like onChange directly.
The biggest challenge here is focus and scope – what to test and what not to. You could say that all elements deserve a unit test and be correct. The question is more about the return on investment (ROI). Can we create other tests that will prove the same validation? It is an opinionated and ever-changing topic.
Jest (test-runner) + Testing Library + Library adapter (ReactTestingLibrary, SvelteTestingLibrary, VueTestingLibrary, etc.)
• Explore. At this stage, we are releasing many ideas as fast as we can. Focus on E2E tests first. Write unit tests for reusable components, reusable logic, or to cover edge cases.
• Expand and Extract. Apply unit testing to reusable components and logic, and to cover edge cases. Beyond this baseline, make a team decision based on your testing strategy: keep the baseline and let integration and end-to-end tests take care of other concerns, or go all-in.
Unit, integration and E2E boundaries and balance
The automated test's boundaries and balance are a team's decision. In my personal experience, each approach has its tradeoffs.
Having ten thousand unit tests on a six-month project might give a false sense of security; in reality, the application as a whole might still be fragile. The lack of robust view integration and E2E tests is a significant warning.
On the other side of the spectrum, running 300 E2E tests will lengthen the feedback loop. I've seen teams getting feedback after a four-hour test suite execution, and teams running their regression overnight because it was going to take eight hours. This defeats the purpose: by the time the suite fails, so many changes have gone in after the offending one that we need to spend time reverting everything and hoping for the best.
We need to have a balance across all these different layers. Each team will need to define, progress, reflect, and improve. Here is an illustration of the examples provided above.
While the following two methods are automated as well, I've separated them into their own section as they do not validate business features. Instead, they help us avoid defects that can cause our application to break.
Static analysis
You are statically analyzing your code to find problems quickly.
There are too many options available in the configuration. Instead, leverage the recommended "out of the box" presets, and override rules based on your needs.
- ESLint + IDE plugin
• Explore, Expand, and Extract. I suggest disregarding what stage your product is in, and leverage static analysis tools. The setup for these tools takes a few minutes, which means the return on investment is high.
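As a sketch, an .eslintrc.json that leans on the recommended preset and overrides a single rule (the override shown is just an example):

```json
{
  "extends": ["eslint:recommended"],
  "rules": {
    "no-console": "warn"
  }
}
```

Framework-specific presets (e.g. for React or Vue) can be appended to the extends array the same way.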
Static typing
Static typing has its learning curve, and it does consume some time when applied thoroughly.
- TypeScript (TS)
- Explore, Expand, and Extract. The ROI depends on multiple variables. If your team is versed in TS, go for it. If your organization believes the extra effort pays off, go for it. It is a very opinionated topic, and I take the team's opinion over industry trends any day of the week.
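A small example of the kind of defect static typing catches at compile time; the Order shape and formatter are invented for illustration:

```typescript
// A typed API response: misspelling a field or passing the wrong shape
// fails at compile time instead of at runtime in front of a user.
// The Order shape and formatter are invented for illustration.
interface Order {
  id: number;
  total: number;   // cents
  currency: string;
}

function formatTotal(order: Order): string {
  return `${(order.total / 100).toFixed(2)} ${order.currency}`;
}

console.log(formatTotal({ id: 1, total: 1999, currency: "EUR" })); // "19.99 EUR"

// formatTotal({ id: 1, totl: 1999, currency: "EUR" });
// ^ compile-time error: 'totl' does not exist in type 'Order'
```

The commented-out call is the payoff: the typo never reaches a test run, let alone production.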
The extra mile
The above tools and tactics apply to the majority of scenarios. I've included a few below that have more specific use cases.
• If you are working with 3rd party APIs, consider API Contract Testing.
• If you deploy the UI on a Server, consider load testing.
• If you would like to automate and scale a part of the exploratory testing, consider Property-Based Testing and Model-Based Testing.
• If you are running on mobile devices, consider battery and memory testing.
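Property-based testing can be sketched without a library by generating random inputs and checking an invariant; a real setup would use a tool such as fast-check. The normalizer and property below are invented for illustration:

```typescript
// Hypothetical property-based sketch: instead of hand-picking examples,
// generate random inputs and check an invariant holds for all of them.
function normalizeWhitespace(s: string): string {
  return s.trim().replace(/\s+/g, " ");
}

// Random strings drawn from a small alphabet that includes whitespace.
function randomString(rng: () => number, maxLen = 20): string {
  const chars = "ab \t\n";
  let out = "";
  const len = Math.floor(rng() * maxLen);
  for (let i = 0; i < len; i++) out += chars[Math.floor(rng() * chars.length)];
  return out;
}

// Property: normalizing is idempotent – applying it twice equals applying it once.
function holdsForAll(runs = 200): boolean {
  for (let i = 0; i < runs; i++) {
    const input = randomString(Math.random);
    const once = normalizeWhitespace(input);
    if (normalizeWhitespace(once) !== once) return false;
  }
  return true;
}

console.log(holdsForAll()); // → true
```

Libraries add the important extras – shrinking failing inputs to a minimal counterexample and reproducible seeds – but the core loop is this simple.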
A different perspective
You might also want to review your strategy based on a related dimension: a product's life span.
If you are building a proof of concept, you might want to apply the same techniques and tools as the Explore phase.
If you are building a seasonal game or application, which is made in 45 days and might last 3 to 6 months in the market, the Explore playbook might be the right one.
If you are building a product that will endure in the market or that your customers would rarely trade – given its cost, migration risk, or work involved – then I would go for the full cycle of Explore-Expand-Extract following the product's phase.