Spotting patterns in A/B testing: The difference between making and losing money
Misinterpreting the patterns in your A/B test results doesn't just cost you money: it can lead you to make changes to your site that actively harm your conversion rate.
Correctly interpreting these patterns means you learn more from each test you run, gives you confidence that you are only making positive changes, and helps you turn losing tests into future winners.
The results of A/B tests will generally fall into one of five distinct patterns. Learn to spot these patterns, follow our advice on how to interpret them, and your testing efforts will quickly become more successful.
To illustrate each of these patterns, we’ll imagine we’ve run A/B tests on an e-commerce site’s product pages and are analysing the results.
1. The Big Winner
This is the result we all love. Your new version of a page converts to the next step at x% higher than the original, and this increase continues all the way to Order Confirmation. This pattern tells us that the new version of the test page successfully encourages more visitors to reach the next step, and from there onwards they convert just as well as existing visitors.
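To make the step-by-step comparison concrete, here is a minimal sketch that computes the variant's relative uplift over the original at each funnel step. The step names and visitor counts are invented for illustration, not real data:

```python
# Hypothetical funnel counts for an A/B test on a product page.
funnel_a = {"Product Page": 10000, "Basket": 800, "Checkout": 400, "Order Confirmation": 200}
funnel_b = {"Product Page": 10000, "Basket": 920, "Checkout": 460, "Order Confirmation": 230}

def conversion_to_each_step(funnel):
    """Conversion rate from the test page to every later step."""
    visitors = funnel["Product Page"]
    return {step: count / visitors for step, count in funnel.items()
            if step != "Product Page"}

def compare(funnel_a, funnel_b):
    """Relative uplift of variant B over original A at each funnel step."""
    a = conversion_to_each_step(funnel_a)
    b = conversion_to_each_step(funnel_b)
    return {step: (b[step] - a[step]) / a[step] for step in a}

print(compare(funnel_a, funnel_b))
# A 'Big Winner' shows a positive uplift at every step (+15% here).
```

In a Big Winner, the uplift holds at every step; the other patterns below show up as this dictionary diverging between the next step and Order Confirmation.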
Next steps: It is clearly logical to implement this new version permanently.
2. The Big Loser
Each step shows a decrease in conversion rate: a sign that the change had a clear negative impact. Often an unsuccessful test can be more insightful than a straightforward winner, as the negative result forces you to re-evaluate your initial hypothesis and understand what went wrong. You may have stumbled across a key conversion barrier for your audience.
Next steps: Take this as a lesson rather than a failure: step back and re-evaluate your hypothesis. That said, you would not want to implement this new version of the page.
3. The Clickbait
“We increased clickthroughs by 307%!” You’ve probably seen sensational headlines like this. But how much did sales increase by? Chances are, if the result fails to mention the impact on final sales, then what they have is a pattern we’ve dubbed ‘The Clickbait’.
Test results that follow this pattern show a large increase in conversion rate to the next step, but the improvement quickly fades later in the funnel, with little or no improvement in sales.
This pattern catches people out because the large improvement in clickthroughs feels like it should be a positive result. However, it often merely shows that the new version of the page is pushing visitors through the funnel who have no real intention of purchasing.
Next steps: Whether this result is deemed a success depends on the context of the experiment. If there are clear improvements to be made on the next step(s) of the funnel that could help convert the extra traffic from this test, address those issues and re-run this test. However, if these extra visitors are clicking through by mistake, or they are being misled, you may find it difficult to convert them later no matter what changes you make!
4. The Qualifying Change
With this pattern, we see a drop in conversion to the next step but an overall increase in conversion to Order Confirmation.
Here, the new version of the test page is having what’s known as a ‘qualifying effect’. Visitors who may have otherwise abandoned at a later step in the funnel are leaving at the first step instead. Those visitors who do continue past the test page, on the other hand, are more qualified and therefore convert at a much higher rate. This explains the positive result to Order Confirmation.
Next steps: Taking this pattern as a positive may seem counter-intuitive because of the initial drop in conversion to the next step. However, implementing a change that causes this type of pattern means the visitors remaining in the funnel have expressed a clearer desire to purchase, and unqualified visitors have been removed. Unless you already have low traffic and don’t wish to reduce it further, you should implement this test.
5. The Messy Result
What if you see both increases and decreases in conversion rate to the different steps in the funnel?
This can be a sign of insufficient levels of data. This type of fluctuation is not uncommon during the early stages of an experiment when data levels are low. Avoid reading too much into these in the first few days.
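One way to guard against reading noise as signal is a quick significance check on any single step. The sketch below uses a standard two-proportion z-test with a normal approximation (standard library only); the conversion counts and sample sizes are invented for illustration:

```python
import math

def two_proportion_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided p-value for a difference between two conversion rates
    (pooled two-proportion z-test, normal approximation)."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    if se == 0:
        return 1.0
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF, expressed with math.erf.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# With little traffic, a seemingly large swing (8% vs 12%) is not significant:
print(two_proportion_p_value(12, 150, 18, 150))       # p well above 0.05
# The same relative difference with 100x the data clearly is:
print(two_proportion_p_value(1200, 15000, 1800, 15000))
```

If the per-step differences in a ‘messy’ result fail a check like this, the safest reading is that you simply need more data before drawing any conclusion.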
If your test has a large volume of data and you’re still seeing this result, the likelihood is that your new version of the page is delivering a combination of the effects from patterns 3 and 4: qualifying some traffic while simultaneously pushing more unqualified traffic through the funnel.
Next steps: If your test involved making multiple changes to a page, try testing the changes separately to pinpoint which individual changes are causing the positive impact and which are causing the negative impact.
The key point to take away is the importance of tracking and analysing the results at every step of your funnel when you A/B test, rather than just the step following your test page. There is a lot more to A/B testing than simply reading off a conversion rate increase to any single step in your funnel. Often, the pattern of the results can reveal greater insights than the individual numbers.
Next time you go to analyse a test result, see which of these patterns it matches and consider the implications for your site.