Improve Website KPIs with Better A/B Tests Using the Scientific Method

July 14, 2022

Introduction 

Most website A/B tests fall flat and don’t produce a conclusive result for a number of reasons, a big one being that the test ideas just aren’t impactful enough. They don’t make the right copy or design improvements to change user behavior and increase conversion rate. Most of the time, we’re guessing what to change and how to change it. We’ve all heard the classic test idea of “we should try changing the button color,” with no data or research to back it up. The result is time and money wasted on a test that yields no significant increase in conversions.

Businesses need a research method for coming up with better A/B test ideas. In my experience, applying the same scientific method we all learned as kids in school produces test ideas that are far more likely to result in a win and increased website conversions. Most of the hard work happens behind the scenes in the research, but the process is worth the time and effort.
 
The CRO (Conversion Rate Optimization) test & learn process mirrors the scientific method exactly… because it is an application of the scientific method. I’ll outline the method with a basic example, then touch on how each step applies to CRO specifically:

  • Observe: “The toaster won’t toast.”
  • Question / Research: “Why won’t my toaster toast? What makes a toaster toast?”
  • Hypothesize: “If I plug the toaster into a different outlet, then it will toast the bread.”
  • Experiment: “Plug the toaster into a different outlet and try toasting again.”
  • Analyze Results: “My bread toasts!” The hypothesis was proven true.
  • Apply Discoveries: Use the other outlet to toast bread from now on.

Observe

“The toaster won’t toast.”

In this stage of the CRO process, we’re faced with a problem. It’s likely an obvious problem that doesn’t take a CRO expert to discover. Most of the time, it’s something like “My website is getting lots of traffic, and we’re selling good products, but very few people are buying.” 

Sometimes, when you don’t have obvious problems that need to be addressed, you’ll want to get into your analytics dashboard and start searching for points where conversions are underperforming. A few example starting points include:

  • “Do returning users convert at a higher rate than new users like we expect?”
  • “What’s the ecommerce conversion rate for purchases of product X?”
  • “What’s the free trial sign-up rate for the entire website?”
  • “Do email subscribers convert better than other users?”
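
For questions like these, it can also help to sanity-check the numbers outside of your analytics UI. Here’s a minimal sketch for the new vs. returning user question, assuming you can export session-level data to a CSV; the file name and column names are hypothetical, not a real schema:

    import pandas as pd

    # Hypothetical export: one row per session, with a new/returning user label
    # and a 0/1 converted flag. Column names are assumptions.
    sessions = pd.read_csv("sessions_export.csv")

    # Conversion rate and session count for new vs. returning users
    summary = (
        sessions.groupby("user_type")["converted"]
        .agg(conversions="sum", sessions="count", conversion_rate="mean")
    )
    print(summary)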

Question / Research

“Why won’t my toaster toast? What makes a toaster toast?”

In the research stage of the CRO process, we use a number of tools to understand the cause of our conversion problem. When we dug into analytics, we found what was going wrong; now we need to gain some insight into why.

Let’s say your landing page that asks people to sign up for a free demo has a very low conversion rate compared to the industry average. We can use a few tools to help diagnose what the issues might be:

  • Heuristic analysis: Using principles of web design & copywriting for conversion, analyze the page element by element to understand how each might be negatively affecting your goal of a free demo.
  • Heat mapping: Use a heat / click mapping software to get a visual on what users are actually doing on the page. Are they clicking in areas you don’t expect? Do they scroll down far enough to see your main CTA?
  • On-page surveys: Deploy a survey on the page to get feedback from users on how to improve it; acting on that feedback will likely lead to higher conversions.

Using these methods, we’re effectively sourcing test ideas and creating a backlog of hypotheses based on real user data, not guesses.

Hypothesize

“If I plug the toaster into a different outlet, then it will toast the bread.”

Once you have a list of potential improvements for your page that you gathered via researching users and analyzing page design, you’ll need to form them into good test hypotheses.

A hypothesis needs to clearly state what change you’ll be making to the page, and what KPI you expect to be improved because of the change. 

Writing a formal hypothesis may seem like an unnecessary extra step, but it makes a world of difference in keeping the focus of the test idea on what actually matters to your business. In addition, it ensures all teams and stakeholders are aligned and clearly understand the test idea, so there’s no confusion about what's being tested and why.

There’s a simple formula for a hypothesis that works really well:

“IF we [change we’re making to the page] for [users who are included in the experiment; all users or audience segment?], THEN we predict [KPI will increase or decrease] as a result.”

If there’s more data behind the hypothesis that you want to make clear in your statement, this formula is a bit more detailed:

“Because we saw [data], we predict / expect that [website change] for [user segment] will cause an increase in [key metric].”

Only once you have your formal hypothesis written out should you move on to actually building and running the experiment.

Experiment

“Plug the toaster into a different outlet and try toasting again.”

Now, to keep things even more organized, I suggest writing up a test brief that includes all of the key information about your experiment as well as mockups of the proposed variants of the page:

  • Hypothesis
  • Background details
  • Original & Variants
  • Test start and end dates
  • Devices
  • Audiences
  • Estimated traffic per day
  • Estimated experiment run time (a quick way to estimate this is sketched below)
  • Primary objective of the test (KPI)
  • Secondary objectives

This way, you’ll build out a backlog of tests you’re going to run and a record of past tests you can refer back to. A good test & learn program always requires remembering what has worked and what hasn’t.
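
One line item in the brief that’s worth doing with a bit of rigor is the estimated run time: a quick sample size calculation tells you whether the test can realistically reach significance with your traffic. Here’s a minimal sketch using the standard two-proportion sample size approximation; the baseline rate, expected lift, and traffic figures are made-up inputs, not benchmarks:

    from statistics import NormalDist

    def sessions_per_variant(baseline_rate, expected_rate, alpha=0.05, power=0.80):
        """Approximate sessions needed per variant for a two-proportion z-test."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
        z_beta = NormalDist().inv_cdf(power)           # desired statistical power
        variance = (baseline_rate * (1 - baseline_rate)
                    + expected_rate * (1 - expected_rate))
        effect = expected_rate - baseline_rate
        return (z_alpha + z_beta) ** 2 * variance / effect ** 2

    # Hypothetical inputs: 3% baseline demo sign-up rate, hoping for 3.6%,
    # with ~2,000 eligible sessions per day split evenly across two variants.
    needed = sessions_per_variant(0.03, 0.036)
    days = needed / (2000 / 2)
    print(f"~{needed:,.0f} sessions per variant, roughly {days:.0f} days")

If the estimate comes back as months rather than weeks, that’s usually a sign to test a bolder change or a higher-traffic page.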

Note: ALWAYS include a control group / original in your experiment. We have to measure the changes against the current experience to know if they’re working. Most A/B testing tools require a control anyway, but it’s important to remember.

Once you’ve built the test in your A/B testing tool of choice, do a final QA, launch the test, and wait for results!

Analyze Results

“My bread toasts!” The hypothesis was proven true.

You’ll know your hypothesis was proven true if your experiment variant is significantly better than the original (statistically speaking). Most A/B testing tools have this analysis built in, but you’ll want to wait until 1) there’s a large enough number of experiment sessions and conversions and 2) the conversion rate lift from the variant has reached 95% confidence.

We want to be at least 95% sure that the results we’re seeing are not due to random chance.
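
Your testing tool reports this for you, but the underlying check is often something like a two-proportion z-test. Here’s a minimal sketch of that calculation with made-up session and conversion counts (some tools use Bayesian or sequential statistics instead, so their numbers won’t match this simple frequentist version exactly):

    from math import sqrt
    from statistics import NormalDist

    def lift_and_confidence(conv_a, n_a, conv_b, n_b):
        """Two-sided two-proportion z-test comparing a variant (B) to control (A)."""
        rate_a, rate_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (rate_b - rate_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return (rate_b - rate_a) / rate_a, 1 - p_value

    # Hypothetical results: control converts 300 of 10,000 sessions,
    # the variant converts 360 of 10,000.
    lift, confidence = lift_and_confidence(300, 10_000, 360, 10_000)
    print(f"Lift: {lift:.1%}, confidence: {confidence:.1%}")  # call it only at >= 95%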

Just to set expectations when it comes to test results: even if you’ve done all your research and formed a clear test hypothesis, most of your tests will still fall flat. It’s just the nature of the game; making a significant improvement to a (presumably) well-designed website is hard. The wins are incremental; we’re talking a few percentage points of increase on the current conversion rate.

The good news is that if you keep running good, well-researched tests, you’ll eventually produce a winner.

That’s why it’s so important to have both test quality and velocity. You want to be launching as many quality tests as you possibly can to increase the chances you’ll make a real impact on your website. That’s why Amazon, Google, Microsoft, and many other innovative companies run thousands of tests per year.

Apply Discoveries

Use the other outlet to toast bread from now on.

Logically, once you’ve produced a winning test result, you should make the change permanent on your website. 

If your test didn’t produce a conclusive result, you can still learn something! Did any of the variants come close to 95% confidence? Perhaps 70% or 80%? There might be an audience segment that was driving that result.

Go into your analytics dashboard and filter the experiment results for various common segments: age, gender, location, device type, returning vs. new users, ad campaign traffic, etc. If you find one that produces an exceptionally better conversion rate, you can run an iteration of the test on just this audience segment.
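
If your testing tool or analytics platform lets you export the raw experiment sessions, the same segment hunt is easy to script. Here’s a minimal sketch in pandas, assuming hypothetical variant, device_type, and converted columns:

    import pandas as pd

    # Hypothetical export: one row per experiment session. The file name and
    # columns (variant, device_type, converted) are assumptions.
    results = pd.read_csv("experiment_sessions.csv")

    # Conversion rate by variant within each device type
    by_segment = (
        results.groupby(["device_type", "variant"])["converted"]
        .agg(sessions="count", conversion_rate="mean")
        .unstack("variant")
    )
    print(by_segment)

Repeat the same cut for whichever segments you track (new vs. returning, traffic source, and so on), and treat anything you find as a hypothesis for the next test, not a win in itself.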

But even if you can’t find a test iteration idea, you still learned something. You learned what does not work on your website. You can save this result and refer to it later. The knowledge you gained, even in a loss, gives you and your business greater insight into your customers and website users.

Conclusion

If you want to stop running one-off A/B tests that don’t win and don’t give you any insights, you need to start using the method outlined here.

Rather than guessing what to test and how to change it, the scientific method will help you generate better, data-driven test hypotheses that are much more likely to win and improve your website KPIs.

But even if you don’t win with a well-researched test hypothesis, you learn what doesn’t work on your website and what doesn’t resonate with your users. Hasn’t learning been the purpose of experimenting using the scientific method all along?