If you’ve researched web design, UX/UI design, or marketing, chances are you’ve heard the term A/B testing. But what does A/B testing actually mean? Today we’ll take a closer look to find out what it’s all about.
What Is A/B Testing?
Put simply, it means comparing two versions of a product to see which one performs better. A/B testing is also called “split testing” or “bucket testing,” as in, “putting things into two different buckets.” And it can be really useful in refining your design.
Why Use It?
A/B testing lets you test a hypothesis and gather data before committing to a change, instead of making the change and just hoping for the best. On a large-scale site design or marketing project, that can save a massive amount of time and money.
How Does It Work?
The concept of A/B testing was actually refined back in the 1920s by a statistician and biologist named Ronald Fisher, who first used it with agricultural experiments. It quickly went from “what happens if I use different fertilizer on this plot of land,” to clinical trials in medicine, and to web design and marketing today.
Say you’re designing a website, and you want to see which design tweaks will make people stay longer. You’d create two versions of the page, one with the changes and one without — version A and version B. One version serves as the control, with no changes, and the other is the variation.
It usually works like this:
- Choose what you want to test.
- Show the control and variation versions to groups of people randomly.
- Track the data to see which version performs better on the metric you chose.
Randomization is critical to this testing process, as it helps remove other variables from the equation. If you want to test the size of the subscribe button for your newsletter, for example, you’d show people the control and variation pages randomly on both desktop and mobile to keep that variable from skewing the data.
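The steps above can be sketched in a few lines of Python. This is a toy simulation, not any real testing tool's API — it just shows how random assignment produces comparable groups:

```python
import random

def assign_version() -> str:
    """Randomly assign a visitor to the control ("A") or the variation ("B")."""
    return random.choice(["A", "B"])

# Simulate 10,000 visitors. Random assignment keeps the two groups roughly
# equal in size and makeup, so any difference in outcomes can be attributed
# to the page itself rather than to the audience.
counts = {"A": 0, "B": 0}
for _ in range(10_000):
    counts[assign_version()] += 1

print(counts)  # roughly 5,000 in each bucket
```

In a real test the assignment would happen server-side or in a testing tool, and each visitor's conversions would be logged against their assigned version.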
A/B testing can be done with more than two pages, but you usually start with two versions. How many people see each version varies based on whether both versions are new, or the new version is competing against an established web page. If both are new, you'll probably split traffic 50/50. If you're introducing changes against an established page, it might be 60/40.
Regardless of how you decide to distribute traffic to the pages, you always show returning users the same version to maintain the integrity of the test. The test also needs to run long enough to gather a statistically significant amount of data before you make a decision. This sounds complicated, but there are free tools out there to help you plan it out.
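One common way to keep returning visitors in the same bucket while still honoring a chosen traffic split is to hash a stable user identifier. Here is a minimal sketch, assuming you have some persistent ID such as a cookie value — the experiment name and 60/40 split are just illustrative:

```python
import hashlib

def assign_sticky(user_id: str, experiment: str = "button-size",
                  weight_a: float = 0.6) -> str:
    """Deterministically bucket a user: the same user_id always lands in the
    same version, so returning visitors see a consistent page."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits to a number in [0, 1) and compare it
    # against the traffic split.
    bucket = int(digest[:8], 16) / 0x100000000
    return "A" if bucket < weight_a else "B"

# The assignment is repeatable: no need to store who saw what.
print(assign_sticky("user-123") == assign_sticky("user-123"))  # True
```

Because the hash is uniform, about 60% of users land in version A without any stored state, and including the experiment name in the hash keeps buckets independent across different tests.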
Any element of any page can be A/B tested. Trying to get more clicks through from Google? Test multiple headlines. Trying to get people to navigate through to other pages on your site? A/B test different menu options and layouts.
Common things to A/B test include:
- Call to action (CTA) buttons like Subscribe, Sign Up, etc.
- Landing pages
Web designers can change a single element on a page, run an A/B test, and track the results. If the metrics move, they can be reasonably certain it was because of the tweak they made to the design.
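"Reasonably certain" can be made concrete with a standard significance check. The sketch below runs a two-proportion z-test on made-up conversion counts — the numbers are hypothetical, and real testing tools handle edge cases (small samples, multiple comparisons) that this ignores:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the z-statistic comparing the conversion rates of A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that the versions are identical.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: 200/5000 conversions on the control,
# 260/5000 on the variation.
z = two_proportion_z(200, 5000, 260, 5000)
print(round(z, 2))  # |z| > 1.96 means significant at the usual 95% level
```

Here z comes out above 1.96, so the variation's lift would count as statistically significant; with smaller samples the same 4% vs. 5.2% gap might not be.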
Again, this concept isn’t exclusive to web design. You can A/B test different marketing emails against each other, different medications, and so on. An A/B test is the most basic kind of randomized controlled trial, and you can use it to continuously improve the user experience. If you’re interested in learning more and possibly implementing it in your projects, go further with a deep-dive on A/B testing.