Ever had the desire to change something on your website, but you were afraid the change might have a negative impact on performance? Welcome to A/B split testing, the practice of testing multiple variations of the same site to see which works better.
What is split testing?
A/B split testing is a technique to find which changes really improve your website and which changes don’t.
Let me give you an easy example: You want to try a much bigger “add-to-cart” button on a product page.
You create 2 versions of the page—you call the old version “A” and the new version “B”.
You then use special software to randomly show site visitors the 2 different versions, and you measure which version works best (i.e., which results in more conversions or better performance). You are testing the new version with real people to see if it works in the real world. You then stop the test and go ahead with just the winning version.
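The mechanics behind that "special software" are simple. Here's a minimal sketch in Python (the function and variable names are my own illustration, not from any particular tool): randomly assign each new visitor a version, count visits and conversions per version, and compare the rates.

```python
import random

# Running tallies for each version of the page.
counts = {"A": {"visitors": 0, "conversions": 0},
          "B": {"visitors": 0, "conversions": 0}}

def assign_version():
    """Randomly pick a version for a new visitor (50/50 split)."""
    return random.choice(["A", "B"])

def record_visit(version):
    counts[version]["visitors"] += 1

def record_conversion(version):
    """Called when a visitor reaches the goal, e.g. completes checkout."""
    counts[version]["conversions"] += 1

def conversion_rate(version):
    c = counts[version]
    return c["conversions"] / c["visitors"] if c["visitors"] else 0.0
```

A real tool also has to remember which version each visitor saw (usually with a cookie) so they don't bounce between A and B on every page load, but the core idea is just this random split plus per-version counting.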
Why not test A for a while then B?
Just look at any graph of your conversion rates over time—it’s all over the place. Some months can be 20% better than the previous month—then it gets worse again—then better, etc…
Conversion rates are affected by season of the year, sources of traffic, news events, the state of the economy, competitor activity… You’ll see big differences and big swings even with no changes at all to your own site.
So if you tried A in a good month then tried B in a bad month you could make an incorrect decision. And you don’t want to freeze all other changes while you try the A and B buttons.
Split testing aims to eliminate these other differences and see if button B really is better than button A. You’re testing the two versions at the same time in the same season with similar visitors.
How long do I run the test for?
This depends on how much traffic your site is getting, and also on how big the improvement is.
It’s very important not to make any premature decisions. On a low-traffic site, a rule of thumb is to wait until you have over 30 conversions from A and B combined. On a high-traffic site, you should still wait at least 24 hours and try to include some weekend time as well.
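If you want something a little more rigorous than the rule of thumb, a two-proportion z-test is the standard way to ask whether B's conversion rate really differs from A's. Here's a sketch in Python (function names are mine; `|z| > 1.96` corresponds roughly to 95% confidence):

```python
import math

def enough_data(conversions_a, conversions_b, min_total=30):
    """The article's rule of thumb: wait until A and B together
    have more than 30 conversions before drawing conclusions."""
    return conversions_a + conversions_b > min_total

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: how many standard errors apart
    are the two conversion rates? |z| > 1.96 ~= 95% confidence."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se
```

For example, 10 conversions from 100 visitors on A versus 20 from 100 on B gives a z-score of about 1.98, just over the 95% line, while smaller differences on the same traffic would not be conclusive yet.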
You’ve really got to keep an open mind about the results—so often I see “obvious” or “common sense” changes that just make no difference or actually make conversion rates worse.
If you still have no measurable effects after 6 weeks then just stop the tests and go with what you like best.
Is it just buttons?
You can test any element on your site: different photos, different headlines, different copy. Even simply moving different elements around can have drastic effects on performance! Test a contact form on the right of the page against the same form on the left and you might get double the number of messages sent from it.
You can also test different checkout sequences: maybe a 3 page sequence versus a 2-step checkout process.
Or a different offer, or the same offer described in 2 different ways.
One client was discounting $33 off a $100 product. She found “save $33” was way better than “save 33%”. Your mileage may vary, of course. You need to do your own test with your own visitors.
And it’s not just 2 variants A and B—you can create as many as you want and do an A/B/C/D split test.
You can even test different prices: Visitor A does not know what price visitor B is seeing. Amazon got into trouble in September 2000 for doing this, but a lower-profile e-tailer could probably pull it off and find their own sweet spot for maximum profit. (Not that I’m encouraging bad practice; always check that you’re not breaking any laws.)
I have even split tested the idea of hiding the product price.
Does it have to be 50/50?
You can vary the proportions. It could be 90/10 or any other ratio. Splitting the traffic 50/50 is the quickest way to reach a conclusive result.
Some changes may be too high-risk to try out on half your visitors. I’m thinking here about wacky stuff like having music or a voice-over when the visitor arrives on your site. Or something that doesn’t scale very well, like a personal chat between the site visitor and a doctor. Try on a smaller sample first.
And don’t forget that you can test different elements of your site at the same time—you don’t need to finish one test before you start the next. This is one reason why the splitting is done randomly and not just showing alternate visitors the different versions.
Who is doing split testing?
The big websites all do A/B split testing: Amazon, eBay, Lands’ End, Boden in the UK. If you look at a Google results page, the pale yellow background on the sponsored (PPC) results was chosen by a split test of dozens of colours. Pale yellow got more clicks.
Can a small company do split testing?
For a small business you need to balance the effort and cost of setting up a test against the rewards from improved conversions. Most small business sites can get huge improvements just by applying best-practice across the board.
After doing that you can start testing. Start with things that could have maximum impact and lots of traffic – for the minimum effort. One of the easiest ways for a small business to test variations is with Google AdWords. You set up 3 variants of an ad. Rotate all 3 for some time then suspend the worst – and then create a new contender. I’ve blogged about improving AdWords this way. You will also learn what words work for your visitors and you can then apply this knowledge on your site itself.
How do I get started?
You need to choose – or make – a tool. The tool will randomly split new traffic, and remember who has seen which version so that people get a consistent view as they move around your site. And it needs to track the visitors through to a goal – like your checkout page or a thanks page that the visitor sees after a form submission.
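The "remember who has seen which version" part doesn't actually require storing anything. A common trick is to hash the visitor's ID together with an experiment name, so the same visitor always lands in the same bucket. A sketch in Python (the names and the experiment label are my own illustration):

```python
import hashlib

def bucket(visitor_id, experiment="big-button-test", weight_b=0.5):
    """Deterministically map a visitor to 'A' or 'B'.
    Hashing (experiment + visitor_id) means the same visitor gets
    the same version on every page view, with no state to store.
    Using a different experiment name re-shuffles the buckets, so
    several tests can run at once without overlapping the same way."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    # Turn the first 8 hex digits into a fraction between 0 and 1.
    fraction = int(digest[:8], 16) / 0xFFFFFFFF
    return "B" if fraction < weight_b else "A"
```

The visitor ID here would typically come from a first-visit cookie; the key property is that the function is pure, so any server handling any page request computes the same answer.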
One tool that might do the job for you is the Google Website Optimizer.
What should I test first?
Choose an element of your site that’s high in impact and also controversial.
A good example is product viewing widgets on an e-commerce site. These widgets are nice for some people—the site visitor can spin the product round, zoom in and out, really experience your stuff.
But they can also be slow to download and difficult to use for other visitors. Rather than debating the pros and cons or listening to the widget’s vendor, why not test the widget in action with your own real visitors.
Set it up on your 10 highest-traffic products. Half of your visitors will see the widget and the other half will see just a plain old photo of the same product.
Let the people decide!
Have you tried split testing in the past? What tools do you use to get the job done, and what factors play into your decision-making process for implementing changes? Share your ideas below!