Winner, winner, chicken dinner!
Would you rather eat mouldy cheese or speak to someone over the phone for three hours? In today’s world, people are more likely to choose the former because we’ve all turned into hermits, living and working from home and eating mouldy cheese.
A/B testing, or split testing, is similar to a game of “Would you rather?”, except users rarely know they’re playing, and you’re not asking them to choose between disgusting and highly uncomfortable options.
(On second thought, it depends on what you’re testing…)
During this user experience (UX) experiment, users see two design versions and unknowingly help the designer pick the one that performs best. You can test anything from background imagery to fonts, font sizes, CTA copy, and everything in between.
The best part? If Design A wins, you were right about your initial strategy, and you can claim a UX noddy badge (you can’t see it, but it’s there).
But A/B testing doesn’t start and end with subtle tweaks. You can also compare two contrasting elements, like vlogs vs blogs, photos vs illustrations, or text vs infographics.
Why is A/B testing so magical?
Because you can win at UX without bothering users to participate in surveys, feedback campaigns and awkward interviews. And you can learn a ton about influencing user behaviour, attitudes, and emotional connections.
A/B testing is a total adrenal gland party because you’re carrying out UX research while your product is live. It’s almost like modifying a recipe and watching anxiously as your family members eat, just to see their expressions. Complete exhilaration!
Why conduct such a precarious form of research?
Apart from the adrenaline rush, A/B testing can:
- Confirm the validity of an original or new design
- Settle conflict among design teams
- Help you make data-driven decisions
- Prove how small changes can influence user behaviour
- Provide quantitative data
- Enhance conversion
- Improve the user experience
- Give you more confidence in what works and what doesn’t
How to conduct A/B testing
- Collect existing data: You won’t know what to change if you don’t have your site analytics. Pull existing data to identify the best and worst-performing elements.
- Set #uxgoals: What do you want to achieve by making this change? More conversions? More subscribers? More clicks? Set goals to determine the most beneficial changes.
- Hippopotothise: A hypothesis is an educated guess, but you must state your expected outcome upfront so you have something to measure against. For example, if a CTA button is black with white stripes, and you think it’ll perform better if it’s white with black stripes, write this down so you can compare it later.
- Design A and B versions: Create two versions of a design, determine your test audience size, and randomly expose half the audience to A and the other half to B.
- Set the timer: This isn’t like baking a cake – there is no set time. You must set your own timer depending on the data you need and how many users visit the webpage daily.
- Do a stick test: Analyse the user engagement metrics and compare the data to determine which version performed better. Then, either pat yourself on the back or make peace with the fact that your hippopotothis was wrong.
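For the technically inclined, the steps above can be sketched in a few lines of code. This is a minimal illustration, not a production framework: the hashing split and the two-proportion z-test are standard techniques, but the function names, user IDs, and conversion numbers here are all made up for the example.

```python
import hashlib
import math

def assign_variant(user_id: str) -> str:
    # Deterministic 50/50 split: hash the user ID so a returning
    # visitor always sees the same version (A or B).
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    # Two-proportion z-test: is the difference in conversion rates
    # larger than random noise alone would explain?
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: A converted 200 of 2,000 visitors, B 260 of 2,000.
z = two_proportion_z(200, 2000, 260, 2000)
print(f"z = {z:.2f}")  # roughly 2.97; |z| > 1.96 is significant at the 5% level
```

In this made-up example the z-score clears 1.96, so version B’s lift is unlikely to be luck, and you can pat yourself on the back. If it hadn’t, you’d make peace with your hippopotothis and keep testing.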
Rule Number 1 of A/B testing
Leave the guesswork for quiz nights. The point of A/B testing is to gather quantifiable data that’ll help you enhance UX. If you’re not patient enough to wait for or analyse the results, you may as well play “Would you rather?” with your friends and family.
Or you can call our InteractRDT specialists to do the A/B testing for you.