9 tips to make your A/B testing more effective

A key component of successful digital marketing is a combination of experimentation and analysis.

But how do you know which subject line approach or visual in an email will be more successful with your audience?

The answer is A/B testing: the process of showing two variants of a particular element to different segments of your audience at the same time and comparing which one performs better. A/B testing, also known as split testing, can be used across websites, emails, landing pages, and more.
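Under the hood, most A/B testing tools split the audience by assigning each person to a variant at random, but consistently, so they always see the same version. As a rough illustration only (not any particular tool's implementation), here's a minimal Python sketch of that kind of deterministic assignment:

```python
# Minimal sketch: deterministic variant assignment by hashing a user ID,
# so each person consistently sees the same variant. Hypothetical example.
import hashlib

def assign_variant(user_id: str) -> str:
    """Split the audience roughly 50/50 between variants A and B."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

for uid in ["user-1", "user-2", "user-3"]:
    print(uid, "->", assign_variant(uid))
```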

The following are nine tips to help you get the most out of your A/B testing efforts.

Decide what your ‘success metric’ is for A/B testing

You need to decide this before doing any kind of testing, much less A/B testing. And it can vary from business to business because it's tied to your goals.

For example, when sending an email, do you care more about how many open that email or how many click on some link in that email? There’s no right or wrong answer. It just depends on what matters to you. 

Let's say you care more about email open rate. Then you should test your subject line. If you care more about the click rate, test either the design or the content (but not both at the same time).

Determining your "success metric" in any aspect of digital marketing keeps you on track not only with your strategy but also with your A/B testing and analysis.

Make no assumptions

It’s really important to keep an open mind when A/B testing. True, there are definitely some assumptions that must be made to narrow down your testing variants. 

This means that the variants of an email subject line, for example, likely follow your typical brand voice and have been narrowed down by what sounds engaging to you. However, whether or not you have a favorite, use your A/B testing to determine your audience's favorite, not to confirm what you like the most.

Otherwise, what’s the point? Often, A/B testing results can very much surprise us, so we have to keep an open mind throughout.

Don't rely on others' case studies, either, as you research what can be done. What has worked for them may not work for you.

Keep your A/B testing focused

A/B testing is powerful, but don’t let that get you into the mindset that every single thing has to be A/B tested all at once.

Stay focused. 

You should, however, make a list of everything you'd like to test, and then prioritize what matters most to you. Work your way down the list one test at a time, since good A/B testing takes time and data analysis to interpret the results. Take breaks when you need to and let each test speak for itself.

But it’s critical that you only test one variable at a time in any one campaign. Complicated results are the last thing you need from A/B testing.

Decide on your statistical significance

This is an aspect of A/B testing that, while important, can get confusing for businesses.

Statistical significance is, roughly, how confident you can be that the difference you see between variants is real rather than random chance. The higher your confidence level, the more sure you can be about your results. In other words, what odds would you be comfortable making a bet on?

It’s recommended to aim for a statistical significance of at least 95 percent, especially if the A/B testing was time-intensive to complete.

In addition, a general rule of thumb is that you’ll want a higher confidence level when testing for something that only slightly improves your success metric because random variance is more likely to be a factor.

So if you're testing a more radical change, you might not need as high a confidence level, but the subtler the change, the more scientific you'll want to be.
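To make this concrete, here's a minimal sketch (in Python with SciPy, purely as an illustration and not any particular vendor's method) of the kind of two-proportion z-test that sits behind most significance calculators. All the numbers are made up:

```python
# Minimal sketch: a two-proportion z-test for an email A/B test.
# All numbers are hypothetical, for illustration only.
from math import sqrt
from scipy.stats import norm

clicks_a, sent_a = 200, 5000  # control: 4.0% click rate
clicks_b, sent_b = 245, 5000  # challenger: 4.9% click rate

rate_a = clicks_a / sent_a
rate_b = clicks_b / sent_b

# Pooled rate under the null hypothesis that there is no real difference
pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))

z = (rate_b - rate_a) / se
p_value = 2 * (1 - norm.cdf(abs(z)))  # two-sided test

print(f"Control: {rate_a:.2%}  Challenger: {rate_b:.2%}")
print(f"p-value: {p_value:.4f} (below 0.05 clears the 95% bar)")
```

If the p-value comes in below 0.05, the result clears the 95 percent threshold mentioned above.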

Still not sure? Check out Visual Website Optimizer’s statistical significance tool.

Create a ‘control’ and a ‘challenger’ to test

You’ll want an unaltered version of whatever you’re A/B testing as your “control” variable. For example, if you’re testing a landing page, this would be the version (in both design and copy) of that page that you would normally use.

Then, you'll want to build a variation of that, a "challenger," to test against your control. Resist the temptation to change multiple features of the landing page from our example. Perhaps you just want to change the colors of the page, or some other single element.

Keep the change in your challenger limited during your A/B testing. Otherwise, you won't be able to determine which change led to which results.

Test your variations at the same time

Timing is a big part of A/B testing. Whether you’re testing at a particular time of day, day of the week, etc., it all matters.

But even more important is that you’re testing your two variants at the same time.

You don’t want to test one this month and the other next month. That approach leaves too much guesswork as to other potential variables that have come into play and will muddy your results.

An exception to this best practice is if you’re testing timing itself, such as the time of day of an email being sent. Naturally, you would need to send it at two different times to test this variable.

Don’t make any mid-test changes

This is one of those temptations you’ll have to resist. Maybe you’re already excited about some of the results coming in. Maybe there is something else you’d like to change unrelated to your A/B testing.

Either way, don’t make any changes during your A/B testing.

Doing so will very easily complicate your results and undermine any meaningful takeaways.

Determine your sample size

Your sample size can easily vary depending on the tool you’re using or the type of A/B testing you’re doing.

For example, if you’re A/B testing an email, you’ll likely want to send it to a smaller portion of your database. Then, once a winner is determined, that variation can then be sent out to the rest of your contacts.

And if you're testing an element of your website, you can really only control how long you run the test in order to reach your desired sample size.

In general, though, try not to limit the sample size too much, or your results will not be statistically significant. In other words, if you don’t test on enough people, you won’t get reliable results. 
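For a rough sense of the math behind sample size calculators (a sketch only, with made-up baseline and lift figures, not A/B Tasty's actual formula), here's how you might estimate the number of recipients needed per variant in Python:

```python
# Minimal sketch: estimating the sample size per variant needed to detect
# a given lift. The baseline and expected rates below are hypothetical.
from scipy.stats import norm

baseline = 0.04            # current conversion rate (4%)
expected = 0.05            # rate you hope the challenger reaches (5%)
alpha, power = 0.05, 0.80  # 95% significance, 80% power

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)

variance = baseline * (1 - baseline) + expected * (1 - expected)
n = ((z_alpha + z_beta) ** 2) * variance / (expected - baseline) ** 2

print(f"About {int(round(n))} recipients per variant")
```

With these made-up numbers, detecting a lift from 4 percent to 5 percent would take roughly 6,700 recipients per variant, which is why small lists often struggle to produce significant results.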

Still not sure? Check out A/B Tasty’s sample size calculator.

Document your A/B testing (and results)

This doesn’t happen as much as it should. However, if you can create a documentation process for all to follow and stick with it, you will:

  • Avoid repeating tests you’ve already done.
  • Educate your team members about results.
  • Have a library of data results to help make future decisions.

In conclusion

Remember to keep things simple when starting out with A/B testing. Start small. See how it goes and build from there.

There are a number of A/B tests you can do. Prioritize your list and go from there.

As you’re exploring A/B testing as part of your digital marketing strategy, consider leveling up your digital marketing with DailyStory. Features include automating various marketing tasks, personalization, dynamic audience segmentation and so much more. Start for free with us today.
