Most companies nowadays rely on A/B testing to achieve growth. But what is A/B testing, really?

According to Wikipedia, “A/B testing is a way to compare two versions of a single variable, typically by testing a subject’s response to variant A against variant B, and determining which of the two variants is more effective”.

So, when it comes to growth, A/B testing is the most reliable way to determine what will actually work for a business, because the decisions rest on real data rather than guesswork.

Of course, A/B testing, much like any validation tool, needs to be used correctly; otherwise it won’t really work. So, if you’ve tried using it and you’ve reached an impasse, perhaps one of the following mistakes is to blame.

You tried to test too many items in one go

Let’s assume that you wanted to optimize a landing page. You went ahead and changed your copy, your CTAs, and the images on the page all at once, just to save some precious time. Well, that was not a very good move.

Seeing as the objective of A/B testing is to make data-driven decisions, you have set yourself up to fail: with several changes in play at once, you cannot determine which one is responsible for the results you got.

That’s why more and more SaaS tools, such as Elementor’s WordPress theme builder and the Unbounce Landing Page Builder, as well as email marketing platforms like Moosend and Mailchimp, integrate A/B testing into their products.

Your sample was too small

It doesn’t take an expert to understand that, while 95% sounds like a strong result, the bigger the sample behind it, the more valid that result actually is.

There is a huge difference between 90% of ten people and 90% of 1,000 people, even though the percentage is the same.

If your website doesn’t gather enough traffic or score enough sales yet, chances are that A/B testing won’t work for you: one option may appear to be the winner, but the data behind that conclusion won’t be statistically valid.
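To put numbers on it, here is a minimal Python sketch (the visitor counts are purely illustrative) that computes a 95% Wilson confidence interval for each sample:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson confidence interval for an observed rate."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Same 90% rate, very different certainty
for successes, n in [(9, 10), (900, 1000)]:
    low, high = wilson_interval(successes, n)
    print(f"{successes}/{n}: observed 90%, plausible range {low:.0%} to {high:.0%}")
```

With ten visitors, the true rate could plausibly sit anywhere between roughly 60% and 98%; with a thousand, the range narrows to roughly 88% to 92%. Same percentage, very different evidence.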

You didn’t get to know your target audience first

Too many businesses rush to get into the A/B testing game and forget one key aspect: research on their target audience.

What is their behavior? What do they like? What makes them tick?

Make sure that you are specific when it comes to your prospects.

For example, not all travelers have the same travel style.

Two mothers will, in theory, search for the same travel hacks, but what happens when one is the mother of a toddler and the other one is the mother of a moody sixteen-year-old?

Since the decisions need to be data-driven, the first thing to do is gather the right data about your audience; only then can you design tests and draw valid conclusions from them.

So, next time, consider running a survey first and an A/B test later. You’ll get far more accurate data that way.

You ran the test for all the wrong reasons

This may sound crazy, but it really is not.

There is a high chance that you just didn’t have the correct hypothesis, or maybe that you didn’t have a hypothesis at all.

Let’s take the traveling example a bit further and assume that your A/B test had to do with a promotional email about some holiday offers.

Make sure that you ask yourself the following questions and monitor the corresponding KPIs (a quick sketch of how they are computed follows this list):

  • How many people actually opened your email?
  • How many people clicked through from the email to your website?
  • What would you change, based on your analytics’ data?
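As a rough illustration, here is a small Python sketch with purely hypothetical counts, showing how those questions map onto the standard email KPIs:

```python
# Purely hypothetical counts for two email variants
variants = {
    "A": {"sent": 5000, "opened": 1100, "clicked": 240},
    "B": {"sent": 5000, "opened": 1350, "clicked": 210},
}

for name, v in variants.items():
    open_rate = v["opened"] / v["sent"]         # did the subject line work?
    click_rate = v["clicked"] / v["sent"]       # did people reach your website?
    click_to_open = v["clicked"] / v["opened"]  # did the content deliver on the subject line?
    print(f"Variant {name}: open {open_rate:.1%}, click {click_rate:.1%}, "
          f"click-to-open {click_to_open:.1%}")
```

In this made-up example, variant B’s subject line earns more opens, but variant A’s content converts better once opened, and each outcome points to a different change.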

Your hypothesis, and of course the A/B test of that hypothesis, will give you insight into what to change.

Maybe you needed better content, maybe you packed your email with too many CTAs, maybe your cold email was a bit too cold, or maybe your subject line didn’t work as well as you would’ve wanted.

You were impatient

How long did you test your hypothesis for? As we’ve already mentioned, your results are only as good as your data.

If you run your tests for only a week, then you are not really done testing. Percentages fluctuate not just day to day; they can change by the minute.

Therefore, you’ll need more than a couple of Monday-to-Friday percentages to ensure the validity of your data.
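How long is long enough? A common back-of-the-envelope answer comes from a sample size calculation. Here is a minimal Python sketch, with hypothetical traffic and conversion numbers, using the standard normal-approximation formula for comparing two proportions (95% confidence, 80% power):

```python
import math

def visitors_per_variant(p_base, p_target, z_alpha=1.96, z_power=0.8416):
    """Approximate visitors needed per variant to detect the lift
    from p_base to p_target (95% confidence, 80% power)."""
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p_target - p_base) ** 2)

# Hypothetical scenario: 5% baseline conversion, hoping to detect a lift to 6%
n = visitors_per_variant(0.05, 0.06)
daily = 800  # illustrative daily traffic per variant
days = math.ceil(n / daily)
weeks = math.ceil(days / 7)  # run whole weeks to cover weekday/weekend swings
print(f"~{n} visitors per variant -> {days} days of traffic -> {weeks} full weeks")
```

With these made-up numbers, you’d need roughly 8,200 visitors per variant, which at 800 visitors a day means running the test for two full weeks at minimum. The smaller the lift you want to detect, the longer the test has to run.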

But we’re not done here. Did you take into account the time period you tested for?

Let’s use the holiday offer email again. Engagement with this email will peak during pre-season months like April or November, when everyone is trying to book their summer or Christmas holidays respectively.

What do you think would happen if you decided to run tests during the first week of October?

Wouldn’t the test results be a lot more accurate if you decided to take your time and test your hypothesis during the last couple of weeks of October and the first couple of weeks of November?

Conclusion

Of course, these are not the only factors that may make your A/B testing fail.

Maybe your timing was not the best, maybe you ran one too many tests, or maybe your growth marketing strategy was a wee bit off in general.

But this is nothing to worry about.

What mistakes did you make when you first started?

What insight did your A/B testing data give you?

Let us know in the comments!

