9 A/B Testing Mistakes Everyone Makes
Creating a pleasant experience for your audience online is crucial, but most website owners don't put much thought into how they are going to go about it, let alone how they are going to test it. A/B testing gives you better insight into the user experience and a much higher degree of control.
Conversion optimization testing also minimizes your dependence on pure luck, because the information you collect saves you both time and money. Once you know what works and why, you no longer need to stumble in the dark.
But, because A/B testing results can be open to interpretation, they are not always as precise as we would like them to be. According to Mark Kennedy, chief marketing officer at BestEssays, A/B testing has helped them choose a more effective version of their website. He decided to share 9 of the most common testing mistakes, as well as ways to avoid them.
Mistake #1: Ending the A/B Testing Too Soon
Let's imagine you are trying to choose between two different versions of your website. You test them for several days, and the first one is winning by a landslide. You do the logical thing: you stop the test and proclaim the first version the winner. Only it's not the winner. The other one is. Check out this test done by the people at SumoMe:

The test involved the effectiveness of three different popups. While the second one sports some pretty good numbers, 82.97% to be exact, it is the third one that is the clear winner, with 99.95%, even though it was losing early on. That is why 95% confidence should always be your goal, because then there is only a 5% chance that your choice is wrong. That is when you can end the test.
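If you want a quick way to sanity-check a figure like that yourself, here is a minimal sketch (not SumoMe's tool, just a textbook two-proportion z-test with made-up numbers) that approximates the confidence that one variation really beats the other:

```python
# Rough sketch: approximate confidence that variation B beats variation A,
# using a two-proportion z-test. The numbers below are hypothetical.
from math import sqrt
from statistics import NormalDist

def confidence_b_beats_a(conv_a, visitors_a, conv_b, visitors_b):
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    se = sqrt(p_a * (1 - p_a) / visitors_a + p_b * (1 - p_b) / visitors_b)
    return NormalDist().cdf((p_b - p_a) / se)

# Keep the test running until this stays at or above 0.95.
print(round(confidence_b_beats_a(48, 1000, 70, 1000), 3))  # ~0.98
```

The point is not the exact formula; it's that the stopping rule should be a confidence threshold you set in advance, not whichever day one variation happens to be ahead.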
Mistake #2: A/B Testing Without a Hypothesis
A/B testing just to see if an idea will work, or testing because it's what your competition is doing, is not effective A/B testing. The foundation of testing is having a hypothesis. What is a hypothesis? It's when you have a pretty clear idea that something will or won't work, but you lack concrete evidence to support that theory. For example, a certain aspect of your audience's behavior can serve as a hypothesis, and by testing it, you can learn more about it.
So, how do you choose your hypothesis? If you are using Google Analytics, heatmap software, or anything similar, you can easily base your hypothesis on a relevant metric. Data gathered from polls and surveys can also help you form it.
Mistake #3: Disregarding Failed A/B Tests
There is a case study done by CXL which shows that you shouldn't discount A/B tests that fail to perform right away. In their case study, they ran six rounds of testing and ended up choosing a version of the website that did 80% better than the one they were using at the time.
During the first round, the new version of the website didn't perform much better than the old one, only 13%. The second round showed the opposite: the older version beat the new one by 21%. But after altering some variables and doing more tests, one variation performed 80% better than the other, so they went with that one.
Mistake #4: Running A/B Tests with Overlapping Traffic
Most people stumble upon the idea of running several A/B tests at the same time. Why not? It's a great way to save a few bucks and make testing less time-consuming. But when your tests involve overlapping traffic, you can be led to believe you have the right results when they are, in fact, wrong.
If you are really pressed to do this sort of A/B testing, at least make sure that the traffic is distributed equally between the different tests. But if you are able to, carry out A/B tests one by one, so that you wind up with the most precise results.
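One common way to keep that distribution even when tests do run in parallel is to bucket visitors deterministically, hashing a stable user ID together with a per-test salt so that each test splits traffic independently of the others. Here is a minimal sketch with hypothetical names, assuming you have such an ID available:

```python
# Hypothetical sketch: assign a visitor to an arm of each test independently
# by hashing the user ID together with the test name. Over many visitors,
# every combination of arms receives a roughly equal share of traffic.
import hashlib

def assign(user_id: str, test_name: str, arms=("A", "B")) -> str:
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]

visitor = "visitor-12345"
print(assign(visitor, "headline-test"))  # e.g. "A"
print(assign(visitor, "checkout-test"))  # independent of the first assignment
```

Because the assignment is a pure function of the ID and the test name, the same visitor always sees the same arm, and one test's split cannot drift because of the other.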
Mistake #5: Not Running A/B Tests Constantly
The reason any of us run tests is that we want better insight into our audience's habits, higher conversion rates, more revenue, or whatever else falls into our area of interest.
Testing should become something you do constantly, because only then can you really figure out how certain mechanisms work, as well as adapt to the changes in the online world. Only then will you be able to remain one step ahead of your audience, as well as your competition.
Mistake #6: A/B Testing without Enough Traffic or Conversions
We've already covered how you need a hypothesis in order for your test to make sense. However, you also need a sample which is statistically significant. In other words, whatever you are measuring, traffic or conversions, there has to be enough of it, because only then can you interpret the results correctly. Here is an example of another test done by CXL:
They A/B tested two variations. The second one, after just two days of testing, had a 0% chance of beating the original one. However, it needs to be said that the traffic was only around 100 visits for each. They continued testing, this time with significantly larger traffic:

The results were dramatically different. The second version now beat the original with 95% confidence, which is the sign that you can end the test.
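How much traffic is "enough" depends on your baseline conversion rate and the smallest lift you care about detecting. As a rough sketch (not CXL's method), the standard two-proportion sample-size formula at 95% confidence and 80% power looks like this; the inputs below are hypothetical:

```python
# Rough per-variation sample-size estimate using the standard
# two-proportion formula at 95% confidence and 80% power.
from math import ceil, sqrt
from statistics import NormalDist

def visitors_per_variation(baseline_rate, minimum_lift, alpha=0.05, power=0.80):
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p1 = baseline_rate
    p2 = baseline_rate * (1 + minimum_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Hypothetical inputs: 3% baseline conversion rate, hoping to detect a 20% lift.
print(visitors_per_variation(0.03, 0.20))  # ~14,000 visitors per variation
```

With numbers like these, it's easy to see why 100 visits per variation tells you almost nothing.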

Mistake #7: Changing Parameters during A/B Testing
Testing involves a wide variety of parameters, settings, metrics, goals, and variations, and you should stick to them throughout the testing process. There will be times when you get a great new idea and are tempted to change the parameters mid-test.
For instance, if you are testing two different pages for your website and you alter the traffic split between the two mid-test, it would not necessarily affect visitors who were already assigned, but the new visitors arriving under the new split would change the results, leading you to an erroneous conclusion.
Mistake #8: Too Many Variations
While more variations should, at least in theory, give you more accurate results, there are several problems with that approach. First of all, too many variations will prolong the test, which will only cost you time and money. Consider Google's example of 41 shades of blue.
In essence, they tested 41 shades of blue to determine which one was the most popular among users. The usual guideline of 95% confidence didn't apply here, since with that many variations the likelihood of getting at least one false positive climbed to roughly 90%.
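You can see where a figure like that comes from with a quick back-of-the-envelope calculation, assuming a 5% false-positive rate per comparison and treating the comparisons as independent:

```python
# Sketch: family-wise false-positive risk when many variations are each
# compared against the original at a 5% significance level.
comparisons = 40          # e.g. 40 shades compared against a control
per_test_alpha = 0.05
family_wise_error = 1 - (1 - per_test_alpha) ** comparisons
print(round(family_wise_error, 2))  # ~0.87, in the ballpark of the ~90% above
```

This is why large multivariate tests need either far stricter significance thresholds or far more traffic than a simple two-variation test.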
Mistake #9: Not Considering Validity Threats
So, you've done enough rounds of A/B testing and you've made sure you tested with enough traffic, which means that all that is left for you to do is to choose the winning variation. But that doesn't mean the results are valid and correct.
You still need to go over all of the statistical parameters relevant to your A/B test and determine whether any of them are producing results that are out of the ordinary. If they are, look into it first, before declaring the results accurate.
The Final Word
A/B testing is an invaluable tool if you want to make sure that your website is doing a good job. It does involve a lot of time, hard work, and some careful planning, but it definitely pays off, not just in financial terms but also in terms of knowledge.