There are no hard-and-fast rules when it comes to SEO. Each niche is different, and how Google understands intent shifts based on a number of factors. To win in search, SEOs need to stop following trends and start doing their own testing and research. This is where split-testing comes in.
Why statistical SEO split-testing is a must
The concept of A/B testing is nothing new to those in the digital marketing space. For years, tools like Hotjar, Google Optimize and Crazy Egg have been helping companies increase conversions. But all of these tools focus on what happens after a user lands on your site.
In SEO, our first objective is more organic traffic and visibility. We are trying to influence behavior within the search results rather than on our own website pages. This presents obstacles that traditional A/B testing tools can't help us with.
As search becomes more semantic and contextual, we must change our approach. What works in one niche may have no impact on another. If we want to truly find those winning strategies, we must run our own experiments. If you want to learn more about how Semrush is helping solve this complex problem, check out this article.
How does statistical split-testing affect SEO?
Running your own experiments will unleash your creativity like nothing else. Instead of doing the same old tasks and wondering why you’re not seeing larger gains, you’ll become an asset that drives real-world results. Here’s how.
Split-testing allows SEOs to understand what the user and Google expect for a particular topic
We see the concept of user intent everywhere today. It's something that Google itself has been obsessed with for quite some time. The question for many in the world of search marketing is, "How do we uncover intent?"
When running split tests, you can uncover intent by testing title tags to see what the user and Google prefer. Or you can test whether restructuring a page improves dwell time and engagement, and what impact that has on search. These two simple ideas can unlock deeper insight into what the user expects and what Google is looking to deliver within the SERPs.
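The mechanics behind a test like this are straightforward: take a set of similar pages and randomly split them into a control group (unchanged) and a variant group (with the new title tags). Randomization keeps the two groups comparable, so any divergence in organic clicks can be attributed to the change itself. Here is a minimal sketch of that assignment step; the URLs and function name are purely illustrative, not how any particular tool works internally.

```python
import random

def assign_groups(urls, seed=42):
    """Randomly split a set of similar pages into control and variant groups.

    A fixed seed makes the assignment reproducible, so you can trace which
    group any page belonged to for the whole duration of the test.
    """
    rng = random.Random(seed)
    shuffled = urls[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return {"control": shuffled[:mid], "variant": shuffled[mid:]}

# Hypothetical template pages that all share the same layout
groups = assign_groups([f"/product/{i}" for i in range(100)])
print(len(groups["control"]), len(groups["variant"]))  # 50 50
```

Only the pages in the variant group receive the change; the control group absorbs seasonality, algorithm updates, and other noise that would otherwise be mistaken for a result.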
Split-testing gives you real-world feedback
Reading research done by others in our field is great, but real learning happens when you start practicing yourself. Split-testing allows you to get real-world feedback and helps you better understand what does and does not work.
Split-testing helps direct and align your strategy
Too many people focus on tactics instead of outcomes. With so much competition in many industries today, we must have customized strategies based on outcomes. Split-testing helps you focus only on the things that move you toward your intended goals. Whether you are in-house or at an agency, being able to back your strategy with real data will increase trust from outside stakeholders.
Understanding SEO A/B testing results
Up to this point, we have discussed the importance of SEO split-testing and how it impacts your SEO work. But what do you do with the actual results? Most people believe their tests are only successful when their hypothesis is proven correct. This isn’t a great place to start from.
When testing a hypothesis, we must take an unbiased scientific approach. That is the only way we can confidently and accurately analyze our results.
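Part of that unbiased approach is asking whether an observed lift is real or just noise. One common way to do this (a hedged sketch, not a description of any specific tool's internals) is a two-proportion z-test on clicks per impression; all of the numbers below are hypothetical.

```python
import math

def two_proportion_z(clicks_a, impressions_a, clicks_b, impressions_b):
    """Two-proportion z-test: is the variant CTR significantly
    different from the control CTR, or could chance explain the gap?"""
    p_a = clicks_a / impressions_a
    p_b = clicks_b / impressions_b
    # Pooled proportion under the null hypothesis (no real difference)
    p_pool = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = math.sqrt(p_pool * (1 - p_pool)
                   * (1 / impressions_a + 1 / impressions_b))
    return (p_b - p_a) / se

# Hypothetical numbers: control pages vs. pages with the new change
z = two_proportion_z(clicks_a=1200, impressions_a=40000,   # control
                     clicks_b=1350, impressions_b=40000)   # variant
print(f"z = {z:.2f}")  # |z| > 1.96 is roughly significant at the 95% level
```

If |z| clears the significance threshold, the lift is unlikely to be random variation; if it doesn't, you hold your conclusion, however much you wanted the hypothesis to be true.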
When it comes to split-testing, all tests, positive and negative, are successful tests.
The goal of SEO split-testing is to understand what works and what doesn't. Plain and simple. Unfortunately, many SEO strategies are built on biases. We trust what we “know” to be true instead of trusting what the results say. This is hard, I get it, but when we push our ego aside, we unleash the ability to learn something new.
Positive and negative results are equally important
We recently ran a number of split tests for a client of ours to see what adjustments we could make to increase the visibility of a user-generated content section of the site. They have millions of users that create tons of content that is indexed by the search engines. While these pages generate traffic, we have less control over the content due to it being user-generated. Thankfully, we were able to leverage SplitSignal to run some tests.
Test #1 — Header Optimization
In the first test, we decided to change the way the headers were tagged on the page. The developers were using the H-tags primarily for CSS styling, but they were a mess and didn’t provide the hierarchical structure that headers are supposed to be used for.
We hypothesized that by fixing this structure and changing the header tags, we would see improved search results.
The results proved our hypothesis correct: the change led to improved crawling and ranking of these pages.
Test #2 — Content Restructuring
With the success of Test #1, we wanted to see what else we could do with these pages to improve their results. This time, we decided to restructure the layout of the page and shift a section of content.
The content box we moved had a call to action in it, and we hypothesized that moving this box would increase engagement on the page and indirectly improve the performance in search.
The second test was clearly negative. So does that mean it was a failure? Not at all. Now we know exactly what not to do on these pages. In fact, in many ways, this test was more successful than the positive result from Test #1.
Without split-testing, you could end up taking big risks without understanding the consequences.
Putting results into action
The end goal of any test should be action. We don't run these simply for curiosity's sake. We also want to know what works. I am a huge fan of the Pareto Principle, the rule of thumb that roughly 80% of consequences come from 20% of the causes. Testing allows you to apply this to your SEO work: you can uncover the 20% that's generating most of your results so that you can focus more on that.
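Checking whether your own data follows that distribution is simple: sort pages by clicks and measure what share of the total the top slice accounts for. The sketch below uses made-up click counts and a hypothetical helper name purely for illustration.

```python
def pareto_share(page_clicks, top_fraction=0.2):
    """Return the share of total clicks driven by the top X% of pages."""
    clicks = sorted(page_clicks.values(), reverse=True)
    top_n = max(1, int(len(clicks) * top_fraction))
    return sum(clicks[:top_n]) / sum(clicks)

# Hypothetical click counts per page (a long-tail-shaped distribution)
pages = {f"/page-{i}": 1000 // (i + 1) for i in range(50)}
share = pareto_share(pages)
print(f"Top 20% of pages drive {share:.0%} of clicks")
```

If the top 20% of pages turn out to drive the bulk of your clicks, those are the pages where the next round of split tests will pay off fastest.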
The other benefit is that you can stop doing what isn’t working. How many times have you done tasks that you felt weren't worth the effort but spent all day doing them anyway? Testing helps you prove what doesn’t work so that you can focus only on tasks that deliver results. Time is the only resource that is truly limited.
The only way to continually improve is to make testing part of your culture. While it may take a little more time upfront, the results on the backend will be massive. As search continues to evolve, you'll have the skills and data to stay ahead.
Ready to learn from your results, negative or positive? Request a pilot with SplitSignal.