SEOs are blessed with more tools and data than ever. We can measure, track, and report on so much of what we do, but are we using this to the fullest? I’d argue no. More than that, I’d suggest that we, as an industry, have not adopted an SEO testing mindset: one that furthers our understanding of the impact of changes and lets us truly improve and iterate.
A testing mindset is one that doesn’t assume a change will be positive or negative; rather, it looks to the data to explain the outcome. An SEO with a testing mindset doesn’t test once and assume the job is “done”. A person with a testing mindset expects to be proven wrong, prepares for that eventuality, and reacts appropriately when it happens.
But what does this all mean in reality? Read on!
SEO Truisms
First, there are some basic SEO truths you must know. These apply not just to testing, but to making (and monitoring) any kind of change in SEO.
- Every change you make has the potential to impact performance
- Not every change will have a meaningful or noticeable impact on performance
- Other factors are also constantly impacting performance—algorithm updates, competitor activity, and even the crawling & indexing of our own websites.
- We do not have a full understanding of the timing or magnitude of those external factors
Those last two points make predictions much less reliable. However, that doesn’t mean you shouldn’t be exploring and testing changes. In fact, it means you should be doing it more frequently and more rigorously.
You can make a strong hypothesis (more on that later), but the chance that an unrelated factor has an equal or greater (and opposite) effect at the same time is never zero. So even if your idea is good, a single test may not show it.
What are the odds of that happening? Well, relatively low, for sure, but that is not something that you can just ignore if you are serious about mastering SEO.
Being Clear on What It Is You’re Testing
In testing, size matters. The size and scale of the change will heavily impact your ability to detect any potential effect. That doesn’t mean small changes can’t give you significant results, but testing small elements is less likely to give you enough data to draw conclusions from.
Are you deploying a handful of new title tags or migrating domains? You can test for either, but how you monitor those two tests will differ profoundly.
If you are testing for risk mitigation, you want to gather as much knowledge as possible, so it is better to test those more sizable changes, for two reasons:
- The effects are easier to measure
- When you actually have to make those changes, they are much riskier
Just think about it: blindly migrating a domain without test data can severely damage your website’s performance.
Furthermore, small, highly isolated changes will sometimes be harder to control. If you were to change a title tag and notice the rankings you were tracking increased, would that be conclusive? Not likely.
But, does that mean that you shouldn’t test them? Not at all. You will just need to replicate the results multiple times to get accurate data.
A/B Testing (more below), if planned correctly, is the best method of controlling for some external factors that make testing data hard to interpret. But, if in doubt, change as few things as possible and then try to replicate the test results again and again.
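As an illustration of that planning step, here is a minimal sketch of one way to assign URLs to stable test/control buckets; the hashing approach, salt value, and 50/50 split are assumptions for illustration, not a prescribed method.

```python
import hashlib

def assign_bucket(url: str, salt: str = "title-test-2024") -> str:
    """Assign a URL to a stable test/control bucket.

    Hashing (rather than random sampling) means a URL always lands
    in the same bucket, even if the script is re-run. Changing the
    salt draws fresh buckets for a new test.
    """
    digest = hashlib.md5(f"{salt}:{url}".encode()).hexdigest()
    # Even hash values go to test, odd to control: a roughly 50/50 split.
    return "test" if int(digest, 16) % 2 == 0 else "control"

pages = [
    "https://example.com/category/shoes",
    "https://example.com/category/boots",
    "https://example.com/category/sandals",
    "https://example.com/category/trainers",
]

for url in pages:
    print(assign_bucket(url), url)
```

Stable bucketing matters because pages that hop between buckets mid-test contaminate both groups and make the results impossible to read.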
SEO Testing Methods
The barrier to adopting a testing mindset is low, very low. Even conducting sophisticated tests has never been easier.
Below are some of the key ways to focus on this.
- Tracking/annotating deployments against key metrics—the lowest bar to testing, and still something a lot of people aren’t doing. As a bare minimum, annotate your visibility data, Google Analytics, or a separate workflow spreadsheet to highlight when a change was made. The results for that day/week/month provide a benchmark you can refer back to in the future.
- Time-based testing—this doesn’t require any significant extra tooling or overhead if you already have Google Search Console or a web analytics package set up. You effectively compare the period before the change against the period after it (or run year-on-year comparisons). This doesn’t account well for externalities, but if post-test performance differs from a like-for-like period, that can teach you something (a minimal sketch follows this list).
- A/B (split) testing—A/B testing is the gold standard, but will likely need additional overhead or planning. In this scenario, you present Google with two different versions of a page template (across test/control buckets) and let it index and rank each version. Read more on the process here.
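For the time-based approach, the comparison can be as simple as aggregating clicks and impressions either side of the deployment date. Below is a minimal sketch assuming a Google Search Console CSV export with `date`, `clicks`, and `impressions` columns; the file name, column names, and deploy date are illustrative, not prescribed.

```python
import csv
from datetime import date

DEPLOY_DATE = date(2024, 6, 1)  # hypothetical date the change shipped

# Use equal-length windows either side of the deploy date for a
# fair like-for-like comparison.
before = {"clicks": 0, "impressions": 0}
after = {"clicks": 0, "impressions": 0}

with open("gsc_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Assumes ISO dates (YYYY-MM-DD) in the export.
        period = before if date.fromisoformat(row["date"]) < DEPLOY_DATE else after
        period["clicks"] += int(row["clicks"])
        period["impressions"] += int(row["impressions"])

for name, totals in (("before", before), ("after", after)):
    ctr = totals["clicks"] / totals["impressions"] if totals["impressions"] else 0.0
    print(f"{name}: {totals['clicks']:,} clicks, "
          f"{totals['impressions']:,} impressions, CTR {ctr:.2%}")
```

Remember the caveat above: a before/after difference only tells you something if nothing else (an algorithm update, seasonality, a competitor launch) plausibly explains it.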
Tracking Tests & Metrics to Report on
Obviously, we all want to track an uplift in revenue as a direct result of split testing, but making revenue your only test metric will cause problems.
You need a set of metrics, plus an understanding of how they affect each other and how the test affects them.
- Impressions—Whilst a partial vanity metric, impressions are a leading indicator of clicks. Even if the test didn’t drive a significant increase in clicks/traffic, it could be suggestive of an increase in rankings.
- Clicks—An increase in clicks is obviously a key outcome; for most tests, this is your most important metric to measure
- CTR (Click-through rate)—CTR can be useful, but needs to be heavily caveated. If you’re tracking SERP features then understanding the CTR changes can be key.
- Rankings—Rankings are another leading indicator of performance. If you see an increase in rankings (or in a visibility score derived from rankings), this is a positive sign. A mistake people often make is assuming rankings have a direct relationship with traffic. If you’re not on page one, they don’t, but that doesn’t mean the test hasn’t worked!
- (Organic) traffic—This is similar to (but not the same as!) clicks, and is likely one of the metrics the business will value most. If you can prove an increase here, that’s going to be the strongest endorsement of the test.
- (Organic) revenue—This is the prize of the metric set, but again, rankings and revenue are not as tightly linked as people like to think. If you’re driving clicks/traffic but conversions aren’t happening because stock is low (for example) or page speed is poor, you won’t see the increase in revenue the test deserves.
You do need to give a sense of proportionality/importance to these metrics. Some are clearly more useful to the business than others. However, the right weighting depends on your goals, so we can’t tell you exactly how to weight each individual metric.
What we can say is that if you only see an uplift in impressions as a result of a test, then the benefit is relatively low. But, if you can track that your sales went up, then you have something serious to celebrate.
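To make that proportionality concrete, here is a hedged sketch of a side-by-side report across the metrics above; the bucket totals are placeholder numbers, not real results.

```python
# Hypothetical aggregates for each bucket over the test window.
test = {"impressions": 120_000, "clicks": 4_200, "revenue": 9_800.0}
control = {"impressions": 118_000, "clicks": 3_900, "revenue": 9_500.0}

# Derive CTR (as a percentage) from the two leading metrics.
for bucket in (test, control):
    bucket["ctr_pct"] = 100 * bucket["clicks"] / bucket["impressions"]

print(f"{'metric':<12}{'test':>12}{'control':>12}{'change':>9}")
for metric in ("impressions", "clicks", "ctr_pct", "revenue"):
    change = (test[metric] - control[metric]) / control[metric]
    print(f"{metric:<12}{test[metric]:>12,.2f}"
          f"{control[metric]:>12,.2f}{change:>+9.1%}")
```

Reading the metrics together like this shows where any uplift actually comes from: in the placeholder numbers above, clicks rise faster than impressions, so CTR improved rather than just visibility.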
Creating a Testing Hypothesis
There are many ways of drawing up a hypothesis for testing, but filling in the blanks in the following statement is a great starting point:
If I [make this change] it will [positively/negatively] impact [this/these metrics] by [x%] on [y pages] because _________
This may feel like a lot to write out for each change, but doing so forces you to do the following (one way to capture it is sketched after this list):
- Know exactly what the change is and which pages it will impact
- State whether you expect the change to be beneficial or harmful—you can do this for changes you think will be negative(!)
- Plan the primary method of measurement up front
- Substantiate the expected impact—some additional research may be required to do so credibly
- Finally, articulate why you believe this will be the case
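One lightweight way to make the template stick is to record each hypothesis as structured data so none of the blanks get skipped. The structure below simply mirrors the statement above; the class and field names are illustrative, not a standard.

```python
from dataclasses import dataclass

@dataclass
class TestHypothesis:
    change: str            # exactly what is being changed
    direction: str         # "positively" or "negatively"
    metric: str            # primary metric to measure
    expected_uplift_pct: float
    pages: str             # which pages the change touches
    rationale: str         # why you believe this will happen

    def statement(self) -> str:
        """Render the hypothesis back into the fill-in-the-blanks form."""
        return (f"If I {self.change} it will {self.direction} impact "
                f"{self.metric} by {self.expected_uplift_pct}% on "
                f"{self.pages} because {self.rationale}.")

hypothesis = TestHypothesis(
    change="add the product category to title tags",
    direction="positively",
    metric="organic clicks",
    expected_uplift_pct=5.0,
    pages="all 200 category pages",
    rationale="titles will better match common search phrasing",
)
print(hypothesis.statement())
```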
You still need to collect and analyze the data to draw meaningful conclusions, but the process itself will make your actions more considered and the results easier to report/reflect on.
SEO Testing Lessons
Even with a testing mindset, SEO is complicated. On the face of it, testing offers answers to some of the “big” questions, but that doesn’t mean you can be complacent or lazy about what testing will actually tell you.
- Tests are time-consuming to run and the data can easily be inconclusive. So plan tests properly and get buy-in first.
- Testing to “settle an argument” is not a great use of anyone’s time and a positive result does not always get the attention you believe it deserves.
- A negative test can provide a lot of learnings
- The less traffic you have, the longer tests will take to run—in some cases ruling out the usefulness of the results (see the duration sketch after this list)
- If you do not receive a lot of non-brand traffic, the impact of “brand” activities may easily override any gains/losses made in testing.
- Seasonality needs to be factored in if you are not running split tests—i.e., if you are relying on time-based testing.
- A test gives you the result of making a given change, in a given environment at a given time. Do not assume these findings are portable to every other scenario out there and expect the same results.
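On the traffic point above, you can sanity-check how long a test would need before committing to it. Here is a rough sketch using the standard normal-approximation sample size for comparing two proportions; the baseline CTR, target uplift, and daily impressions are made-up inputs, and the output is a ballpark, not a guarantee.

```python
from statistics import NormalDist

def days_needed(baseline_ctr: float, relative_uplift: float,
                daily_impressions: int, alpha: float = 0.05,
                power: float = 0.8) -> float:
    """Rough days required per bucket to detect a CTR uplift."""
    p1 = baseline_ctr
    p2 = baseline_ctr * (1 + relative_uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    # Standard two-proportion sample-size formula.
    n = ((z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
         / (p2 - p1) ** 2)
    return n / daily_impressions

# e.g. a 3% baseline CTR, hoping to detect a 10% relative uplift,
# with 2,000 impressions per day landing in each bucket:
print(f"~{days_needed(0.03, 0.10, 2_000):.0f} days per bucket")
```

Halve the daily impressions and the duration roughly doubles, which is why low-traffic sites often struggle to get conclusive split-test results.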
So what are you waiting for? Draw up your hypothesis and start building a framework to test your latest SEO changes against. If you’re already seasoned at this, maybe it’s time to consider more advanced testing methods and to prepare the business case for larger, more meaningful tests.