In every SEO kick-off call I’ve ever had, we always have a slide that covers “SEO Expectations.” This slide explains, among other things, that while the two are related, improving search engine rankings is secondary to increasing the quantity and quality of visitors to the website.
And yet, a lot of clients seemingly glaze over this point, and within a few weeks of us uploading keywords into our rankings platform, they start to obsess over rankings. They’ll nitpick the month-over-month volatility of a small subset of keywords and attribute a drop in rankings to poor account management.
Meanwhile, we try to reiterate that these rankings shouldn’t be taken at face value, but should instead serve as a single data point in a much larger picture of organic performance. Given the volume of updates made to search engine algorithms, and personalization factors that drill down to neighborhood-level geo-specificity, volatility isn’t just normal; it’s expected.
In this post, we’ll describe the difference between rankings pulled from tools and what users actually see when they search. Then we’ll go over results of research around what, in real time, rankings actually look like to the end user — and what that means for SEOs looking to use rankings as a performance indicator.
Rankings vs. Reality
First off, we have to understand a key difference between rankings pulled from rankings tools and what the end users see when they search.
Rankings tools, to steer clear of geographical influences that may taint the integrity of the data, automatically set the location at the country level. So for every query we want to track for US-based clients, rankings are pulled with the location set to the United States. It is, of course, possible to manually set the location for each query. But how do you pick locations to search from in a way that is representative of your overall data set?
And at what point do rankings across hundreds of metros and cities become too noisy to analyze accurately? What costs do we incur by trying to correlate traffic trends with city/metro-level rankings data? Are we willing to sacrifice time that would otherwise be spent on big-picture initiatives trying to make sense of data that may or may not be statistically significant?
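To make that noise concrete, here is a minimal sketch of how you might measure rank volatility per metro before deciding whether city-level data is even worth analyzing. The metro names and rank series below are invented for illustration:

```python
from statistics import pstdev

# Hypothetical rank positions for one keyword, tracked weekly per metro.
ranks_by_metro = {
    "seattle":    [3, 4, 3, 7, 5],
    "birmingham": [12, 11, 15, 12, 14],
    "lubbock":    [28, 41, 33, 49, 37],
}

# Volatility here is just the standard deviation of observed positions;
# a high value means metro-level trends for this keyword are hard to read.
volatility = {metro: pstdev(positions)
              for metro, positions in ranks_by_metro.items()}

for metro, sd in sorted(volatility.items(), key=lambda kv: kv[1], reverse=True):
    mean_rank = sum(ranks_by_metro[metro]) / len(ranks_by_metro[metro])
    print(f"{metro}: mean rank {mean_rank:.1f}, volatility {sd:.1f}")
```

Even a quick check like this, run across hundreds of metros, shows where swings are too wide to support month-over-month conclusions.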
In practice, search engines (at least Google and Bing) automatically set your search location for you based on your IP address. This is the default; to opt out, users have to manually change it in their browser settings. Every search query is filtered through your geographic location, and the search engine serves results weighted by how relevant your location is to the query. So the data you’re seeing in your rankings tool may not always be indicative of what your end user sees.
With that in mind, it’s really quite difficult to say for sure that the standard rankings data you’re getting is 100% accurate. But how much variance is there? And what queries will have dramatically different SERPs? This is what we endeavored to find.
For those uninterested in the nitty-gritty, here are the trends we found:
- Searches for general products usually don’t change much from location to location. From something as broad as “soccer cleats” to something as specific as “Fender Classic Series '72 Telecaster Electric Guitar,” the SERPs don't really vary, aside from occasionally looping in local business listings where appropriate.
- As expected, searches for local services will give vastly different results. Someone searching for house cleaning services in Seattle will obviously not get the same results as someone in Birmingham.
- It seems that the happy medium between the unvarying product SERPs and the all-varying local service SERPs occurs with queries where the searcher's geographic region would dramatically impact their intent.
We tried to vary the types of keywords we used, in addition to frequently changing our location, to get as full a picture as humanly possible. Locations also ranged in size, so we looked at both dense urban cities and smaller rural communities. We understand that the small size of our keyword pool keeps this research from being statistically significant; instead, it is an attempt to identify trends. We are trying to show that for quite a large number of keyword types, even some you wouldn’t expect, there is huge variance both in rankings for your domain and in the SERP layout.
Lastly, because of constant search engine volatility, we acknowledge that you may not be able to reproduce the exact Google SERPs we found, but we believe the overarching trends still hold true.
We tested approximately 300 different types of keywords in Google, ranging from head terms to long-tail queries, in clean browsers free of search history or cookies. Each keyword was searched for in at least two different locations; often, it was replicated in as many as four or five cities.
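One simple way to quantify how different two locations' result pages are, the kind of comparison behind the examples below, is an overlap score between the ranked URL sets. The domains here are made up for illustration:

```python
def serp_overlap(serp_a, serp_b):
    """Share of URLs two result pages have in common (Jaccard index, 0 to 1)."""
    a, b = set(serp_a), set(serp_b)
    return len(a & b) / len(a | b)

# Hypothetical top-5 organic results for "guitar shop" in two locations.
nashville = ["gruhn.com", "guitarcenter.com", "cartervintage.com",
             "rumbleseatmusic.com", "yelp.com"]
lassen = ["guitarcenter.com", "sweetwater.com", "musiciansfriend.com",
          "reverb.com", "amazon.com"]

print(f"overlap: {serp_overlap(nashville, lassen):.2f}")
```

A score near 1 means the two cities see essentially the same SERP; a score near 0, as in this sketch, means location is reshaping the page. (A position-weighted metric would be stricter, but raw overlap is enough to spot the trends discussed below.)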
In the interest of brevity, we narrowed down examples to just five keywords that had the most salient differences in SERPs and rankings.
Plastic Surgery searched in Bellevue, Washington:
The top SERP items are news stories about plastic surgery and the Jenner family. Following these three news stories are two listings for the American Society of Plastic Surgeons and Dr. Richard Rand, a local plastic surgeon. After these two results, we have actual business listings for plastic surgeons in the area.
Since the business listings are so far down the SERP, Google appears to treat the query “plastic surgery” as more of an informational search than a transactional one in the Bellevue area.
Plastic Surgery in Manhattan, New York:
This SERP is completely inverted from Bellevue’s, starting with the business listings, then showing two organic listings, before going into the same news stories we saw in Bellevue. Following the news stories, there are additional organic results for local New York-based plastic surgeons.
In Manhattan, Google treats this query as much more of a service search than an informational one by leading the SERP with business listings. Perhaps a not-so-subtle indication that Google thinks Manhattanites are more interested in plastic surgery procedures than their Bellevue counterparts?
Guitar Shop in Nashville, Tennessee:
Topping the Nashville SERP are local business listings for physical guitar shops in Nashville, followed by three non-geographic-dependent organic listings. The rest of the page consists of organic results for local guitar shops in Nashville.
This is a likely indication that Google believes Nashville users see “guitar shop” as a query to locate a local retailer where they can try out guitars rather than an item to be purchased online.
Guitar Shop in Lassen, California:
Lassen is a sparsely populated area without much of a local music scene. Even though this query was used to find local businesses in Nashville, the only results for Lassen are online shopping options because of the lack of physical guitar shops. Seems as though an up-and-coming guitarist from Lassen may have to either leave Lassen to find his or her instrument or resort to hours of online shopping.
Open Water Diver Certification in San Diego, California:
The results are all local diving schools offering courses for PADI open water diver certifications.
With a query this far down the diving interest funnel, users don't just want diving classes; they want a specific certification. The SERP accommodates that interest by presenting local listings that will likely lead to a transaction.
Open Water Diver Certification in Lubbock, Texas:
In landlocked Lubbock, Texas, this query won't take you very far. Despite a very specific long-tail search, indicative of someone who knows exactly what kind of diving certification they want, a region with no ocean access simply can't accommodate diving.
The SERP is made up entirely of Texas Dive Center and Southwest Aqua Sports (the only two diving businesses in Lubbock, neither of which offers open water certifications), plus generic scuba diving sites that offer FAQ information. Rather than providing precise results for a well-informed user to find the exact certification they want, these generic results are aimed at someone casually wondering about diving courses.
Does this speak to Google’s belief that there’s a general lack of interest in diving certification in Lubbock, or do the city’s geographic limitations force users into generic results? It could very easily be some combination of both!
How to Become a Notary Public in Detroit, Michigan:
The top three results are government sites explaining how to become a notary in the state of Michigan. The rest of the page consists of either non-geo-specific information sites or information about nearby states' notary requirements (Ohio and Pennsylvania).
How to Become a Notary Public in San Francisco, California:
Four of the top five results are sites with info about becoming a notary in California specifically. The remaining results are non-geo-specific pages about becoming a notary.
Since requirements differ by state, the results will vary depending on which state you search from. Therefore, ranking for this query strongly depends on having geographically focused content. Thematically, however, the SERPs are very similar in the types of pages that rank, even if the websites themselves differ from region to region.
Screenwriting Internships in Los Angeles, California:
As the undisputed home of the film industry, Los Angeles' SERP has a mix of local company-specific internship opportunities (Paramount, NBC, Warner Brothers) as well as links to job engines (summerinternships.com, internships.com, simplyhired.com). This SERP is set up perfectly for someone who's in the right area for these opportunities.
Screenwriting Internships in New York, New York:
Even though New York also has a big film scene, its SERP isn't as fine-tuned as the Los Angeles one. We see more links to job engines (indeed.com, entertainmentcareers.net, internmatch.com, etc.) instead of local companies.
At the bottom of the page, where Los Angeles had Warner Brothers and NBC company sites, New York actually has questions about internships posted on Reddit and aspiringtvwriter.com. Even though interest is high in New York, its opportunity doesn't match that of LA. Rather than getting links to companies nearby offering opportunities like Paramount, New York searchers are getting links on how to find those opportunities; in other words, a query for a query.
Are Rankings Obsolete?
No. But they don’t have a linear relationship with performance, either.
Rankings have always been a secondary KPI and should remain so, especially as SERPs continue to diversify across markets. What looks stable today may not stay that way as search engine algorithms get better at parsing intent from keywords, which may mean that even product-based queries (searching for “boots” in January may turn up different results in Minneapolis than in L.A.) start showing more and more variance.
To be sure, you can get truly refined keyword data today: you can say with certainty that X cities/metros are performing better while Y cities/metros are performing worse. SEOs can try to correlate that with traffic trends by city/metro to make sense of the rankings data; but that takes a ton of effort, and to what end?
Knowing that keyword-level data is all but gone thanks to Not Provided, and that rankings information is fairly unreliable and sometimes just inaccurate, how do we proceed? Rather than revering SERP rankings as an indicator and predictor of SEO success, we suggest (as we do for our clients) a focus on traffic and conversion rate, with an eye toward breaking out landing pages by conversion potential. Sexy in QBRs? Maybe not, but increased revenue usually wins board meetings.
Have you shifted your focus to traffic and conversions? Let us know in the comments!