Voice Search Study: Factors Influencing Search Engine Rankings in 2019

Olga Andrienko

Jun 05, 2019 · 11 min read

To say voice search is the new buzz term in SEO would be somewhat of an understatement. It almost seems as if every digital marketer has it on their radar, with the industry as a whole still trying to formulate how best to exploit this fledgling technology. One thing that is indisputable is that voice search is here to stay, given that there are reported to be 118 million smart speakers now operating in US homes.

Today, an estimated two in five adults use voice search at least once per day, and many experts predict that by 2020 half of all searches will have shifted from the keyboard to the microphone.

Google has revealed that 20% of searches through the Android Google App are now voice searches, and the company recently announced that the new version of Google Assistant (due to be released later this year) will be 10 times faster than the current one. Combine that with voice recognition now reaching 95% accuracy, and it only makes sense that consumers are moving towards the technology more and more.

Many experts have made predictions on how the market will develop over the coming years, with various percentages wowing us with the millions of potential dollars it could all be worth in the future. And while we all digest these predictions on the future market value of voice search as a whole, one thing is for sure — it can’t be ignored by those working in search marketing. The search landscape is shifting whether we like it or not, which means marketers need to ensure they are keeping fully abreast of how this change will impact the performance of their campaigns.

Voice Search in 2019: Our Study

With voice search so hotly discussed and debated this year, we decided to produce a study of our own.

Last year Backlinko produced an in-depth study into the ranking factors behind voice search, with some interesting findings returned. That inspired us to look into this further and understand which ranking factors are continuing to be key, as well as exploring exactly how voice search is continuing to evolve.

Our comprehensive study was led with two explicit objectives:

  1. To understand the parameters that Google Assistant uses to select answers to voice search queries.

  2. To compare and understand differences in answers obtained from different devices.

We made it our mission to find out the most essential ranking factors behind voice search as well as uncovering what influences Google Assistant to choose one answer over the rest of the results in SERPs.

Methodology

Our findings are the product of an in-depth analysis of over 50,000 questions asked to three devices combined. Using queries pulled from SEMrush’s API (as well as a series of automated voice queries), we recorded the SERPs from each query and then analyzed several different factors including the readability, page speed, number of backlinks, and SERP features (amongst others) to determine which factors are the most influential when it comes to ranking for voice search queries.
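For readers who want to replicate part of this analysis, the sketch below shows one simple way to check which organic position a voice answer's source URL occupies in a recorded SERP. It is a minimal illustration rather than the exact tooling used in the study; the URLs and the normalization rules are assumptions.

```python
from urllib.parse import urlsplit

def normalize(url):
    """Strip scheme, leading 'www.' and trailing slashes so minor URL variants still match."""
    parts = urlsplit(url)
    host = parts.netloc.lower()
    if host.startswith("www."):
        host = host[len("www."):]
    return host + parts.path.rstrip("/")

def answer_position(answer_url, organic_results):
    """Return the 1-based organic position whose URL matches the voice answer's
    source URL, or None if the answer falls outside the recorded results."""
    target = normalize(answer_url)
    for position, result_url in enumerate(organic_results, start=1):
        if normalize(result_url) == target:
            return position
    return None

# Hypothetical SERP for one query; here the answer came from the page ranked third.
serp = [
    "https://example.com/portugal/lisbon",
    "https://travel.example.org/guides/porto",
    "https://www.example.net/portugal-sights/",
]
print(answer_position("https://example.net/portugal-sights", serp))  # -> 3
```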

The same process and analysis were then carried out on three separate devices (Google Home, Google Home Mini and the Xiaomi Redmi 6) using Google Assistant, with all three set to the same location so the results could be compared directly.


You can view full details on the methods behind the study in our Methodology report.

The Key Findings

There were several key findings that were either consistent across the three devices, or that clearly defined the differences between using voice search through a speaker and using voice search through an Android smartphone.

Below are the key findings from our research:

  1. Close to 80% of the answers returned were from the top three organic results (for Android Phones, 72%)

  2. 70% of all answers returned from voice searches occupied a SERP feature (with 60% of those returning a Featured Snippet result)
  3. When analyzing backlinks, Page Score and Trust Score were slightly higher for answers’ URLs regardless of the device.
  4. Backlink anchors and keywords within a title matching the voice search query are present in over half of answer URLs for Google Home and Home Mini.
  5. Text length of the answers returned was nearly the same for every device (around 41 words on average).
  6. Text complexity needs to be simple and understandable for the average reader (ranking around 8 on the Flesch-Kincaid Grade scale).
  7. Pagespeed is very important for all devices — for a majority of questions, the answer chosen by Google loads faster than the average page speed for all other results in the same SERP.
  8. Well-linked pages (internally and externally) are favored within Google Home and Home Mini searches.
  9. Over a third of the answers do not use schema. Where schema is used, different types appear, with Article and Organization the most popular, though each at low percentages. Among non-answers, schema use is more prominent, but still no single type dominates.
  10. HTTPS and URL depth seem to be irrelevant for Google Assistant's selection (because there was no tangible variance between answers and non-answers).

An In-Depth Review Of Voice Search Findings

One thing we discovered almost immediately was that 97% of answers provided by Google Assistant are results that rank in the top 10 organic results. Therefore an existing first-page ranking is nearly a prerequisite for ranking for voice search queries.

We split our findings into seven key areas:

Average Word Count

Once all the questions were gathered (both manually and by using SEMrush’s API), they were constructed and curated using three separate methods: common questions such as “What should I see in Portugal?”, questions created from lists, and general questions built around popular keywords.

We found that the average word count of an answer from voice search was 41.4, with similar word counts reported across all three devices:

SEMrush voice search study: average word count of voice search answers by device.

  • The Android smartphone had the highest average word count at 43 words. Perhaps the addition of a screen slightly increased this number, given that users can read the text while also listening to the audio provided by Google Assistant.

  • Google Home and Google Home Mini voice search results have an average word count of 41.4 and 42 respectively, suggesting that while the presence of a screen may increase word count marginally, it isn’t a hugely influential factor in producing results across the board.

Rankings

The Android device delivered 93% of answers from the first page of organic results. However, that is still lower than Google Home and its Mini counterpart, both of which delivered 98% of answers from the first page (with over 40% being position one in Google).

| Position | Google Home Answers [%] | Google Home Mini Answers [%] | Android Phone Answers [%] | Overall [%] |
|---|---|---|---|---|
| 1 | 41.1 | 40.4 | 34.1 | 39.9 |
| 2 | 23.6 | 23.6 | 22.7 | 23.5 |
| 3 | 13.7 | 13.6 | 14.9 | 13.8 |
| 4 | 8.4 | 8.7 | 9.7 | 8.6 |
| 5 | 5.6 | 5.5 | 6.6 | 5.6 |
| 6 | 2.8 | 2.9 | 4.2 | 3.1 |
| 7 | 1.8 | 2.1 | 4.1 | 2.3 |
| 8 | 1.5 | 1.5 | 1.3 | 1.5 |
| 9 | 0.9 | 1.1 | 1.7 | 1.1 |
| 10 | 0.6 | 0.6 | 0.7 | 0.6 |
| Answer in top 10 organic results [%] | 98.1 | 97.9 | 92.7 | 97.4 |

Overall across all three devices, around a quarter of answers were ranked in position two, while on average only 14.2% of results resided outside of the top four positions.

SERP Features

Alongside 78% of voice search results ranking in the top three, the majority of queries returned an answer occupying a SERP feature result. In total, 68.5% of answers came from a SERP feature, with Featured Snippets being the most commonly found in Google Home and Google Home Mini, in particular.

Over 60% of results from Google Home ranked as Featured Snippets, a significantly higher share than on Android (41%).

In a number of cases, we found that the home speaker devices (Google Home and Home Mini) returned answers from the Featured Snippet position, yet when the same queries were run on the Android device, the answers produced were not taken from a SERP feature. Example queries included “Which country is the largest producer of Apple?” and “What should I see in Portugal?”.

SEMrush voice search study: 68.5% of queries returned an answer occupying a SERP feature. Featured Snippets were most common on Google Home (61.2%) and Google Home Mini (56.3%), while the share on Android was significantly lower (40.9%).

Interestingly, while the share of Featured Snippet results on Android is around 20 percentage points lower than on the Google Home devices, there was an increase in the number of results occupying a People Also Ask feature.

Around one in five Android voice search queries with SERP features returned answers ranking in a People Also Ask position, while just over one in 10 of searches made via Google Home and Home Mini produce the same result.

There were also many queries where Google Home and Home Mini delivered a Featured Snippet result, but a People Also Ask result was delivered on the Android device; this was the case for a number of searches revolving around “which country” questions.

In terms of other SERP features, they were almost non-existent within voice search answers, corroborating the fact that voice search answers come mainly from high-ranking organic results in the Featured Snippet position.

Readability

Sometimes overlooked in SEO is the readability score of the page you are trying to rank. Of course, you can cram a page with target keywords, but what search engines are looking for is well structured, well-written content that ultimately matches the intent of the query.

Within our study, we used a range of different metrics and tools to analyze the readability of results and found that, in general, voice search answers are simple to understand, with the average 15-year-old able to comprehend the answer returned.

Flesch-Kincaid Grade Level

We ran all answers through the Dale-Chall readability test as well as the Flesch-Kincaid Grade system; in the case of the latter, the scores suggest that around 80% of Americans would be able to read and understand the answers returned.

This trend proved consistent across the board, with the only noticeable difference between the three devices being the fact that Android results generally had a lower difficulty of words, perhaps due to answers being both visual and audible.
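The study does not name the exact readability tooling, but grades like those reported in the table below can be reproduced with the open-source textstat Python package; the sample answer text here is invented purely for illustration.

```python
# pip install textstat
import textstat

# Hypothetical voice search answer, roughly the length reported in the study (~41 words).
answer = (
    "Birds live in many places. Most wild birds build nests in trees, "
    "bushes or on the ground, while some nest on cliffs or in burrows."
)

# Scores comparable to those in the readability table.
print("Words:               ", len(answer.split()))
print("Flesch-Kincaid grade:", textstat.flesch_kincaid_grade(answer))
print("Flesch reading ease: ", textstat.flesch_reading_ease(answer))
print("Dale-Chall score:    ", textstat.dale_chall_readability_score(answer))
print("Gunning fog:         ", textstat.gunning_fog(answer))
print("Difficult words:     ", textstat.difficult_words(answer))
```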

| Metric | Google Home | Google Home Mini | Android Phone |
|---|---|---|---|
| Automated readability index | 10.4 | 10.3 | 11.0 |
| Coleman-Liau index | 10.3 | 10.3 | 11.0 |
| Dale-Chall readability score | 8.1 | 8.1 | 8.5 |
| Difficult words | 9.3 | 9.2 | 8.4 |
| Flesch-Kincaid grade | 8.2 | 8.1 | 8.8 |
| Flesch reading ease | 62.2 | 62.6 | 56.0 |
| Gunning fog | 15.4 | 15.2 | 16.0 |
| Linsear write formula | 7.7 | 7.8 | 8.1 |
| SMOG index | 6.0 | 6.1 | 5.5 |
| Text standard | 5.4 | 5.4 | 4.8 |

Page Speed

Google has long signaled that page speed is one of the key factors within its ranking algorithm and that pages with faster loading times have a distinct advantage over competing pages that load less quickly.

Of course, a wide range of queries will have a wide range of loading times, given that different types of content could be returned (videos, images, text, etc.). However, all answers across all devices were consistent in having quicker page load time compared to the non-answers in each SERP (on average).

Below you will find the percentage of voice search answers that perform more quickly than the remainder of the top 10 in the SERP for specific key metrics.

| Metric | Google Home [%] | Google Home Mini [%] | Android Phone [%] | Overall [%] |
|---|---|---|---|---|
| Estimated Input Latency | 70.0 | 69.6 | 64.8 | 69.3 |
| Observed First Paint | 68.5 | 68.8 | 62.2 | 68.0 |
| Observed Load | 62.9 | 64.4 | 58.6 | 63.2 |
| Speed Index | 60.2 | 61.0 | 54.3 | 60.0 |
| Time to Interactive | 60.3 | 60.9 | 55.1 | 60.0 |
| Total Byte Time | 62.4 | 63.8 | 57.9 | 62.6 |
| Time to First Byte | 64.5 | 66.1 | 59.2 | 64.7 |

Across every metric, the lion’s share of voice search answers performs better than the average of the top 10 non-answers, with Estimated Input Latency particularly noticeable at 70%.

First Paint score also appears as a key metric within voice search, with quicker primary content load speeds more likely to earn voice search recognition as ultimately it will relay the information back to the user more quickly.

Time-to-first-byte also appears to be one of the most critical influences with the speed index metric showing less prominent results, but still providing a strong indication that a faster page is beneficial.
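As a rough illustration (not the lab-based measurement setup behind the figures above, which relies on Lighthouse-style metrics such as First Paint and Speed Index that require a real browser), time-to-first-byte can be approximated from Python with the requests library; the URLs below are placeholders.

```python
# pip install requests
import time
import requests

def rough_ttfb(url):
    """Very rough time-to-first-byte: elapsed time from sending the request
    until the first response byte arrives, in milliseconds."""
    start = time.perf_counter()
    with requests.get(url, stream=True, timeout=10) as response:
        next(response.iter_content(chunk_size=1), b"")  # wait for the first byte
        return (time.perf_counter() - start) * 1000

answer_url = "https://example.com/fast-answer"          # hypothetical answer page
competitor_urls = ["https://example.org/slow-result"]   # hypothetical non-answers

answer_ms = rough_ttfb(answer_url)
average_ms = sum(rough_ttfb(u) for u in competitor_urls) / len(competitor_urls)
print(f"Answer: {answer_ms:.0f} ms, non-answers (avg): {average_ms:.0f} ms")
```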

Below is a table representing the queries with the greatest disparity between answers and non-answers.

| Device | Query | Speed of Answer (ms) | Average Speed of Non-Answers (ms) | Answer is X Times Faster |
|---|---|---|---|---|
| Google Home/Mini | Where does a bird live | 207 | 3059.2 | 14.8 |
| Google Home/Mini | What are the rules of horse ball | 260 | 3435.5 | 13.2 |
| Android | What is the typical size of a monitor | 230 | 2267.8 | 9.9 |
| Google Home/Mini | When was Kyoto founded | 227 | 2389.8 | 8.6 |
| Android | Best recipes from New Zealand | 701 | 4478.0 | 6.4 |
| Android | What do the colours on the flag of Zambia mean | 317 | 1819.3 | 5.7 |

As you can see from the above data, pagespeed performance is one of the key points to address when optimizing for voice search. In some cases, voice search results were over 10 times faster than the average of non-answers.

From the results, the importance of pagespeed does seem to be slightly more relevant to Google Home and Home Mini in comparison to Android, which is most likely because the latter is still required to load visual page elements alongside voice search results.

Backlinks

Our backlink analysis of voice search answers provides one of the most interesting contrasts across the three devices, with keywords in backlink anchors and page titles much more prevalent in Google Home and Mini results than in Android results.

Over 50% of answers with Google Home and Mini had backlinks with an anchor that appeared in the question, compared to less than 45% in the remainder of the top 10 results in the SERP. However, with Android, that figure is below the remaining average, signifying a lack of relevance in the case of smartphone devices.

This was also the case with regard to keywords from the query appearing in the title of an answer’s page. A significantly higher percentage of keywords were found in Google Home and Mini answers compared to the rest of the SERP. In the case of Android, the findings were again inconclusive.
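A quick way to approximate this “keyword in title” check is to measure the overlap between the meaningful terms of the query and the words of the page title. The sketch below is a simplification of whatever matching the study actually used, and the stopword list is an assumption.

```python
import re

# Hypothetical stopword list; a real check would use a fuller set.
STOPWORDS = {"what", "which", "is", "the", "of", "a", "in", "i", "should", "do", "are"}

def query_terms_in_title(query, title):
    """Share of meaningful query terms that also appear in the page title."""
    terms = {w for w in re.findall(r"[a-z]+", query.lower()) if w not in STOPWORDS}
    title_words = set(re.findall(r"[a-z]+", title.lower()))
    return len(terms & title_words) / len(terms) if terms else 0.0

print(query_terms_in_title(
    "What should I see in Portugal?",
    "25 Best Things to See in Portugal"))  # -> 1.0 ('see' and 'portugal' both match)
```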

Consistent across all devices was the average number of image backlinks. The average number of image links in answers produced by Google Assistant is comparatively low, so there is clearly a connection here to the pagespeed and loading times of a page.

As you might expect from the above, page score and trust score were found to be favored by answers returned from Google Assistant, although answers from all three only offer a slight increase on the rest of the top 10 average of non-answers.

Schema, HTTPS & URL Depth

Our Schema findings were difficult to draw a definitive conclusion from. While the majority of answer sources did indeed use schema, the type distribution failed to indicate a clear winner over which type of schema would most likely influence an answer being selected from the SERP.
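If you want to audit which schema types a page exposes, one common approach is to parse its JSON-LD blocks. The sketch below (using requests and BeautifulSoup, which the study does not claim to have used) covers JSON-LD only and ignores microdata and RDFa; the URL is a placeholder.

```python
# pip install requests beautifulsoup4
import json
import requests
from bs4 import BeautifulSoup

def schema_types(url):
    """Collect @type values from a page's JSON-LD <script> blocks."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    types = set()
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            data = json.loads(tag.string or "")
        except json.JSONDecodeError:
            continue  # skip malformed blocks
        items = data if isinstance(data, list) else [data]
        for item in items:
            if isinstance(item, dict) and "@type" in item:
                value = item["@type"]
                types.update(value if isinstance(value, list) else [value])
    return types

print(schema_types("https://example.com/article"))  # e.g. {'Article', 'Organization'}
```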

All values are percentages of answers vs. the rest of the results (“Others”) in the same SERP.

| Schema type | Google Home: Answer | Google Home: Others | Home Mini: Answer | Home Mini: Others | Android: Answer | Android: Others | Overall: Answer | Overall: Others |
|---|---|---|---|---|---|---|---|---|
| No schema | 36.1 | 28.8 | 36.1 | 28.1 | 29.7 | 33.1 | 34.7 | 28.4 |
| Article | 5.5 | 8.6 | 5.7 | 8.6 | 1.2 | 7.7 | 5.2 | 8.5 |
| Organization | 2.2 | 5.9 | 2.2 | 5.9 | 1.8 | 5.6 | 2.2 | 5.8 |
| BreadcrumbList | 1.3 | 4.0 | 1.4 | 3.9 | 1.1 | 3.2 | 1.3 | 3.9 |
| WebSite | 1.0 | 2.9 | 1.0 | 2.9 | 0.7 | 2.4 | 1.1 | 2.8 |
| WebPage | 1.0 | 2.5 | 1.2 | 2.5 | 0.3 | 1.9 | 1.0 | 2.4 |
| NewsArticle | 0.5 | 3.0 | 0.6 | 3.3 | 0.5 | 2.5 | 0.7 | 3.2 |
| BlogPosting | 0.2 | 0.6 | 0.2 | 0.6 | 0.3 | 0.5 | 0.2 | 0.5 |
| Others | 52.2 | 43.7 | 51.6 | 44.2 | 64.4 | 43.1 | 53.6 | 44.5 |

Equally, HTTPS and URL depth also proved inconclusive, largely due to the fact that Google is already encouraging sites to adopt HTTPS and therefore the majority of results within the top 10, and beyond, had already added that extra layer of security.

In fact, for the queries in our study, 90% of results in the top 20 had already converted to HTTPS, so while it does not separate answers from non-answers, it remains a vital factor for ranking well and is well worth adopting if your site has yet to do so.
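Both signals are easy to extract for your own pages. The snippet below shows one simple interpretation of “URL depth” (the number of path segments), which may differ from the study’s exact definition.

```python
from urllib.parse import urlsplit

def https_and_depth(url):
    """Return whether a URL uses HTTPS and how many path segments deep it sits."""
    parts = urlsplit(url)
    depth = len([segment for segment in parts.path.split("/") if segment])
    return parts.scheme == "https", depth

print(https_and_depth("https://example.com/blog/2019/voice-search/"))  # (True, 3)
```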

SEMrush voice search study: HTTPS usage.

In Conclusion


It is clear that the main factors that influence the answers Google Assistant returns from voice search queries are page speed, ranking in the top three results and, in particular, occupying a Featured Snippet position.

By understanding and acting upon our 10 key findings, marketers should be able to have their content selected as the answer to more voice searches.

With an increasing number of searches on the Android Google App now voice specific, and 55% of US households set to own a smart speaker by 2022, it is vital to adopt a voice search marketing strategy sooner rather than later to stay ahead of the game.

Together with her team, Olga has built one of the strongest international communities in the online marketing industry. She has expanded Semrush brand visibility worldwide, entering the markets of over 50 countries. In 2018, Olga was named among the 25 most influential women in digital marketing by TopRank. She speaks at major marketing conferences, and her quotes on user behavior appear in media such as Business Insider and The Washington Post.