
What's in a Name: SEO Should Become IO

Marianne Sweeny

I've been thinking a lot about SEO as we know it. It might be due to all of the gloom-and-doom blog posts since the latest Penguin update. You know the ones: "SEO is dead," "Matt Cutts is out to get us," "SEO is not dead but breathing its last," "Now that Google is taking care of SEO, what do the rest of us do all day?" All of this FUD (fear, uncertainty, and doubt) has our collective imaginations running wild. Here's what I think: SEO is not dead. Instead, it is going into an Optimization Witness Protection Program. Search Engine Optimization (SEO) should henceforth be referred to as Information Optimization (IO), with a new focus.

The information retrieval systems that we know as search engines have roots going back 40+ years to Term Frequency/Inverse Document Frequency (TF*IDF), the granddaddy of all information retrieval algorithms. TF*IDF stipulates that to make the first consideration cut for search results, one of the query terms must be present in the document, and it awards points for frequent mentions of the query term. Additional algorithms make sure that long, boring documents do not win top positions simply because of their size. As long as the pointy-headed academics were running online information retrieval, this worked great.
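The core of TF*IDF fits in a few lines of Python. This is a minimal sketch of the classic formula, not any search engine's actual implementation, and the three tiny "documents" below are invented for illustration:

```python
import math
from collections import Counter

def tf_idf(term, doc, corpus):
    """Score one term in one document against a corpus of token lists."""
    tf = Counter(doc)[term] / len(doc)          # term frequency: reward repeated mentions
    df = sum(1 for d in corpus if term in d)    # document frequency across the corpus
    idf = math.log(len(corpus) / (1 + df))      # inverse document frequency: dampen common terms
    return tf * idf

docs = [["search", "engine", "optimization"],
        ["search", "results", "ranking"],
        ["content", "strategy"]]

# "search" appears in 2 of 3 documents, so its IDF is near zero;
# a rarer term like "optimization" earns a positive score.
score = tf_idf("optimization", docs[0], docs)
```

Dividing term frequency by document length is one simple way to capture the point above: a long, boring document can't win just by being long.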



Once the Web went commercial in the 1990s, keyword stuffing and other rudimentary SEO tactics inspired a cadre of coders to take on the challenge of ensuring relevant search results. Next thing we know, PageRank appears on the horizon like that big alien ship in [pick your favorite alien invasion movie and insert title here]. PageRank factored in human influence over ranking by counting the links pointing to a page as endorsements of its merit. Pages with lots of links pointing to them had to be very good because they had all of those links pointing to them. There's also a surfer element, the Random Surfer, because there was a lovely but brief time when many of us would traverse the Web on an aimless information journey at $20 a month for 5 hours of 28K connectivity.
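The idea behind PageRank, including that Random Surfer, can be sketched as a simple power iteration. This is a toy version of the published formula under the usual damping factor of 0.85, not Google's production system, and the three-page web below is made up:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iterative PageRank over a dict mapping page -> list of outbound links.
    The damping factor models the Random Surfer, who occasionally jumps to an
    arbitrary page instead of following a link."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}   # teleport share for everyone
        for p, outs in links.items():
            if not outs:                              # dangling page: spread rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:                                     # split rank among outbound links
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# A tiny web: two pages link to "hub", which links back to "a".
web = {"a": ["hub"], "b": ["hub"], "hub": ["a"]}
ranks = pagerank(web)
```

Run it and "hub" collects the most rank, which is exactly the endorsement logic described above: lots of links pointing in means a high score, and exactly what made the system gameable.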

A relative calm ensued as the SEO community cleared the bookstores of anything remotely related to learning HTML in 10 minutes so that they could make a ton of money getting nonsense sites to the top of the SERP. Only humans who could build links got to "curate the Web." Gaming PageRank became easy and fun. Sergey and Larry were very disappointed, as this is not how it worked out in the hallowed halls of Stanford with the academic citation model that was the foundation for PageRank (except for the surfer part).

It took a few years of Moore's Law for the empire to strike back with a massive update in late 2002 that brought the beginnings of semantic association between documents online. With this update, some documents were more authoritative than others (the Hilltop algorithm) based on links from expert documents. Following along this semantic path, the search engines launched algorithms that applied a categorization to the Web (the Hyperlink-Induced Topic Search and Topic-Sensitive PageRank algorithms), better calculated what a document was about, and applied further semantic mapping between documents. Double freaky, I know, and a standing 8 count for the SEO community.

The SEO community of self-taught hackers was not about to be stripped of their lunch money by those geeks in Googleplex Building 5. It was not long before "write-on-the-fly" articles and content farms started appearing in massive numbers on the Web. This, combined with a seemingly limitless supply of overseas link builders, brought a sigh of relief through the SEO community as press releases were once again ranking #1.


Leave it to Google to take a heretofore icon of sweetness and turn it into an instrument of terror. Okay, JCPenney.com and Overstock.com had a hand in it with their egregious link building and parasite hosting, which became news with "The Dirty Little Secrets of Search" in the New York Times Business section. It is not a good day at the Googleplex when they see themselves at the top of the SERP for something less flattering than rescuing kittens from a blazing building.

Not long after, the Panda updates started rolling out. Google press secretary Matt Cutts tells us very little and leaves it to Vanessa Fox, former Google loyalist and still true to the cause, to distill the Panda update so that even 3rd world link builders can understand it: click-through (do they select the result?), bounce rate (do they do anything once they get there?), and conversion (do they return to the search results and select another from the set, refine their query and search again, or take a quick peek at Google News, thereby signaling an end to their serious information search?) are the new SERP relevance cocktail. Fun times, but then it got worse.

April 2012 brought the first appearance of the Penguin set of link spam algorithms, where Google revealed once and for all their control over the search results. Sites not only lost position for their precious keywords, but some even lost positions for their brand. Now that's harsh. Google tried the sites in absentia and levied their index removal punishment with such indiscriminate speed that it makes a Formula One race seem like Sunday driving.


They say that a leopard cannot change its spots. I’m suggesting that we give it a try. SEO as it was is over. We lost the home court advantage while the engineers programmed sophisticated machine learning systems built on big user behavior data. Bill Slawski pegged it best in his post We're all Google's Lab Rats. Unlike real lab rats, we have opposable thumbs and smarts of our own. So let’s act like it. Let’s stop optimizing for the search engine and start optimizing the information that the search engines need to process their search results.

Information optimization is more interested in traffic and conversions than in where you rank in the results at a precise moment in time for a specific location, all based upon the customer's user profile with the search engine. Information optimization is user research that is:

  • Realistic about what you can and want to rank visibly for in search results. Being there means nothing if you don't present the information that the customer is looking for.
  • Keyword research based on user research, using data mining tools like Google Trends and sophisticated keyword mapping tools such as SEMrush. Research what customers are actually using to find the information that we have to offer, as well as the phrases that represent related concepts.
  • Keyword optimization from a content strategy that includes content development and link models that tie together semantically related pages.

Information optimization depends on a content strategy where:

  • Bite-sized page content becomes authoritative and informative.
  • The Flesch-Kincaid readability test is your new best friend, helping you to develop deep, rich content.
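If you want to automate that readability check, the Flesch-Kincaid grade-level formula is easy to approximate. The syllable counter below is a crude vowel-group heuristic of my own, so scores will drift a little from published calculators:

```python
import re

def fk_grade(text):
    """Approximate Flesch-Kincaid grade level of a text.
    Grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    # Rough syllable estimate: count groups of consecutive vowels per word.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Short sentences with short words score at an early grade level; piling on polysyllabic jargon pushes the grade up fast, which is the signal to simplify.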

Information optimization is a user experience design that moves away from page layouts that are big on images and low on text. A page layout optimized for information maximizes the search engine-designated prime real estate found in the middle of the page and above the fold. All of this space carries information that the customer wants and that, of course, the search engine rewards.

Thought-processing bipeds rule. Let's take back control of the SERP by acting like one. Join me in the SEO Witness Protection Program and assume your new identity as an IO. Knowing how the search engines work and playing it straight with what they consume will force them to play it straight with our search results.

Marianne Sweeny


Marianne Sweeny is VP of Internet Marketing for Strategic Edge Partners, a medical marketing agency. She is passionate about optimizing the user search experience on the Web or inside the enterprise firewall. Marianne’s most recent article for SEMrush was entitled: "SEOs, Guns & Religion."




Hello Reg, I am not in agreement with your launch date for Topic Sensitive PageRank, as it came online much earlier, on the heels of the Hilltop algorithm. I do not believe that PR is scalable, and that is why Google has been migrating away from it for many years now. The link building plans you speak of are another reason Google no longer relies solely on PageRank.


With a new focus on context and relevance, PageRank is suddenly MUCH more scalable (and accurate). The old formula, with its strictly mathematical computations, was a failure.

Although Google DID pull away from PageRank, some of their guides now say that PR is used.

Penguin #1 was a recalculation of PageRank based on the new relevance scores.


Hi Marianne,
I have done a lot of research into PageRank and this is what I have found.
In 2008 Udi Manber told us that PR's influence had been reduced to "only one of 200+ signals" from its previous high input strength.
Then about a year later Susan Moskwa told us that PR was not a useful metric.
Hilltop came into being in 2004, but Google continued to make linking changes in 2005 (Jagger).
They also demoted PageRank in 2009 and '10.
They would not have done this if it had included PR.
Working with client's metrics, relevance did not show until 2010 when a brand new site got a PR4 in one jump (PR0 -> PR4) just 4 months after being published.
On the day that the PR jump happened, the site had 115 links.
One PR5
One PR3
113 PR0
All links were on relevant pages.
Just as the site was launched, Google brought in its Mayday update.
In her article about Mayday, Vanessa Fox reported that she pressed Google for more specifics and they answered: "that it was a rankings change, not a crawling or indexing change"
In this case the word 'rankings" HAS to refer to PageRank.
"Crawling" and "indexing" were not touched.
What change to PR would result in "higher quality sites to surface for long tail queries" but a switch from non-relevant to relevant PR?
Cutts also stated that: "It went through vigorous testing and isn’t going to be rolled back."
This follows with their focus on relevance.


Reg Charie
Your research is impressive. Sadly, it is very difficult to pin down exactly when Google does anything, as they have not been ones to tell since the "Anatomy of a Search Engine" paper that Stanford forced Brin and Page to write gave the core of PageRank over to the competition. I believe that this, along with the realization that the Web was not and never would be as cooperative an environment as academia for a relevance model based on citations, plus the inherent flaws in PageRank (only individuals skilled in HTML could influence relevance with links, semantic disconnect, and others), began the quiet migration away from inbound links as a driver of relevance to a query.


Another reason to think traditional SEO is dead is Google's "Let's screw with their heads" randomizing filter. (https://www.searchenginenews.c...

"Topic Sensitive PageRank algorithms" did not appear until 2010 and were applied in the Mayday update.
This gave the SEO community years to "fine tune" linking plans.

Panda basically took care of the thin content websites.
Penguin was a refresh of their PR findings using the newly reworked relevance based PageRank system.
If you had a lot of links on non-relative pages you lost as your PR fell and you were penalized based on your anchor text on those pages.


Google rose to power because it delivered the goods when other engines were making us search through pages of results. I started playing around with SEO and building websites about a year ago. It makes sense that if Google starts serving users junk for results, then they'll lose their momentum.

As a dinosaur from the Dot com era, I've a feeling Bing is waiting for Google to trip over their aggressive growth to sustain their stock value.

I'm writing because of Google's abuse of their paid advertising! It's frustrating to rank number one and have two or three "paid" listings appear on top of you.

In the end, as your article has stated I feel that relevance and user experience will reign supreme.

I like your blog and have it book marked.




Philip Polaski
Hello Phil
Glad that you liked the post. You may be right about Microsoft waiting for something to happen to Google, and it just might, as Google grows away from its core offering of organic search. We have to help, and we do so by using other search engines. Since Mozcon 2012, I have been using Bing as my primary search engine at Rand Fishkin's urging. While teaching at UW this past quarter, I gave my students the assignment of executing the same searches across 5 different search engines. The list did not include Google. The result is that I find 90% of what I'm looking for on Bing, and they discovered that Google does have some quality competition.

If we don't like the competitive landscape, then we have to break our addiction to Google and start changing it.


Nicely written article, Marianne. Well explained. I especially liked the Star Wars/Alien movie references.

I didn't get your point, though, about the Flesch-Kincaid tests. Were you implying that the content we add to the web should be more 'highbrow' and therefore more interesting, or easier to read and therefore have a higher FRES score?
