Summary: Did you know that Google sometimes asks for user input on search results? I’ve heard about tests they’ve run in the past, but until a few weekends ago, I’d never experienced this myself. So why did I get the privilege when so few have? Perhaps it had something to do with the subject matter of my search, but it’s most likely random. Still, I’ll explain how I triggered the test so you can draw your own conclusions. We’ll also get an explanation straight from the horse’s mouth.
Early in the new year it occurred to me that I’d been wearing the same Columbia 3-in-1 winter jacket since high school and it might be time for an upgrade, so I headed across the bridge into downtown Burlington to check out what KL Mountain Shop (a.k.a. the North Face store) had on post-holiday sale. After rummaging around in the sale section for about 20 minutes I almost gave up before stumbling upon this sweet shell – The North Face Varius Guide – for 25% off! After comparing some prices via iPhone apps and realizing it was a pretty good deal, I whipped out my credit card and then got the hell out of there before they changed their minds! Seriously though, I actually hung around and picked the brains of the employees about what insulated layers will zip in and when I might find a deal on one of those. Long story short the employees were very helpful and I plan to stop back in another month or so, but in the meantime I’m keeping my eyes peeled and Googling occasionally. Saturday, Jan. 19 was one of those days.
Before looking for compatible layers for my new shell, I first wanted to see if I still got a great deal or if other retailers had since dropped the price, because I have a sick obsession with having gotten a better deal than everyone else. Bring a couch next time and we’ll go deeper on that issue, but I’ll say that as of today, for the color, size and year model combo I got, I still got a really great deal. But back on 1/19, I had started searching for my “Asphalt Grey” shell online, but I mistakenly thought the color was called “charcoal”, and here are the results my inaccurate search produced (click image to enlarge or click here for FULL size):
Did you see that? Of course you did, it’s kind of hard to miss all that red. Setting aside the fact that I was incorrect in searching for the color “charcoal”, Google did its best to find related pages that contained all of the words I searched, but it almost seems to be admitting that it didn’t understand what I was looking for by asking me to grade 2 of the search results. It was my day off, but as an SEO it was my duty to find out what was really going on. Clicking the “Learn more” link brings me to a page that is rather vague, but leaves me with the impression that these are random tests rather than reactions to specific, confusing searches like mine. They also explain the process from the user side:
When you search on Google, you may see a section to the right of your search results inviting you to “Help improve Google.” Answering this completely optional form lets us know how useful different search results were to you. We’ll use this feedback to improve the search experience.
1. Click Visit to the right of both search results in the box.
2. Think about how well each page fits your information needs.
3. Select the radio button that matches your opinion and click Submit.
Your options are to ignore the test altogether or choose which site was most relevant, with options for saying neither was relevant or that they were equally relevant. They caution that “any input provided will not directly influence the ranking of any single page”, although they go on to talk about all of the testing and changes they make before closing the help article with this:
Your responses complement our existing evaluation methods by enabling you to provide direct feedback. This feedback is particularly valuable because it comes straight from you. We look forward to hearing from you soon.
This testing isn’t brand new, but it is still relatively new – The Street noticed that Google was asking users to vote on search results back in mid-November. Then Search Engine Land touched on it in December, although if you take a look at their screenshot, you’ll see that the call to action to “Help improve Google” wasn’t as pronounced as it was in my search results.
I have to say, this crowdsourcing is completely new to me, despite the fact that I’ve experienced most of Google’s other tests in the past (moving URLs above meta descriptions, changing navigation styling, etc.) Have any of you seen these new tests or anything like them? Let us know in the comments below!
Summary: These kids today and their abbreviations and acronyms, or in this case an initialism. I’ve basically given up trying to keep up at this point, but that doesn’t mean I don’t want to know what these things mean when I see them. This curiosity brought me on a short journey into some interesting search results pages recently and today I thought I’d share a few observations with you. As they become more pervasive, it’s going to be important to understand how you can harness the power of “intelligent search results” and use them to your advantage.
*SCROLL FOR UPDATES*
Before we get to the web definition of “SMH”, allow me to first correct my own headline. SMH is an initialism and NOT an acronym. Ever since I learned this distinction, I take every opportunity to educate others. Break it down, Grammar Girl:
Initialisms are another type of abbreviation. They are often confused with acronyms because they are made up of letters, so they look similar, but they can’t be pronounced as words. FBI and CIA are examples of initialisms because they’re made up of the first letters of Federal Bureau of Investigation and Central Intelligence Agency, respectively, but they can’t be pronounced as words. NASA, on the other hand, is an acronym because even though it is also made up of the first letters of the department name (National Aeronautics and Space Administration), it is pronounced as a word, NASA, and not by spelling out the letters N, A, S, A.
I’m leaving “acronym” in the headline because that’s what most people believe is correct (and will search for), but I hope you’ve enjoyed your grammar lesson for the day.
SMH = Internet Speak for “Shake My Head”
Let me first say that I like to think that I do a good job of staying on top of what all the cool kids are doing on the internet, usually hearing about the latest memes and viral videos early in their cycles. It has, however, become increasingly clear that I’m quite out of the loop when it comes to internet shorthand – initialisms and acronyms that allow users to say more with fewer characters. I think most of us remember when “LOL” began to gain popularity over a decade ago on AOL Instant Messenger and other chat platforms. LMAO, ROFL and OMG were some of the other earliest, widely adopted internet initialisms to take root. But these days there are so many it can be tough to keep up. I’ll admit that I only learned sometime last year that ICYMI is internet shorthand for “in case you missed it”, and just a few months ago I found out that YOLO means “you only live once”. But I haven’t had one of those “you’re old!” gut punches in at least a few weeks… until last week when Honest Toddler tweeted this:
“Calm down” Sorry, I’ll try to find more subtle ways to enjoy life. smh
Now, I’m familiar enough with HT’s work that I figured this wasn’t just some inside joke I didn’t get and it didn’t make sense that “smh” would be someone’s initials, in context. So I did what I always do when I don’t know something – I Google it, and within seconds I had my answer. Via Urban Dictionary:
Acronym for ‘shake my head’ or ‘shaking my head.’ Usually used when someone finds something so stupid, no words can do it justice. Sometimes it’s modified to ‘smfh’ or ‘smmfh’ by those that prefer profanity in their internet acronyms.
So there you have it. SMH = shake/shaking my head. Now neither you nor I will have to pretend to be hip enough to know what these millennials are talking about when we see SMH pop up in our Twitter feeds, Facebook comments or Skype IMs.
SMH Search Results
While finding out what SMH stands for took about 10 seconds, it’s what I noticed along the way that really struck me – Google’s search results for SMH. I always like to examine interesting first page search results whenever I run across them so we can all better understand the direction search is headed. Before I dive into the details, take a look at the screenshot (click to enlarge):
See how these results differ from regular old Google search results?
Merrill Lynch Semiconductors HOLDRS ETF stock quote results at the top, complete with buttons to change the date range as well as links to details on SMH at Google Finance, Yahoo! Finance and MSN Money.
The “Knowledge Graph” area to the right has an image (logo) and text links to the Google+ page for the Sydney Morning Herald. Just below is a photo, excerpt and link to Sydney Morning Herald’s most recent Google+ post.
Also in the knowledge graph is “place” preview info for Sarasota Memorial Hospital (another “SMH”). As with most things in the knowledge graph, however, this just links you to search results for “sarasota memorial hospital”, whose website domain is smh.com
Back in the organic results, we see both the Sydney Morning Herald and Sarasota Memorial Hospital, along with related Twitter accounts, Urban Dictionary definitions for SMH, etc. There’s also smh.org – South Mental Health. Lastly, there’s an app in the iTunes store for the Sydney Morning Herald.
As you can see a search for ‘smh’ produces very diverse search results, not only in the organic listings, but in the “intelligent” search results sections – stock info and knowledge graph. I call these “intelligent” because they demonstrate well how Google is attempting to serve up search results in new ways based on its interpretation of search intent. The interesting thing is how Google is constantly testing, because I grabbed this screenshot last week, but at the moment I don’t see any Sydney Morning Herald Google+ info in the knowledge graph area at all.
*UPDATE – 2/4/13*
I just noticed that when you’re more specific with your search, Google is able to better understand your search intent, allowing it to serve up different and better “intelligent search results”. Look what happens when instead of just searching ‘smh’, you search for ‘what does smh mean’ and ‘smh definition’:
Google provides these “definition” results at the top of SERPs quite often when you search “define” or “definition” along with a word, but this was the first time I saw it do this for what I think is a relatively new internet initialism. I also wasn’t aware, until now, that Google will return definition results when you search “what does ____ mean?”. Pretty cool, IMO… but not without issues. In fact, Google’s “definition result” for ‘imo definition’ is “International Maritime Organization.” This isn’t a *mistake*, per se, but isn’t it more likely that most people searching for the definition of IMO are looking for the ‘in my opinion’ internet shorthand definition?
What Does OCC Stand For?
Another great example is a search for “occ”. I was recently doing some research on content marketing opportunities for our client Iron Thread and the show “Orange County Choppers” crossed my mind. I’ve personally never seen more than a few minutes of the show, but I’m aware of its status in pop culture. In reality I only Googled it to confirm that I was remembering the name correctly. My memory hadn’t failed me, but it was the knowledge graph results that I found most intriguing (click to enlarge):
Six different organizations that use the initialism OCC are displayed in the knowledge graph area. Interestingly, they’re the same as the top 6 in the organic results, and in the same order. This example illustrates how the knowledge graph shows a snapshot of information (“knowledge”), because clicking any of those items just takes you to more specific search results, rather than to the organizations’ official websites. Of course, you can always click through to those sites via the corresponding organic results. Having the knowledge graph here can be especially helpful when a website homepage doesn’t have a well-written or descriptive meta description, because the knowledge graph descriptions are pulled from other “off site” sources, like Wikipedia. These 6 OCCs have decent meta descriptions (though not the best in all cases); however, as you can see further down in the organics, the “Online Curriculum Centre” has an issue that causes the following to display under its link:
A description for this result is not available because of this site’s robots.txt – learn more.
That’s tantamount to throwing an error (in my world)! The next one down in the organics, Oklahoma Corporation Commission, isn’t as bad… but it’s not great either:
Home page for the Oklahoma Corporation Commission.
It may be accurate, but it’s also self-evident. The title tag/linked text is already “Oklahoma Corporation Commission Home Page”, so why waste the valuable real estate in search results by repeating this same information? In a way it’s almost worse than the Online Curriculum Centre’s robots.txt issue. At least they can plead ignorance; the Oklahoma Corporation Commission went to the effort of writing a meta description but just did a terrible job. What in the world is the “corporation commission”? What exactly does it do? A descriptive and clear meta description could answer those questions. This is SEO 101, people. If you’re in the same boat as these folks, check out SEOMoz’s meta description basics. Long story short, you’ve got about 155-160 characters to convince a user that they should click through to your site, so you need to use them wisely.
Back to the point – intelligent search results. As you undoubtedly noticed, 5 out of the 6 knowledge graph results also contain logos, which is obviously helpful and allows users to more quickly find what they were looking for.
Have you noticed any other interesting results pages since the implementation of the Google Knowledge Graph? Feel free to post examples in the comments below!
Summary: Since 2011, Google has released animals into their search results to devour low quality search results and spammers. First Panda came to punish “low quality sites” where content was thin and/or “scraped” from other sites. Then in 2012 Penguin marched in and pecked all violators of Webmaster Guidelines, like link spammers and keyword stuffers. That got us wondering – What update animal will Google release on the world wild web next?
You may recall that in my 2013 SEO predictions post, I suggested that after Google’s success with Panda and Penguin over the last couple years, it wouldn’t be surprising to see them continue releasing other “animals” going forward. But which animals, and how did they ever choose the first two?
Believe it or not, Google named the 2011 quality update “Panda”, not after the endangered animal, but after one of the engineers responsible for the algorithmic breakthrough – Navneet Panda. It’s unclear how they came up with the name for the over-optimization update of 2012, but it seems they just decided to stick with the animals starting with the letter “P”, and so the Penguin was born…. er, hatched.
So assuming Google sticks with the “P” animal names for major updates in the future, I thought I’d take a stab at predicting what the next update animal might be. I’m actually going to play the odds and throw 5 of them out there to increase my chances of achieving oracle status, should any of my guesses prove correct. So without further ado, here’s your ticket to the Google zoo.
When pelicans scoop up fish in their beaks they also take in a lot of water. To drain and filter out that water before swallowing, they’re equipped with a large throat pouch. Consume the good, spit out the useless.
I expect that Google’s Pelican update will behave in a similar manner. When someone searches for seafood restaurants, all web pages that seem even remotely related to the user’s search will be scooped up and then Google Pelican will intelligently filter out the results that don’t actually apply. Think seafood search results aren’t an issue? Think again! I wrote that post back in September, and as I demonstrated toward the end, Google struggles to understand search intent for “seafood” searches, at least when searches are performed by people whose location settings place them in Boston. With any luck, Google Pelican will help the search engine filter out the seafood shows and restaurant food supply companies.
If Google decides to introduce the Platypus as its next ranking algorithm animal, we should expect the search engine to begin to track down spam sites in unique new ways and severely punish them. Similar to the way a platypus tracks prey through electrolocation (detection of electric fields that are the result of muscle contractions), the algorithm update will allow Google to more precisely detect spam signals. Then, without notice, the venomous Google Platypus will nail these spam sites with its ankle spurs. The poison will rapidly tank the rankings of any site the Platypus believes has used blackhat SEO strategies.
Google Polar Bear
It’s no secret that Google rewards websites with great content, especially content that is updated regularly. As a general rule of thumb, you should be actively updating your website, writing blog posts as frequently as possible and engaging on social media sites. Google hasn’t been looking for “set it and forget it” websites for years. They want sites that are in it for the long haul. Sites that are willing to work for their next meal, like the polar bear, which is known for distance swimming up to 220 miles! Why shouldn’t Google Polar Bear expect you to put in some effort to get prominent positioning in search results? If Google decides to unleash its own version of the largest land carnivore and your site hasn’t changed since 1998, don’t be surprised when you get devoured by the Polar Bear.
Oh I can just see the headlines on SEO blogs now… “Stuck By The Google Porcupine?” or maybe “Google Porcupine Spikes Rankings!” Anyway, porcupines are by and large nocturnal and not particularly social creatures. Similarly, the Google Porcupine isn’t into “social for the sake of social”, so if you’re attempting to manipulate search results and social reach by buying Facebook Likes/Twitter followers or creating multiple fake accounts on any social media platform, you could be in trouble in 2013. Google Porcupine feels about social media spam the way Sweet Brown feels about bronchitis – “Ain’t nobody got time for that“.
Like the “real world” snake, Google Python will squeeze its prey into submission. Its prey? Sites that violate Google’s Webmaster Guidelines – aka webspam. Sites that have used blackhat tactics will at first feel a little pressure as their rankings slip. If spammers ignore the unnatural link warnings Google Penguin sends out, Google Python will keep putting on the squeeze and these sites’ rankings will continue to slide. Eventually constriction will squeeze the life right out of these spammy sites and they’ll be removed from search results altogether, at least temporarily.
Unlike the real world snakes, Google’s Python won’t be restricted to Africa and Asia, although these continents will be major targets given how much webspam actually originates on them, as a result of supposedly legitimate SEO companies outsourcing “link building”.
What do you think are the odds Google will choose any of these critters as their next algorithm animal? If not any of those 5, how about any of the following:
If you’d like to take a stab at guessing the next Google update animal, please comment below – we’d love to hear your suggestions! Crap! You know what? “P” isn’t the only commonality between Pandas and Penguins…. They’re also both black-and-white. So maybe we should have been looking at animals like zebras, skunks and Holstein cows? Or maybe Google requires that both criteria be met – black-and-white animals that start with “P”, which would obviously be rather limiting.
Or what if I’m completely off the mark here, and Panda and Penguin are the only two animals Google has any intention of releasing into the wild? After all, they were designed to address the two major issues with search results – low quality sites (Panda) and webspam (Penguin). We also know they’ve continued regular updates for both since they were released so perhaps that will just continue and eventually there will be some overlap and a hybrid will be born… Google Pandguin…. or Google Pengda….
Yeeesh, that was disturbing to say the least, eh? Well, we’ll have to cover those mutants and any other potential Google animals some other time…
Summary: 2012 was the year of the Panda and Penguin for SEOs… so what sort of creatures should we expect Google to turn our world upside down with in 2013? Today we take a look at predictions made by one of the industry’s foremost experts, Rand Fishkin of SEOMoz. We’ll offer our take on Rand’s predictions as well as a few of our own.
Well that didn’t take long… barely 2 days into 2013 and we already had industry heavyweights weighing in on the future of most of the principal disciplines in online marketing, from SEO and local search optimization to PPC advertising. Over the next few days we’ll take a look at how some of these gurus expect the next 12 months to unfold and then I’ll give you my take.
Let’s start with Rand Fishkin of SEOMoz, who decided to score his 2012 predictions before gazing into his 2013 crystal ball. I’d recommend reading the entire post for details, but here’s a quick overview of his scoring method, followed by a summary of his eight predictions from 2012 and how he scored each:
Here’s how scoring works:
Spot On (+2) - when a prediction hits the nail on the head and the primary criteria are fulfilled
Partially Accurate (+1) - predictions that are in the area, but are somewhat different than reality
Not Completely Wrong (-1) - those that landed near the truth, but couldn’t be called “correct” in any real sense
Off the Mark (-2) - guesses which didn’t come close
The rules state that if the score is lower than +1, I’m not allowed to make predictions for the coming year. Here’s to hoping!
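Rand’s scoring scheme is simple enough to sketch in a few lines. The point values are taken from the rules quoted above; the example tally of eight grades is hypothetical (his post has the real per-prediction grades), chosen only to show how a list of grades lands on the +4 he reports:

```python
# Point values from the scoring rules quoted above
POINTS = {
    "Spot On": 2,
    "Partially Accurate": 1,
    "Not Completely Wrong": -1,
    "Off the Mark": -2,
}


def score(grades):
    """Total score for a list of per-prediction grades."""
    return sum(POINTS[g] for g in grades)


# A hypothetical tally of eight predictions that adds up to +4:
example = [
    "Spot On", "Spot On", "Spot On",
    "Partially Accurate", "Partially Accurate",
    "Not Completely Wrong", "Not Completely Wrong",
    "Off the Mark",
]
print(score(example))  # 2+2+2+1+1-1-1-2 = 4
```

Under the stated rules, any total of +1 or higher keeps the forecaster in the prediction business for another year.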
So how did he do?
1. Bing will have a slight increase in US marketshare, but remain <20% to Google’s 80%+
2. SEO without social media will become a relic of the past
3. Google will finally take stronger, Panda-style action against manipulative link spam
4. Pinterest will break into the mainstream
5. Overly aggressive search ads will result in mainstream backlash against Google
6. Keyword (not provided) will rise to 25%+ of web searches
7. We’ll see the rise of a serious certification program
8. Google will make it very hard to do great SEO without using Google
Again, you should read the entire post, but Rand fairly scored himself a +4, which is pretty good and means he was worthy of prognosticating on SEO in 2013. Like his revelations about last year’s predictions, it’s worth checking out the specifics of each of his ten predictions for 2013, but I’ll just include the headlines and my take on each:
#1: None of the potential threats to Google’s domination of search will make even a tiny dent
I completely agree, although at first I was going to take issue with Rand’s conclusion about who at least has the “best chance” against big G in the two zero one three – Amazon. To me it seems that with Apple and Facebook’s audiences, they would be better positioned; however, I have to concede they don’t have the technology. Amazon is much closer. Those of us who shop on Amazon know how intelligent its search engine really is. Still, it’s “product search”, not web search. So Rand is probably right: while no one is giving Google a run for their money, Amazon may siphon off a few “product” searchers who aren’t satisfied with Google’s product search, which, as you’ve probably heard, became an all pay-to-play service for merchants after the transition to Google Shopping last year. Basically, your products aren’t listed unless you’re a paid advertiser, which means a lot of merchants may shy away. Amazon has fees as well, but they aren’t the same as paid advertising fees. Anyway, long story longer – I agree with Rand that we’re looking at the status quo: Google dominance will probably continue moderate growth, but Google may shed some “shopping” users who seek out the superior UX and product availability at Amazon.
#2: “Inbound marketing” will be in more titles & job profiles as “SEO” becomes too limiting for many professionals
I can’t say I disagree with the prediction here either, but I’m not a fan of the trend. For starters, a few years ago when you heard “Inbound Marketing“, it was probably from someone at Hubspot… or someone who’d just listened to a Hubspot presentation. That’s not a bad thing, I’m just saying that their CEO and founder Brian Halligan coined the term and co-branded it well with Hubspot. But still, “inbound marketing” to me was what I’ve always just called “SEO”. As new things like social media came along, I considered them aspects of SEO. To me, “Inbound Marketing” equals SEO. So why coin the phrase? Well, I’m sure they would argue about subtle nuances, but I think they read the tea leaves about public perception of SEO. In a lot of people’s minds, SEO means blackhat link-buying scam artists. As honest “white hats”, it’s nothing we’ve ever engaged in, so I’ve always taken offense at being lumped in with them. So while I would congratulate HubSpot for their prescience, I’d also like to state for the record that I’m a stubborn person and I want to hold the line on what an “SEO” is. Don’t let a few bad apples spoil this bunch and don’t let uninformed “reporters” lump us together with them. Unrealistic? Either way, this is probably the least risky prediction of the bunch. No one should be surprised if SEOs don’t want to fall victim to the false smears and start using “inbound marketing” language as a shield.
#3: More websites will move away from Google Analytics as the only provider of web visitor tracking
I have to admit I don’t know much about the GA alternatives Rand mentions, but it would surprise me to see many people jump ship. Keep in mind that more than half of businesses in most states are still without a site and as they catch up with the 21st century, it’s a safe bet that most will get set up with GA, especially with Google being behind the big push to “Get Your Business Online“. We’ll meet back here at this time next year to discuss, but unless somehow these other analytics alternatives were able to account for the “(not provided)” keyword situation (which they can’t), I’m not seeing any sort of mass exodus. Granted, Rand is only predicting the trend will “be small, but measurable”… but then is it even worth counting as one of the top 10 predictions for the year?
#4: Google+ will continue to grow in 2013, but much more slowly than in 2012
This one is a little tricky. First of all, it’s kind of hard for Google+ to grow “much more slowly” than the sluggish pace it’s crept along at so far. Because we all bow at the altar of Google, a lot of people created an account but then never used it again. How do you convince a billion users to waste their time on G+ instead of on Facebook? At the same time, Google has a few things going for it – dozens of other “free” services that we use every day, like Gmail. The key to Google+ ever being successful is users being logged into their Google accounts. Well, in addition to Gmail, Google has services like YouTube, Blogger, Places/+Local and Analytics. Given the integration with these existing Google accounts and their strategic roll-out, I haven’t quite written Google+ off yet, assuming they still have something up their sleeve. But if Rand’s prediction is right and growth slows further, I think that could signal the end of another failed venture by Google into the social media world.
#5: App store search will remain largely ignored by marketers (for lots of defensible reasons)
And this prediction will remain largely ignored by me. I’m actually again basically in agreement with Rand, but I’m just not sure why this item even made the list, other than to help it get to the nice round “10”. Using the app store to market something would require that you create an app, and that will only make sense as a strategy for a very limited number of businesses.
#6: Facebook (and maybe Twitter, too) will make substantive efforts to expose new, meaningful data to brands that let them better track the ROI of both advertising and organic participation
FB and Twitter have obviously both been behind the curve here so I’d like to see this happen, but I’m not confident it will. Call me cautiously optimistic for now.
#7: Google will introduce more protocols like the meta keywords for Google News, rel author for publishers, etc.
Not exactly a bold prediction, but I completely agree. Although I would also include expansion of the importance of rich snippets/structured data markup in that list. I would also expect we’ll see increased display of this data in both standard organic results as well as knowledge graph results.
#8: The social media tool market will continue a trend of consolidation and shrinkage
Another safe prediction that a recent/current trend will continue.
#9: Co-occurrence of brands/websites and keyword terms/phrases will be proven to have an impact on search engine rankings through correlation data, specific experiments, and/or both
As you know, honest SEOs have, in recent years, been moving more toward content as a long-term strategy and away from traditional link building methods Google has devalued (trading links, etc.). But a website’s inbound link profile can still have a profound impact on its rankings. This isn’t an argument against content, however, because a large reason we create quality content is so that others will find it and link to it naturally, if they find it relevant and useful enough. What Rand is talking about with “co-occurrence of brands and phrases” (or co-citations) is “mentions” on the web that don’t necessarily include a link. The principle is still the same, but the goal is to get other websites talking about your company website, even if they aren’t linking to it. It’s the same principle behind “citations” in Local SEO. I don’t disagree that this is happening, but I’m on the fence about how heavily Google will weigh these “mentions” when determining rankings. Rand’s logic is sound, but I just think that blackhat/webspammers will take advantage like they always do (remember, once upon a time “meta keywords” mattered), possibly forcing Google to reduce the impact on rankings. Then again, over the last year we’ve seen them answer low quality content and spammy links with Panda and Penguin, so perhaps they’re in the lab tinkering with a mutant Pandguin… or Pengda, that might help them combat the inevitable abuse of co-occurrence “mentions”.
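The co-occurrence idea can be sketched in a few lines. This is purely illustrative – the pages and the brand name are invented, and this is a toy counter, not Google’s actual signal:

```python
# Toy co-occurrence counter: on how many pages does a brand appear
# alongside a keyword phrase, regardless of whether there's a link?


def cooccurrence_count(pages, brand, phrase):
    """Count pages where both the brand and the phrase appear."""
    brand, phrase = brand.lower(), phrase.lower()
    return sum(
        1 for text in pages
        if brand in text.lower() and phrase in text.lower()
    )


pages = [
    "Acme Outfitters has the best winter jackets in Vermont.",          # mention, no link
    "Best winter jackets roundup: we loved the Acme Outfitters shell.", # mention, no link
    "A review of hiking boots from three local shops.",                 # unrelated
]
print(cooccurrence_count(pages, "Acme Outfitters", "winter jackets"))  # 2
```

The point of the sketch: neither of the first two pages needs to link anywhere for the brand and the phrase to be associated, which is exactly what makes co-occurrence different from classic link building.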
#10: We’ll witness a major transaction (or two) in the inbound marketing field, potentially rivaling the iCrossing acquisition in size and scope
Sure, maybe, but so what? Pardon the apathy on this one, but it doesn’t really impact “SEO” (in terms of strategy or how we do our work) unless you’re part of the company being acquired or acquiring. Any major acquisition would certainly be “industry news” worth talking about when it happens… but this isn’t a “Buy Out Prediction” list, it’s an SEO prediction list.
To summarize, I don’t really disagree with much of what Rand is predicting, but I think some items are rather inconsequential while some others are simply safe bets about continuing trends. As a whole it’s a good list, but I just don’t think the individual items measure up against his 2012 predictions in boldness. Using the same scoring system, it would be hard to imagine he could “lose”.
My SEO Predictions for 2013
As I said, I basically agree with most of Rand Fishkin’s predictions, but I’d like to go out on a limb a little further in mine.
Randomized Search Results
Using the word “randomized” may be overstating it a bit, but that doesn’t change the point – search rankings will increasingly change to the point where focusing on specific rankings of specific keywords will become a fool’s errand. We’re kind of at that point already, but I expect it to get worse, in the sense that you’re going to struggle if you’re still chasing individual rankings. SEOmoz recently published a post by Dr. Pete about how they’ve observed and tracked this phenomenon for 3 specific keywords. Essentially the conclusion, after watching rankings fluctuate several times, even within a single day, was that Google isn’t doing this just to make it harder for SEOs to figure out what makes a website rank. Rather, Google is reacting to the organic nature of the internet with constantly changing results. Here’s Dr. Pete’s wrap-up and interpretation of the data:
We can’t conclusively prove if something is in a black box, but I feel comfortable saying that Google isn’t simply injecting noise into the system every time we run a query. The large variations across the three keywords suggest that it’s the inherent nature of the queries themselves that matter. Google isn’t moving the target so much as the entire world is moving around the target.
The data center question is much more difficult. It’s possible that the two data centers were just a few minutes out of sync, but there’s no clear evidence of that in the data (there are significant differences across hours). So, I’m left to conclude two things – the large amount of flux we see is a byproduct of both the nature of the keywords and the data centers. Worse yet, it’s not just a matter of the data centers being static but different – they’re all changing constantly within their own universe of data.
The broader lesson is clear – don’t over-interpret one change in one ranking over one time period. Change is the norm, and may indicate nothing at all about your success. We have to look at consistent patterns of change over time, especially across broad sets of keywords and secondary indicators (like organic traffic). Rankings are still important, but they live in a world that is constantly in motion, and none of us can afford to stand still.
Now, the reason I think the randomness (as I’m calling it, even though Google would surely say there is science behind the organized confusion) will increase is that there will be increased competition. As I mentioned earlier, more than half of businesses still don’t have websites, but the number that do is growing all of the time. Google’s stated goal has always been to provide the most relevant results. Certainly they know that there are businesses that are just as “relevant” as others, but have a much smaller/newer footprint on the web. Wouldn’t it stand to reason they start to rotate search results in and out in the name of fairness? I’m not saying that it’s the right approach because it actually punishes the success of others who might have worked hard to obtain high rankings – but Google would say they aren’t interested in being manipulated. Again, if relevance is the goal, how can Google select just 10 results out of potentially hundreds that may be just as “relevant”?
Basically I’m saying the same things I’ve said for a while about the “local pack” that appears within search results, but the same principles apply to organics. Let’s review it in the context of the “local pack” though to better illustrate the point:
If you have 20 Chinese restaurants in your town, how does Google determine which 7 are the most relevant and worthy of being in the local pack? I’m not asking how they determine relevancy right now – we have a pretty good idea about many of the local ranking factors. But are you going to tell me someone couldn’t argue that at least one of the other 13 restaurants is as “relevant” as the 7 that made the pack? I wouldn’t want to be in the position of being the arbiter of “fairness”, but the subjective nature of “relevance” would seem to put Google in that role, and you would think they would want to give everyone an equal shot. I suppose one could always argue that Google still has their own secret “relevancy” factors that dictate which 7 are the “most” relevant, but why 7? What if Google deemed a total of 8 to be equally relevant? How much potential business will #8 miss out on? And again, the point is the same for the 10 organic results as well. Think about poor #11.
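To make that rotation idea concrete, here’s a toy sketch – and let me be clear, this is purely hypothetical illustration on my part (the function, the hash-based scoring, everything), not anything Google has said they do. The idea: if 20 candidates are equally “relevant”, you could deterministically reshuffle which 7 occupy the pack based on a time period, so each gets exposure over time.

```python
import hashlib

def rotating_pack(candidates, slots=7, period_key="week-1"):
    """Toy illustration only: choose `slots` results from a pool of
    equally 'relevant' candidates. The selection is stable within a
    period but reshuffles when `period_key` changes (e.g. per week)."""
    def score(name):
        # Hash the period together with the name so the ordering
        # changes each period, deterministically.
        return hashlib.md5((period_key + name).encode()).hexdigest()
    return sorted(candidates, key=score)[:slots]

restaurants = [f"Chinese Restaurant {i}" for i in range(1, 21)]

this_week = rotating_pack(restaurants, period_key="week-1")
next_week = rotating_pack(restaurants, period_key="week-2")
print(this_week)   # 7 of the 20, stable all week
print(next_week)   # almost certainly a different 7
```

Over enough periods, every one of the 20 gets its turn in the pack – which is exactly the “fairness” trade-off discussed above: better exposure for the many at the cost of stability for the few.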
Think about a brand new restaurant that has a brand new website, if they have one at all. Even if they create a Google+ Local listing, it doesn’t have any history, support from outside citations or reviews to bump it up in terms of “relevancy”, but it happened to be a block away from the searcher. Proximity is a factor, but it’s understandably not always a major factor because it shouldn’t be in certain industries – in restaurants it should probably play a pretty big role. Now, imagine this brand new restaurant has a lot of buzz around it and everyone around town wants to check it out, but a lot obviously don’t know the name.
If the hypothetical company described above was here in Winooski, a lot of people would obviously search for things like ‘winooski restaurant’, right? Take a look at those results and tell me, where’s Donny’s Pizza, Pho Pasteur, or the newest addition, Misery Loves Company? (To name just a few.) I can tell you just from living and working in the area that MLC is the most buzzed about and probably the most active on the web (daily menus posted on site, shared on Twitter and Facebook). Now I know that beyond the 7 restaurants in the local pack and the corresponding pins on the map, there are tiny pink dots that represent other restaurants. But who notices those? Besides, things don’t get much better when you click the map in those results, which brings you to a page with 10 business/map listings at a time. That’s right, Subway, a half-mile or so outside of the city center, is featured right in the clump ahead of several other businesses. I’ve got nothing against Jared or $5 foot-longs, but come on! Which is more “relevant” to the searcher looking for a “winooski restaurant”, Subway or Misery Loves Company? Need more? The Windjammer and Pulcinella’s are in the top 10, but they’re in South Burlington!
Obviously Google isn’t going to ever be able to keep up with the constantly changing landscape of every industry in every region, so what choice do they have but to mix things up to ensure all potentially “relevant” results are being served? I’m not sure if that’s the ideal solution, but clearly there’s a problem.
Keep in mind, I’m not advocating randomizing search results. In fact it makes the job of an SEO more difficult when we can’t accurately measure and track progress. I’m just pointing out what only seems logical and “fair” if you’re in the “relevancy” business. Google has no legal or moral obligation to provide equal opportunity here, but if there are 20 search results that are just as “relevant”, how can the same 10 always be the ones that show up on page 1? Perhaps I’m oversimplifying a bit (I know location can impact results, etc.), but I think you get the point. Google became a success by providing the best search results/user experience, so it’s hard to say they should change things to “give a shot” to low visibility websites. And by no means should they be forced to randomize for “fairness”. Really it’s on the “low visibility” companies to figure out other ways to drum up business, rather than rely on free Google search results and complain that they aren’t ranking. But it just seems like Google is also not in the business of drumming up massive business for a half dozen companies in each industry (and region in many cases), while dozens to hundreds of others get virtually no attention.
By the way, if you’re concerned that I’m unnecessarily obsessing over the first page of search results, it’s time you took a look at one of the many “heat maps”, like the one on this BI post, that depict user behavior. Depending on the source, these maps show you where users click, mouse-over or “look”. Obviously you can see that not only is it important to get on the first page, but you need to be in the top few results on the page if you want your site to garner any significant percentage of user interaction.
While they certainly haven’t and can’t fully resolve the issue I’ve described above, search results influenced by auto-detected user location became increasingly common in 2012. And I’m not talking about the “local pack” here, but rather the results that look like regular organic results, while clearly being local/location influenced, whether geographic terms were used in the searched phrase or not – what I call “local organics”. Obviously this is another effort by Google to serve “relevant” results, but it still doesn’t address the fundamental question about competition I raised above. Is it “fair” (in Google’s own mind) to show just 7 of 20 local Chinese restaurants in the local pack and then 2 or 3 others in the organics?
Local organics only compound the problem described above. At least with the “local pack” Google can claim that they have an algorithm that determines the top 7 local/map/Places results and say “that’s that” for now, but that’s separate and apart from determining organic rankings. And again, we’re not talking about purely organic rankings here either. For example, Google throws up a 7-pack when your location is set to Burlington, VT and you search “chinese food”… but the top (seemingly) organic result is actually a local organic for North Garden Chinese in South Burlington. Fine, but what about others in the area that are at least as “relevant”?
Whatever Google does to resolve this issue, I believe “local organics” will be a part of the answer. Over the last year I’ve noticed that Google has gotten better at pulling a site from organic listings if the company is already among those listed in the local pack, so they clearly have their eye on these types of searches and results pages. As an SEO, it used to be great when I could get a client’s site ranking both organically and in local results, giving them 2 great pieces of real estate on page 1 of search results. Those days are all but gone at this point.
You may notice Yelp and other directories listed among the “local organics” quite often. Now it’s no secret that Google and Yelp have butted heads over the years, but you still regularly see Yelp among Google’s search results. Still, Google would obviously prefer that users stay within one of their own properties where they stand a chance of earning revenue if you click on a PPC ad. I think perhaps they’ll move directories, like Yelp, into another area in the search results page when specific local intent is clear, and free up some of the “local organics” space for individual restaurants. We watched them test out various new search result navigation layouts throughout 2012 before making the change official toward the end of the year. We’ve all wondered what they were up to with this change, which actually saw the results area move further to the left as search filter options were moved above. Could this be paving the way for a new, more “fair” layout for local results?
Google Shopping Will Become Free Again
I know that Google Shopping only transitioned to an all paid service for retailers last summer, but I think the lack of variety and competitive pricing that this will lead to, as well as Amazon’s superior user experience, will cause Google to rethink their decision. I’m definitely going out on a limb here, because Google knows they can get bigger advertisers to continue to pay to play, but this all goes back to relevance and user experience. I think they run the risk of turning off shoppers and if the shoppers aren’t there, the advertisers will eventually leave too. I can personally say I’m someone who stopped using Google product search, in favor of Amazon, over the past several months. From time to time I try to look at Google Shopping again, but I’m almost always disappointed.
This obviously isn’t as much a prediction as it is common sense, but I “predict” we’ll all continue to be inundated with spam. And I’m not talking about us as individuals with email accounts – I’m talking about businesses with websites and webspam. For those of us who don’t get involved with shady blackhat SEO tactics, it would be easy to forget that this garbage is still going on, if it weren’t for the fact that we run into it all of the time. As someone who’s been in this industry for over a decade, I feel like the spammers should have been stomped out about 5 years ago… but it seems like it gets worse every day. But why? Competition. As more websites come online, it becomes tougher and tougher to rank. Desperate people do desperate things, like buying links, even against their own better judgement.
What’s even scarier than desperate people keeping spammers in business? The number of business owners who simply don’t know any better. I can’t count how many times a client has called us and said some variation of the following: “we just got a call (or email) from Google that said we need to buy links to improve our rankings”. The worst part? We’ve had these same scam artists leave these same types of pitches on our voicemail, and I can confirm that they do deceptively attempt to say they “work with Google, Yahoo and Bing”. And I know they read from a script because I’ve gotten the exact same, somewhat lengthy message from two different guys in the last year. Their wording is potentially ambiguous enough to keep them out of legal hot water, but the intention is clearly to make gullible business owners believe that they either work directly for the search engines or at least partner with them. Oh, and the best part? Because they are so tight with Google et al, they’ll sell you top ranking positions that they currently have “available”.
As I said, this isn’t much of a prediction because web spammers are like weeds you just can’t seem to kill, but what amazes me is how brilliant they really are. And given that desperate and low information website owners aren’t going anywhere, there will always be a pool of gullible people willing to purchase some snake oil. What will be interesting is watching to see whether the spammers even feel the need to adapt given that 2012 was the year of the Panda and the Penguin on Google. Basically Google dropped a nuke on article submissions and link buying and did so very publicly. The problem I see is that there will always be a crop of new people who don’t know better and will be sucked in. But can we reasonably assume that this group will shrink over time to the point where spammers are forced to change strategies? That seems unlikely, but we’ll see.
Tablet Detection – Desktop Sites
Do you own a tablet, or a smartphone for that matter? And do you get as frustrated as I do when you’re forced into a dumbed-down, feature-reduced mobile site because the site detected that you were browsing on a mobile device? Fortunately some of these sites at least offer you the option to click a “full site” link in the footer, but 90% of the time I wish the desktop site was the default and the “mobile version” was optional (maybe a pop-up on page load?). And while I can at least understand the argument for defaulting to mobile sites on phones, why would I want to see a crappier version of a site on my 10 inch Nexus 10? And yes, I’m aware that there are settings I can tweak on my own… but do you think most users will ever know or do this?
So I expect that more specific mobile detection will become necessary. How is this SEO related? Well, it’s not related in terms of search engine rankings, but it’s part of online marketing in that it plays a big role in converting visitors. How many weak mobile sites are people backing out of on their tablets, when more precise device detection could have brought them to the desktop site – the site the user is often familiar with after having viewed it on their computer? So this one is really half prediction and half wishful thinking, but I’m sticking with it given the tablet surge that we saw in 2012 (and the expected continuation this year).
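For the curious, here’s a minimal sketch of what I mean by “more specific detection” – instead of a single mobile/desktop fork, distinguish tablets so they can default to the full site. This is my own illustrative code (not any site’s actual implementation), and real detection should rely on a maintained user-agent database rather than these few brittle substring checks:

```python
def classify_device(user_agent: str) -> str:
    """Very rough user-agent classifier: 'tablet', 'phone', or 'desktop'.
    Sketch only -- shows the idea of separating tablets from phones
    instead of lumping both into 'mobile'."""
    ua = user_agent.lower()
    # Android tablets typically omit the "mobile" token that Android phones include.
    if "android" in ua and "mobile" not in ua:
        return "tablet"
    if "ipad" in ua:
        return "tablet"
    if "iphone" in ua or "mobile" in ua or "android" in ua:
        return "phone"
    return "desktop"

# A Nexus 10 user agent (note: no "Mobile" token) would get the desktop site:
nexus10_ua = ("Mozilla/5.0 (Linux; Android 4.2; Nexus 10 Build/JOP40C) "
              "AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.166 Safari/535.19")
print(classify_device(nexus10_ua))  # tablet
```

A site could then serve the desktop layout whenever the result is “tablet” or “desktop”, and offer the mobile version as the opt-in rather than the default.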
Google Panda and Penguin updates will undoubtedly continue, but those names don’t strike fear in the hearts of spammers like they did when they were first released. I’m going to stop short of “predicting”, and just say I wouldn’t be surprised if 1-2 new Google animals are added to the zoo in 2013.
Don’t get caught up chasing a ghost, desperately trying to follow the “latest” trend in SEO. Just keep doing what you’re doing (assuming you’ve been creating content, engaging on social networks, etc. all along). Stick to whitehat tactics, avoid spam at all costs and supplement efforts with PPC advertising when necessary. Most of what has changed is relatively minor, but the picture remains the same and for the foreseeable future regular content and legitimate links and “mentions” from other sites will be key to your website’s success.
Do you have any predictions about SEO in 2013? Thoughts about Rand Fishkin’s predictions or mine? Let me know in the comments below.
Summary: Our latest case study of local search results, both in local packs and “local organics” – a search for “laser tag” on Google, Bing and Yahoo! Today we examine both the similarities and the differences in search results on the world’s top 3 search engines and revisit an issue we’ve discussed recently – Bing powered Yahoo! results pages differing significantly from Bing results.
In September I coined the phrase “local organics” as a term used to describe search results that appear to be standard Google organic search results (rather than part of a “local pack”), yet they are local to the searcher, based on their location settings (often autodetected). What I neglected to mention at the time was Yahoo! and Bing do this as well. I did mention this a couple of weeks later in a post about Bing powered Yahoo! results, but only in passing so I thought it was worth noting again after my random search for “laser tag” turned up some interesting results.
Why Laser Tag?
Honestly, I can’t remember – I first Googled the term several months ago. I can only say for certain that I wasn’t actually looking for a venue to play laser tag or a place to rent equipment. Something online made me curious about what type of results the search would generate, so I checked the 3 major engines and the idea for this post was born. As you’ve likely noticed, my search results case study posts center around pretty random searches. This is partly by dumb luck (writing about what I stumble across) and partly by design (examples of everyday things people might search). The latter is obviously the more serious motivation because it helps us understand how the major search engines are determining what results to show, which illustrates the need for proper optimization.
Google Laser Tag
As I mentioned above, I first searched for “laser tag” on Google, which produced these results:
Now the first thing you’ll likely notice is that in the location settings, Google has decided that we’re located in the Boston suburb of Westford, MA. I’ve mentioned in other posts that this is caused by our Comcast business class IP address. This is easy enough to manually change, as you’ll see shortly, but I thought we should examine these results first because they include the local pack.
As you can see, Google is showing a “local pack” with 3 local results for laser tag centers in the area. At first I was a little surprised by this, but “laser tag center” is a legitimate category option in Google+ Local, so when Google interprets local search intent, they usually display a local pack if there are results to be shown. The surprise for me was the fact that “laser tag centers” were big enough to justify their own category.
But did you notice that the local pack wasn’t the only thing Massachusetts-y about those results?
#2 organic is a local (to Westford, MA) laser tag venue called “Laser Craze”. In fact, Laser Craze holds the #3 position in the local pack as well. The first result after the local pack, #4, is also a local Massachusetts laser tag venue, Laser Zone. Lazer Gate is another local venue in MA and it holds the #9 position. Clearly a great example of “local organics”… But what about some of the other organic results on this page – why is #6 a place in Michigan, #8 in Virginia/Maryland, and #10 in Madison, WI? Google thinks we’re in Mass., so why would it be showing us local results from other states/regions?
My best guess here would be that we’re looking at sites that rank organically globally – meaning location settings have no bearing and it just happens to be that these “local” websites are ranking anyway… Bolstering my case is the fact that I still see all three of those results in the first 2 pages, even after changing my location to 05401/Burlington, VT.
Laser Tag Burlington VT – 05401
Take a look at the Google results for ‘laser tag’ after I changed my location to Burlington:
Because we live in a much less populous region with fewer laser tag venues, Google has chosen not to show a local pack at all here. They are however attempting to show some “local organics” – first with Pizza Putt at #3. The #4 result is an interior, region-specific page on a site called “AllTimeFavorites.com”. Essentially it’s not a “local” site, but a type of directory that has a page for our local area. Unfortunately it appears to be an out of date/abandoned site that quite frankly is a huge mess and pretty useless. #6 looks promising as a “local organic”, however Lazer X Burlington is in Burlington, NC – not VT. C’mon Google, get it together! Why aren’t you perfect yet?! Seriously though, I cut them a lot of slack because the search engine really does a lot of amazing things that make all of our lives easier – but this is a bit of a surprising error.
Bing Laser Tag
Bing produces similar “local organics” to Google. They show Pizza Putt again (which is the only laser tag venue I was initially even aware of in BTV) as well as a “Vermont laser tag” interior page from the vendor directory of party planning site Punchbowl:
Bing even includes several “local organics” in search results beyond the first page. Take a look at page 2 below:
As you can see there is a company called Green Mountain Laser Tag as well as the “Vermont Laser Tag” page on another national directory site – Fun Fix. I dug several pages deep and was still finding various “Vermont” pages on directory sites. I don’t believe I’ve ever seen “local organics” on Google past the first page. Either way, it’s always interesting to see how the search engines are trying to display localized results outside of local packs and when the searcher didn’t include any geographic language. For whatever reason, I’m not seeing the same on Yahoo:
Yahoo! Laser Tag
It’s not news that Bing powered Yahoo! results aren’t identical to Bing results and there’s still a lot of mystery around the variances, but I find it particularly odd that Yahoo! isn’t showing anything local (organic or otherwise). I’ve mentioned issues with changing location settings on Yahoo! in the past, but in my experience there’s always been some default setting still. Today doesn’t appear to be an exception, because I get a local pack when performing a more obvious “local” search – “plumber”:
You can clearly see that Yahoo! thinks we’re in Williston, which is an improvement, because Yahoo! has always detected our IP address and placed us in a Boston suburb as well. At least Williston is a town in the same county, so I’ll call that progress. Just for the heck of it I attempted to change my location settings, which I can do manually or ask Yahoo! to auto-detect:
Unfortunately Yahoo! failed to detect my location:
Anyway, back to the point – Yahoo! didn’t show any local results for the “laser tag” search, so I want to point out again that despite what we read all over the web, they haven’t completely relinquished control of their results to Bing. We already knew that Yahoo! controls their local directory listings (items that show in their local pack) and we already knew that they still tend to mix up the organics a bit differently on the two engines, but now we also know that they differ in how they decide whether or not to show “local organics” results. I ran some other test searches and it’s almost looking like Yahoo! isn’t showing local organics at all right now. Maybe a test?
The conspiracy theorist in me can’t help but wonder if the reason Yahoo!’s results are less “local” is a strategy being used by Bing to further take away search engine market share. Is this Bing orchestrating a managed decline of Yahoo!? Or is this just a continuation of the slide that’s been underway for years?
Whatever the case, the search results on Google and Bing tell us that if you own or manage a laser tag arena or any other type of local business, you should be claiming all of your local directory listings to build your citation profile and you should be optimizing your website with geographical keywords. It would seem that “local organics” are here to stay (and increasingly becoming part of the expected user experience) so it behooves you to stay ahead of the curve and make sure you’re doing all you can to optimize your website and online presence for local search.
Summary: Unlike most Google Doodles that are more for entertainment value, today’s Doodle actually links you to their handy voter information tool. This tool allows you to enter your address and get the address/directions for your local polling place and ID requirement information. The tool also provides a list of the candidates, their party affiliations, as well as links to their websites and social media pages.
I’m a little late to this party, but I would be remiss if I didn’t blog about Google’s Voter Information tool – another very useful and free service provided by the big G. Those of us in the “web world” have been aware of this tool for several days now, but only today when Google released an election themed Google Doodle did most Americans probably become aware of it:
If you click on the Google Doodle itself you’re just taken to the search results page for “Where do i vote“, which has been modified to include this address search form at the top, which will allow you to locate your polling place:
You can type your address in here, or if you click the text link seen at the bottom of the Google Doodle screenshot above, you’ll be taken directly to the Google Voter Information tool. As you can see, there is only one polling place in Winooski, but Google provides a map, a link for directions and the hours the location is open. They even include Vermont voter ID requirement info.
But this tool is much more than simply a polling place locator. To the right of the polling place information is a list of the candidates that are on the ballot in this state. Even the lesser known presidential candidates differ on a state-by-state basis because they didn’t all meet the ballot requirements in all states. Anyway, Google has a convenient little carousel that lets you cycle through the different races. Once you select a race, you then see a list of all candidates on the ballot, links to their websites and social media (where applicable), search results for their names, and party affiliation.
TIP: When checking for your polling place, be sure to put in a complete address, not just the name of your city/town as I did at first. Searching just your town name gets you hit with this error. FWIW, I also ran into this same issue when checking a number of addresses of businesses in Burlington, but residential addresses worked. I’m not sure how (or even why) the tool would filter out commercial addresses, but you should be aware:
So there you have it, another great tool offered by Google gratis. Now use it – Find your polling place and vote before polls close in a few hours!
Summary: A couple of weeks ago, Google’s earnings were mistakenly released prematurely and the revealed revenue dip caused a panic among investors. Ever since, it seems like every day Google has released some new gimmick that seems geared toward increasing ad revenue. But do they really have anything to be worried about in the first place? If so, how worried? And will any of these “fixes” actually change anything?
You probably already heard about Google’s recent stock panic after someone mistakenly and prematurely released a disappointing earnings report, but here’s the gist if you didn’t:
Google’s stock plunged after it released its third-quarter earnings report early, apparently by mistake.
The company’s stock was down 7.4 percent in afternoon trading, at $699.51. Google was set to report its results after the market closed. The sudden drop in the stock led its trading to be suspended on NASDAQ.
In the regulatory filing, Google said it earned $2.18 billion, or $6.53 per share, during the three months ending in September. That compared with net income of $2.73 billion, or $8.33 per share, last year.
The earnings would have been $9.03 per share, if not for Google’s accounting costs for employee stock compensation and restructuring charges related to the acquisition of Motorola. Analysts polled by FactSet were expecting $10.63 per share, on average.
Revenue climbed 45 percent from last year to $14.1 billion. Excluding compensation for websites that generate traffic for Google’s ads, revenue was $11.33 billion. Analysts were expecting $11.5 billion.
So it all boils down to making $0.17B less than expected? Granted, falling short of expected earnings by $170M isn’t chump change, but as a percentage it’s not even 1.5% of the $11.5B that investors were looking for. To me, $11.33B doesn’t sound like anything to be upset about, but I’m not a serious Wall Street expert. The Financial Times digs a little deeper and seems to shed some light on investors’ larger concern – Q3 net income may have been $2.18B, but that was down 20% from $2.73B in Q3 last year.
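For anyone checking my math, the shortfall as a percentage works out like this (revenue figures from the report quoted above):

```python
expected = 11.50e9  # analysts' ex-TAC revenue expectation, in dollars
actual = 11.33e9    # reported ex-TAC revenue

shortfall = expected - actual
pct_short = shortfall / expected * 100

print(f"Missed by ${shortfall / 1e9:.2f}B, i.e. {pct_short:.2f}% of expectations")
# Missed by $0.17B, i.e. 1.48% of expectations
```

So a miss of well under 1.5% – enough to spook Wall Street, apparently, but hardly a collapse.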
Google has seen a decline in CPC for four consecutive quarters, but its CPC rose for eight consecutive quarters before it started falling. The downward trend in CPC is sticking because mobile is taking over the web.
You’ll recall that mobile ad revenue, or lack thereof, was what led to Facebook’s stock tumble in their first quarter after IPO. Per share value at FB was eventually cut in half, however it jumped 20 percent just yesterday on news that 14 percent of their revenue now comes from mobile ads.
I wouldn’t go as far as to say Google should take lessons from Facebook, because I think in the grand scheme of things, Google is better at virtually everything it does including and probably especially advertising. Even with the revenue concerns, Search Engine Land points out that Google Adwords is bringing in $100M per day! The one area FB wins is as a social network, but the only reason people aren’t using Google+ instead is because everyone already uses FB. Hard to transition a billion people away from something familiar. Regardless, “mobile” is a new advertising platform that these companies will have to figure out.
Is Google the Next Yahoo!?
CNBC actually recently spoke to an analyst, Eric Jackson, who is making the alarming assertion that Google could “disappear” in 5 years. Mind you he’s clearly hedging by using the word “could”, but it’s still a pretty shocking claim. He elaborates by saying he means “disappear” in the sense that Yahoo! has “disappeared”. I’m not sure I would go that far in describing Yahoo!, but I understand the point – even internet behemoths like these can be brought down in this fast-paced, constantly changing world. In fact I’ve made similar claims myself – I can see Facebook “disappearing” in just a few years, particularly if Google+ plays its cards right.
Anyway, Mr. Jackson is right when he says Google’s decline “could” happen, but here’s why I don’t think it’s likely – ad dollars will eventually go where web users are. The claim that people don’t want to spend their ad dollars on mobile may be true, but if that’s where the consumers are, they will eventually be forced to shift their ad dollars. What about another competitor beating Google to the punch, you ask? Who? Bing? Get real. Besides, it’s not as if they’re immune to the same issue. In fact, in my mind, the nail in the coffin of this Google “disappearing” theory is tied into something Jackson himself points out – mobile ads degrading user experience. I agree that they can and do degrade the experience already on FB, but again – what’s your alternative? In Google’s case, it’s not as if some other search engine is lurking in the shadows with a master strategy for displaying ads in a way that won’t upset users, just waiting for Google to stumble so it can swoop in! If anything, the big G is light-years ahead of anyone who would even dream of taking their place.
Sure, “it happened to Yahoo!”, but that’s because Google came along and was better. You expect me to believe something is out there right now that can best Google? At this stage of the game I just don’t see it, especially with Google expanding into so many areas like mobile phones, tablets, computers, operating systems, etc.
Google Adwords Panicking?
The reason I actually wanted to write this post today is to highlight some Google moves this week that I interpret as panic. It seems very uncharacteristic of them, and in my opinion the panic isn’t warranted, but I don’t know how else to describe it.
Google to Combine Mobile & Desktop Ads – 180?
The first freakout I saw this week was detailed in this Search Engine Watch blog post about a major shift coming soon to Adwords – the combining of desktop and mobile ads. Here’s how Google CEO Larry Page explained the move:
“As more users upgrade to Google+, more users are enjoying amazing experiences across devices. In the same way, we want to make advertising super simple for customers,” Page told investors. “There are separate campaigns for desktop and mobile right now. This is more arduous for users, and mobile opportunities possibly get missed. Advertisers should be free to think about their audience while we do the hard work of dynamically optimizing their campaigns across devices.”
First I just have to take issue with his claim that there are separate campaigns for desktop and mobile. They often recommend this and in many cases it’s a good idea that allows you to optimize separately, but it’s not a forced setting. In fact, despite all of the specific targeting options available (see screen shot below – click to enlarge) “All available devices” is actually the default setting.
Again, Google is constantly encouraging us to make sure we run separate campaigns for mobile, but by default campaigns are set to run on both desktop machines and mobile devices. So at first glance this “change” they’re announcing doesn’t really seem like anything new, although if you read the whole SEW post/critique, there does seem to be some sort of shift; it’s just not 100% clear to me exactly what they’re changing. It sounds like they’ll want us to write separate mobile and desktop ads within the same campaign and Google will just serve up the appropriate one, rather than simply trimming some of the text as it does now when desktop ads appear on mobile. Whatever the changes, my point is that Google is doing a 180. SEW nails it here:
Just over a month ago, Google told us, “Consumers seeking retail information are looking for things they can act on immediately. They still prefer to do deep research, read reviews, and make big purchases on desktops; making contact and taking action are their priorities when mobile consumers are on the go.”
And points out that this will be a tough sell, especially for advertisers:
Google will have their work cut out for them convincing advertisers that mobile ads are any more effective if they can’t tell their performance apart from desktop ads. Investors aren’t apt to go for less transparency, either. As much as the “users want the same on mobile and desktop” mantra was repeated yesterday by Page and other Google execs, it just doesn’t make it so.
Time will tell what this change will actually mean for all parties involved, but the timing and the fact that it’s all about “mobile” certainly indicates to me that this is a reaction to the revenue issue that resulted in the stock tanking just days earlier.
Dynamic Search Ads
Then came the announcement last Thursday that Dynamic Search Ads, previously in beta, had been rolled out to the public. Under different circumstances, I’d say this was just a new feature release, but can you blame me for being suspicious given the timing? By the way, if you dig through some of Google’s materials, you’ll find that DSAs are another automation feature in Adwords. While some automation, like daily budgets, is great, I’m always skeptical of decision-making by machines. I’m testing “Conversion Optimization” automation on one campaign right now, but the jury’s still out.
In addition to the other big moves we just went over, I got two emails from Google Places last Tuesday, each pushing Adwords Express with the promotions “free setup support” and “$100 when you spend $25”. I received these because emails tied to the Google Places listings for a couple of local SEO clients get forwarded to me. These types of emails aren’t unheard of, but they aren’t exactly common either. Again, consider the timing. In fact, given that Google Places is all about “local” and “local” is huge on mobile, I can see this being an area they start to push hard on.
Google Chrome for Windows 8 – GetYourGoogleBack.com
I’m glad I delayed publishing this post last Friday, because we now have another example of Google showing signs of nervousness – a website (GetYourGoogleBack.com) and a video dedicated to showing users how to get the Google Chrome app on Windows 8 and how to make it their default browser. (h/t SEO Roundtable)
In fairness, this is likely something they’ve been planning for some time and not a reaction to the earnings news, but it comes off as a little desperate when you look at all of these changes in totality. I also think a little worry is warranted on their part, due to a shrewd move by Microsoft. Past versions of Windows have obviously shipped with Internet Explorer as the default too, but the major change with Win 8 is that it now relies on apps, and by default the search app forces you to search Bing from your desktop.
The process by which users can get Chrome on their new computers isn’t actually difficult, but it’s different and new, so I think Google is justifiably concerned that some people might put off figuring out how to get Chrome back. And what if those people actually start to like their experience on Bing? Loss of any search engine market share would likely mean ongoing revenue issues for Google (if a smaller percentage of users are using your engine, a smaller percentage will be clicking your ads).
So what should we make of all of this? Google clearly remains the undisputed king of search, but increased mobile usage is causing revenue issues. But mobile advertising issues apply to all competitors as well, so I’m not sure how big of a concern this should be. I also think Google has enough smart people working on ways to overcome these issues that they’ll have it sorted out before anyone else.
The Windows 8 issue is more of a cause for concern, in my opinion, but if people like Chrome, I think they’ll go looking for it the same way they always have. Google just better hope that the new default Bing web search app isn’t fast enough to satisfy searchers’ needs before they go out looking for Chrome.
The bottom line is that I don’t think Google needs to be in panic mode, yet, but they should be making plans to deal with these obstacles. In fact that may be all that we’re seeing, but the fact that they’re throwing all of this at us in just the last week, following the stock value drop, seems to indicate they’re being reactive, rather than proactive, even if that’s not the case.
Summary: As a follow-up to our recent post on Bing displaying product images in search results that weren’t appropriate for all ages, we decided to give credit where it’s due – Bing seems to have cleaned things up a bit, but now Google seems to be stumbling in this area as well. Today we examine additional examples of search engines displaying adult-oriented search results to all audiences, regardless of filter settings.
Earlier this month I couldn’t help ribbing Bing a bit over some adult-targeted shopping items finding their way into non-adult, product-specific searches. Essentially, Bing was showing “sexy” women’s Halloween costume shopping results embedded within organic results for the innocent search “halloween”. The fact that they were somewhat mixed in with toddler and baby costumes didn’t make it any better. Anyway, it was a lighthearted critique, but I thought I’d check back in on the world’s #2 search engine to see if anything had changed in the past couple of weeks, and surprisingly it has!
First, in case you missed it, here’s the screenshot I took earlier this month showing the “adult” Halloween costume shopping results within the results for “halloween” (click to enlarge):
Now, let’s compare that with what we’re seeing today:
Much milder, wouldn’t you say? You’re welcome America! What’s that, you don’t think I can take credit for it? Well, I just did. Deal with it.
Seriously though, whether someone at Bing actually saw my original post or not, they clearly spotted and corrected the problem. And should you think this is just a fluke, and that the 5 shopping results they happen to be showing right now are more universally age appropriate, I clicked through to the full shopping results:
Okay, so there are a few inappropriate products still sneaking their way in there (circled in the image above), but you can click through several pages of shopping results and find that they are few and far between. It’s not perfect, and obviously no one wants their children to see “the flasher” costume, but there’s no question they’ve made an effort to clean it up. Of course, this change won’t offer any comfort to Molly Wood, the other victim of Bing’s adult search result issues we talked about in the original post.
While Bing seems to be straightening things out with these Halloween searches, Google appears to be headed in the opposite direction to some extent. Granted, Google isn’t pulling “sexy” costumes into organic results for the word “halloween”, but they are showing one (and it’s the only costume they’re showing) in results for “halloween costumes”:
That’s right, Google is featuring one shopping result on the standard organic results page and it’s for a “Sexy Banana Costume” from lingerie and sexy costume retailer Yandy.com. If instead of selecting that individual costume you click through to additional shopping results, you can see that it’s a bit of a mixed bag.
There aren’t any “sexy” costumes on the first pages of shopping results, but there’s a “sassy” and a “flirty”, and I wouldn’t say that plug/outlet couples costume is rated G by any stretch of the imagination. Digging through several pages of search results, it doesn’t appear they’re filtered in any way; in fact, they seem to mix “sexy” costumes together with children’s and other more tame costumes even more than Bing does. Take a look at the variance in just pages 10 and 11:
Page 10 shopping results are clearly adult “sizes,” but they aren’t “adult” oriented, whereas almost every costume on page 11 has “sexy” in the name. By the way, there is one commonality among all 3 of these examples, and oddly enough it’s most present on sexy page 11 – “Plus Size” costumes. Yeesh, maybe we should be more concerned about that? With all we hear about childhood obesity these days, this is like a double whammy for the kids – images that aren’t age appropriate but at the same time essentially normalize being overweight. I’m of course joking about this “plus size” costume critique (mostly), but I think we can agree it’s a little odd.
Before I wrap this post up, I want to point out another example of adult/general search results being mixed on Google. It’s only fair after I let them off pretty easy in the earlier post because they did a great job of handling the Molly Wood situation. After recognizing that the “sexy banana” costume came from a lingerie shop, I couldn’t help but wonder how Google might deal with search intent on the word “teddy” which is obviously open to many interpretations (bears, Presidents of the U.S., lingerie):
As you can see, Google is giving us a variety of options because search intent isn’t clear. Logically, this makes complete sense so I don’t mean to quibble with Google over the relevancy of their search results, but I’ll go back to the central point – parents wouldn’t want their kids seeing women in lingerie on such a non-adult oriented search. And before anyone asks if SafeSearch filters were turned off, they weren’t – I was using the default “Moderate” setting. I even performed the same search after changing the setting to “strict” filtering and got the same results:
It looks like Google is only filtering out pornographic images or things it has otherwise somehow deemed inappropriate, but it seems odd that lingerie hasn’t made that category – even odder than the “sexy” costumes, because you can at least understand how those get lumped into a generic “halloween costumes” search. In the end, I think this boils down to search engine technology just not being as smart as we sometimes assume it is. Google and the other engines really do an amazing job, and most of us can’t even begin to comprehend how they do it, so clearly I’m being a bit of a nitpicker here, but it’s always worth noting that machines aren’t infallible. That’s just one of many lessons I learned from the Will Smith blockbuster I, Robot.
Last week I showed you a few examples of the Google Knowledge Graph and Carousel in action, and as part of that post I touched briefly on “rich snippets” when pointing out where Google was pulling data from to display in the “Upcoming Events” section in the Knowledge Graph.
Note that in Burlington’s Knowledge Graph section, Google includes a scrollable “Upcoming Events” section and a “Points of Interest” section. As always, following any of these links simply takes you to drilled-down search results pages, which doesn’t make it easy to figure out where Google pulls that data from. I do, however, see that most of those search results pages appear to be displaying structured data (a.k.a. rich snippets, a.k.a. schema markup) from a variety of event sites. Basically, Google couldn’t rely on a single database to supply all of the up-to-date event info, so they appear to be using websites that are marked up with rich snippets, like ticketmaster.com, excite.com/events, zevents.com, seatgeek.com, songkick.com, etc.
Today, I’d like to explore at least the basics of “rich snippets” a little further. A couple of weeks back when I simply searched for “burlington vt”, one of the “upcoming events” was Gabriel Iglesias at the Flynn (9/27). Here’s a screenshot of the Knowledge Graph portion of the search results page for “burlington vt” shown at the time:
Upcoming Event Data in Google’s Knowledge Graph
Again, the Knowledge Graph helps you drill down to more specific things you might be looking for, so when you click anything in it, you’re usually directed to more specific results pages. This Gabriel Iglesias “upcoming event” item in the Knowledge Graph on the “Burlington VT” search was no different. Let’s go back and take a look at the screenshot I took of those results at the time, this time with a handful of numbered items I’ll explain momentarily:
As I mentioned, Google seems to be pulling event data from multiple sources and it’s clear that a lot of ticket brokers, etc. have this information in their databases. Of course, not everyone has figured out how to use rich snippets on their site to better feed this data out to Google for display in search results pages, but clearly many have. Note that Google seems to be recognizing that 5 of the 10 ‘Gabriel Iglesias’ search results shown on page 1 appear to be using rich snippets/schema mark-up/structured data of some kind. That’s half of the results on page 1! But I should point out that they aren’t all displaying structured data in the same way:
Examining Structured Data in Search Results
TicketMaster – Lists 3 upcoming venues for the comedian, with dates. Links go to individual TicketMaster URLs, two of which redirect to other ticket purchasing sites. (FlynnTix in one case, which just so happens to be our client.)
LiveNation – Lists just the Flynn event on 9/27. This page doesn’t redirect to the Flynn site, but if you click to “Find Tickets” on the page, it does bring you there. Despite only showing 1 show time result (rather than 3), I actually think this result is better because it’s the most relevant to the Iglesias/Flynn search (see search box in screen shot.)
VividSeats – Lists 3 events again, but they’re different from TicketMaster’s. TicketMaster’s third item is a show in Westbury, NY on 9/30, but the third item here is a 9/29 show in CT. Notice also that the linked text in each item is now just the venue name, whereas TicketMaster used the show name as the anchor text, with the venue name following in plain text.
Excite – These results use the same format as the TicketMaster items, however they too have a 9/29 show that TicketMaster didn’t.
SonicLiving – Lists only the single event, like LiveNation, however nothing is linked this time.
So why all the variation? Let’s explore that a bit…
Given that this all happened a week ago I can’t go back and check the code on each page, but I did look it over some at the time, and I can say that not all of these sites are marked up to the exact specifications laid out on Schema.org. Google has been supporting other markup/tags for several years, and there’s no indication they plan to end that support, but they do seem to be encouraging new websites to stick with schema.org markup. At any rate, that’s one major difference between these various sites – they’re all using different methods for marking up their data. The consistent markup vocabulary shown on schema.org is recognized by Google, Bing, Yahoo! and others, but it’s still relatively new (it launched in summer 2011).
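To make this more concrete, here’s a minimal sketch of what schema.org Event markup on one of these listing pages might look like, using the microdata format. The URL, venue details, and exact property choices below are hypothetical illustrations, not copied from any of the sites discussed:

```html
<!-- Minimal sketch of a schema.org Event marked up with microdata.
     All names, dates, and URLs here are hypothetical examples. -->
<div itemscope itemtype="http://schema.org/Event">
  <a itemprop="url" href="http://www.example.com/events/gabriel-iglesias">
    <span itemprop="name">Gabriel Iglesias</span>
  </a>
  <meta itemprop="startDate" content="2012-09-27T20:00">Sep 27
  <div itemprop="location" itemscope itemtype="http://schema.org/Place">
    <span itemprop="name">Flynn Center</span>,
    <span itemprop="address">Burlington, VT</span>
  </div>
</div>
```

A search engine that understands this vocabulary can pull the event name, date, and venue straight out of the page, which is presumably how those “upcoming shows” links get assembled underneath a result.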
But different markup language doesn’t explain 3 upcoming events vs. 1, missing dates on some sites, etc. What’s causing these inconsistencies? Let’s take them one at a time.
It’s impossible to be certain why the 3 events TicketMaster had listed didn’t match up exactly with VividSeats and Excite. I can speculate that either TicketMaster has incomplete data, or they are such a major player that they actually have the best up-to-date data and these other, smaller players (VividSeats and Excite) didn’t get an update that the 9/29 show was cancelled. I can’t find any evidence of the latter, so I’m actually leaning toward the former, despite the fact that one would think TicketMaster should have data as reliable as anyone’s. Of course, the discrepancy could be caused by something else entirely. It could have something to do with how their structured data tags are set up, or even just how Google has indexed them.
As I noted above, Excite and TicketMaster are displaying their 3 items differently than VividSeats. This too could have something to do with how Google has them indexed, but in this case I suspect it has more to do with how they’ve set up the tags on their structured data, as I explained a few paragraphs back.
I’m not sure why Google only showed one event for LiveNation (again, it really could be anything), but I think I know why they only showed one for SonicLiving. Ignoring specific markup language for a moment, the page indexed in regular organics for SonicLiving is the only one of the 5 examples that goes directly to an individual show details page. All the others were general Gabriel Iglesias “upcoming shows” pages, which listed all upcoming shows, and Google displayed structured data for the most near-term upcoming shows underneath each of those results. Displaying these deeper links wasn’t necessary for the SonicLiving result, because the normal organic result was already bringing you to the specific show details page. So they just showed the general info (from structured data) about that show to let users know it was the page they were looking for. And this arguably takes users to a better quality landing page, IMO.
Why Aren’t You Using Structured Data?
I bet some of you are thinking, “who cares about all of this?” Answer: potentially anyone who runs a website. The fact that Google is displaying additional data within standard organic results presents a great opportunity for virtually all websites, because each individual result can now take up more real estate on the page while often driving users directly to the specific pages within your site that are most relevant to them.
Sure, there’s no guarantee that Google will show your structured data, even if you mark it up properly, but doing so is quickly becoming best practice. There’s no downside, and there’s a chance that Google will scrape and display your rich snippets in search results in the future, if they aren’t already doing so. Why else would they have come up with a monster list of tags that virtually anyone could use? Seriously, they have everything from comedy clubs and notaries to dietary supplements and police stations.
I used a comedy show example for the purposes of this post mainly because I knew it would be easy to show how Google is pulling rich snippets into the results already, but there are countless others I could use as well. Before I wrap up this post, take a quick look at all of the unique pieces of data being pulled into recipe search results:
You can clearly see that Google is displaying at least the following items from recipe websites for recipe related searches:
5-star ratings and number of reviews
Prep/cook time
Calorie count
You’ll also see, like with the show info, that the layout and amount of information again varies from one search result to another. Some include all of the items listed above, some just single bits of data like prep/cook time or reviews.
I glanced at the code on these sites as well and I can see that some of them (AllRecipes and MyRecipes) are using schema.org markup language. Google actually seems to be recognizing some schema.org markup in concert with other formats to scrape some of the data being displayed in search results.
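As a rough illustration, recipe markup along the following lines is what feeds those extra snippet details. This is a simplified sketch using schema.org’s Recipe type; the recipe name and all of the values here are invented, not taken from AllRecipes or MyRecipes:

```html
<!-- Hypothetical schema.org Recipe microdata sketch; all values invented. -->
<div itemscope itemtype="http://schema.org/Recipe">
  <h2 itemprop="name">Classic Banana Bread</h2>
  <div itemprop="aggregateRating" itemscope itemtype="http://schema.org/AggregateRating">
    <span itemprop="ratingValue">4.5</span> stars,
    <span itemprop="reviewCount">212</span> reviews
  </div>
  <meta itemprop="prepTime" content="PT15M">Prep: 15 min
  <meta itemprop="cookTime" content="PT1H">Cook: 1 hr
  <div itemprop="nutrition" itemscope itemtype="http://schema.org/NutritionInformation">
    <span itemprop="calories">240 calories</span> per serving
  </div>
  <span itemprop="ingredients">3 ripe bananas</span>
</div>
```

Note that the times use ISO 8601 durations (PT15M, PT1H) in hidden meta tags, so the page can display human-readable text while still handing the engines machine-readable values for things like the cook time filter.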
You also might have noticed the filtering options to the left of the search results. You can select “Yes” or “No” check boxes next to various common ingredients, as well as filter by cook time and calorie count. This is another example of what I call “Smarter Search”, because it’s just amazing how intuitive and sophisticated Google has gotten. It’s providing these filtering options simply because it knows I’m doing a recipe search. How great is this feature for people concerned about specific food allergies, time and calorie intake?
Stay tuned for another post on search filtering options once I’ve had time to do more research, because I’m sure there are other types of searches that trigger them – I just haven’t seen them yet. But I think this feature is yet another reason it may become increasingly important to have your structured data marked up. How about you? Has Google produced search results filtering options based on any non-recipe types of searches you’ve done?
Bottom line: Get working on your Rich Snippets ASAP, but always be sure to test out your markup using Google’s Structured Data Testing Tool available through Google Webmaster Tools. As an aside, you should create a Webmaster Tools account and submit your website’s sitemap if you haven’t already.
When implementing rich snippets, keep in mind that you shouldn’t have to edit code on each individual page if you’ve got a large database-driven site like these ticketing agents and recipe sites. You should be able to set these tags up in a template that applies across all details pages throughout your site. We understand that this is a skill set outside of what most business owners will have, so feel free to drop us a line and we can help you sort through your options and talk about how we might be able to help.
Summary: Since June there have been reports of users randomly seeing changes to the layout of Google search results pages. It wasn’t until today that we saw the test version ourselves. The changes they’re testing are somewhat subtle, but this could signal a full roll out soon, given that this has been ongoing for several months now.
Quick, how many differences can you see in the two side-by-side images below (click to enlarge):
The “normal”, familiar search results page is seen to the left and the version they are testing is to the right. Were you able to spot the changes?
Take a look at the same image, but this time I’ve highlighted the differences:
The items inside the yellow boxes on the “normal” version aren’t shown at all on the testing version, and one section (the number of search results and the time it took to load the page) is replaced by a new navigation menu. Essentially, the change seems to be about freeing up space to the left of the search results by moving the navigation to just above the results and dropping a couple of items. One of those items, though, is actually kind of important – location settings. If Google incorrectly autodetects your location from your IP address, as they often do here (they think we’re in MA), you’re not going to get relevant local search results. The “Show search tools” option is also dropped, but I don’t think that’s really a big deal. There are some potentially useful tools there, but I doubt very many people use them.
Have you ever seen the test version? This was the first time I had, so I grabbed the screenshot quickly. I know, from past tests, that they can be gone the second you close your browser, and the only way you might ever see them again is if Google at some point decides to fully roll out the change. I’m reminded of the time, less than 2 years ago, that I would randomly see URLs showing up above meta descriptions in search results but no one else in the office did. Weeks later, Google rolled out the change for everyone – all of the time.
What’s particularly interesting with this test is that it’s been going on for months. As I said, I hadn’t seen it before, but Barry Schwartz first wrote about it back in June. Last week Schwartz confirmed that others were still seeing the test, but follow that first link to his June post, where he includes a mock-up of what he theorized Google may be up to. He’s war-gaming the idea that Google may be moving ads from the right to the would-be newly created white space in the left column. Not a bad theory, but it looks very cluttered to me, which could turn off even faithful users. I know Google is in business to make money, and the Adsense heat map shows higher click-through rates for ads in left columns, but I’m not sure they’re willing to risk a mass exodus of users with a change like this, which could cancel out the increased revenue they might have expected.
Other than my point about the location settings disappearing, I’d say my only complaint is that it seems odd to have 2 sets of horizontal navigation. See the black bar at the top? Some of the items overlap (images, maps, news), which is obviously a complete waste of space, and personally I’m actually drawn more to the black bar items. Seems to me that perhaps they’re going to phase out the “new” white-background navigation they’re testing, but gradually – drop those items from the sidebar, preserve them just above the search results to ease people into the idea, and then drop them entirely. Not sure what they’ll do with the space to the left. Maybe the search results themselves will just be slid over? Anyway, it’s not a theory I’ve really worked on, but I’d lean more in that direction at this point.
What do you think? Any other theories as to what this change is all about? Let us know in the comments below…