May 22nd, 2012.
There has been a lot of discussion around the search marketing industry over the past few weeks, thanks to what many consider a major update from Google. Plenty of speculation has followed, producing some good and some not-so-good advice.
With all of this information floating about it’s difficult for anyone without their ‘ear to the ground’ to get a concrete understanding of exactly what ‘Penguin’ is, and what the effects have been. I’ll put the speculation to one side for the moment and start with the facts:
What is it?
Penguin is Google's latest update aimed at rewarding high-quality sites in search results by targeting and demoting sites that appear 'overly optimised'. Sites that have used (or continue to use) outdated tactics (specifically, tactics to get other websites to link to theirs in order to improve rankings in search results) have been affected. However, there are also reports of websites that have never engaged in such tactics being hit by the update.
When did this happen?
Google released a blog post on 24th April- almost one month ago at the time of writing- stating that the update would roll out "in the next few days". Most affected sites will have noticed changes from around the 24th onwards.
How do I tell if I was affected?
Sites affected by the update will probably notice a change in rankings and visits from organic search (specifically visits from Google) around this time. If you use Google Analytics you should be able to tell by navigating to 'Traffic Sources' -> 'Sources' -> 'Search' -> 'Organic', making sure your date range spans a few weeks before and after this date. To be sure, it's best to limit the data you are viewing to Google only: look for 'Primary Dimension', click 'Source' next to it to get a list of organic search sources, and click on 'google':
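If you'd rather check the numbers outside the Analytics interface, the same before-and-after comparison can be scripted. A minimal sketch in Python, assuming you've exported daily Google organic visits from the report above (the figures below are invented for illustration):

```python
from datetime import date

PENGUIN_DATE = date(2012, 4, 24)

def average_visits(rows, start, end):
    """Mean daily visits between two dates (inclusive)."""
    visits = [v for d, v in rows if start <= d <= end]
    return sum(visits) / len(visits) if visits else 0.0

# Invented daily (date, visits) pairs; in practice, export these from
# the Google organic report described above.
rows = [(date(2012, 4, 20), 980), (date(2012, 4, 21), 1010),
        (date(2012, 4, 22), 995), (date(2012, 4, 23), 1005),
        (date(2012, 4, 24), 640), (date(2012, 4, 25), 610),
        (date(2012, 4, 26), 655), (date(2012, 4, 27), 630)]

before = average_visits(rows, date(2012, 4, 1), date(2012, 4, 23))
after = average_visits(rows, PENGUIN_DATE, date(2012, 5, 22))
print(f"before: {before:.0f}/day, after: {after:.0f}/day, "
      f"change: {(after - before) / before:+.0%}")
```

A sharp, sustained drop in the daily average from the 24th onwards is the pattern to look for.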
The example above shows a drop in visits from organic search (specifically from Google)- if you see a consistent increase in visits around this time it is likely that a competitor may have been affected and your site may have improved in rankings as a result.
OK it looks like my site has been affected- What else do I need to know?
1- You’re not alone-
thousands of sites have been affected by this update- some undeservedly so (to the point where Google has created a feedback form for sites that don't believe they should have been affected by the update)
2- Penguin is an algorithmic update- it isn’t personal.
Google has identified your site as being within this ‘category’ based on the data it has, not due to a human reviewing your site personally.
3- Reconsideration requests won’t help-
“Because this is an algorithmic change, Google has no plans to make manual exceptions. Webmasters cannot ask for reconsideration of their site, but we’re happy to hear feedback about the change on our webmaster forum.”
4- No-one that has been affected by Penguin has recovered… yet-
There is a wealth of speculation and tips online for recovering from the Penguin update, however no-one can confirm what the best solution is. So far there has been no 'refresh' or 're-evaluation'- sites that were affected are still in the same boat.
5- Penguin isn’t ‘real-time’-
Like the ‘Panda’ updates before, the Penguin update isn’t continually reevaluated in real-time, meaning any changes that are made now won’t have any impact until Google reevaluates their data at a later date.
How can I get my traffic and rankings back?
The only certain answer at this stage is no-one can be 100% sure (as with pretty much anything within the SEO sphere), but the potential signs of redemption lie in evaluating the existing links to your website and the methods used to attract links from external websites.
Microsite Masters released some interesting findings of sites they analysed that had been affected by the Penguin update:
“every single site we looked at which got negatively hit by the Penguin Update had a “money keyword” as its anchor text for over 60% of its incoming links. On the other hand, the sites that were not hit by the update had much more random percentages.”
This suggests that sites with a higher percentage of links that use the keyword they are trying to rank for (‘money terms’) in the clickable part of the link to their website (‘anchor text’) are more likely to have been affected by this update. This isn’t a ‘one size fits all’ issue, and I’m certain that Google would have considered several other factors rather than the percentage of keyword-rich links a site has, but suggests that Google are looking for more evidence of brand promotion rather than search engine manipulation when assessing the links to your website.
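Microsite Masters' 60% observation is easy to check against your own backlink data. A rough sketch, assuming you have a list of anchor texts exported from a backlink tool- the anchors and 'money' keyword below are invented:

```python
from collections import Counter

def money_anchor_share(anchors, money_keywords):
    """Fraction of inbound links whose anchor text exactly matches a
    'money' keyword (case-insensitive exact match only - a crude proxy)."""
    money = {k.lower() for k in money_keywords}
    hits = sum(1 for a in anchors if a.strip().lower() in money)
    return hits / len(anchors) if anchors else 0.0

# Hypothetical anchor texts pulled from a backlink export
anchors = ["cheap plasma tvs", "cheap plasma tvs", "Acme Electronics",
           "www.example.com", "cheap plasma tvs", "click here",
           "cheap plasma tvs", "Acme", "cheap plasma tvs", "cheap plasma tvs"]

share = money_anchor_share(anchors, ["cheap plasma tvs"])
print(f"money-keyword anchors: {share:.0%}")  # 6 of 10 -> 60%
print(Counter(a.lower() for a in anchors).most_common(3))
```

A natural link profile tends to be dominated by brand names, bare URLs and 'click here'-style anchors, so a figure past the reported 60% mark is worth investigating.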
As with other large updates introduced by Google in the past, this re-emphasises the importance of diversifying the sources of income your business as a whole has. Depending on one revenue channel alone can be risky- even when times are good, so it’s important to remember that channels such as paid search, email marketing, online PR, affiliate marketing and social can be profitable.
img credit: opencage.info
So, what’s the problem?
Nothing, if you haven't been massively over-zealous about how well optimised your website is. Being vigilant and up to date isn't a problem; the issue Google is trying to fix relates to those link-fiends who have over-used their 'white hat' so much that it has turned a miserable shade of grey (in case you're confused, I refer to this post).
Okay, so what is 'over-optimisation'?
In a nutshell, it’s the act of doing everything that is possible to optimise your website, in a non-human and bot-like way.
Sure, over-optimisation can include (and will probably be identified by) any of the following:
- Scraped, copied web content
- Too many ads on the page & not enough original content and copy
- The fact that your website loads faster than the speed of light
- All inbound links having identical anchor text
- Infinite forum links
- Hidden text (in a colour that matches the background, so it can’t be seen)
- Sites linking to you that are dodgy or malicious in any way
This list is not exhaustive- there are many more things Google might suspect & then penalise you for.
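To illustrate the hidden-text item above, here is a deliberately naive check for inline styles where the text colour matches the background colour. This is only a sketch of the idea, not how Google detects it- it ignores external stylesheets, colour synonyms (#fff vs white) and the CSS cascade:

```python
import re

STYLE_RE = re.compile(r'style="([^"]*)"', re.IGNORECASE)

def hidden_text_styles(html):
    """Return inline style attributes where 'color' equals
    'background-color' - one crude signal of hidden text."""
    flagged = []
    for style in STYLE_RE.findall(html):
        props = {}
        for part in style.split(";"):
            if ":" in part:
                key, _, value = part.partition(":")
                props[key.strip().lower()] = value.strip().lower()
        if "color" in props and props.get("background-color") == props["color"]:
            flagged.append(style)
    return flagged

html = '<p style="color: #ffffff; background-color: #ffffff">free loans</p>' \
       '<p style="color: #000000; background-color: #ffffff">visible copy</p>'
print(hidden_text_styles(html))
```

The first paragraph is flagged (white on white), the second is not.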
Below, I’ve included a helpful video from SEOMoz’s very own Rand Fishkin that does well to explain what changes should be made to save your site from dropping in the ranks and possibly fading into obscurity online after Google’s next update:
March 7th, 2012.
Barry from Search Engine Roundtable posted an interesting find from a Google Webmaster Central forums post. The OP pointed out that PC World (a leading electronics chain in the UK) is ranking with "Mothercare" (a leading baby/parenting chain in the UK) as its title in search results for the term 'PC World teeside park':
I’m still very intrigued as to how this happened, but after some digging around I think I’ve found a reason why (which I posted on Barry’s post).
1- It’s showing up for ‘mothercare teeside park’ as well (suggesting it’s not ‘one way’). Both results show a Google Places result with the same address and a phone number: 01642 618325
2- A quick search for ’01642 618325 pc world’ returns http://uk.wowcity.com/hartlepool/?what=digital+camera+consumer+products
3- On this page the first result for Mothercare links through to PC World’s homepage (although the details are correct for Mothercare). Note this passes through an internal tracking script and isn’t a direct link.
It looks to me like an error in Wowcity's listing is the cause of the problem, and it probably isn't anything to do with the folks at PC World or Mothercare (or the agencies they may be working with), but it's an interesting find nonetheless.
If my theory is correct, it raises the question- does Google Places trust its citation sources too much? Would love to hear your comments (particularly if you work for PC World, Mothercare or Wowcity!) below.
December 6th, 2011.
A few weeks ago we asked a few folks on Twitter to complete a short (okay, maybe not that short) 22 question survey, looking specifically at the business side to working in SEO. We asked the all important questions, including:
- Where are you based?
- What kind of business are you?
- How many people work in the business?
- What other services do you offer besides SEO?
- How many clients do you currently manage?
- Do you contract your clients for a set period of time?
- What is your usual client contract arrangement (i.e. how do you charge for your work)?
- Your average charge per month for SEO services?
- Typical client retention period?
- Biggest issues facing your business today?
- Biggest barrier to sales?
- Biggest source of leads?
- What activities are included in a typical campaign?
- Link building tactics- what tactics do you employ for the majority of your campaigns?
- Do you buy links? (what SEO survey would be complete without this question?)
- What 3rd party tools do you subscribe to?
- What keyword tools do you use primarily?
- How long on average do you spend reporting to a single client?
- What metrics do you include in your standard reports?
- How did you get into SEO?
- What skills do you consider to be the most important skills for an SEO?
- Have you ever had a site penalised?
The results of the survey are pretty interesting- take a look for yourself below:
We’ll be releasing the source data as promised in the next few days. Let us know how your company compares to these averages in the comments below!
October 24th, 2011.
Google's claim that 16% of the more than a billion queries entered every day have never been seen before may sound hard to believe, but a closer look at how people search online is warranted first. Google has handled 450 billion new, unique queries since 2003. All of this raises the question: what are users doing that results in such a large number of new and unique queries each day?
Firstly we need to look at how people actually use search engines. In their early experiences with search portals, users tend to enter short, generic terms. As users become more skilled at searching for the items or information they want, their search terms become more specific and descriptive.
Instead of using short, generic keywords when searching for a pair of shoes, for instance, the user might describe the type of shoes they are looking for with far more adjectives, e.g. 'light brown, leather, high-heeled ladies court shoes', in the hope of finding exactly what they want.
It is also worth considering the search buying cycle as this especially impacts upon conversions.
Firstly think about how you yourself might behave online when you’re researching buying a product.
Take a typical online purchase, something like a television. You might start with a very general query like 'TV' or 'television'. You'll see several results that are irrelevant for our purpose, such as the BBC and ITV, but using informational resources such as Wikipedia, or the Google Shopping results, you may then decide that you're looking for a plasma TV rather than an LCD TV.
Of course you may also decide to visit one of the commercial websites listed for these queries, or buy from the PPC listings, but it’s more likely you’ll want to research a bit more first.
Next you'll probably search for 'plasma TV'. This looks a bit more promising: there are several relevant shopping results, some review websites and a few more relevant commercial sites. After reading a few of them you decide that the Panasonic 50PZ800B looks fairly impressive and you want to find out a bit more about it.
Of course you search for it, possibly adding terms like review, test or comparison to bring up the more informational resources.
It’s about now that you feel you’re happy with your choice, you have compared it against other makes and models, you’re happy that it’s what you’re looking for and you want to go ahead and purchase.
To find online shops selling that specific model you may use buying trigger search terms such as buy or cheap, or possibly even adding geographic search terms such as London or UK.
As a site owner you need to be prepared to target as many of these longer-tail phrases as you can with your main site- no easy task when you don't even know what they are!
Try to develop good (great) content on your site; category and product pages warrant special attention here. Getting this right will result in high levels of targeted, focused, converting visitors.
October 20th, 2011.
Trusted Stores is an ecommerce certification program that Google launched in early October. The idea behind the program is to give people more assurance when buying from online retailers. The program is still in beta, but ecommerce stores that attain Google qualification will be able to add a badge to their site, proclaiming them a Google Trusted Store. More interestingly, the program is backed by a consumer purchase protection package worth $1,000.
Retailers interested in applying to become a Trusted Store will need to furnish Google with certain consumer information, as the company is of the opinion that retailers' data is more trustworthy than customer surveys. In order to qualify for Trusted Stores status, internet retailers will need to demonstrate good customer service and a record of shipping goods on time. In terms of customer service, retailers must show evidence of resolving customer issues and disputes in a timely manner.
When customers move their mouse over the Trusted Stores badge, they will see the store's customer service and shipping grades. Unlike Google Checkout, the company states, there is no connection between the new program and Google AdWords. Google further reiterated that the program is still in its early stages and that it is too soon to speculate on how it might be enhanced and expanded.
The purchase protection package mentioned earlier appears to work in a similar way to credit card companies extending manufacturers' warranties. Google does not offer guarantees, however; rather, the $1,000 is potentially money back where retailers fail to resolve problems. Customers only benefit from the package if they have chosen the free purchase protection option. Where there is a problem the consumer should contact the retailer first; if it is not resolved, the customer can call on Google to deal with it, or claim money back. Google is certainly in a position to push retailers towards quick problem resolutions.
While Google have stated that their motive for introducing the program is to increase buyer confidence in online retailers, some may suspect the company of having hidden motives. Notions of a future tie-in with Checkout or AdWords are, at the moment, pure speculation. As yet it's unclear precisely what data Google will be capturing, but if customers choose the purchase protection, a record of the transaction is more likely to be captured.
August 25th, 2011.
I’ve been playing about a bit with Google Plus posts this morning, and with the recent share of Vic Gundotra’s Icon Ambulance post I know a lot of people have been viewing the same page that led me to dig a little deeper into Google Plus pages.
Take a look at the source code of the cached version of this page- scroll down and you'll notice a lot of names appearing in the source code within the <span class="To"> tag. This tag appears to contain the names of almost everyone who has shared the post, and in this particular case that is a lot of names. On the page this either appears as:
or in some cases:
I’m not yet able to determine why some pages do display some of this text and why others don’t- it doesn’t appear to be influenced by the number of shares, comments or age of post from what I’ve seen. In any case this still contains a list of names hidden from the page:
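Extracting those hidden names from a saved copy of the page is straightforward. A quick sketch using a regular expression- the sample markup below is invented to mirror the structure described, and real Google+ pages are far messier:

```python
import re

# Pull the contents of every <span class="To"> block out of the HTML,
# then split each block into individual names.
SPAN_RE = re.compile(r'<span class="To">(.*?)</span>', re.DOTALL)

html = """
<span class="To">Matt Cutts, Vic Gundotra, Jane Doe</span>
<span class="To">John Smith</span>
"""

names = []
for block in SPAN_RE.findall(html):
    names.extend(n.strip() for n in block.split(",") if n.strip())

print(names)
```

Running something like this over the cached source is how you'd build the list of names to test rankings against.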
In order to determine whether Google Plus pages were ranking for people’s names included in the hidden text I decided to run a small experiment. I took this Google Plus post from Matt Cutts and decided to check the rankings of the first 2 pages of Google UK for 38 of the names included in this span tag:
Out of the 38 names I tested, only 2 returned this URL within the top 20 results. That isn't a massive feat, but I'm sure we'd see more results if we rolled this out across the thousands of post URLs indexed, or expanded the depth past the second page of search results.
This goes to show that the usernames contained in the hidden text can (and do) rank, which may be a violation of Google's guidelines on hidden text and links.
Now I'm 100% positive this isn't deliberate- I think it's simply a classic case of developer oversight… another classic example of why SEO needs to be baked into the development process from the very beginning- no matter how big an organisation you are!
Imagine a wheelbarrow in an open field that you drag along every day filling it with this and that – each thing you add to it has some significance and some use.
Now imagine you never empty the wheelbarrow. Each day, not only do the things you found the week before now lie at the bottom covered by the newest additions, but the device also becomes increasingly heavy to pull until eventually, it becomes almost impossible.
Now think of the wheelbarrow as your website, and think of its contents as the factors affecting its speed – Let’s explore these factors…
- Empty space between code (this only adds to file size and processing time)
- Missing tags (Causing internal errors & bugs in the site)
- Bulky HTML (such as using unnecessary presentational markup where CSS would work better, e.g. a <font> tag where a simple CSS rule or the <small> tag would do)
- Background colour being the same as text colour (making all text unreadable)
- Hyperlinks that fail (Devaluing your site in terms of credibility, and possibly increasing bounce rates)
- Missing images
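As a small illustration of the first item in the list, here is a sketch that strips the whitespace between tags. Be careful with <pre> blocks and inline scripts before using anything like this for real:

```python
import re

def strip_intertag_whitespace(html):
    """Collapse runs of whitespace between tags - the 'empty space
    between code' item above. Safe-ish for plain markup only."""
    return re.sub(r">\s+<", "><", html.strip())

bulky = """
<ul>
    <li>one</li>
    <li>two</li>
</ul>
"""
lean = strip_intertag_whitespace(bulky)
print(f"{len(bulky)} bytes -> {len(lean)} bytes")
print(lean)
```

In practice you'd use a proper minifier, but the principle is the same: the browser downloads every byte of indentation you leave in.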
An overload of HTTP requests:
Whenever your web browser fetches a file from a web server, for example when it loads a picture, it does this by using HTTP which stands for “HyperText Transfer Protocol”.
HTTP is a request-response exchange whereby your computer asks for a particular file. One example is a request for 'home.html' (the homepage of a particular website). The web server then sends a response that says, in effect, "Here's the file you asked for", followed by the actual file itself.
Understandably, if your server is receiving a very high volume of requests for a range of different things, such as pictures, graphics, photographs, music players and video rendering, it can take its toll and end up really slowing your website down.
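A rough way to see how many extra requests a page triggers is to count the resources referenced in its markup. A sketch using Python's standard HTML parser; it ignores CSS background images, fonts and anything added by JavaScript, so treat the count as a lower bound:

```python
from html.parser import HTMLParser

class RequestCounter(HTMLParser):
    """Collect the URLs of images, scripts, stylesheets and frames -
    each one is an extra HTTP request the browser must make."""
    def __init__(self):
        super().__init__()
        self.requests = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script", "iframe", "embed") and attrs.get("src"):
            self.requests.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
            self.requests.append(attrs["href"])

# Invented page fragment for illustration
page = """
<link rel="stylesheet" href="/style.css">
<script src="/app.js"></script>
<img src="/logo.png"><img src="/hero.jpg">
"""
counter = RequestCounter()
counter.feed(page)
print(f"{len(counter.requests)} extra requests: {counter.requests}")
```

Fewer requests (combined stylesheets, image sprites and so on) means fewer round trips and a faster page.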
Too many cookies:
HTTP Cookies are used mainly for personalization and authentication purposes. A series of saved information is exchanged between the web server and the browser in order to remember things about how you are using the internet. For example if you are shopping online and exit the website returning at a later date, a cookie will enable the site to remember what you had in your shopping cart so you don’t have to spend time finding the same items again.
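The speed cost comes from the fact that the full Cookie header travels with every request to that domain. A small sketch with invented cookie values to show how the bytes add up:

```python
from http.cookies import SimpleCookie

# Invented cookies standing in for a typical ecommerce session
cookie = SimpleCookie()
cookie["session"] = "a" * 32
cookie["basket"] = "sku123|sku456|sku789"
cookie["prefs"] = "currency=GBP;lang=en"

# This is roughly what the browser resends with EVERY request
header = "; ".join(f"{k}={morsel.value}" for k, morsel in cookie.items())
per_request = len(header.encode())
print(f"Cookie header: {per_request} bytes")
print(f"Overhead on a page triggering 40 requests: {per_request * 40} bytes")
```

Serving static files like images from a cookie-free domain is a common way to avoid paying this cost on every request.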
Bad web hosting:
Web hosting is the business of providing storage space and access for websites. Bad web hosting happens when that storage space is overloaded with too many websites; yours is added to the list and so runs slowly. Other issues caused by a bad web host include:
- Search engines being unable to crawl your site, resulting in a fall in rankings
- Your website being "down" (not working, returning error pages)
- Not being able to contact your web host to fix the issue (since the service is so bad the system has probably crashed)
Excess of external media:
Embedded YouTube videos- actually, embedded anything that comes from another website- can potentially slow yours down. When you embed something from another site, you are relying on that site's web server, that site's speed, and that site's ability to ensure the embedded item is working properly there, so that it works properly on your site. Often, even when it works just fine, it can add an extra few seconds to a page loading… a few seconds a potential customer may be unwilling to wait!
Too much spam:
Spam is so much more than just a bunch of annoying emails. It slows down the internet and it increases consumer fees.
The internet is a network where spamming affects everyone that uses it. Pushing spam around the internet relies on a process: it begins with global networks passing the spam along to its destination, and ends with the message being received by the recipient.
At the same time, money and resources are spent trying to catch spammers and prevent them from infiltrating mail servers. This results in higher costs to the consumer, because providers are forced to add more security to their servers and hire more staff to manage and prevent the problem.
Be sure to spam proof all web forms by adding “captchas” or similar.
Missing favicon:
A 'favicon' is an image (as shown above) that sits in the root of your server. It's definitely needed because, even if you don't care about them, the browser still requests one. If there isn't one, the server responds with a 404 error (meaning not found). Any extra response, such as a 404 error or a 301 redirect, adds time to the loading of a site.
The favicon (or lack thereof) sits early in the load sequence: it is requested before many other components, so if there isn't one, one of the first things downloaded will be an error.
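A quick way to confirm whether a site serves a favicon is to request /favicon.ico directly. The sketch below injects a stubbed fetch function so it runs without a network connection; in practice you'd pass something built on urllib instead:

```python
from urllib.parse import urljoin

def favicon_url(site):
    """Where browsers look for a favicon when no <link rel="icon"> is set."""
    return urljoin(site, "/favicon.ico")

def check_favicon(site, fetch):
    """Return True if the favicon request would succeed. `fetch` is
    injected so this sketch stays testable offline; in practice pass
    something like `lambda url: urllib.request.urlopen(url).status`."""
    return fetch(favicon_url(site)) == 200

# Stubbed status codes standing in for real HTTP responses
responses = {"http://example.com/favicon.ico": 404}
fake_fetch = lambda url: responses.get(url, 200)

print(favicon_url("http://example.com/shop/page.html"))
print("favicon present:", check_favicon("http://example.com/", fake_fetch))
```

If the check fails, dropping even a tiny placeholder .ico into the server root stops the repeated 404 responses.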
Too many advertisements:
Any time a site uses advertisements, you are adding to the processes the site must go through in order to function correctly. Programmes like Google AdSense and Microsoft adCenter are external, and reputable, but it's logical to apply the same rule as with external media: everything in moderation. Besides, sites with too many ads look un"site"ly!
If any of these apply to you, take active steps to protect your website against sloth! Speed be with you!