The Scary Panda Makes Webmasters Do Google's Job
It’s been already 3 months since Google began rolling out the Panda algorithm to filter content farms from the search results and they hit literally tens of thousands of unique content websites, while giving an advantage to scrapers and content aggregators. And this is exactly the opposite of what Google’s Panda should have done. It was supposed to filter all low quality and duplicate content from the search results. So either Google is doing it all wrong, aggregators provide higher quality content than the original sources or Google designed this algorithm especially to scare webmasters.
So here’s how I see the whole situation. I don’t believe Google is not able to see who’s the original source and who just scrapes others’ content, because they were able to do it before Panda. Also it’s impossible to consider the scrapers and aggregators deserve to be placed ahead of the original sources. So maybe Google did this to make webmasters clean their websites, remove duplicate pages, quit spamming and improve content quality. They can keep this algorithm up for a few months until most of the websites are cleaned up and increase quality, so after that it will be much easier for Google to filter the results and become more relevant.
If this proves to be true, Google will lose all its credibility, and it has already started to, because legitimate businesses with quality content that relied on Google's traffic can go bankrupt just because the search engine is playing with them. From a company that everyone loved, Google is becoming more hated than Microsoft, which, surprisingly, has been doing a really good job lately. We can see this in the search engine market share too: Microsoft and Yahoo are growing, while Google is at best stalling.
Fortunately, Bing is really relevant, and at least over the last three months it has been much more relevant than Google, whose results are full of spam. Users don't like crappy spam sites, and that will make them switch sides. I'm pretty sure Google knows this and hopes people will come back once its results become more relevant again, but some of them will be satisfied with Bing and stay, because Microsoft doesn't screw them over the way you do, Google.
If you still haven’t lost your trust in Google, let me give you one more reason to. Panda filters duplicate and low quality content, right ? Here’s an example of a scraper outranking Google’s own blog:
So Google considers topsy.com more relevant than its own blog for its own content. I could continue with examples like this all day long; all of Google's blogs are in the same situation, which is pretty shameful.
For the last three months, every webmaster has been looking for a solution to please the scary Panda, but unfortunately there is no good one, because it seems that even unique, quality content isn't good enough for Google. Most scrapers copy posts from websites' RSS feeds, so many webmasters shortened their feeds to excerpts of just 30-50 words with a backlink to the original article (a sketch of that trick is below), and guess what: scrapers who copy just those 30-50 words, backlink included, still outrank the original. It doesn't matter that the original website has a comprehensive 500-2,000 word article; it's still not as good as the short excerpt the scrapers copied. So no, there's nothing you can do about it. The only one who can help you is Google, and it will eventually restore the old algorithm, but nobody knows when or what they are going to do next.
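For reference, here is a minimal sketch of that feed-truncation trick in Python. The function name, the sample post body and the URL are all hypothetical; the point is just to show how a feed item gets cut down to a short excerpt plus a backlink to the original article:

```python
import re
from html import unescape

def truncate_feed_item(html_body: str, url: str, max_words: int = 40) -> str:
    """Reduce a full post body to a short excerpt plus a backlink,
    the workaround many webmasters tried against feed scrapers."""
    # Strip HTML tags and collapse whitespace to get plain text.
    text = unescape(re.sub(r"<[^>]+>", " ", html_body))
    words = text.split()
    excerpt = " ".join(words[:max_words])
    if len(words) > max_words:
        excerpt += "..."
    # Append a backlink pointing to the original article.
    return f'{excerpt} <a href="{url}">Read the full article</a>'

# Hypothetical usage:
print(truncate_feed_item(
    "<p>A comprehensive 2,000-word article about the Panda update...</p>",
    "https://example.com/panda-update",
))
```

The irony, of course, is that even with the feed reduced to this stub, the scraped copy of the excerpt still managed to outrank the full article it linked to.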
If you still don’t believe Google does this on purpose, then please read their latest update on the Panda algorithm. There are some criteria websites must meet in order to please Google’s bot. And I wasn’t able to find a single one that scrapers or aggregators meet, though they still outrank the original sources.
Maybe you think I'm just another idiot who wants to start a conspiracy theory. If you do, please read this post again and try to find a better explanation for what's happening right now.