Technical SEO Case Study: Why Do Website Rankings Keep Dropping?

EmeraldHike

Active member
Let me guess - this is your situation: Your website experienced a ranking drop and you made some changes to it in an effort to bring the rankings back up. The problem is, they kept on dropping. Lower and lower and lower. It seems like everything you've done to get the rebound you're after has failed. So you keep on trying. And everything you do seems to make your situation worse. I feel your pain. Why? Because I've been there a thousand times. I'm with you.

I'm here today to give you some advice. The advice is to stay away from your website for a while. Any change you make to your site in an effort to increase your search engine rankings will take months to show results. If you make a change, then wait a week, then make another one, and then repeat that process over and over again, you're digging your own grave. You need to make a rational change and then walk away.

I can remember one website I owned a while back that dropped in ranking. I thought I knew what the cause was, so I altered some settings. Well, that only lasted a few weeks because I became impatient. So I altered them back. And I kept on messing with things, and I think Google got sick of me, because they stopped crawling my site for a while. Well, they didn't stop completely, but they did slow way down. They obviously didn't like what I was doing. Looking back, I wish I had just corrected what I perceived as the error and then left things alone. You know, go outside for a while (months) to get some fresh air. Find a hobby. Do something other than sit at the computer waiting for a bump in the search engine rankings.

Now, here's the thing - when you're up against a drop like this, you really do need to do some analysis to find out what went wrong. You can't just go changing things willy-nilly. Search engines like stability and predictability. They don't enjoy crawling and indexing something that's going to be changing all day long. So if you update your robots.txt file, upload it and leave it alone. Actually, when it comes to that file, you should set it once and then forget about it pretty much forever. Trust me, I know this from experience. That file is such a pain to deal with because any update to it takes forever to take effect.
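
As an aside, a set-and-forget robots.txt really doesn't need much in it. Something along these lines (the sitemap URL is obviously just a placeholder) can sit untouched for years:

    User-agent: *
    Disallow:

    Sitemap: https://www.example.com/sitemap.xml

An empty Disallow line means "crawl everything," and the sitemap line just points crawlers at the pages you actually want them to find. The less you feel the need to put in this file, the less you'll be tempted to fiddle with it.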

Let me give you a real-life example of what I'm talking about here. Call it a search engine optimization (SEO) case study, if you will. It'll be short, don't worry.

A lot of websites allow users to register for accounts. Once a user is registered, they become a member. Classifieds, forums, ecommerce sites, and blogs sometimes allow people to register. The sites oftentimes offer names and links to these accounts, which are usually empty, thin-content pages. I'm sure you've heard about this. These pages have been a big problem for a long time, and they're part of why Google Panda was released: to cut down on junk pages that are sometimes filled with spam and sometimes left empty forever. Every so often, member pages are used the way they're intended to be used, but that's a rarity.

Because these member pages are so thin, webmasters rarely allow them to be crawled by search engines. Some do allow them to be crawled, but include a meta noindex tag to keep them from showing in search results. Some don't want them crawled at all, to preserve what's referred to as "crawl budget." They'll block the directory in which these member pages are held. Some allow these pages to be crawled, but require authentication to view them, meaning the person (or crawler) needs to log into an account to get any further. Those protected pages sometimes return a 403 Forbidden status code. That code means, "Don't go any further. You're not allowed."
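
To make those three options concrete, here's roughly what each one looks like. The /members/ directory is just a made-up example - your platform will have its own URL structure:

    Option 1 - block crawling of the member directory in robots.txt:

        User-agent: *
        Disallow: /members/

    Option 2 - let the pages be crawled, but add a meta tag to each member
    page's <head> so they stay out of the search results:

        <meta name="robots" content="noindex, follow">

    Option 3 - require a login, and answer any unauthenticated request for a
    member page with:

        HTTP/1.1 403 Forbidden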

Whichever method a webmaster chooses is better than allowing an unhindered crawl. The last thing anyone wants is to have these pages in a search engine's index, because they rarely help and almost always hurt. But here's the problem - the "case," if you will. Let's say a webmaster launches a new classifieds website. Or a forum. It doesn't really matter. The website has links all over its pages that lead to these member accounts. In the beginning, the webmaster had these pages set up to return the 403 Forbidden status code. When the links were crawled by the search engines, none of them were indexed because there was nothing there to see. The search engines couldn't get past the forbidden part.

Months went by, and the webmaster became impatient and wanted to see his or her website rank better. For some reason, they thought the 403 status codes were bad, so they decided to block the member directory in the robots.txt file. Months continued to go by, and the number of blocked pages grew into the thousands. After all, there were many members. Keep in mind that blocking a URL in robots.txt only stops crawling; if the URL is linked from the site's own pages, it can still be picked up and indexed. So what had been nothing grew into actual URLs that were noticed and indexed by Google and the other search engines. As you may be aware, having too many pages blocked in this file is no good. Google doesn't like that. Why? Well, that's a story for another time. Just ask if you're interested.

After a few months of fairly static rankings, the website experienced a ranking drop. The reason for this, as the webmaster concluded, was that the number of pages that were blocked far outnumbered the number of pages that were actually indexed. Yes, it's true that pages blocked by the robots.txt file will eventually drop from the Google index, but that can sometimes take a very long time. In the meantime, Google will treat each member URL it discovers as an individual, live page. The shame of this type of situation is that the webmaster of a site like this will oftentimes venture out onto the internet and into SEO forums and ask all sorts of questions. They'll wonder why their rankings dropped, and people will tell them that it's their content that's bad. That it's duplicate or thin. In actuality, the content is fine and it's a technical SEO issue. The problem was that those member pages were blocked in the robots.txt file when they shouldn't have been. They should have been left alone to return the 403 Forbidden status code, and the webmaster should simply have been patient while his or her site grew through its beginning stages.
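
If you want to check how your own member URLs are being handled, a quick script can tell you whether a URL is blocked by robots.txt and what status code it actually returns. This is just a rough sketch using Python's standard library - the example.com domain and the /members/ paths are placeholders for whatever your site actually uses:

    import urllib.error
    import urllib.request
    import urllib.robotparser

    SITE = "https://www.example.com"        # placeholder domain
    MEMBER_URLS = [                         # placeholder member pages
        SITE + "/members/alice",
        SITE + "/members/bob",
    ]

    # Load and parse the site's robots.txt
    parser = urllib.robotparser.RobotFileParser(SITE + "/robots.txt")
    parser.read()

    for url in MEMBER_URLS:
        # Would a well-behaved crawler (e.g. Googlebot) be allowed to fetch this?
        blocked = not parser.can_fetch("Googlebot", url)

        # What status code does the page actually return?
        try:
            with urllib.request.urlopen(url) as response:
                status = response.status
        except urllib.error.HTTPError as e:
            status = e.code                 # 403, 404, etc. arrive as HTTPError

        print(f"{url}  blocked by robots.txt: {blocked}  status: {status}")

If a URL shows up as blocked by robots.txt but still returns a 200, you're in exactly the situation described above: the crawler can't look at the page, but the URL itself is fair game for the index.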

Be that as it may, the pages were blocked and the problem was eventually discovered. So the webmaster in question reversed course and unblocked the pages so the search engines could once again crawl them, see the 403 error code, and drop the pages from their indexes.

But wait - what happened when the crawling began? The website's rankings dropped even more. Why? Well, think about it. Those blocked pages had gained PageRank while they were being "crawled," or noticed. Not much PageRank, but enough to reduce the rankings of the entire site when all of the pages were "deleted" at once. And herein lies the conundrum: how do you get rid of a bunch of lousy pages when doing so makes the negative effects you've been experiencing even worse? In this case, unblocking the pages so they're removed from the search engines' indexes was the right move to make. What's called for here is patience and intestinal fortitude. Things will undoubtedly get worse before they get better, but get better they will.

The common course of action when webmasters notice the negative effects of their most recent "correction" is to undo that correction, and a cycle begins. Rankings fall even more, and the webmaster does something to harm them even more than that. Then they do and undo things for months until the website becomes worthless. This happens all the time. Remember, fewer pages in Google's index is better, especially when it comes to system-generated pages. Contact, printer-friendly, image-only, newest-ads-or-posts, and member pages - all of these pages are thin and in no way should be included in the indexes of search engines. The trouble lies with trying to remove them. It hurts, but it's got to be done.

So if you own or manage a website that's experienced a ranking drop, don't exacerbate the problem. Stop and think about what's going on. Then, don't just start blocking pages and directories in the robots.txt file. If you can, make it so the search engines believe the infringing pages are deleted. They need to be removed from the index completely, even if they aren't actually deleted from the site. Hide them behind authentication. Get rid of them. Do something, but don't flip back and forth between blocking them and not blocking them. That's no good.
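
For what it's worth, here's a rough sketch of the "hide them behind authentication" approach on the application side. I'm using Flask purely as an illustration - the route, the session check, and the /members/ path are made up, and your forum or classifieds software will have its own way of doing this:

    from flask import Flask, abort, session

    app = Flask(__name__)
    app.secret_key = "change-me"        # placeholder; Flask needs this for sessions

    @app.route("/members/<username>")
    def member_profile(username):
        # If the visitor (human or crawler) isn't logged in, answer with
        # 403 Forbidden. The crawler hits a dead end, nothing thin gets
        # indexed, and there's no need to block /members/ in robots.txt.
        if "user_id" not in session:
            abort(403)

        # Logged-in members see the profile as usual.
        return f"Profile page for {username}"

    if __name__ == "__main__":
        app.run()

The point isn't the framework. The point is that the 403 comes from the page itself, so crawlers are free to request the URL, get told "no," and quietly drop it, instead of being walled off by robots.txt and left to index a blank entry.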

Do you have any SEO-related questions? If so, share them down below or begin a new thread and ask away. I'll be happy to answer. Thanks!
 