Friday, January 30, 2009

Real black hat secrets unleashed, and how to avoid them

SEO techniques are classified by some into two broad categories:

1. White hat SEO: techniques that search engines recommend as part of good design and that give lasting results.

2. Black hat SEO: techniques that search engines do not approve of and attempt to minimize the effect of, collectively referred to as spamdexing.

Cloaking: Cloaking, or masking, is the SEO practice of delivering one version of a page to one visitor, such as a human, and a different version to another, such as a search engine. The server custom-builds the page for each requester: if the requester is a search engine, it gets the best, most optimized content; if it is a human, it gets a pretty page tricked out for navigation and usability. Why cloak? Often, when a page ranks highly under quality keywords, the first thing that happens is that the page gets stolen (called page jacking) in order to reduce your website's rankings.

There are two types of cloaking, or page swapping:
1. User Agent Cloaking (UA Cloaking)
2. IP Address Cloaking (IP Cloaking)

User Agent Cloaking
When a visitor (a search engine spider or a human) requests a page, the cloaking script compares the User-Agent string against its list of known search engine User-Agent names and serves the appropriate page. When a spider requests a page, the User-Agent header usually contains the name of the search engine, so the page designed for search engines is delivered. If the script does not detect a search engine name in the User-Agent header, it assumes the request came from a human being and delivers the page designed for human visitors.
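To make the mechanism concrete, here is a minimal sketch of UA cloaking in Python, shown only to explain the technique. The spider name substrings and page bodies are hypothetical placeholders, not any real engine's values; a production cloaker would sit inside a web server's request handler.

# Illustrative sketch of User Agent cloaking (a black hat technique).
# The spider UA fragments and page bodies are hypothetical examples.
SPIDER_NAMES = ("googlebot", "slurp", "msnbot")  # assumed spider UA fragments

def pick_page(user_agent: str) -> str:
    """Serve the keyword-optimized page to spiders, the human page otherwise."""
    ua = (user_agent or "").lower()
    if any(name in ua for name in SPIDER_NAMES):
        return "<html>...keyword-optimized page for spiders...</html>"
    return "<html>...pretty page for human visitors...</html>"

if __name__ == "__main__":
    print(pick_page("Mozilla/5.0 (compatible; Googlebot/2.1)"))
    print(pick_page("Mozilla/5.0 (Windows; U; Windows NT 5.1) Firefox/3.0"))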

IP Address Cloaking
This is the more complicated cloaking method. It relies on a database of known search engine spider IP addresses. When a visitor (a search engine or a human) requests a page, the cloaking script checks the visitor's IP address. If the address is present in the IP database, the script knows the visitor is a search engine and delivers the page optimized for that search engine. If the address is not in the database, the script assumes a human has requested the page and delivers the page meant for human visitors.
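Here is a matching sketch of IP cloaking. The network ranges below are documentation placeholders, not real spider addresses; an actual cloaker would maintain a large, frequently updated IP database.

import ipaddress

# Illustrative sketch of IP cloaking (a black hat technique).
# These networks are placeholders, not real search engine ranges.
SPIDER_NETWORKS = [
    ipaddress.ip_network("192.0.2.0/24"),     # hypothetical "spider" range
    ipaddress.ip_network("198.51.100.0/24"),  # hypothetical "spider" range
]

def is_spider_ip(remote_addr: str) -> bool:
    """True if the visitor's IP falls inside a known spider network."""
    addr = ipaddress.ip_address(remote_addr)
    return any(addr in net for net in SPIDER_NETWORKS)

def pick_page(remote_addr: str) -> str:
    if is_spider_ip(remote_addr):
        return "<html>...page optimized for that search engine...</html>"
    return "<html>...page meant for human visitors...</html>"

if __name__ == "__main__":
    print(pick_page("192.0.2.17"))   # treated as a spider
    print(pick_page("203.0.113.9"))  # treated as a human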

But, you may now ask: how can a search engine detect page cloaking?

There are three ways a search engine can detect page cloaking (code swapping):

1. User Agent test: The search engine sends a spider to the website without reporting a search engine name in the User-Agent header. If it finds dissimilar pages, one served to the spider that reports the search engine name and another served to the spider that does not, the site is likely to be flagged for cloaking (a minimal sketch of this test follows the list).

2. IP-based test: The search engine sends a spider to the website from an IP address different from the ones the spider has used so far, so the new address appears in no cloaker's database. If the page delivered to the spider with a known IP address differs from the page delivered to the spider with the new address, the search engine knows the site has used cloaking.

3. Human review: The search engine assigns a human reviewer to visit the site. If the page delivered to the search engine spider is totally different from the page the reviewer sees in a browser, the reviewer identifies the site as cloaked.
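The User Agent test from item 1 can be sketched in a few lines. This is an assumption-laden illustration, not any engine's real probe: the URL and User-Agent strings are examples, and real detectors would tolerate legitimate dynamic content (ads, timestamps) before flagging a site.

import hashlib
import urllib.request

def fetch(url: str, user_agent: str) -> bytes:
    """Fetch a URL while presenting the given User-Agent header."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()

def looks_cloaked(url: str) -> bool:
    # Same URL, fetched once as a spider and once as a browser.
    spider_page = fetch(url, "Mozilla/5.0 (compatible; Googlebot/2.1)")
    human_page = fetch(url, "Mozilla/5.0 (Windows; U; Windows NT 5.1) Firefox/3.0")
    # Hash comparison is deliberately crude; differing hashes only suggest
    # the two visitors were served different pages.
    return hashlib.sha1(spider_page).digest() != hashlib.sha1(human_page).digest()

if __name__ == "__main__":
    print(looks_cloaked("http://example.com/"))  # example URL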

Reverse Cloaking:
The DNS stores and relates many types of information with domain names, but most importantly it translates domain names (computer hostnames) to IP addresses. At each step of resolution, the resolver queries the corresponding DNS server for a pointer to the next server it should consult. However, if the reverse pointer (PTR) record is blank or misdirected, search engines often regard that site as cloaked.
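A blank or misdirected PTR record can be checked with a forward-confirmed reverse DNS lookup. This sketch assumes a reachable resolver; the IP used in the example is just a well-known public address.

import socket

def reverse_then_forward(ip: str):
    """Look up the PTR record for an IP, then resolve that name back."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)  # PTR lookup
    except (socket.herror, socket.gaierror):
        return None  # blank PTR: the case the article warns about
    try:
        forward_ips = socket.gethostbyname_ex(host)[2]
    except socket.gaierror:
        return host, False  # PTR name does not resolve back: misdirected
    return host, ip in forward_ips  # False means the round trip failed

if __name__ == "__main__":
    print(reverse_then_forward("8.8.8.8"))  # example public IP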

Of late, many websites have been found missing from search engine results, or disappearing after migrating to a new server. On analysis, it turned out that search engines could not resolve the sites' domain names (DNS) to Internet Protocol (IP) addresses. That is where IP cloaking commonly begins.

If the DNS server points two or more of your websites to the same IP address, it is called reverse IP cloaking, a black hat technique. Why would anyone host multiple websites on the same IP address? To gain more backlinks for all of them: most of the pages of those websites are interlinked with each other to inflate the number of incoming links. Search engines therefore treat this pattern as a negative factor in terms of SEO.
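The check implied here, whether several domains share one host, can be sketched by resolving each domain and grouping by IP address. The domain names below are placeholders.

import socket
from collections import defaultdict

def group_by_ip(domains):
    """Resolve each domain and group domains that share an IP address."""
    groups = defaultdict(list)
    for domain in domains:
        try:
            groups[socket.gethostbyname(domain)].append(domain)
        except socket.gaierror:
            groups["unresolved"].append(domain)
    return dict(groups)

if __name__ == "__main__":
    sites = ["example.com", "example.net", "example.org"]  # placeholder domains
    for ip, hosted in group_by_ip(sites).items():
        print(ip, "->", hosted)

Any group with more than one domain is a set of sites sharing a host, the situation described above.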


So what can you do to prevent such cases?

Host each of your websites on a different IP address, and then link a few pages of each site to the others. This is a valid technique in terms of SEO, because the search engine treats each distinct IP address as a separate entity, so your websites will not be penalized for sharing a host. (The domain-grouping sketch above can confirm that each of your domains resolves to its own address.)

So, do not try to cheat search engines with any of these black hat tactics. If you want lasting results, concentrate only on ethical search engine optimization.
