that probably would be almost useless to users. In other words, it’s a slimy trick to gain search
engine ranking while providing users with a nice site to look at.
It starts with content cloaking, which is accomplished by writing web site code that can detect a crawler and differentiate it from an ordinary site visitor. When the crawler enters the site, it is redirected to another web site that has been optimized for high search engine rankings. The problem with trying to gain higher search results this way is that many search engines can now spot it. As soon as they find that a web page uses such a cloaking method, the page is delisted from the search index and excluded from the results.
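To make the mechanism concrete, here is a minimal sketch of how such cloaking might work. It is shown only to illustrate the technique the text describes, not as a recommendation; the crawler signatures and page URLs are illustrative assumptions, and real cloaking schemes (and real crawler detection) are more elaborate.

```python
# Illustrative sketch of user-agent cloaking (an abusive technique).
# The signatures and URLs below are assumptions, not real site paths.

CRAWLER_SIGNATURES = ("googlebot", "bingbot", "slurp")  # hypothetical list

def choose_destination(user_agent: str) -> str:
    """Return the page a cloaking site would serve for this visitor."""
    ua = user_agent.lower()
    if any(sig in ua for sig in CRAWLER_SIGNATURES):
        # Crawlers are redirected to a page stuffed with ranking keywords.
        return "/optimized-for-crawlers.html"
    # Human visitors see the normal, attractive page.
    return "/regular-page.html"

print(choose_destination("Mozilla/5.0 (compatible; Googlebot/2.1)"))
```

Because the decision hinges on something as easily faked as the User-Agent header, search engines can detect the trick simply by crawling with a browser-like identity and comparing the two responses.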
Many less-than-savory SEO administrators will use this tactic on throw-away sites. They know the
site won’t be around for long anyway (usually because of some illegal activity), so they use domain
cloaking to garner as much web site traffic as possible before the site is taken down or delisted.
When you’re putting together a web site, the content for that site often presents one of the greatest
challenges, especially if it’s a site that includes hundreds of pages. Many people opt to purchase bits
of content, or even scrape content from other web sites to help populate their own. These shortcuts
can cause real issues with search engines.
Say your web site is about some form of marketing. It's very easy to surf around the Web and find hundreds (or even thousands) of web sites from which you can pull free, permission-granted content to include on your web site. The problem is that every other person or company creating a web site could be doing the same thing. And the result? A single article on a topic appears on hundreds of web sites, and users find nothing new when they search for the topic and every site has the same article.
To help combat this type of content generation, some search engines now include in their search algorithms a method for measuring how fresh, and how original, site content is. If a crawler examines your site and finds that much of your content also appears on hundreds of other web sites, you run the risk of either ranking low or being delisted from the search engine's index.
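The underlying idea can be sketched with a toy duplicate-content check: break each page's text into overlapping word "shingles" and measure how much the sets overlap. The actual algorithms search engines use are undisclosed and far more sophisticated; this is only an illustration of the principle, assuming a simple Jaccard comparison.

```python
# Toy duplicate-content detector: compare pages by overlapping
# word "shingles". A sketch of the idea only, not any search
# engine's actual (undisclosed) method.

def shingles(text: str, k: int = 4) -> set[str]:
    """All runs of k consecutive words, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(page_a: str, page_b: str) -> float:
    """Jaccard similarity of the two pages' shingle sets (0.0 to 1.0)."""
    a, b = shingles(page_a), shingles(page_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

article = ("this free marketing article has been reprinted with "
           "permission on hundreds of web sites across the internet")
print(similarity(article, article))  # identical copies score 1.0
```

A page whose text scores near 1.0 against content already in the index adds nothing new for searchers, which is exactly why heavily reused articles drag rankings down.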
Some search engines now look for four types of duplicate content:

Highly distributed articles. These are the free articles that seem to appear on every single web site about a given topic. This content has usually been provided by a marketing-savvy entrepreneur as a way to gain attention for his or her project or passion. But no matter how valuable the information, if it appears on hundreds of sites it will be deemed duplicate, and that will reduce your chances of being listed high in the search result rankings.

Product descriptions for e-commerce stores. The product descriptions included on nearly all e-commerce pages are not included in search engine results. Product descriptions can be very short, and depending on how many products you're offering, there could be thousands of them. Crawlers are designed to skip over most product descriptions; otherwise, a crawler might never work completely through your site.
Building Your Site for SEO