Avoid having duplicate content

This section explains how to avoid having duplicate content on your own website and why that's important. We also explain how your content can become duplicated (copied) on other websites and the variety of causes, ranging from accidental to downright malicious. Because you want to protect your original website content and prevent duplication as much as possible, we list various sources of duplicate content and recommend how to deal with each type of situation.

Duplicate content refers to text that is repeated on more than one web page, either on your site or on other sites. Some duplication is natural and not a problem. For example, if you write about someone's article, you naturally include quotes from that piece; if you revise your terms of service, you may need to keep both the old and new versions alive on your site; and so forth. But beyond those kinds of minor duplication, you need to avoid duplicate content vigilantly, not only because you want each of your pages to provide unique value to users and be competitive in the search engines, but also because too much duplication can get your site filtered out of results or even penalized.

When search engine spiders crawl sites and index them into a searchable database, they can detect that a page on any website is a copy of another page, whether on another site or on your own. The spider then determines which page is the original or most authoritative version to show in search results. This grouping of duplicate content can hurt your rankings because the duplicate pages won't make the cut. It's not that the copycat page is penalized in the search engine's index (the searchable database of website content); the engines simply cluster similar results together and filter duplicates out of search results pages, because they don't want to give users redundant listings.

When your own site has duplicated pages, other websites link indiscriminately to whichever form of the content they like best, which dilutes your link equity (the value of all the inbound links pointing to your web page) by splitting links across several duplicated pages. Being filtered out of search results as a duplicate is bad for your SEO, but getting a penalty is worse. Sites with widespread duplication may earn a search engine penalty for having thin, low-quality content, something Google's Panda update tries to remove from search results. Worse yet, a site with mostly duplicated content could be seen as spam and even removed from the index. So although a little duplication might not hurt you, too much could sink your SEO ship.
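
To make that clustering idea concrete, here is a minimal sketch in Python of one common way near-duplicate pages can be detected: break each page's text into overlapping word shingles and compare the sets with Jaccard similarity. The five-word shingle size, the sample page texts, and the idea that a high score means "duplicate" are all assumptions made for illustration; the actual signals and thresholds the search engines use are not public.

```python
# A minimal sketch (illustrative only, not any search engine's actual algorithm)
# of near-duplicate detection: split each page's text into overlapping word
# "shingles" and measure how much the two sets overlap (Jaccard similarity).

import re


def shingles(text: str, size: int = 5) -> set:
    """Return the set of overlapping word shingles of the given size."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if len(words) < size:
        return {" ".join(words)}
    return {" ".join(words[i:i + size]) for i in range(len(words) - size + 1)}


def jaccard(a: set, b: set) -> float:
    """Shared shingles divided by all distinct shingles (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if (a | b) else 1.0


# Hypothetical page texts for illustration only.
original = "Our hand-built widgets ship worldwide in recycled packaging at no extra cost."
copy = "Our hand-built widgets ship worldwide in recycled packaging at no extra cost to you."
unrelated = "Read the updated terms of service that take effect next month."

print(jaccard(shingles(original), shingles(copy)))       # high score: likely duplicates
print(jaccard(shingles(original), shingles(unrelated)))  # near 0.0: distinct pages
```

Pages whose similarity score is high would likely be grouped into one cluster, with only the version judged most authoritative shown in results, which is exactly why near-identical pages on your own site compete against each other.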

