Have you ever wanted to prevent Google from indexing a particular URL on your website and displaying it in their search engine results pages (SERPs)? If you manage websites long enough, a day will likely come when you need to know how to do this.
The three methods most commonly used to prevent the indexing of a URL by Google are as follows:
Using the rel="nofollow" attribute on all anchor elements used to link to the page, to prevent the links from being followed by the crawler.
Using a disallow directive in the site's robots.txt file to prevent the page from being crawled and indexed.
Using the meta robots tag with the content="noindex" attribute to prevent the page from being indexed.
Although the differences between the three approaches appear subtle at first glance, the results can vary greatly depending on which method you choose.
Using rel="nofollow" to prevent Google indexing
Many inexperienced webmasters attempt to prevent Google from indexing a particular URL by using the rel="nofollow" attribute on HTML anchor elements. They add the attribute to every anchor element on their site used to link to that URL.
Including a rel="nofollow" attribute on a link prevents Google's crawler from following the link, which, in turn, prevents it from discovering, crawling, and indexing the target page. While this method might work as a short-term fix, it is not a viable long-term solution.
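As a minimal sketch, such a link might look like the following (the URL and link text are hypothetical, chosen only for illustration):

```html
<!-- rel="nofollow" tells crawlers not to follow this particular link -->
<a href="/private-page.html" rel="nofollow">Private page</a>
```

Note that the attribute applies only to this one link, which is exactly the weakness discussed next.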
The flaw with this approach is that it assumes all inbound links to the URL will include a rel="nofollow" attribute. The webmaster, however, has no way to stop other websites from linking to the URL with a followed link. So the chances that the URL will eventually get crawled and indexed using this method are quite high.
Using robots.txt to prevent Google indexing
Another common approach used to prevent the indexing of a URL by Google is the robots.txt file. A disallow directive can be added to the robots.txt file for the URL in question. Google's crawler will honor the directive, which will prevent the page from being crawled and indexed. In some cases, however, the URL can still appear in the SERPs.
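A minimal robots.txt entry for a hypothetical page might look like this (the path is an assumption for illustration; the file lives at the root of the site):

```
User-agent: *
Disallow: /private-page.html
```

The `User-agent: *` line applies the rule to all compliant crawlers, including Googlebot.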
Sometimes Google will display a URL in their SERPs even though they have never indexed the contents of that page. If enough websites link to the URL, Google can often infer the topic of the page from the anchor text of those inbound links. As a result, they will show the URL in the SERPs for related searches. So while a disallow directive in the robots.txt file will prevent Google from crawling and indexing a URL, it does not guarantee that the URL will never appear in the SERPs.
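You can check how a compliant crawler interprets a disallow rule using Python's standard-library robots.txt parser. This is a sketch with a made-up domain and rule set, not a test against a live site:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content blocking one page for all crawlers
rules = """User-agent: *
Disallow: /private-page.html
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# The disallowed page may not be fetched; other pages may
print(parser.can_fetch("Googlebot", "https://example.com/private-page.html"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/public-page.html"))   # True
```

The parser only answers "may this URL be crawled?" — it says nothing about whether the URL can still surface in the SERPs via inbound links, which is the limitation described above.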
Using the meta robots tag to prevent Google indexing
If you need to prevent Google from indexing a URL while also preventing that URL from being displayed in the SERPs, the most effective approach is to use a meta robots tag with a content="noindex" attribute in the head element of the web page. Of course, for Google to actually see this meta robots tag, they must first be able to discover and crawl the page, so do not block the URL with robots.txt. When Google crawls the page and discovers the meta robots noindex tag, they will flag the URL so that it is never shown in the SERPs. This is the most effective way to prevent Google from indexing a URL and displaying it in their search results.
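The tag goes in the head of the page itself. A minimal sketch (the title is a placeholder):

```html
<head>
  <title>Example page</title>
  <!-- Tells crawlers to keep this page out of their index -->
  <meta name="robots" content="noindex">
</head>
```

Because the directive lives on the page rather than on the links pointing to it, it works no matter how many outside sites link to the URL.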