Something I have been wondering about: when using the meta tag, would it be possible to add a “nofollow” directive, similar to what robots.txt does? I want to create a small, low-code static page for testing.
I know that indexing isn’t really an issue here; I’m just wondering whether the syntax would work, since metadata is where SEO thrives on a page.
If not, that’s ok. I’ve been wondering about this for a while.
When it comes to adding a “no follow” directive, it’s typically used in anchor tags (links) to indicate to search engines not to follow that particular link. However, if you want to control indexing for the entire page, you’d usually use the “robots meta tag” in the head section of your HTML.
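For reference, here is a minimal sketch of that robots meta tag in a static test page (the title and body text are just placeholders):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Page-level directive: ask search engines not to index this page
       and not to follow any of its links -->
  <meta name="robots" content="noindex, nofollow">
  <title>Test page</title>
</head>
<body>
  <p>Static test page.</p>
</body>
</html>
```

The `content` value can also be a single directive, e.g. `noindex` alone if you still want the page’s links followed.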
From what I know, adding “nofollow” to links in your HTML tells search engines not to follow that link. It’s handy when you want to prevent passing link juice or if you’re linking to something you don’t want to endorse for SEO reasons.
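As a quick sketch of the link-level version (the URLs are made-up examples), the directive goes in the anchor’s `rel` attribute:

```html
<!-- rel="nofollow" asks search engines not to follow this link
     or pass ranking credit through it -->
<a href="https://example.com/some-page" rel="nofollow">Unendorsed link</a>

<!-- A normal link, followed by default, for comparison -->
<a href="https://example.com/other-page">Followed link</a>
```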
…which for external links would be a faux pas. We always want to pass link juice to external links, else why have the link at all?
If our site is constantly linking to Wikipedia, it could look like we’re spamming the site with excess links. We could tone that down by keeping a catalog of all our links to Wikipedia and noting which ones are duplicates; the duplicates are the ones that get the nofollow directive. Each Wikipedia URL linked from our site only needs one followable link to receive juice from us. More than one looks like spamming, and any excess could be read as link bombing. Search engines see this as plain as day, and that’s one thing we want to control.
“nofollow” can also help us reduce the number of internal links that are followable. Every URL on our site should have at least one followable link pointing to it, for discovery and crawling purposes. When we add new content, the most prominent link to it should be followable and all others nofollow. We want spiders to follow our navigation and our links to new content, but not every incidental link from other pages, given that those links were probably added via a dynamic template. We can boilerplate “nofollow” into those templates and thus ensure that only the most prominent link to the new content ever gets followed. The result is a site that is super easy to crawl and index. SEO is one thing; search-engine friendliness is wholly another.
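A sketch of that boilerplate idea (the paths, class name, and page title here are invented for illustration): the primary navigation link stays followable, while the template-generated duplicates carry the nofollow attribute:

```html
<!-- Primary, followable link to the new content (e.g. in site navigation) -->
<nav>
  <a href="/articles/new-post">New Post</a>
</nav>

<!-- Duplicate links emitted by a dynamic template (sidebars, "related"
     widgets) get rel="nofollow" baked into the template markup -->
<aside class="related">
  <a href="/articles/new-post" rel="nofollow">New Post</a>
</aside>
```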
Great question! One caveat, though: a noindex directive in robots.txt is not reliable. Google stopped honoring it as of September 1, 2019, and it was never part of the official robots.txt standard. robots.txt controls crawling (via Disallow rules), not indexing. For a static test page, the robots meta tag in the head (or the X-Robots-Tag HTTP header) is the supported way to keep it out of the index. I’ve done something similar when I was experimenting with different SEO setups on my site.
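To illustrate the distinction (the path is a hypothetical example), a robots.txt rule can only block crawling:

```
# robots.txt at the site root (hypothetical path below)
# This blocks crawlers from fetching the page, but does NOT guarantee
# de-indexing: a blocked page can still end up indexed via external links,
# since crawlers never see any noindex directive on it.
User-agent: *
Disallow: /test-page.html
```

By contrast, `<meta name="robots" content="noindex">` in the page’s head is the supported way to keep it out of the index, provided the page remains crawlable so the directive can actually be seen.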