How to use the meta robots tag effectively

Robots meta directives, which are also referred to as robots meta tags, consist of small pieces of code that give instructions to robots regarding the crawling and indexing of a website's page content. In contrast to the recommendations offered in the robots.txt file, these tags provide more precise and explicit directions for how to crawl and index the content of a specific page.

What are these robots meta tags for?

Meta robots tags provide instructions to crawlers on how to crawl and index information found on a specific webpage. When discovered, these directives act as strong suggestions on the indexing behavior of crawlers. However, similar to instructions in the robots.txt file, crawlers are not obligated to follow these guidelines, and malicious bots may ignore them and access page content without permission.

It's important to note that meta robots tags should not be solely relied on as a security mechanism. For private information that should not be publicly available, using more secure approaches like password protection is recommended to prevent unauthorized access by visitors and crawlers.
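To illustrate how a crawler reads these directives, here is a minimal Python sketch; the parser class and function names are my own invention, not part of any real crawler:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects directives from <meta name="robots"> tags in a page's HTML."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name", "").lower() == "robots":
            content = attrs.get("content") or ""
            # Directives are comma-separated and case-insensitive.
            self.directives += [p.strip().lower() for p in content.split(",") if p.strip()]

def robots_directives(html):
    parser = RobotsMetaParser()
    parser.feed(html)
    return parser.directives

page = '<html><head><meta name="robots" content="NOINDEX, nofollow"></head></html>'
print(robots_directives(page))  # -> ['noindex', 'nofollow']
```

A real crawler is free to honor, reinterpret, or ignore whatever this kind of parsing finds, which is exactly why the tags are suggestions rather than security.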


The two types of robots meta tags

In practice, there are two types of robots tags:

  • Those which form a part of the HTML page, commonly referred to as "robots", and
  • Those which are transmitted by the web server via HTTP headers, commonly known as "x-robots-tag".

Both types of tags can take the same parameters, such as "noindex" and "nofollow". The only difference lies in how those parameters are delivered to robots.

The meta robots tag

The robots meta tag is a crucial component of a webpage's HTML code, which is usually situated within the <head> section of the webpage. It is represented through code elements like:

<meta name="robots" content="[PARAMETER]">

If you wish to provide specific instructions to a particular crawler, you can replace "robots" with the name of the desired user agent.

For example, if you want to give specific directives to Googlebot, you can use the following code:

<meta name="googlebot" content="[PARAMETER]">

You can include multiple directives in a single meta tag, separated by commas, as long as they apply to the same crawler.

For instance, this code:

<meta name="robots" content="noimageindex, nofollow, nosnippet">

instructs robots not to index images on the page, avoid following any links, and not to display page excerpts in search results.

However, if you want to give different instructions to different search crawlers, separate tags are necessary to address each crawler.
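For example, the following hypothetical pair of tags gives Googlebot and Bingbot different rules (the split of rules is purely illustrative):

```html
<!-- Hypothetical example: Googlebot may index the page but must not
     follow its links, while Bingbot may follow links but must not index. -->
<meta name="googlebot" content="nofollow">
<meta name="bingbot" content="noindex">
```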

The x-robots-tag

The x-robots tag is a way to control how a page and its specific elements are indexed by search engines. It is part of the HTTP header and has more features and flexibility than the meta robots tag, which is used in the page's HTML code.

With the x-robots-tag, you can use regular expressions, apply indexing rules to non-HTML files, and set global parameters. To use it, add the x-robots-tag header with your desired parameters to your server configuration, for example in your site's header.php or .htaccess file. Some examples of what you can do with the x-robots-tag are:

  1. Control how non-HTML content (like video) is indexed
  2. Prevent a specific element of a page (like an image or a video) from being indexed, but not the whole page.
  3. Manage indexing if you can't access or modify the HTML code of a page (especially the <head> section) or if your site uses a common header that is not editable.
  4. Create rules to decide whether a page should be indexed or not (for example, index the profile page of a user who has commented more than 20 times).
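As a sketch of point 1, on an Apache server (with mod_headers enabled) the header can be set from .htaccess; the file pattern below is illustrative:

```apache
# Hypothetical .htaccess rules: keep PDF files out of the index
# and out of the cache, without touching the HTML pages themselves.
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, noarchive"
</FilesMatch>
```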

What are the robot tag settings?

The following are the different parameters that search engine crawlers understand and follow when used in robots meta tags. Note that while these parameters are not case sensitive, some search engines may not follow all of them or treat them differently.

  • All: This is the default value, equivalent to "index, follow": it places no restrictions on indexing the page or following its links.
  • Follow: This is also a default tag and instructs the crawler to follow all the links on the page and pass on the equity to the linked pages.
  • Noindex: This tag tells search engines not to index the page, but the links on it will still be followed by the crawler. This is useful for pages with paid content or for preventing duplicate content penalties.
  • Nofollow: This tag instructs the crawler not to follow the links on the page and not to pass on link equity. It is helpful for preventing spammers from taking advantage of your content.
  • None: This tag is equivalent to using the noindex and nofollow tags together and instructs the crawler to completely ignore the page.
  • Noimageindex: This tag tells the crawler not to index images on the page, which protects them from being used without permission.
  • Noarchive: This tag prevents search engines from displaying a cached link to the page and is useful for protecting sensitive content.
  • Nocache: This tag is similar to noarchive; historically it was the equivalent directive used by MSN/Live Search.
  • Nosnippet: This tag tells search engines not to display an excerpt from the page in the search results and is useful for controlling the metadata that appears.
  • Max-snippet:[number]: This tag limits the text snippet shown for the page in search results to at most that many characters; it does not apply to content displayed through structured data.
  • Unavailable_after: [DATE and TIME]: This tag tells search engines to stop showing the page in search results after the specified date and time.
  • Notranslate: This tag prevents Google from displaying a link to translated content for the page in the search results.
  • Max-image-preview:[SETTING]: This tag sets the maximum size of an image preview shown for the page in search results.
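Several of these parameters can be combined in a single tag. A hypothetical example (the character limit and date are illustrative; the date can be given in any widely adopted format):

```html
<!-- Show at most 50 snippet characters, allow only standard-size image
     previews, and stop showing the page in results after the given date. -->
<meta name="robots" content="max-snippet:50, max-image-preview:standard, unavailable_after: 2025-12-31">
```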

Three values are accepted:

  • "none": no image preview is shown;
  • "standard": a default-size preview may be shown;
  • "large": a larger preview, up to the width of the viewport, may be shown.

Choosing "standard" or "none" prevents large thumbnails from appearing in search results, which is particularly relevant for AMP pages.

How to avoid 3 common mistakes when utilizing robots meta tags

Avoid these three common mistakes, involving typography, conflicting tags, and confusion between "noindex" and "disallow", to get the most out of meta robots tags:

  1. Firstly, it's important to use lowercase tags and include commas and spaces for better readability and ease of understanding. While crawlers can recognize attributes and parameters in both uppercase and lowercase, using lowercase tags can improve code readability.
  2. Secondly, using conflicting tags can lead to indexing errors. When a page carries multiple meta tags with conflicting values, crawlers generally honor the more restrictive one. If you do not want links followed, state "nofollow" explicitly rather than relying on the default "follow".
  3. Finally, it's important to understand the difference between "noindex" and "disallow". "Noindex" prevents robots from indexing a page, while "disallow" prevents them from crawling it. To stop a page from being crawled, use a "disallow" rule in the robots.txt file. To remove a page from the index, add a "noindex" directive to the page itself and make sure the page is not disallowed in robots.txt; if crawlers cannot crawl the page, they will never see the "noindex" directive.
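To make the distinction concrete, here is a sketch (the /private/ path is illustrative): blocking crawling happens in robots.txt, while removal from the index happens on the page itself.

```
# robots.txt -- prevents crawling of /private/, but a page there that is
# already indexed can still appear in search results:
User-agent: *
Disallow: /private/

# To deindex a page instead, leave it crawlable and put this in its <head>:
# <meta name="robots" content="noindex">
```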

My name is Duc "JOSEF" Le and I work in Digital Marketing at Mageplaza and BlogAvada. Mageplaza offers a comprehensive collection of over 230 extensions that are designed to work seamlessly with the latest versions of Magento 2 (Adobe Commerce). Meanwhile, BlogAvada is a blog that serves as a platform for sharing information related to websites, mobile apps, e-commerce, digital marketing, and other related topics. I encourage you to visit our websites to learn more about what we have to offer.
