What is a robots.txt file?

A robots.txt file tells search engine crawlers which pages or files the crawler can or can't request from your site. This is used mainly to avoid overloading your site with requests; it is not a mechanism for keeping a web page out of Google. To keep a web page out of Google, you should use noindex directives, or password-protect your page.
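For example, the standard way to keep a single page out of Google's index is a noindex meta tag in that page's head section (this is plain HTML and works on any site, not just Blogger):

<meta name="robots" content="noindex">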

Just add the code below to your blog (in Blogger, it goes under Settings > Crawlers and indexing > Custom robots.txt).

Robots.txt Code:


# Give Google AdSense's crawler (Mediapartners-Google) access to the whole blog
User-agent: Mediapartners-Google
Disallow:

# For every other crawler: block search-result pages and /b, allow the rest
User-agent: *
Disallow: /search
Disallow: /b
Allow: /

# Point crawlers to the sitemap (replace the URL with your own blog's address)
Sitemap: https://www.whizzyshubham.blogspot.com/sitemap.xml
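If you want to confirm the rules do what you expect before publishing them, you can test them offline. Below is a minimal sketch using Python's built-in urllib.robotparser; the example.blogspot.com URLs are placeholders, not real pages:

from urllib import robotparser

# The rules from above, parsed locally so nothing is fetched from the network
rules = """\
User-agent: Mediapartners-Google
Disallow:

User-agent: *
Disallow: /search
Disallow: /b
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Search-result pages are blocked for ordinary crawlers
print(rp.can_fetch("*", "https://example.blogspot.com/search/label/tech"))  # False
# Normal posts stay crawlable
print(rp.can_fetch("*", "https://example.blogspot.com/2024/01/post.html"))  # True
# The AdSense crawler is exempt from the /search block
print(rp.can_fetch("Mediapartners-Google", "https://example.blogspot.com/search/label/tech"))  # True

Once saved, Blogger serves the file at yourblog.blogspot.com/robots.txt, so you can also point RobotFileParser.set_url() at the live file and call read() instead of parsing a local string.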

Example: ideal4tech's robots.txt code.