Google can't crawl the website

Greetings all,

A little while ago, a client of ours asked us to perform a number of tasks recommended by Google for new sites, one of which was to create a robots.txt file. After I created the file and used it to block Googlebot from certain document folders, the client wrote back and showed us that Google could no longer crawl their home page. Ultimately the client decided they didn't really want the robots.txt file anyway, so I simply deleted it. However, Google still isn't able to crawl the home page, and it reports that the block is caused by a robots.txt file. My first thought was to recreate the robots.txt file so that it expressly allowed Googlebot to crawl the entire website, but that hasn't fixed the issue either.
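For reference, the blocking rules were something along these lines (the folder path here is only an illustration, not the exact one we used):

    User-agent: Googlebot
    Disallow: /documents/   # hypothetical folder, for illustration only

And the replacement file that was supposed to expressly allow crawling was essentially the standard allow-everything robots.txt, with an empty Disallow directive:

    User-agent: *
    Disallow:

An empty Disallow line means nothing is blocked, so as far as I can tell Googlebot should now be free to crawl the whole site.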

I'm honestly baffled as to why this is happening. To be fair, I have no way of knowing whether Google was able to crawl the home page before I created the robots.txt file, so I can't be entirely sure the two are related, but I've been operating under that assumption. I saw someone with a WordPress site report a similar issue, and he was able to solve it by finding a WordPress setting that controls whether search engines are allowed to crawl the site. I've looked, but I haven't found any such option in RiSE.

Has anyone else experienced a problem like this?