Ever wondered what would happen if you prevented Google from crawling your website for a few weeks? Technical SEO expert Kristina Azarenko has published the results of such an experiment.
Six surprising things that happened. What happened when Googlebot couldn't crawl Azarenko's site from Oct. 5 to Nov. 7:
- The favicon was removed from Google Search results.
- Video search results took a huge hit and still haven't recovered post-experiment.
- Positions remained relatively stable, though they were slightly more volatile in Canada.
- Traffic saw only a slight decrease.
- An increase in reported indexed pages in Google Search Console. Why? Pages with noindex meta robots tags ended up being indexed because Google couldn't crawl the site to see those tags.
- Multiple alerts in GSC (e.g., "Indexed, though blocked by robots.txt," "Blocked by robots.txt").
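The noindex finding above comes down to a quirk of how the two mechanisms interact: robots.txt blocks crawling, not indexing, so Googlebot never fetches the page and never sees the noindex tag, yet the URL can still be indexed from external links. A minimal sketch of the crawl-permission check using Python's standard urllib.robotparser (the site and path are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks all crawling, as in the experiment
robots_txt = """User-agent: Googlebot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Googlebot may not fetch the page, so it never sees any noindex meta tag;
# the URL can still appear in results ("Indexed, though blocked by robots.txt")
allowed = parser.can_fetch("Googlebot", "https://example.com/private-page")
print(allowed)  # False
```

To reliably keep a page out of the index, the page must stay crawlable so Google can read the noindex directive, which is exactly why blocking the whole site inflated the indexed-page count.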
Why we care. Testing is an essential element of SEO. Any change (intentional or unintentional) can impact your rankings, traffic and bottom line, so it's good to understand how Google might react. Also, most companies aren't able to run this kind of experiment, so this is good information to know.
The experiment. You can read all about it in Unexpected Results of My Google Crawling Experiment.
Another related experiment. Patrick Stox of Ahrefs also shared the results of blocking two high-ranking pages with robots.txt for five months. The impact on rankings was minimal, but the pages lost all their featured snippets.