Googlebot crawls, renders, and indexes a page.
Googlebot queues pages for both crawling and rendering. It is not immediately obvious when a page is waiting for crawling and when it is waiting for rendering.
When Googlebot fetches a URL from the crawling queue by making an HTTP request, it first checks whether you allow crawling by reading the robots.txt file. If the robots.txt file marks the URL as disallowed, Googlebot skips making an HTTP request to that URL and moves on.
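For example, a robots.txt file (the directory path here is hypothetical) that blocks Googlebot from crawling one section of a site looks like this:

```
User-agent: Googlebot
Disallow: /private/
```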
Googlebot then parses the response for other URLs in the href attributes of HTML links and adds those URLs to the crawl queue. To prevent link discovery, use the nofollow mechanism.
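For example, a link can opt out of link discovery with the rel attribute (the URL is a placeholder):

```html
<a href="https://example.com/some-page" rel="nofollow">Example link</a>
```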
Describe your page with unique titles and snippets
Unique, descriptive titles and helpful meta descriptions help users quickly identify the best result for their goal. Our guidelines explain what makes good titles and descriptions.
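As a sketch (the page name and description text are invented for illustration), a unique title and meta description live in the page head:

```html
<head>
  <title>Hypothetical Product Name – Example Store</title>
  <meta name="description"
        content="A short, helpful summary of what this specific page offers.">
</head>
```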
Write compatible code
Use meaningful HTTP status codes
Googlebot uses HTTP status codes to find out if something went wrong when crawling the page.
You should use a meaningful status code to tell Googlebot if a page should not be crawled or indexed, such as a 404 for a page that could not be found or a 401 for a page behind a login. You can also use HTTP status codes, such as a 301 redirect, to tell Googlebot if a page has moved to a new URL, so that the index can be updated accordingly.
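A minimal sketch of this idea on the server side (the page, redirect, and login tables are hypothetical), written as a pure function that picks the status code for a path:

```javascript
// Hypothetical route tables for illustration.
const redirects = { '/old-products': '/products' }; // moved pages → 301
const protectedPages = new Set(['/account']);       // behind a login → 401
const pages = new Set(['/', '/products']);          // existing pages → 200

// Pick a meaningful HTTP status code for a requested path.
function statusFor(path) {
  if (redirects[path]) return 301;        // moved: index gets updated
  if (protectedPages.has(path)) return 401; // requires login
  if (pages.has(path)) return 200;        // found
  return 404;                             // not found: drop from index
}
```

A real server would attach these codes to its responses (for a 301, also sending a Location header with the new URL).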
Avoid soft 404 errors in single-page apps
In client-side rendered single-page apps, routing is often implemented as client-side routing. In this case, using meaningful HTTP status codes can be impossible or impractical. To avoid soft 404 errors when using client-side rendering and routing, use one of the following strategies:
- Use a JavaScript redirect to a URL for which the server responds with a 404 status code.
- Add a noindex robots meta tag to the error page using JavaScript.
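As a sketch of the noindex approach (the fetch URL and API shape are invented): when client-side code learns that the requested content does not exist, it can inject a robots noindex meta tag so the error view is not indexed as a soft 404.

```javascript
// Build the robots noindex meta tag that client-side code would append
// to <head> when the requested record is missing.
function robotsNoindexTag(doc) {
  const meta = doc.createElement('meta');
  meta.setAttribute('name', 'robots');
  meta.setAttribute('content', 'noindex');
  return meta;
}

// Browser-only wiring (commented out so the sketch stays self-contained;
// the /api/products endpoint is hypothetical):
//   fetch(`/api/products/${id}`).then((res) => {
//     if (res.status === 404) {
//       document.head.appendChild(robotsNoindexTag(document));
//     }
//   });
```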
Use the History API instead of fragments
For single-page applications with client-side routing, use the History API to implement routing between different views of your web app. To ensure that Googlebot can find links, avoid using fragments to load different page content.
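A minimal client-side router sketch under these assumptions (the route table, view names, and render function are invented for illustration) — links use real paths, and history.pushState keeps the URL crawlable instead of a #fragment:

```javascript
// Hypothetical route table mapping paths to view-producing functions.
const routes = {
  '/': () => 'Home view',
  '/products': () => 'Products view',
};

// Resolve a path to its view, falling back to a not-found view.
function resolve(path) {
  const view = routes[path];
  return view ? view() : 'Not found';
}

// Browser-only wiring (commented out so the sketch stays self-contained):
//   document.addEventListener('click', (event) => {
//     const link = event.target.closest('a');
//     if (!link) return;
//     event.preventDefault();
//     history.pushState({}, '', link.href);        // real URL, no fragment
//     render(resolve(new URL(link.href).pathname));
//   });
//   window.addEventListener('popstate', () => {
//     render(resolve(location.pathname));          // back/forward buttons
//   });
```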
Use long-lived caching
Googlebot caches fetched resources aggressively, so updated JavaScript or CSS files may not be picked up right away. To make sure fresh content is fetched, embed a fingerprint of the file's content in its filename, for example main.2bb85551.js. The fingerprint depends on the content of the file, so updates generate a different filename every time.
Use structured data