Crawling

Crawling is the systematic process by which search engine bots 'read' web pages and gather information about their content, structure and internal linking.

Search engines such as Google then use this information to decide whether a web page should be indexed and where it ranks in search results. Websites with crawl-friendly structures and well-optimised content are more likely to be indexed and to rank highly.
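Before crawling a page, a well-behaved bot first checks the site's robots.txt rules to see whether it is allowed to fetch that URL. A minimal sketch of that check using Python's standard `urllib.robotparser` (the domain and rules below are hypothetical examples):

```python
from urllib import robotparser

# Hypothetical robots.txt content a crawler might fetch from a site root.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Allow: /
"""

# Parse the rules, then check whether a given bot may crawl a given URL.
rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

print(rp.can_fetch("Googlebot", "https://example.com/blog/post"))  # True
print(rp.can_fetch("Googlebot", "https://example.com/private/x"))  # False
```

Pages a bot is disallowed from crawling are typically not indexed, which is why robots.txt and sitemaps are central to crawl-friendly site structure.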

See also: Google Search Console, Sitemaps, Indexing