What Is A Search Engine Crawler?
A search engine crawler, also known as a web crawler or search engine spider, is an Internet bot that systematically browses the World Wide Web in order to build an index of its content.
Search engine crawlers analyze a web page’s content and store that information in the search index so it can be retrieved when a relevant query is made. Crawlers also follow hyperlinks to discover new documents on the web.
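To make that discover-fetch-analyze loop concrete, here is a minimal sketch in Python. It assumes the third-party requests and beautifulsoup4 packages are installed; the seed URL, the 50-page cap, and the choice to record only each page’s title are illustrative simplifications, not how a production crawler works.

```python
# Minimal sketch of a breadth-first crawler: fetch a page, record
# something about it ("index" it), then follow its links to find more.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup


def crawl(seed_url: str, max_pages: int = 50) -> dict[str, str]:
    """Breadth-first crawl from seed_url; returns {url: page title}."""
    index: dict[str, str] = {}      # stands in for the search index
    queue = deque([seed_url])       # URLs discovered but not yet fetched
    seen = {seed_url}               # avoid re-queuing the same URL

    while queue and len(index) < max_pages:
        url = queue.popleft()
        try:
            response = requests.get(url, timeout=10)
        except requests.RequestException:
            continue                # skip unreachable pages

        soup = BeautifulSoup(response.text, "html.parser")

        # "Analyze" the page: here we just store its <title> text.
        title = soup.title.string if soup.title and soup.title.string else ""
        index[url] = title.strip()

        # Follow hyperlinks to discover new documents.
        for anchor in soup.find_all("a", href=True):
            link = urljoin(url, anchor["href"])
            if urlparse(link).scheme in ("http", "https") and link not in seen:
                seen.add(link)
                queue.append(link)

    return index
```

A real crawler layers much more on top of this: politeness delays between requests, robots.txt checks, URL normalization, and distributed work queues, but the basic discover-fetch-analyze-index cycle is the same.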
In addition to an initial site crawl that indexes its pages, these Internet bots also revisit pages periodically to check for updates. Search engine crawlers are not programmed to crawl every page on the Internet; the number of links pointing to a page, the amount of traffic it receives, how often its content changes, and many other factors determine the crawl frequency and crawl budget for each website.
Website owners can deny crawler access to specific pages and directories by setting up rules in the domain’s robots.txt file.
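As a brief sketch, the robots.txt file lives at the root of the domain (e.g. https://www.example.com/robots.txt) and pairs a User-agent line with Disallow rules; the directory names below are hypothetical:

```
# Keep all crawlers out of two hypothetical directories.
User-agent: *
Disallow: /private/
Disallow: /checkout/

# Optionally point crawlers at the XML sitemap.
Sitemap: https://www.example.com/sitemap.xml
```

Note that robots.txt is a request, not an enforcement mechanism: reputable search engine crawlers honor it, but it does not technically block access to those URLs.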