WWW wanderers, or spiders, are programs that traverse many pages on the World Wide Web by
recursively retrieving linked pages. Search engines such as Google frequently spider web pages for
indexing. How would you stop web spiders from crawling certain directories on your website?
A.
Place a robots.txt file in the root of your website listing the directories that you don’t want to be
crawled
B.
Place authentication on the root directories, which will prevent these spiders from crawling them
C.
Enable SSL on the restricted directories, which will block these spiders from crawling them
D.
Place “HTTP:NO CRAWL” on the HTML pages that you don’t want the crawlers to index
Explanation:
A is the answer. A robots.txt file placed in the root of a website implements the Robots Exclusion Protocol: compliant crawlers fetch /robots.txt before crawling and skip any path listed in a Disallow directive. Note that robots.txt is purely advisory and does not enforce access control, so malicious or non-compliant spiders may ignore it.
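As a minimal sketch (the directory names /admin/ and /private/ are hypothetical examples), a robots.txt at the site root that keeps all compliant crawlers out of two directories would look like this:

# Apply the rules below to every crawler
User-agent: *
# Hypothetical directories that should not be crawled
Disallow: /admin/
Disallow: /private/

Each Disallow line names a path prefix that compliant spiders should not request; an empty Disallow: line would instead permit crawling of the entire site.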