Web Robots that Gather Web-page Information

Amfibi's Web-crawling Robot

What's a Bot?

"Bot" is common parlance on the Internet for a software agent: a program that interacts with network services intended for people as if it were a person itself. One typical use of bots is to gather information. The term derives from the word "robot", reflecting the autonomous character of these "virtual robots".
The most common bots are web agents that interact with web pages. Web crawlers, or spiders, are web robots that recursively gather web-page information.
From Wikipedia, the free encyclopedia.
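The recursive gathering described above can be illustrated with a minimal sketch. This is not Cabot's actual code; it uses a small in-memory dictionary of pages in place of real HTTP fetches, and Python's standard-library HTML parser to extract links:

```python
from html.parser import HTMLParser

# Toy "web": URL -> HTML body (a stand-in for real HTTP fetches).
PAGES = {
    "/": '<a href="/a">A</a> <a href="/b">B</a>',
    "/a": '<a href="/b">B</a>',
    "/b": '<a href="/">home</a>',
}

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def crawl(url, seen=None):
    """Recursively visit url and every page it links to, once each."""
    if seen is None:
        seen = set()
    if url in seen or url not in PAGES:
        return seen
    seen.add(url)
    parser = LinkExtractor()
    parser.feed(PAGES[url])
    for link in parser.links:
        crawl(link, seen)
    return seen

print(sorted(crawl("/")))  # ['/', '/a', '/b']
```

A real crawler would fetch pages over HTTP, resolve relative URLs, respect robots.txt, and rate-limit requests, but the visit-once recursion over discovered links is the core of the technique.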

Cabot is Amfibi's web-crawling robot. It collects documents from the web to build a searchable index for the Amfibi search engine.

Information for webmasters

Cabot obeys the robots.txt exclusion standard, described at http://www.robotstxt.org/. To prevent Cabot from crawling pages on your site, place the following in your robots.txt file:
User-agent: Cabot
Disallow: /
You can also submit your site to Amfibi for inclusion in the index.


Copyright © 2013 Amfibi. All rights reserved