Most users turn to one of the available search engines to find the information they need. But how do search engines provide this information, and where do they collect it from? Most search engines maintain their own database of information, covering the sites available on the web and, for each site, details of its individual pages. To build this database, a search engine does background work using robots, which collect page information; the engine then catalogs what has been gathered and presents it publicly, or at times for private use.
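To make this concrete, here is a minimal sketch, in Python and using only the standard library, of the core loop such a robot performs: fetch a page, store it in a catalog, extract its links, and queue them for later visits. The starting URL and the page limit are illustrative assumptions, and real search-engine robots are far more elaborate.

    import urllib.request
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class LinkExtractor(HTMLParser):
        """Collects the href of every <a> tag encountered."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(start_url, max_pages=10):
        """Breadth-first crawl: fetch a page, record it, queue its links."""
        catalog = {}  # url -> raw HTML; stands in for the engine's database
        queue = deque([start_url])
        seen = {start_url}
        while queue and len(catalog) < max_pages:
            url = queue.popleft()
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    html = resp.read().decode("utf-8", errors="replace")
            except Exception:
                continue  # skip unreachable or malformed pages
            catalog[url] = html
            parser = LinkExtractor()
            parser.feed(html)
            for link in parser.links:
                absolute = urljoin(url, link)  # resolve relative links
                if absolute not in seen:
                    seen.add(absolute)
                    queue.append(absolute)
        return catalog

    # pages = crawl("https://www.example.com/")  # hypothetical starting point

A real robot would also respect politeness delays and exclusion rules, and would index the page text rather than store raw HTML, but the fetch-catalog-follow cycle above is the essence of the background work described here.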
In this article we will discuss the entities that roam the global Internet environment, that is, the web crawlers that move around in netspace. We will learn:
-> What they are all about and what purpose they serve.
-> The pros and cons of using these entities.
-> How we can keep our pages away from crawlers (a short robots.txt sketch follows this list).
-> The differences between common crawlers and robots.
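As a preview of the third point, the conventional way to keep pages away from well-behaved crawlers is the robots exclusion protocol: a robots.txt file at the site root listing paths that robots should not fetch. Below is a hedged sketch that uses Python's standard urllib.robotparser to test such a rule set; the user-agent name and the paths are hypothetical.

    from urllib.robotparser import RobotFileParser

    # A hypothetical robots.txt that blocks every robot from /private/:
    rules = [
        "User-agent: *",
        "Disallow: /private/",
    ]

    rp = RobotFileParser()
    rp.parse(rules)
    print(rp.can_fetch("MyCrawler", "https://www.example.com/private/page.html"))  # False: blocked
    print(rp.can_fetch("MyCrawler", "https://www.example.com/public/page.html"))   # True: allowed

Note that the protocol is advisory: a well-behaved crawler checks these rules before fetching, but nothing physically prevents a rogue robot from ignoring them.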
In the portion that follows, we will divide the discussion into the following two sections...