A search engine spider is a server-based software application designed to compile and maintain a search engine's database. Spiders are also known as robots. These applications earned the nickname "spiders" because they crawl the web looking for information to add to the search engine's database.
In theory, a search engine spider does its job by following networks of links to find and then grab information from your web site pages. Each search engine (such as Google, Excite, AltaVista, Lycos, etc.) has its own criteria for how it spiders information. These criteria are built into a specific algorithm that helps the search engine determine where your site fits in its ranking system. The problem with second-guessing search engine spiders is that the requirements of these algorithms are always changing, so webmasters are constantly guessing at how to get their URLs listed in the top web site positions. Different search engines also have different criteria for how they rank information, which may reflect the needs of their own databases. One search engine may rank your site in terms of its content, while another may be more interested in the number of links pointing to your site.
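To make the link-following idea concrete, here is a minimal sketch of a spider written in Python. It is an illustration only, not how any real search engine works: the start URL, the page limit, and the use of the `requests` and `beautifulsoup4` libraries are assumptions for the example.

```python
# Minimal sketch of a link-following spider (illustrative only).
# Assumes the `requests` and `beautifulsoup4` packages are installed;
# the start URL and page limit are placeholders, not any engine's behavior.
from urllib.parse import urljoin, urlparse
from collections import deque

import requests
from bs4 import BeautifulSoup


def crawl(start_url: str, max_pages: int = 50) -> dict[str, str]:
    """Follow links breadth-first and store each page's visible text."""
    index: dict[str, str] = {}          # url -> extracted text (the "database")
    queue = deque([start_url])
    seen = {start_url}

    while queue and len(index) < max_pages:
        url = queue.popleft()
        try:
            response = requests.get(url, timeout=10)
        except requests.RequestException:
            continue                     # skip pages that fail to load

        soup = BeautifulSoup(response.text, "html.parser")
        index[url] = soup.get_text(" ", strip=True)

        # Queue every absolute HTTP(S) link on the page we haven't seen yet.
        for anchor in soup.find_all("a", href=True):
            link = urljoin(url, anchor["href"])
            if urlparse(link).scheme in ("http", "https") and link not in seen:
                seen.add(link)
                queue.append(link)

    return index


if __name__ == "__main__":
    pages = crawl("https://example.com")   # placeholder start page
    print(f"Indexed {len(pages)} pages")
```

A real spider adds many layers on top of this, such as respecting robots.txt, scheduling revisits, and feeding the collected pages into the engine's ranking algorithm, but the basic loop of fetching a page, storing its content, and following its links is the same.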