Tuesday, March 16, 2010

How Search Engines Work

By Justin Harrison

Search engines employ automated programs, or robots, commonly known as 'spiders' or 'crawlers,' to discover websites. They're an important part of the internet's infrastructure, but why is that so? What exactly do they do?

A search engine robot is a fairly simple program with just enough functionality to read web pages. Its ability to interpret websites is limited: it cannot interpret frames, Flash video, images, or JavaScript; it can't enter password-protected areas or click buttons; and it can be stopped by dynamically generated URLs and JavaScript navigation. What it can do is read plain HTML, pulling out text and links as it travels through the web.

Spiders determine what your page is about by looking at the visible text, the HTML code, and the links. Based on the words it finds, a spider uses a complex algorithm to decide what is and isn't important and what the site covers. Spiders also collect the links on each page to follow later, which lets them hop from site to site; since the web is held together by links between sites, this is how the robots make their way across it as they search.
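The gist of that step can be sketched in a few lines of Python using only the standard library. This is only an illustration, not how any real search engine is written; the page fetched (example.com) is a stand-in for whatever URL the spider happens to be visiting, and the text extraction is simplified (it doesn't skip script or style content, for instance).

```python
# A minimal sketch of what a spider does on one page: fetch the raw HTML,
# collect the text it can see, and collect the links it will follow later.
from html.parser import HTMLParser
from urllib.request import urlopen


class PageSpider(HTMLParser):
    """Collects text and outgoing links from raw HTML."""

    def __init__(self):
        super().__init__()
        self.words = []   # text content, used to judge what the page is about
        self.links = []   # hrefs queued up for later crawling

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        self.words.extend(data.split())


html = urlopen("https://example.com/").read().decode("utf-8", errors="ignore")
spider = PageSpider()
spider.feed(html)
print(spider.words[:10])  # the first few words the spider "sees"
print(spider.links)       # the links it will follow later
```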

Submitting a new URL to a search engine adds it to the queue of pages the spiders are due to 'crawl,' or visit. Even if a URL isn't submitted directly, though, the spiders will usually find it through links from other websites, so building link popularity helps them find you faster. When the robots arrive, they check your site for a file called 'robots.txt,' which tells them which areas of the website they are not allowed to visit. Off-limits files may include things like binaries or other information that the spiders need not report back.
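Python's standard library ships a robots.txt parser, which makes the check easy to illustrate. The rules and URLs below are invented for the example; a real spider would fetch the site's own /robots.txt rather than a hard-coded string.

```python
# A minimal sketch of a robots.txt check, using Python's standard
# urllib.robotparser. The rule set and URLs are illustrative only.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
# In practice the parser would fetch https://example.com/robots.txt;
# here we feed it an example rule set directly.
parser.parse("""
User-agent: *
Disallow: /private/
Disallow: /cgi-bin/
""".splitlines())

# A well-behaved spider checks before requesting each URL.
print(parser.can_fetch("*", "https://example.com/index.html"))    # True
print(parser.can_fetch("*", "https://example.com/private/data"))  # False
```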

Once the spider has gathered the information it needs, it indexes the site, in whatever way the search engine has configured it to, and sends the results to the search engine's database.
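One common way to organise what the spiders report back, and the one sketched below, is an inverted index: for every word, a list of the pages it appears on. The pages and their text here are made up purely for illustration.

```python
# A minimal sketch of indexing: map each word found by the spider
# back to the pages it appeared on (an inverted index).
from collections import defaultdict

crawled_pages = {
    "https://example.com/tea":    "green tea and black tea reviews",
    "https://example.com/coffee": "coffee roasting guide and reviews",
}

index = defaultdict(set)
for url, text in crawled_pages.items():
    for word in text.lower().split():
        index[word].add(url)

print(sorted(index["reviews"]))  # pages that mention 'reviews'
```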

Once in the database, the information becomes part of the search engine's directory and ranking process. How it is indexed depends on how the search engine's engineers have decided to evaluate the information the spiders return. When you enter a query, the search engine runs several calculations behind the scenes to work out which of the indexed sites you're most likely looking for, then selects the best matches from the database and displays them.
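As a rough illustration of that lookup, the sketch below scores each indexed page by how many of the query's words it contains and returns the best matches first. Real engines weigh far more signals than this; the index contents here are invented to match the earlier example.

```python
# A minimal sketch of answering a query from an already built inverted index:
# score each page by how many of the query's words it contains.
index = {
    "green":   {"https://example.com/tea"},
    "tea":     {"https://example.com/tea"},
    "coffee":  {"https://example.com/coffee"},
    "reviews": {"https://example.com/tea", "https://example.com/coffee"},
}

def search(query):
    scores = {}
    for word in query.lower().split():
        for url in index.get(word, set()):
            scores[url] = scores.get(url, 0) + 1
    # Best-matching pages first.
    return sorted(scores, key=scores.get, reverse=True)

print(search("tea reviews"))  # the tea page matches both words, so it ranks first
```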

Databases are updated periodically: the robots revisit your pages regularly to pick up any changes, so that the latest information is available. How often you're visited depends on how each search engine is set up, and it varies from engine to engine. If your website is down, or is handling so much traffic that a robot can't reach the page it's after, the site may not be re-indexed on that pass; how much that matters depends on how frequently the robot visits. The robot will simply return later to see whether your site has become accessible again.
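A toy version of that revisit behaviour might look like the sketch below: on a failed fetch the crawler keeps whatever copy it already has and schedules another attempt, rather than dropping the page. The function name, retry delay, and index structure are all assumptions made up for this example.

```python
# A toy revisit policy: if a page can't be fetched (site down or overloaded),
# keep the old indexed copy and schedule another visit instead of dropping it.
import time
from urllib.request import urlopen

RETRY_DELAY = 60 * 60  # wait an hour before trying an unreachable site again

def recrawl(url, index, revisit_queue):
    try:
        html = urlopen(url, timeout=10).read()
    except OSError:  # covers URLError, timeouts, connection failures
        # Couldn't reach the site: leave the old index entry alone and
        # note when to come back and check again.
        revisit_queue.append((time.time() + RETRY_DELAY, url))
        return
    index[url] = html  # reachable: the fresh copy replaces the old one
```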
