Thursday 1 November 2007

What is a Search Engine Optimization Strategy?

How do search engines decide where to place each page in their results? Searchers and webpage designers alike have asked this question. What is a search engine optimization strategy? For searchers, knowing how a search engine categorizes pages makes for easier searching; for webmasters, understanding how the search engine ranks pages can provide a competitive edge and improve their standing in the results.

A search engine's objective is to present each result in order of its relevance to the search query. Doing so in an automated way, for every page on the Web and for every possible combination of keywords in all the languages people speak, is a very complicated task.
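As a rough sketch of the idea (a toy illustration, not any real engine's algorithm), ranking can be thought of as scoring every page against the query and sorting by that score. The page texts and the scoring rule below are invented for the example:

# Toy illustration of relevance ranking: score each page by how often
# the query's keywords appear in it, then sort. Real engines weigh
# hundreds of signals; this only shows the basic shape of the task.
def score(page_text, query):
    words = page_text.lower().split()
    return sum(words.count(term) for term in query.lower().split())

pages = {   # hypothetical page texts
    "page-a": "search engine optimization tips for search rankings",
    "page-b": "gardening tips for spring",
}
query = "search engine"
results = sorted(pages, key=lambda p: score(pages[p], query), reverse=True)
print(results)  # pages ordered by relevance to the query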

Each search engine uses an elaborate system to determine where a page will appear in its results. The process of making web pages more attractive to search engines is known as search engine optimization, or SEO.

Studies show that the average internet user looks no further than the second page of search results. Because of this, it is very important for websites to be listed on the first two pages, and some resort to deceptive (black hat) methods, which violate search engine guidelines, to obtain a top ranking.

Webmasters who stick to more respectable forms of search engine optimization strategy use various techniques to improve their search engine ranking. One of the first things a webmaster may do is check the page's META tags. META tags are special HTML tags that provide information about a webpage; search engines use them to properly index the page, but they are not visible in the web browser. META tags are a significant factor in getting ranked. The examples below illustrate two different META description tags, in the spirit of those shown on webdesign.org.
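These are hypothetical tags written for illustration (not the original webdesign.org examples); a description tag sits in the page's head section and summarizes the page for the search engine:

<!-- hypothetical example: a tutorial site -->
<meta name="description" content="Free web design tutorials, articles and resources for beginners and professionals.">

<!-- hypothetical example: an online store -->
<meta name="description" content="Hand-made ceramic mugs and bowls, shipped worldwide from our studio.">

A clear, accurate description like this is often what the engine shows under your link in its results.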

Having well-written META tags can greatly improve your page's ranking. That said, META tags alone will not instantly put you in the top spot of a search result.

A webmaster also has to consider creating a web-crawler-friendly webpage. Search engines rely on programs called web crawlers (also referred to as spiders, bots, worms, web robots and automatic indexers) to locate and then catalog websites. Web crawlers browse the World Wide Web in a methodical, automated manner: they locate a website, read its pages, follow its links and store the data they find. Once they have collected the information from the website they bring it back to the search engine, where it is indexed. Some search engines also use web crawlers to harvest e-mail addresses and for maintenance tasks, in addition to collecting information about a website. Each search engine has its own web crawlers, and each varies in how it gathers information.
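A toy sketch of that crawl-and-catalog loop is shown below; the seed URL is a placeholder, and real crawlers add politeness rules such as obeying robots.txt and rate limits:

# Toy web crawler: fetch a page, store its HTML in a tiny "catalog",
# and follow the links it finds until a page limit is reached.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=10):
    index = {}                 # url -> raw HTML, our tiny catalog
    queue, seen = [seed], {seed}
    while queue and len(index) < max_pages:
        url = queue.pop(0)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue           # skip pages that fail to load
        index[url] = html      # store the page for later indexing
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)  # resolve relative links
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return index

pages = crawl("http://example.com/")  # hypothetical seed URL
print(len(pages), "pages fetched")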

Webmasters should have some idea of how each search engine ranks pages in order to understand why one document is ranked higher than another. Although search engines keep their algorithms secret, in part to make life more difficult for black hat SEOs, webmasters should still try their best to gain some understanding of how a search engine optimization strategy should work.

About the Author

Jeff Casmer is an internet marketing consultant and work-at-home business owner. For more information on search engine optimization, please visit his "Top Ranked" Get Higher Search Engine Rankings Directory, which gives you all the information you need to earn money at home in the 21st century.
