Looking at a site I used to hate but now love, carinsurancefor1day.co.uk, after its mega makeover made me think: how has website design changed in the last ten years? And what new challenges has the rise of search engines created?

If you have been following the Internet since its inception, then you may have noticed that the landscape has changed quite a bit. The rise of search engines, in particular, has made the Internet tremendously more accessible than it ever was before. The development and improvement of search engines, which has continued unabated for the past ten years, has in turn led to a shift in the way that web sites are designed. Whereas the first web sites were designed with content as the most important factor, web sites today must be built with the knowledge that they need to be friendly to search engines if they want to attract significant traffic.

Check out the top sites for temporary car insurance

Ten years ago, the most popular way for Internet users to find web sites was through web directories. These directories actually had a look and feel very similar to the modern search engines that dominate the Internet today; if you used one to find the information you were looking for, you might well have thought you were using a search engine. There were, however, a few fundamental differences between the old web directories and today's search engines. The web directories of the past were largely compiled by teams of humans. If someone developed a web site, they would need to contact the directory and notify them of the site, typically providing a description and a few other details. The people who worked at the directory would then review the site and decide whether or not to include it. If they accepted it, they would index the site and its content under an appropriate category. One major advantage of this process was that it kept spammy web sites to a minimum: since every site had to be manually reviewed, it was unlikely that a spammy site could make it through such a strict filter.

On the other hand, there were quite a few disadvantages compared to modern search engines. For one thing, it might take months for a site to be reviewed; as the Internet grew in popularity, the backlog of sites waiting for review became increasingly large and burdensome. For another, the number of sites and the amount of information included in the indices of these web directories was orders of magnitude smaller than in a modern search engine like Google.

This method was necessary because the earliest true search engines were terrible at providing quality results to the searcher. If a search engine returned results based purely on the search phrase, then any spammer could show up as the #1 result simply by repeating that phrase over and over. It wasn't until Google pioneered a new way of gauging the quality of sites that modern search technology really began to take off. One of the key features of Google's search technology was that it determined the quality of a web page based on how many incoming links were directed at that page and where those links originated. Google also implemented other measures that analyzed features of the page itself to determine whether or not it was spam. The goal was to develop a general algorithm that could produce good results, rather than subjecting each page to a manual review.
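As a rough illustration, the idea of ranking pages by their incoming links can be sketched in a few lines of Python. This is a simplified version of the PageRank concept, not Google's actual algorithm: the example link graph, the damping factor, and the iteration count here are all assumptions made for the demonstration.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page name to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    # Start with an equal share of rank for every page.
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Each page keeps a small base amount of rank...
        new_rank = {p: (1.0 - damping) / n for p in pages}
        # ...and passes the rest along its outgoing links.
        for page, outgoing in links.items():
            if not outgoing:
                # A dead-end page spreads its rank over every page.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical site: every page links back to "home",
# but nothing links to the "spam" page.
links = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
    "spam": ["home"],
}
ranks = pagerank(links)
```

Run on this toy graph, "home" ends up with the highest score because the most links point at it, while "spam" ends up last because no page links to it at all, no matter how it stuffs its own content. That is the intuition behind judging a page by its incoming links rather than by keyword repetition.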

This has, of course, changed the way that web developers act when they design new pages. Now they need to keep in mind the search criteria that major engines like Bing, Yahoo, and Google use to produce their results. The search algorithms of these companies are kept secret, and they are constantly changing, which has made SEO (search engine optimization) a bit of a cat-and-mouse game for web developers.


Copyright Mark Driscoll 2010