Monday 2 May 2016

How Google Works


Why Google is called Google

The name Google was the result of a small spelling mistake by a close associate of Larry Page.


It is unbelievable, but true! In 1996, Larry Page and Sean Anderson, a graduate student working with him, were sitting in their office brainstorming a name for the search engine. They were reportedly using a whiteboard, trying to come up with a name that evoked the enormous amount of data the engine indexed. Sean suggested 'googolplex', to which Larry responded, "googol!" The word 'googol' refers to the cardinal number written as 1 followed by 100 zeroes. Sean quickly searched the Internet domain name registry database to see whether the suggested name was available. Interestingly, he misspelled 'googol' as 'google' and found that it was available. Larry Page liked the name and soon had it registered, and that is how this very powerful search engine got its name. Part of this story is also mentioned on Google's history page.

When Google was founded

Beginning

 

Google began in 1996 as a research project by Larry Page and Sergey Brin, Ph.D. students at Stanford University.[2]
In search of a dissertation theme, Page had been considering—among other things—exploring the mathematical properties of the World Wide Web, understanding its link structure as a huge graph.[3] His supervisor, Terry Winograd, encouraged him to pick this idea (which Page later recalled as "the best advice I ever got"[4]) and Page focused on the problem of finding out which web pages link to a given page, based on the consideration that the number and nature of such backlinks was valuable information for an analysis of that page (with the role of citations in academic publishing in mind).[3]
In his research project, nicknamed "BackRub", Page was soon joined by Brin, who was supported by a National Science Foundation Graduate Fellowship.[5] Brin was already a close friend, whom Page had first met in the summer of 1995, when Page was part of a group of potential new students that Brin had volunteered to show around the campus.[3] Both Brin and Page were working on the Stanford Digital Library Project (SDLP). The SDLP's goal was "to develop the enabling technologies for a single, integrated and universal digital library" and it was funded through the National Science Foundation, among other federal agencies.[5][6][7][8]
Page's web crawler began exploring the web in March 1996, with Page's own Stanford home page serving as the only starting point.[3] To convert the backlink data that it gathered for a given web page into a measure of importance, Brin and Page developed the PageRank algorithm.[3] While analyzing BackRub's output—which, for a given URL, consisted of a list of backlinks ranked by importance—the pair realized that a search engine based on PageRank would produce better results than existing techniques (existing search engines at the time essentially ranked results according to how many times the search term appeared on a page).[3][9]
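The core of PageRank can be illustrated with a toy power-iteration sketch (an illustrative simplification, not Google's actual implementation; the graph and function names here are hypothetical). Each page spreads its current score evenly across its outgoing links, so a link from an important page counts for more than a link from an obscure one, and a damping factor models a surfer who occasionally jumps to a random page.

```python
# Toy PageRank via power iteration (illustrative sketch only).
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform score
    for _ in range(iterations):
        # Every page receives a small baseline score (the "random jump").
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # A page divides its score evenly among its outgoing links.
                share = rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += damping * share
            else:
                # A dangling page spreads its score over all pages.
                for target in pages:
                    new_rank[target] += damping * rank[page] / n
        rank = new_rank
    return rank

# Hypothetical three-page web: A links to B and C, B to C, C back to A.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
```

In this toy graph, page C, which is linked to from both A and B, ends up with the highest score, which is the intuition behind ranking by link structure rather than by how often a search term appears on a page.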

Convinced that the pages with the most links to them from other highly relevant Web pages must be the most relevant pages associated with the search, Page and Brin tested their thesis as part of their studies and laid the foundation for their search engine:[10]
Some Rough Statistics (from August 29th, 1996)
Total indexable HTML urls: 75.2306 Million
Total content downloaded: 207.022 gigabytes
...
BackRub is written in Java and Python and runs on several Sun Ultras and Intel Pentiums running Linux. The primary database is kept on a Sun Ultra II with 28GB of disk. Scott Hassan and Alan Steremberg have provided a great deal of very talented implementation help. Sergey Brin has also been very involved and deserves many thanks.
-Larry Page [11]

Where is Google

The Googleplex is the corporate headquarters complex of Google, Inc., located at 1600 Amphitheatre Parkway in Mountain View, Santa Clara County, California, United States, near San Jose.
The original complex, with 2,000,000 square feet (190,000 m2) of office space, is the company's second largest square footage assemblage of Google buildings. (The largest single Google building is the 2,900,000-square-foot (270,000 m2) 111 Eighth Avenue building in New York City, which Google bought in 2010.) Once the 1,100,000-square-foot (100,000 m2) Bay View addition went online in 2015, the Googleplex became the largest collection of Google buildings with 3,100,000 square feet (290,000 m2) of space.[1]
"Googleplex" is a portmanteau of Google and complex and a reference to googolplex, the name given to the large number 10^(10^100), or 10^googol (with complex meaning a complex of buildings).

What is Google

Originally known as BackRub, Google is a search engine whose development started in 1996 as a research project by Sergey Brin and Larry Page at Stanford University. Larry and Sergey later decided that the name of their search engine needed to change and settled on Google, inspired by the term googol.

The domain google.com was registered on September 15, 1997, and the company was incorporated on September 4, 1998. The Internet Archive preserves captures of what Google's site looked like in 1998.

What makes Google stand out from its competitors, and keeps it growing as the number one search engine, is the PageRank technique it uses to sort search results. Besides being one of the best search engines on the Internet, Google also incorporates many of its other services, such as Google Maps and Google Local, to provide more relevant search results.

Who created Google

Larry Page


Lawrence "Larry" Page[2] (born March 26, 1973) is an American computer scientist and Internet entrepreneur who co-founded Google Inc. with Sergey Brin and is the CEO of Google's parent company, Alphabet Inc. After stepping aside as CEO in August 2001 in favor of Eric Schmidt, Page re-assumed the role in April 2011. He announced his intention to step aside a second time in August 2015 to become CEO of Alphabet, under which Google's assets would be reorganized. Under Page, Alphabet is seeking to deliver major advancements in a variety of industries.[3] Page is the inventor of PageRank, Google's best-known search ranking algorithm.[4][5][6][7][8][9]
Page is a board member of the X Prize Foundation (XPRIZE) and was elected to the National Academy of Engineering in 2004.[10] Page received the Marconi Prize in 2004.[11]

Sergey Brin

Sergey Mikhaylovich Brin (Russian: Серге́й Миха́йлович Брин; born August 21, 1973) is a Russian-born American computer scientist, Internet entrepreneur, and philanthropist. Together with Larry Page, he co-founded Google. Today, Brin serves as President of Google's parent company, Alphabet Inc. According to the Forbes list of February 2016, he is one of three people jointly ranked as the 11th richest in the world, with a net worth of US$39.2 billion.[3][6]
Brin immigrated to the United States with his family from the Soviet Union at the age of 6. He earned his bachelor's degree at the University of Maryland, following in his father's and grandfather's footsteps by studying mathematics, as well as computer science. After graduation, he moved to Stanford University to acquire a PhD in computer science. There he met Page, with whom he later became friends. They crammed their dormitory room with inexpensive computers and applied Brin's data mining system to build a web search engine. The program became popular at Stanford and they suspended their PhD studies to start up Google in a rented garage.
The Economist referred to Brin as an "Enlightenment Man", and as someone who believes that "knowledge is always good, and certainly always better than ignorance", a philosophy that is summed up by Google's mission statement, "Organize the world's information and make it universally accessible and useful,"[7][8] and unofficial motto, "Don't be evil".

 


 

Google



Google is an American multinational technology company specializing in Internet-related services and products. These include online advertising technologies, search, cloud computing, and software.[6] Most of its profits are derived from AdWords,[7][8] an online advertising service that places advertising near the list of search results.
Google was founded by Larry Page and Sergey Brin while they were Ph.D. students at Stanford University. Together, they own about 14 percent of its shares and control 56 percent of the stockholder voting power through supervoting stock. They incorporated Google as a privately held company on September 4, 1998. An initial public offering followed on August 19, 2004. Its mission statement from the outset was "to organize the world's information and make it universally accessible and useful,"[9] and its unofficial slogan was "Don't be evil".[10][11] In 2004, Google moved to its new headquarters in Mountain View, California, nicknamed the Googleplex.[12] In August 2015, Google announced plans to reorganize its interests as a holding company called Alphabet Inc. When this restructuring took place on October 2, 2015, Google became Alphabet's leading subsidiary, as well as the parent for Google's Internet interests.[13][14][15][16][17]
Rapid growth since incorporation has triggered a chain of products, acquisitions and partnerships beyond Google's core search engine (Google Search). It offers online productivity software (Google Docs) including email (Gmail), a cloud storage service (Google Drive) and a social networking service (Google+). Desktop products include applications for web browsing (Google Chrome), organizing and editing photos (Google Photos), and instant messaging and video chat (Hangouts). The company leads the development of the Android mobile operating system and the browser-only Chrome OS[18] for a class of netbooks known as Chromebooks and desktop PCs known as Chromeboxes. Google has moved increasingly into communications hardware, partnering with major electronics manufacturers[19] in the production of its "high-quality low-cost"[20] Nexus devices.[21] In 2012, a fiber-optic infrastructure was installed in Kansas City to facilitate a Google Fiber broadband service.[22]
The corporation has been estimated to run more than one million servers in data centers around the world (as of 2007).[23] It processes over one billion search requests[24] and about 24 petabytes of user-generated data each day (as of 2009).[25][26][27][28] In December 2013, Alexa listed google.com as the most visited website in the world. Numerous Google sites in other languages figure in the top one hundred, as do several other Google-owned sites such as YouTube and Blogger.[29] Its market dominance has led to prominent media coverage, including criticism of the company over issues such as aggressive tax avoidance,[30] search neutrality, copyright, censorship, and privacy.[31][32]

History Search Engines - AllTheWeb

AllTheWeb – Born in 1999

AllTheWeb was launched in 1999 to showcase the search technologies of FAST's Web Search Division; it is sometimes referred to as FAST or FAST Search. In April 2003, AllTheWeb was bought by Overture for $70 million and was rolled into Yahoo! Search after Yahoo! purchased Overture.

History Search Engines - Overture

Overture – Born in 1998

Overture is considered the pioneer of paid search. It was launched by Bill Gross in 1998 as GoTo. Overture was overpowered by Google when AOL selected Google as an ad partner. Overture bought AltaVista and AllTheWeb in hopes of gaining leverage against Google, but it was ultimately purchased by Yahoo! in 2003.

History Search Engines - MSN Search

MSN Search (now Windows Live) – Born in 1998

 
MSN Search was a service offered as part of Microsoft's network of web services. The Microsoft Network debuted as an online service and Internet service provider in August 1995. During the 1990s, Microsoft launched Internet Explorer as a bundled part of its operating system and software products. MSN Search first launched in 1998, displaying search results from Inktomi, and later blended in results from LookSmart. For a short time in 1999, AltaVista search results were used instead of Inktomi's. In 2004, MSN Search began using its own built-in search results, and MSNBot has continually crawled the web since then. Today, image search is powered by Picsearch. MSN Search was renamed Windows Live in 2006.

History Search Engines - Google

Google – Born in 1997

Google was founded in 1998, having begun as another school project at Stanford University in California. In January 1996, Stanford PhD students Larry Page and Sergey Brin began researching the concept of a search engine based on relevancy ranking. Page and Brin believed that search engines should analyze and rank websites based on the relationships between pages, not merely on the number of times search terms appeared on them. Accordingly, Page and Brin developed a search engine nicknamed "BackRub." BackRub checked the number and quality of links pointing back to websites in order to estimate the value of a website. Brin and Page's research eventually led them to develop the trademarked PageRank™ link analysis algorithm that Google's search engine would use to assign a numerical weighting to hyperlinked documents.

In 2000, Google replaced Inktomi as the provider of search results to Yahoo!, and later to AOL and Netscape. Even after Yahoo! broke away from Google in 2004, Google's market share continued to grow, coming to account for about 70 percent of all web searches.

History Search Engines - Ask Jeeves

Ask Jeeves (now Ask) – Born in 1997

Ask was developed in 1996 by Garrett Gruener and David Warthen and launched in 1997 as Ask Jeeves. In 2006, the "Jeeves" name was dropped as the site revamped its image following its 2005 purchase by Barry Diller's InterActiveCorp (IAC). In its original form as Ask Jeeves, human editors listed the prominent sites along with paid listings and results pulled from partner sites. Following the acquisitions of Direct Hit in 2000 and Teoma in 2001, Ask began developing its own search technology. With financial growth, Ask has acquired other companies, including Excite and iWon.

Today, with emphasis on paid inclusion listings, Ask struggles for market share against Google, Yahoo!, and MSN Search.

History Search Engines - Inktomi

Inktomi – Born in 1996

 
Inktomi was founded in February 1996 by Eric Brewer, an assistant professor of computer science at the University of California, Berkeley, and Paul Gauthier, a graduate student. The two had been involved in a research project, funded by the US government's Advanced Research Projects Agency, on parallel computing that made networked personal computers and workstations function like a supercomputer. Building on their success developing a search tool in the process, they founded the company.

The company was named after a mythical Lakota Indian spider known for cunning rather than brute force. After its formation, with Dave Peterschmidt as CEO, Brewer as chief scientist, and Gauthier as chief technology officer, the company signed its first customer, HotWired, which introduced the search engine HotBot in 1996. HotBot included Inktomi's Audience 1 software, which customized web pages and advertisements according to the user's browser. HotBot evolved through newer versions, including 5.0, released in 1998 on a Windows NT rather than a Unix platform; its interface and server infrastructure were changed to increase usability and offer new features.
Inktomi was not just a search provider; its mission was to build scalable software applications at the core of the Internet. As such, Inktomi's efforts drove many developments in search tools and functionality, as well as in tools for managing Internet traffic generally. For example, in 1997, Inktomi beta-tested web traffic servers that managed network data flow, eliminating bottlenecks and redundant Internet and corporate intranet traffic. The servers used caching to create localized repositories of information, moving information closer to the user rather than relying each time on the Internet's backbone. At the time, between 40 and 80 percent of traffic on the Internet was redundant, so a large market existed for such technology among Internet service providers, network providers, and business enterprises. One large customer that took interest in Inktomi's traffic server technologies was Microsoft, which in 1997 agreed to use Inktomi's traffic server and search engine technology in the Microsoft Network starting in 1998.
Starting in 1998, Inktomi signed many customer deals. America Online and Digex Inc. became licensees of Inktomi's Traffic Server and began using it in their own networks later that year. Inktomi also worked out deals with Digital Equipment Corporation and Intel to port Traffic Server to their Unix and Windows NT platforms. Also, in May 1998, Yahoo! chose Inktomi's search engine technology as its preferred choice. All of these deals buoyed Inktomi's initial public offering in June 1998: the share price doubled on the first day, from $18 to $36, and a month later the stock was trading at $90. By November 1998 it had climbed to $130 a share.
Some other significant events in 1998 included the release of Traffic Server 2.0 with a streaming media cache, more protocols, and other performance and support functions. That same year, Inktomi acquired C2B Technologies for about $90 million, helping development of shopping search abilities and services for customers like Yahoo! and New Media. The shopping engine debuted in the spring of 1999. About the same time, Inktomi acquired ImpulseBuy.net for $110 million, providing a database and more capabilities for merchants. As its ecommerce shopping services expanded in 1999, Inktomi upgraded and released Traffic Server 3.0. Traffic Server 3.0 included more support for operating systems including Windows NT and new application programming interfaces allowing third party providers to provide value-added services. As a result, with Traffic Server 3.0’s release, Inktomi announced partnerships with six service partners.
In August 1999, a secondary stock offering raised $300 million. About the same time, Inktomi purchased WebSpective Software for $106 million. Toward the end of the year, Inktomi released the Traffic Server E5000 and Traffic Server E200. These traffic servers helped corporate networks manage data for users without having to use the entire network and server infrastructure. At the end of 1999, America Online dropped Excite in favor of Inktomi's database to power its search engine. Likewise, MSN dropped AltaVista, which it had switched to earlier in the year, in favor of Inktomi.
In 2000, Inktomi created alliances with several partners to enter the wireless Internet infrastructure market. In June 2000, Inktomi acquired Ultraseek from Infoseek for $344.7 million. Inktomi announced the creation of a 500-million-record search engine database called GEN3. In August 2000, Inktomi formed an alliance with America Online and Adero Inc. to create the Content Bridge distribution network. Content Bridge allowed web producers and hosts of information and ecommerce to pay to have their content pushed to the caching servers of a large network of Internet hosting and delivery providers. Right before Content Bridge was to begin operation in January 2001, Adero backed out and sold its interest to Inktomi for $23.5 million.
In another big business deal in 2000, Inktomi acquired FastForward Networks for $1.3 billion. FastForward Networks dropped its name and became part of Inktomi's media division. FastForward Networks was a software developer for Internet broadcasts, providing support for thousands of simultaneous Internet broadcasts. With this acquisition, Inktomi was able to release a product suite called Media Distribution Network that handled the distribution of streaming media in a network. The Media Distribution Network suite was a good complement to Inktomi's Content Delivery Suite for managing and distributing static content.
In 2001, Inktomi introduced its Search Everywhere solution integrating Inktomi’s various search products. Inktomi also commenced enhancing its search engine software with enterprise-level XML (Extensible Markup Language) and more comprehensive search results including relevance, classification and rankings. Inktomi added a distributed crawling architecture scanning the web more frequently with content blending from separate databases. In February 2001, Inktomi released Traffic Server 4.0 extending the platform to Linux operating systems with increased processing power and performance. In May 2001, Inktomi introduced a pay-for-placement program called Index Connect in which participants could submit meta information about multimedia and other files enabling them to appear in search results.
About the same time, in order to lessen the strain on its ecommerce business, Inktomi sold off its ecommerce division, with its shopping search engine and customer base, to e-centives, an online marketing firm. Meanwhile, Inktomi expanded its content distribution services. In July 2001, Inktomi acquired eScene Networks and its streaming media business. From this acquisition came the Inktomi Media Publisher, which gave businesses the ability to catalog, index, and publish their multimedia content.
Despite successful acquisitions and product releases, Inktomi sustained financial losses during 2001. With the dotcom bubble bust came economic strains to the Internet business sector. As a result, Inktomi cut back its workforce. Inktomi’s stock values decreased, causing a significant loss for the year.
In March 2003, Inktomi was purchased by Yahoo! for $235 million. Inktomi continued to provide results to Yahoo! rival MSN Search. Meanwhile, Google continued to provide results to Yahoo! In February 2004, Yahoo! replaced Google with a search engine based on Inktomi’s technologies.

History Search Engines - AltaVista

AltaVista – Born in 1995

 
AltaVista, meaning "a view from above," grew out of research by scientists at Digital Equipment Corporation's (DEC) Western Research Laboratory in Palo Alto, California, during the spring of 1995. They were trying to showcase their database system running on the Alpha 8400 TurboLaser, which was faster than its competitors. The scientists developed a search tool that could crawl, store, and quickly index every word of all the HTML web pages on the Internet. The new search tool was powerful: in August 1995, it conducted its first full-scale crawl of the web, bringing back about 10 million pages.

The two key scientists involved in AltaVista's development were Louis Monier and Michael Burrows. Louis wrote the crawler (called Scooter) and Michael wrote the indexer. After testing the new search engine with 10,000 DEC employees, AltaVista opened to the public on December 15, 1995, at altavista.digital.com. The back-end processing machines could initially handle 13 million queries per day. With its release, AltaVista became the first searchable full-text database on the World Wide Web with a simple interface. Over 300,000 visitors used the search tool on its first day; the site received 19 million hits per day by the end of 1996 and 80 million per day at the end of 1997.
Ironically, in 1996 AltaVista began exclusively providing search results for Yahoo!, which would one day become AltaVista's owner. In the meantime, DEC was acquired by Compaq for $9.6 billion at the start of 1998. Despite AltaVista's success as a modest search interface, in 1999 Compaq relaunched it as a web portal, again ironically hoping to compete with Yahoo!. It may be argued that AltaVista's portal strategy was a cause of its eventual decline, as well as of the rise of Google. By 2002, efforts had returned to sharpening the quality of the simple search interface.
In June 1999, CMGI, an Internet investment company with 20% ownership in Lycos, agreed to acquire 83 percent of AltaVista. CMGI planned a public offering of AltaVista in April 2000, but cancelled the IPO as the Internet bubble collapsed.
Near the end of 2002, AltaVista became the first Internet search engine to offer image, audio, and video search as part of a new range of multimedia functionalities. Additionally, AltaVista gained recognition for its innovative release of Babel Fish, the web's first multilingual Internet search. Babel Fish could translate words, phrases, or entire websites to and from English, Spanish, French, German, Portuguese, Italian, and Russian. With its advanced multimedia search capabilities and language translation and recognition services, AltaVista was likely the Internet's most technologically advanced search tool of its time.
In February 2003, Overture purchased AltaVista for $140 million, a fraction of AltaVista's $2.3 billion valuation three years earlier. Unfortunately, AltaVista's chances of continuing its innovative heritage under Overture Services, Inc. were quickly ended when Yahoo! purchased Overture at the end of 2003. With this purchase, AltaVista became part of Yahoo!, using the same search index and user interface. In May 2008, AltaVista's Babel Fish translation service was also rebranded under its parent company as Yahoo! Babel Fish.

History Search Engines - Infoseek

Infoseek – Born in 1994

 
Infoseek, also known as the "big yellow," was founded by Steve Kirsch in 1994. At its start in January 1994, Infoseek was a pay-for-use service. The fee was dropped in August 1994, and Infoseek was relaunched as Infoseek Search in February 1995.

Infoseek's position in the search engine world was accelerated in 1995 by a deal with Netscape that made it the default Netscape search engine. In June 1996, Infoseek went public, and by September of the next year it served 7.3 million visitors per month.
Infoseek uniquely featured a very complex system of search modifiers, including Boolean modifiers. In November 1996, Infoseek introduced Ultrasmart/Ultraseek, and it redesigned its website the next year to include channel and directory information. By March 1998, Infoseek included a search page with advanced search techniques. Infoseek also uniquely offered a free web hosting service without advertising and with no limit on the amount of storage space for users.
In the summer of 1998, 43% of Infoseek was bought by Disney. From that point forward, Infoseek was part of the Disney Corporation's vast media business. The deal included Infoseek's acquisition of Starwave Corporation, including ESPN.com and ABCNews.com. Infoseek's technology was then merged with Starwave's to form the Go.com network. By July 1999, Disney had a 72% interest in Go.com, and Disney's media interests included Disney.com, Family.com, ABC.com, ABCNews.com, and ESPN.com.
Aside from its relationship with Disney, Infoseek continued to change. In September 1998, Infoseek began offering a combination of manually-reviewed and traditional web-search results. The next month, Infoseek took over Excite as the default search engine on Microsoft’s WebTV and Infoseek Express, a free software website, was launched. In 1999, Li Yanhong, an Infoseek engineer, moved to Beijing, China and co-founded the search engine Baidu.
In the summer of 2000, Infoseek’s Ultraseek Server software technology was sold to Inktomi and renamed Inktomi Enterprise Search. In December 2002 (prior to Yahoo!’s purchase of Inktomi) Ultraseek was sold to a competitor Verity Inc. Verity re-established the Ultraseek brand name and development of the product until it was acquired by Autonomy PLC in December 2005. Autonomy continues developing and marketing Ultraseek’s site search.
In February 2001, many Infoseek employees tried to collectively buy out Infoseek from Disney when Disney laid off the entire Infoseek staff. Today, the Infoseek.com domain forwards to Go.com and the brand name is unused in North America. Only in Japan (as Infoseek Japan) and Australia (Infoseek Australia) is the Infoseek name used.

History Search Engines - Lycos

Lycos – Born in 1994

Lycos was one of the earliest search engines, developed in 1994 at Carnegie Mellon University by Dr. Michael Mauldin and a team of researchers. The Lycos name came from "Lycosidae," the Latin name for the wolf spiders, which hunt and actively stalk their prey. The company was founded on $2 million in venture capital funding from CMGI and was headed by Bob Davis, who concentrated on building Lycos into an advertising-supported web portal. The company went public in April of the following year with little money in hand. With phenomenal growth in its catalog, Lycos had the largest index at the end of 1996, with 60 million documents. In 1997, Lycos Pro was launched with a new search algorithm, and the company continued to grow. By 1999, Lycos had emerged from a crowded pack to become the most-visited web portal. Over the next few years, Lycos would become one of the most profitable Internet businesses and acquire nearly two dozen high-profile Internet brands.

For example, in February 1998, Lycos acquired Tripod Inc., a website where people built their own web pages. That summer, language search was introduced and the search results pages were redesigned. At this same time, WhoWhere Inc.’s directory services and Mail City’s email services were acquired for $133 million in stock. Toward the end of 1998, Lycos acquired Wired Digital (owner of HotBot) for $83 million.
By 1999, Lycos was one of the most visited search tools on the web and would continue to be involved in new projects and acquisitions. In February that same year, Lycos became USA/Lycos Interactive Networks when USA Networks bought a 61.5% ownership in the company for $18 billion. As a result, Lycos later announced a project to create a search tool to query information on USATODAY.com’s news site. In April 1999, Lycos joined the Open Directory Project run by Netscape. In June 1999, Lycos joined with Intelliseek to provide a directory of over 7,400 databases previously not on the web. In September 1999, Lycos acquired Quote.com, an investment information site and launched the Lycos Zone, an educational website for kids with content filtering. Finally, in December 1999, Lycos invested in FAST search technology which began powering Lycos advanced search technology.
In May 2000, Terra Networks, an Internet arm of the Spanish telecommunications giant Telefónica, purchased Lycos for $5.4 billion, forming a new company, Terra Lycos. The takeover marked a 3,000-fold return on the initial venture capital investment and 20 times the initial public offering value. Lycos remained the name of the US franchise brand; overseas, the company was known as Terra Networks, and founder Bob Davis left the company.
Lycos suffered from the dotcom crash in 2001. In late 2001, Lycos abandoned its own crawler and began serving results exclusively from FAST. In August 2004, Terra sold Lycos to Daum Communications Corporation for $95.4 million. This low number was less than 2% of Terra’s initial investment. By October 2004, the deal was finalized and the company name returned to Lycos.
With new ownership, Lycos refocused on becoming a community destination for broadband entertainment rather than a search portal. In July 2006, Wired News, which had been part of Lycos since the acquisition of Wired Digital in 1998, was sold. The Lycos Finance division, known for Quote.com and RagingBull.com, and its online dating site, Matchmaker.com, were also sold. Lycos also regained ownership of the Lycos trademark from Carnegie Mellon University, allowing it to become Lycos, Inc.
Since 2006, Lycos has introduced Lycos Phone, Lycos Mail, and Lycos MIX. These services and tools combine IM video chat, an mp3 player, unlimited file-size sending and receiving via email, video watching and chat, and social media applications. Lycos remains a top-25 Internet destination, is the 13th largest online property worldwide, and remains a top-5 Internet portal behind Yahoo!, MSN, AOL, and MySpace.

History Search Engines - WebCrawler

WebCrawler – Born in 1994

 WebCrawler was the first search engine to provide full text search. In 1994, Brian Pinkerton, a Computer Science and Engineering student at the University of Washington, used his spare time to create WebCrawler. With WebCrawler, Brian generated a list of the Top 25 websites on March 15, 1994. Only a month later, on April 20, 1994, Brian announced the release of WebCrawler live on the web with a database of 4,000 websites. On June 11, 1994, Brian posted to the Usenet group comp.infosystems.announce that the WebCrawler Index was available for searching. By November 14, 1994, WebCrawler served its one millionth query. By the end of the year, WebCrawler signed two sponsors, DealerNet and Starwave, providing needed capital to finance WebCrawler. A little less than a year later, WebCrawler was fully operating on advertising revenue.

 A young America Online, without access to the web, acquired WebCrawler on June 1, 1995. On September 4, 1995, Spidey was created as WebCrawler’s mascot. On April 1, 1997 (no fooling), WebCrawler was sold by AOL to Excite. Initially, WebCrawler was going to be run by its own dedicated team within Excite, but eventually the two were merged onto the same back end. In 2001, Excite went bankrupt and was purchased by InfoSpace. As part of the agreement, InfoSpace acquired WebCrawler. Today, InfoSpace runs WebCrawler as a meta-search tool blending results from Google, Yahoo!, Live Search (formerly MSN Search), Ask, About.com, MIVA, LookSmart, and others. As for Spidey, he is now purple.

History Search Engines - Yahoo!

Yahoo! - Born in 1994

 
David Filo and Jerry Yang started Yahoo! in 1994. Originally it was a highly regarded directory of sites that were cataloged by human editors. This directory provided an extensive listing of websites supported by a network of regional directories. In 2001, Yahoo! started charging a fee for inclusion in its directory listing. Yahoo!’s action helped control the number of sites listed and helped cover costs with additional revenue.

Initially, Yahoo! used secondary search engine services to support its directory. Partnerships have included agreements with Inktomi and Google. In October 2002, Yahoo shifted to crawler-based listing for its search results. In 2004, Yahoo! purchased Overture’s pay-per-click service, which had only months earlier purchased AltaVista and AlltheWeb, and Inktomi’s search database. With these acquisitions, Yahoo! combined these tools to create its own search index. Today, Overture is renamed Yahoo! Search Marketing and provides paid search advertising revenue. The Yahoo! Directory remains one of the top indexes powering search listings.

History Search Engines - Excite

Excite – Born in 1993

 Excite was born in February 1993 as a university project called Architext, involving six undergraduate students at Stanford seeking to use statistical analysis of word relationships to improve the relevancy of searches on the Internet. This school project eventually led to Excite’s commercial release as a crawling search engine at the end of 1995. With solid growth in 1996, Excite purchased WebCrawler and Magellan. Toward the end of the 1990s, Excite partnered with MSN and Netscape to provide search services. In 1999, Excite was sold to broadband provider @Home.com (later becoming Excite@Home) as part of a $6.7 billion merger after its traffic started to decline following the release of Google in 1998. With significant debts, Excite@Home filed for bankruptcy in October 2001 and sold its high-speed network to AT&T for $307 million. A month later, InfoSpace made a $10 million bid to buy Excite@Home’s assets, including domain names and trademarks, from bankruptcy court. InfoSpace’s offer was accepted, and it subsequently powered the Excite web site and sold portal components to iWon. InfoSpace’s Dogpile crawler replaced Excite’s, making Dogpile’s and Excite’s search results the same. Both Excite and Dogpile are also powered by LookSmart’s directory, except that Dogpile includes a number of other InfoSpace directories. Also as part of the deal, InfoSpace acquired rights to WebCrawler. Ask Jeeves (now Ask.com) purchased the Excite.com portal in 2004. Now, Excite offers search results through a metasearch tool combining results from pay-per-click and natural search tools.

How Do Search Engines Work

Without sophisticated search engines, it would be virtually impossible to locate anything on the Web without knowing a specific URL.

Search engines are the key to finding specific information on the vast expanse of the World Wide Web. Without sophisticated search engines, it would be virtually impossible to locate anything on the Web without knowing a specific URL. But do you know how search engines work? And do you know what makes some search engines more effective than others? When people use the term search engine in relation to the Web, they are usually referring to the actual search forms that search through databases of HTML documents, initially gathered by a robot.
There are basically three types of search engines: those that are powered by robots (called crawlers, ants, or spiders), those that are powered by human submissions, and those that are a hybrid of the two.
Crawler-based search engines are those that use automated software agents (called crawlers) that visit a Web site, read the information on the actual site, read the site's meta tags, and follow the links that the site connects to, indexing all linked Web sites as well. The crawler returns all that information to a central repository, where the data is indexed. The crawler will periodically return to the sites to check for any information that has changed; the frequency with which this happens is determined by the administrators of the search engine.
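As a rough illustration, the crawl-and-index loop described above might look like the following sketch. The "web" here is an in-memory dictionary rather than live HTTP, and the URLs, page contents, and parser are all invented for the example:

```python
from html.parser import HTMLParser

# A tiny in-memory "web": URL -> HTML. A real crawler would fetch over HTTP.
PAGES = {
    "http://a.example": '<meta name="keywords" content="ants"><a href="http://b.example">b</a>',
    "http://b.example": '<a href="http://a.example">a</a> spiders crawl',
}

class LinkAndMetaParser(HTMLParser):
    """Collects <a href> links and <meta> keywords as the crawler reads a page."""
    def __init__(self):
        super().__init__()
        self.links, self.keywords = [], []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        if tag == "meta" and attrs.get("name") == "keywords":
            self.keywords.append(attrs.get("content", ""))

def crawl(start):
    """Visit pages breadth-first, returning a central index of what was found."""
    index, queue, seen = {}, [start], set()
    while queue:
        url = queue.pop(0)
        if url in seen or url not in PAGES:
            continue
        seen.add(url)
        parser = LinkAndMetaParser()
        parser.feed(PAGES[url])
        index[url] = {"keywords": parser.keywords, "links": parser.links}
        queue.extend(parser.links)  # follow links so linked sites get indexed too
    return index

index = crawl("http://a.example")
```

A real crawler adds politeness delays, robots.txt handling, and revisit scheduling on top of this basic loop.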

Human-powered search engines rely on humans to submit information that is subsequently indexed and catalogued. Only information that is submitted is put into the index.
In both cases, when you query a search engine to locate information, you're actually searching through the index that the search engine has created; you are not actually searching the Web. These indices are giant databases of information that is collected, stored, and subsequently searched. This explains why a search on a commercial search engine, such as Yahoo! or Google, will sometimes return results that are, in fact, dead links. Since the search results are based on the index, if the index hasn't been updated since a Web page became invalid, the search engine treats the page as still an active link even though it no longer is. It will remain that way until the index is updated.
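The stored index that queries actually run against is typically an inverted index: a map from each term to the pages that contain it. A toy sketch (the document names and contents are invented):

```python
# Two "crawled" documents; a query searches this stored structure, not the live Web.
docs = {
    "page1.html": "search engines index the web",
    "page2.html": "spiders crawl the web",
}

# Build the inverted index: term -> set of pages containing that term.
inverted = {}
for url, text in docs.items():
    for term in text.split():
        inverted.setdefault(term, set()).add(url)

def lookup(term):
    """Return the pages listed for a term, or an empty list if it was never indexed."""
    return sorted(inverted.get(term, set()))
```

If "page2.html" were deleted from the Web after the crawl, `lookup` would still return it, which is exactly the dead-link behavior described above.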
So why will the same search on different search engines produce different results? Part of the answer to that question is because not all indices are going to be exactly the same. It depends on what the spiders find or what the humans submitted. But more important, not every search engine uses the same algorithm to search through the indices. The algorithm is what the search engines use to determine the relevance of the information in the index to what the user is searching for.
One of the elements that a search engine algorithm scans for is the frequency and location of keywords on a Web page. Those with higher frequency are typically considered more relevant. But search engine technology is becoming sophisticated in its attempt to discourage what is known as keyword stuffing, or spamdexing.
Another common element that algorithms analyze is the way that pages link to other pages in the Web. By analyzing how pages link to each other, an engine can both determine what a page is about (if the keywords of the linked pages are similar to the keywords on the original page) and whether that page is considered "important" and deserving of a boost in ranking. Just as the technology is becoming increasingly sophisticated to ignore keyword stuffing, it is also becoming more savvy to Web masters who build artificial links into their sites in order to build an artificial ranking.
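The link-analysis idea described above is essentially what Google's PageRank formalized: a page linked to by important pages becomes important itself. A simplified power-iteration sketch over a made-up three-page graph (the 0.85 damping factor is the commonly cited default, assumed here):

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank over a dict: page -> list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / len(pages)
            else:  # each page splits its rank among the pages it links to
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# "a" is linked to by both "b" and "c", so it should end up ranked highest.
graph = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
ranks = pagerank(graph)
```

The "savvy to artificial links" point then becomes a matter of discounting or ignoring edges that look manufactured before running this kind of computation.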
Did You Know...
The first tool for searching the Internet, created in 1990, was called "Archie". It downloaded directory listings of all files located on public anonymous FTP servers, creating a searchable database of filenames. A year later, "Gopher" was created; it indexed plain text documents. "Veronica" and "Jughead" came along to search Gopher's index systems. The first actual Web search engine, called "Wandex", was developed by Matthew Gray in 1993.
Key Terms To Understanding Web Search Engines
spider trap
A condition of dynamic Web sites in which a search engine’s spider becomes trapped in an endless loop of code.
search engine
A program that searches documents for specified keywords and returns a list of the documents where the keywords were found.
meta tag
A special HTML tag that provides information about a Web page.
deep link
A hyperlink either on a Web page or in the results of a search engine query to a page on a Web site other than the site’s home page.
robot
A program that runs automatically without human intervention.

Why did Search Engines start

The very first tool used for searching on the Internet was called "Archie". (The name stands for "archives" without the "v", not the kid from the comics). It was created in 1990 by Alan Emtage, a student at McGill University in Montreal. The program downloaded the directory listings of all the files located on public anonymous FTP (File Transfer Protocol) sites, creating a searchable database of filenames.
While Archie indexed computer files, "Gopher" indexed plain text documents. Gopher was created in 1991 by Mark McCahill at the University of Minnesota. (The program was named after the school's mascot). Because these were text files, most of the Gopher sites became Web sites after the creation of the World Wide Web.
Two other programs, "Veronica" and "Jughead," searched the files stored in Gopher index systems. Veronica (Very Easy Rodent-Oriented Net-wide Index to Computerized Archives) provided a keyword search of most Gopher menu titles in the entire Gopher listings. Jughead (Jonzy's Universal Gopher Hierarchy Excavation And Display) was a tool for obtaining menu information from various Gopher servers.

I, Robot

In 1993, MIT student Matthew Gray created what is considered the first robot, called World Wide Web Wanderer. It was initially used for counting Web servers to measure the size of the Web. The Wanderer ran monthly from 1993 to 1995. Later, it was used to obtain URLs, forming the first database of Web sites called Wandex.
According to The Web Robots FAQ, "A robot is a program that automatically traverses the Web's hypertext structure by retrieving a document, and recursively retrieving all documents that are referenced. Web robots are sometimes referred to as web wanderers, web crawlers, or spiders. These names are a bit misleading as they give the impression the software itself moves between sites like a virus; this is not the case, a robot simply visits sites by requesting documents from them."
Initially, the robots created a bit of controversy as they used large amounts of bandwidth, sometimes causing the servers to crash. The newer robots have been tweaked and are now used for building most search engine indexes.
In 1993, Martijn Koster created ALIWEB (Archie-Like Indexing of the Web). ALIWEB allowed users to submit their own pages to be indexed. According to Koster, "ALIWEB was a search engine based on automated meta-data collection, for the Web."

Enter the Accountants

Eventually, as it seemed that the Web might be profitable, investors started to get involved and search engines became big business.
Excite was introduced in 1993 by six Stanford University students. It used statistical analysis of word relationships to aid in the search process. Within a year, Excite was incorporated and went online in December 1995. Today it's a part of the AskJeeves company.
EINet Galaxy (Galaxy) was established in 1994 as part of the MCC Research Consortium at the University of Texas, in Austin. It was eventually purchased from the University and, after being transferred through several companies, is a separate corporation today. It was created as a directory, containing Gopher and telnet search features in addition to its Web search feature.
Jerry Yang and David Filo created Yahoo in 1994. It started out as a listing of their favorite Web sites. What made it different was that each entry, in addition to the URL, also had a description of the page. Within a year the two received funding and Yahoo, the corporation, was created.
Later in 1994, WebCrawler was introduced. It was the first full-text search engine on the Internet; the entire text of each page was indexed for the first time.
Lycos introduced relevance retrieval, prefix matching, and word proximity in 1994. It was a large search engine, indexing over 60 million documents in 1996; the largest of any search engine at the time. Like many of the other search engines, Lycos was created in a university atmosphere at Carnegie Mellon University by Dr. Michael Mauldin.
Infoseek went online in 1995. It didn't really bring anything new to the search engine scene. It is now owned by the Walt Disney Internet Group and the domain forwards to Go.com.
AltaVista also began in 1995. It was the first search engine to allow natural language inquiries and advanced searching techniques. It also provided a multimedia search for photos, music, and videos.
Inktomi started in 1996 at UC Berkeley. In June of 1999 Inktomi introduced a directory search engine powered by "concept induction" technology. "Concept induction," according to the company, "takes the experience of human analysis and applies the same habits to a computerized analysis of links, usage, and other patterns to determine which sites are most popular and the most productive." Inktomi was purchased by Yahoo in 2003.
AskJeeves and Northern Light were both launched in 1997.
Google was launched in 1997 by Sergey Brin and Larry Page as part of a research project at Stanford University. It uses inbound links to rank sites. In 1998 MSN Search and the Open Directory were also started. The Open Directory, according to its Web site, "is the largest, most comprehensive human-edited directory of the Web. It is constructed and maintained by a vast, global community of volunteer editors." It seeks to become the "definitive catalog of the Web." The entire directory is maintained by human input.

Where did Search Engines Start

In 1957, after the U.S.S.R. launched Sputnik (the first artificial earth satellite), the United States created the Advanced Research Projects Agency (ARPA) as a part of the Department of Defense. Its purpose was to establish U.S. leadership in science and technology applicable to the military.
Part of ARPA's work was to prepare a plan for the United States to maintain control over its missiles and bombers after a nuclear attack. Through this work the ARPANET — a.k.a. the Internet — was born. The first ARPANET connections were made in 1969 and in October 1972 ARPANET went 'public.'
Almost 20 years after the creation of the Internet, the World Wide Web was born to allow the public exchange of information on a global basis. It was built on the backbone of the Internet.
According to Tim Berners-Lee, creator of the World Wide Web, "The Internet [Net] is a network of networks. Basically it is made from computers and cables.... The [World Wide] Web is an abstract imaginary space of information. On the Net, you find computers — on the Web, you find documents, sounds, videos, ... information. On the Net, the connections are cables between computers; on the Web, connections are hypertext links. The Web exists because of programs which communicate between computers on the Net. The Web could not be without the Net. The Web made the Net useful because people are really interested in information and don't really want to have [to] know about computers and cables."
With information being shared worldwide, there was eventually a need to find that information in an orderly manner.

When did Internet Search Engines start

The goal of all search engines is to find and organize distributed data found on the Internet. Before search engines were developed, the Internet was a collection of File Transfer Protocol (FTP) sites in which users would navigate to find specific shared files. As the central list of web servers joining the Internet grew, and the World Wide Web became the interface of choice for accessing the Internet, the need for finding and organizing the distributed data files on FTP web servers grew. Search engines began due to this need to more easily navigate the web servers and files on the Internet.
The first search engine was developed as a school project by Alan Emtage, a student at McGill University in Montreal. Back in 1990, Alan created Archie, an index (or archive) of computer files stored on anonymous FTP sites in a given network of computers ("Archie" rather than "Archives" fit name length parameters, and thus it became the name of the first search engine). In 1991, Mark McCahill, a student at the University of Minnesota, effectively used a hypertext paradigm to create Gopher, which also searched for plain text references in files.
Archie’s and Gopher’s searchable databases did not have the natural-language keyword capabilities used in modern search engines. Rather, in 1993 the graphical Mosaic web browser improved upon Gopher’s primarily text-based interface. At about the same time, Matthew Gray developed Wandex, the first search engine in the form we know search engines today. Wandex’s technology was the first to crawl the web, indexing and searching its catalog of indexed pages. Another significant development in search engines came in 1994, when WebCrawler’s search engine began indexing the full text of web sites instead of just web page titles.
While both web directories and search engines gained popularity in the 1990s, search engines developed a life of their own becoming the preferred method of Internet search. For example, the major search engines found in use today originated in development between 1993 and 1998.

What happened to Search Engines



In season 4 of the sitcom “Parks and Recreation” there was a running joke about all the citizens of Pawnee, Ind., still using the AltaVista search engine. AltaVista was one of the earliest portals to help computer users find stuff on the Internet.
Whatever happened to it and all those other early search engines? In a word, Google is what happened. Before Google there were many portals vying for consumer eyeballs.
According to the 1998 book “The AltaVista Search Revolution,” researchers at Digital Equipment Corporation developed the search engine in the early 1990s as a way to make it easier to find files on the public network.
At about that time the Internet started to go mainstream, with consumers using their Netscape browser – which cost $40, by the way – to access AltaVista and the wonders of the World Wide Web.
Today, no matter what search engine you use, you won't find AltaVista. It was purchased by Yahoo! in 2003, which retained the name for a while before finally giving up. Today, when you type in the AltaVista URL you are taken directly to Yahoo!.

Lycos

While AltaVista was a casualty of Google some other early search engines are still around in one form or another. Remember Lycos?
Lycos started life as a research project at Carnegie Mellon University in 1994 before attracting some venture capital. The company went public two years later, and by 1998 it was considered one of the first companies to become profitable on the Internet.
In 2000 it sold to a Spanish company for $12.5 billion and 4 years later changed hands again, this time for $95 million.
Lycos sold again in 2010 for just $35 million but then refocused its business. Today Lycos provides a network of email, web hosting, social networking, and entertainment websites. It also claims an average of more than 15 million unique visitors in the U.S.

Excite

Excite launched in 1995 as both a search engine and content aggregator. The company went public in 1996 and was a hot Internet stock for the rest of the decade.
There is an urban legend in the tech world that Google creators Sergey Brin and Larry Page once offered to sell their search engine to Excite for $1 million, but Excite wasn't interested.
Over the years that followed, Excite changed hands a number of times as it quickly lost market share but remains today pretty much the site it was in the 1990s, offering email and information content, as well as search.
At one point Excite was purchased by AskJeeves.com, another popular search site in its day. Today, Ask Jeeves is simply Ask.com, which has an impressive Alexa ranking of 97 in the U.S.

Ask Jeeves

Ask Jeeves came up with a different form of search, allowing users to pose questions rather than entering just a string of search terms. The company says it now reaches 100 million global users each month, pretty impressive in a world dominated by Google.
Yahoo! is the other pre-Google search engine that has managed to not only survive but carve out a significant place for itself in the online world. It remains a profitable company, due in part to its astute investment in the Chinese online retailer, Alibaba.

Google

The owners of Google soon realized that their company was worth a lot more than $1 million, especially when it experienced huge growth in the early 2000s. It went public in 2004 and has never looked back.
It is believed to operate more than one million servers in data centers around the world and to process over one billion search requests a day. It holds an Alexa ranking of 1, both in the U.S. and globally and is now involved in a lot more than search – everything from robots to driverless cars.
What did Google bring to the table in the early 2000s that its competitors didn't? A lot has been written about its search algorithms but something else may have helped too.
Remember that in the early 2000s most consumers were still on dial-up connections. Google's page was and is extremely simple – its logo and a search box. It took no time to load, unlike its competitors, whose pages contained lots of text and data-rich graphics, not to mention ads.
Today the search wars are thought to be between Google, Yahoo! and Bing, Microsoft's entry into the fray. But it might be fun to get reacquainted with the search engines from yesteryear, which present a refreshing option when exploring what we once so quaintly called the World Wide Web.

 


Who created Search Engine

The concept of hypertext and a memory extension really came to life in July 1945 when, after the scientific camaraderie that was a side effect of WWII, Vannevar Bush's As We May Think was published in The Atlantic Monthly.

He urged scientists to work together to help build a body of knowledge for all mankind. Here are a few selected sentences and paragraphs that drive his point home.
Specialization becomes increasingly necessary for progress, and the effort to bridge between disciplines is correspondingly superficial.
The difficulty seems to be, not so much that we publish unduly in view of the extent and variety of present day interests, but rather that publication has been extended far beyond our present ability to make real use of the record. The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.
A record, if it is to be useful to science, must be continuously extended, it must be stored, and above all it must be consulted.
He not only was a firm believer in storing data, but he also believed that if the data source was to be useful to the human mind we should have it represent how the mind works to the best of our abilities.
Our ineptitude in getting at the record is largely caused by the artificiality of the systems of indexing. ... Having found one item, moreover, one has to emerge from the system and re-enter on a new path.
The human mind does not work this way. It operates by association. ... Man cannot hope fully to duplicate this mental process artificially, but he certainly ought to be able to learn from it. In minor ways he may even improve, for his records have relative permanency.
Presumably man's spirit should be elevated if he can better review his own shady past and analyze more completely and objectively his present problems. He has built a civilization so complex that he needs to mechanize his records more fully if he is to push his experiment to its logical conclusion and not merely become bogged down part way there by overtaxing his limited memory.
He then proposed the idea of a virtually limitless, fast, reliable, extensible, associative memory storage and retrieval system. He named this device a memex.

Gerard Salton (1960s - 1990s):

Gerard Salton, who died on August 28, 1995, was the father of modern search technology. His teams at Harvard and Cornell developed the SMART information retrieval system. Salton’s Magic Automatic Retriever of Text included important concepts like the vector space model, Inverse Document Frequency (IDF), Term Frequency (TF), term discrimination values, and relevancy feedback mechanisms.
He authored a 56-page book called A Theory of Indexing which does a great job explaining many of his tests, upon which search is still largely based. Tom Evslin posted a blog entry about what it was like to work with Mr. Salton.
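Salton's vector space model with TF and IDF weighting can be illustrated in a few lines. The documents and tokenizer below are invented, and this is only the textbook form of the idea, not the SMART system itself:

```python
import math

# Toy corpus: each document becomes a TF-IDF weighted vector over the vocabulary.
docs = ["information retrieval system", "retrieval of indexed text", "cooking recipes"]

def tokenize(text):
    return text.split()

vocab = sorted({t for d in docs for t in tokenize(d)})
N = len(docs)

def idf(term):
    """Inverse document frequency: rare terms weigh more than common ones."""
    df = sum(term in tokenize(d) for d in docs)
    return math.log(N / df) if df else 0.0

def tfidf_vector(text):
    """Term frequency times IDF, one component per vocabulary term."""
    tokens = tokenize(text)
    return [tokens.count(t) * idf(t) for t in vocab]

def cosine(u, v):
    """Cosine similarity between two vectors; 0 when they share no terms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# A query is just another vector; the best document maximizes cosine similarity.
query = tfidf_vector("information retrieval")
scores = [cosine(query, tfidf_vector(d)) for d in docs]
best = scores.index(max(scores))
```

The relevancy-feedback mechanisms Salton studied then amount to nudging the query vector toward documents the user marks as relevant.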

Ted Nelson:

Ted Nelson created Project Xanadu in 1960 and coined the term hypertext in 1963. His goal with Project Xanadu was to create a computer network with a simple user interface that solved many social problems like attribution.
While Ted was against complex markup code, broken links, and many other problems associated with traditional HTML on the WWW, much of the inspiration to create the WWW was drawn from Ted's work.
There is still conflict surrounding the exact reasons why Project Xanadu failed to take off.
Wikipedia offers background and many resource links about Mr. Nelson.

Advanced Research Projects Agency Network:

ARPANet is the network which eventually led to the Internet. Wikipedia has a great background article on ARPANet, and Google Video has an interesting free video about ARPANet from 1972.

Archie (1990):


The first few hundred web sites began in 1993, and most of them were at colleges. But long before most of them existed came Archie, the first search engine, created in 1990 by Alan Emtage, a student at McGill University in Montreal. The original intent of the name was "archives," but it was shortened to Archie.
Archie helped solve this data scatter problem by combining a script-based data gatherer with a regular expression matcher for retrieving file names matching a user query. Essentially, Archie became a database of file names which it would match against users' queries.
Bill Slawski has more background on Archie here.
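A guessed miniature of that gatherer-plus-matcher design follows; the hosts, paths, and index format are invented (real Archie assembled its lists by script from anonymous FTP servers):

```python
import re

# A pre-gathered list of (host, path) pairs, standing in for Archie's database.
file_index = [
    ("ftp.mcgill.ca", "/pub/tools/archie-1.0.tar.Z"),
    ("ftp.funet.fi", "/pub/gnu/emacs-18.59.tar.gz"),
    ("ftp.uu.net", "/doc/rfc/rfc1035.txt"),
]

def search(pattern):
    """Return (host, path) pairs whose file name matches the regex query."""
    rx = re.compile(pattern)
    return [(host, path)
            for host, path in file_index
            if rx.search(path.rsplit("/", 1)[-1])]

matches = search(r"emacs.*\.tar")
```

The point is that Archie searched file names only; nothing about the files' contents was indexed.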

Veronica & Jughead:

As word of mouth about Archie spread, it started to become word of computer, and Archie gained such popularity that the University of Nevada System Computing Services group developed Veronica. Veronica served the same purpose as Archie, but it worked on plain text files. Soon another user interface named Jughead appeared with the same purpose as Veronica; both of these were used for files sent via Gopher, which was created as an Archie alternative by Mark McCahill at the University of Minnesota in 1991.

File Transfer Protocol:

Tim Berners-Lee was at work by this point, but there was no World Wide Web yet. The main way people shared data back then was via File Transfer Protocol (FTP).
If you had a file you wanted to share, you would set up an FTP server. If someone was interested in retrieving the data, they could do so using an FTP client. This process worked effectively in small groups, but the data became fragmented as quickly as it was collected.

Tim Berners-Lee & the WWW (1991):

 

While an independent contractor at CERN from June to December 1980, Berners-Lee proposed a project based on the concept of hypertext, to facilitate sharing and updating information among researchers. To demonstrate it, he built a prototype system named Enquire; Robert Cailliau later helped him develop the World Wide Web proposal.
After leaving CERN in 1980 to work at John Poole's Image Computer Systems Ltd., he returned in 1984 as a fellow. In 1989, CERN was the largest Internet node in Europe, and Berners-Lee saw an opportunity to join hypertext with the Internet. In his words, "I just had to take the hypertext idea and connect it to the TCP and DNS ideas and — ta-da! — the World Wide Web". He used similar ideas to those underlying the Enquire system to create the World Wide Web, for which he designed and built the first web browser and editor (called WorldWideWeb and developed on NeXTSTEP) and the first Web server called httpd (short for HyperText Transfer Protocol daemon).
The first Web site built was at http://info.cern.ch/ and was first put online on August 6, 1991. It provided an explanation of what the World Wide Web was, how one could obtain a browser, and how to set up a Web server. It was also the world's first Web directory, since Berners-Lee maintained a list of other Web sites apart from his own.
In 1994, Berners-Lee founded the World Wide Web Consortium (W3C) at the Massachusetts Institute of Technology.

Parts of a Search Engine:

Search engines consist of three main parts. Search engine spiders follow links on the web to request pages that are either not yet indexed or have been updated since they were last indexed. These pages are crawled and are added to the search engine index (also known as the catalog). When you search using a major search engine, you are not actually searching the web, but are searching a slightly outdated index of content which roughly represents the content of the web. The third part of a search engine is the search interface and relevancy software. For each search query, search engines typically do most or all of the following:
  • Accept the user's input query, checking it against any advanced syntax and checking whether the query is misspelled in order to recommend more popular or correct spelling variations.
  • Check to see if the query is relevant to other vertical search databases (such as news search or product search) and place relevant links to a few items from that type of search query near the regular search results.
  • Gather a list of relevant pages for the organic search results. These results are ranked based on page content, usage data, and link citation data.
  • Request a list of relevant ads to place near the search results.
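The steps above can be sketched as a toy per-query pipeline. The spell checker, vertical triggers, and ranking heuristic here are all simplified stand-ins invented for the example, not how any real engine works:

```python
# Stand-in data: known terms for spell checking, vertical databases, and an index.
KNOWN_TERMS = {"search", "engines", "news", "history"}
VERTICALS = {"news": "News results", "shopping": "Product results"}

def suggest_spelling(term):
    """Naive one-substitution spell check against known terms (illustrative only)."""
    for known in KNOWN_TERMS:
        if len(known) == len(term) and sum(a != b for a, b in zip(known, term)) == 1:
            return known
    return term

def handle_query(query, index):
    # 1. Accept the query and correct likely misspellings.
    terms = [suggest_spelling(t) for t in query.lower().split()]
    # 2. Check whether any term triggers a vertical search database.
    verticals = [VERTICALS[t] for t in terms if t in VERTICALS]
    # 3. Rank organic results by how many query terms each page contains.
    pages = {url for t in terms for url in index.get(t, [])}
    ranked = sorted(pages, key=lambda u: -sum(u in index.get(t, []) for t in terms))
    return {"terms": terms, "verticals": verticals, "results": ranked}

index = {"search": ["a.html", "b.html"], "engines": ["a.html"], "news": ["c.html"]}
out = handle_query("searsh engines news", index)
```

A production engine would also fold in usage data, link citation data, and an ad request at step 3, as the list above notes.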
Searchers generally tend to click mostly on the top few search results, as noted in this article by Jakob Nielsen, and backed up by this search result eye tracking study.

Want to learn more about how search engines work?

Types of Search Queries:

Andrei Broder authored A Taxonomy of Web Search [PDF], which notes that most searches fall into the following 3 categories:
  • Informational - seeking static information about a topic
  • Transactional - shopping at, downloading from, or otherwise interacting with the result
  • Navigational - send me to a specific URL

Improve Your Searching Skills:

Want to become a better searcher? Most large scale search engines offer:
  • Advanced search pages, which help searchers refine their queries to request files which are newer or older, local in nature, from specific domains, published in specific formats, or refined in other ways; for example, the ~ character means 'related to' in Google.
  • Vertical search databases which may help structure the information index or limit the search index to a more trusted or better structured collection of sources, documents, and information.
Nancy Blachman's Google Guide offers searchers free Google search tips, and Greg R. Notess's Search Engine Showdown offers a search engine features chart.
There are also many popular smaller vertical search services. For example, Del.icio.us allows you to search URLs that users have bookmarked, and Technorati allows you to search blogs.

World Wide Web Wanderer:

Soon the web's first robot came. In June 1993, Matthew Gray introduced the World Wide Web Wanderer. He initially wanted to measure the growth of the web and created this bot to count active web servers. He soon upgraded the bot to capture actual URLs. His database became known as the Wandex.
The Wanderer was as much of a problem as it was a solution because it caused system lag by accessing the same page hundreds of times a day. It did not take long for him to fix this software, but people started to question the value of bots.