COMPARATIVE ANALYSIS OF OUR IMPLEMENTATION AND THE EXISTING DEEP WEB


CHAPTER ONE
1.0     INTRODUCTION
The volume of information on the web is already vast and is increasing at a very fast rate according to Deepweb.com [1]. The Deep Web is a vast repository of web pages, usually generated by database-driven websites, that are available to web users yet hidden from traditional search engines. The crawler, the computer program that searches the Internet for newly accessible information to be added to the index examined by a standard search engine [2], cannot reach most of the pages created on-the-fly in dynamic sites such as e-commerce, news and major content sites [1].
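To illustrate why such pages remain hidden, the sketch below is offered purely for illustration; it assumes Python with the third-party requests and beautifulsoup4 packages and is not the implementation developed in this work. It shows a simple link-following crawler: because it discovers pages only through hyperlinks, any page generated on-the-fly in response to a search form or database query never enters its index.

    # Minimal sketch of a link-following crawler (illustrative only).
    # Assumes the third-party 'requests' and 'beautifulsoup4' packages.
    from collections import deque
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    def crawl(seed_url, max_pages=50):
        seen, queue, index = set(), deque([seed_url]), {}
        while queue and len(index) < max_pages:
            url = queue.popleft()
            if url in seen:
                continue
            seen.add(url)
            try:
                page = requests.get(url, timeout=10)
            except requests.RequestException:
                continue
            index[url] = page.text  # "index" the static page content
            soup = BeautifulSoup(page.text, "html.parser")
            # Follow hyperlinks only; <form> elements are ignored, so any
            # page that exists only as a response to a form query is missed.
            for link in soup.find_all("a", href=True):
                queue.append(urljoin(url, link["href"]))
        return index

Production crawlers are far more sophisticated, but they share the same fundamental limitation: content that exists only as the response to a query is invisible to them.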
According to a study by Bright Planet [3], the deep web is estimated to be up to 550 times larger than the ‘surface web’ accessible through traditional search engines, and over 200,000 database-driven websites are affected (i.e. not accessible through traditional search engines). Sherman & Price [4] estimate that the number of quality pages in the deep web is 3 to 4 times greater than the number of pages accessible through search engines such as Google, About and Yahoo. While the actual figures are debatable, they make it clear that the deep web is far larger than the surface web and is growing at a much faster pace [1].
In simplified terms, the web consists of two parts: the surface Web and the deep Web (also called the invisible Web or hidden Web). The deep Web came into public awareness only recently, with the publication of the landmark book by Sherman & Price [4], “The Invisible Web: Uncovering Information Sources Search Engines Can’t See”. Since then, many books, papers and websites have emerged to help explore this vast landscape further, and these also deserve the reader’s attention.
 
 

1.1     STATEMENT OF THE PROBLEM

Most people access Web content with surface search engines, yet an estimated 99% of Web content is not accessible through them.
A complete approach to conducting research on the Web combines surface search engines with deep web databases. However, while most Internet users are skilled in at least elementary use of search engines, the skill required to access the deep web is limited to a much smaller population. It is desirable that most Web users be enabled to access most Web content. This work therefore seeks to address how the Deep Web affects search engines, websites and searchers, and to proffer solutions.
 

1.2     OBJECTIVES OF THE STUDY

The broad objective of this study is to aid IT researchers in finding quality information in less time. The specific objectives of the project work are as follows:

  1. To describe the Deep Web and the Surface Web
  2. To compare the Deep Web and the Surface Web
  3. To develop a piece of software to implement a Deep Web search technique (a brief illustrative sketch of such a technique follows this list)
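
As an indication of what such software might do, the sketch below queries a database-driven site's search interface directly and extracts the dynamically generated results, one common Deep Web search technique. It assumes Python with the requests and beautifulsoup4 packages; the URL, query parameter name and result markup are hypothetical placeholders, not the system developed in the later chapters of this work.

    # Illustrative sketch of querying a deep web database directly.
    # The URL, the "q" parameter and the ".result" markup are hypothetical.
    import requests
    from bs4 import BeautifulSoup

    def query_deep_web_database(search_url, query):
        # Pages like this are produced only in response to a query,
        # so a link-following crawler never sees them.
        response = requests.get(search_url, params={"q": query}, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        # Assume each result is a link inside an element with class "result".
        return [(a.get_text(strip=True), a["href"])
                for a in soup.select(".result a[href]")]

    if __name__ == "__main__":
        # Hypothetical example call; a real database-driven site would be used.
        for title, link in query_deep_web_database(
                "https://example.org/search", "deep web"):
            print(title, "->", link)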

 

1.3     SIGNIFICANCE OF THE STUDY

The study of the deep web is necessary because it brings into focus the problems encountered by search engines, websites and searchers. More importantly, the study will provide information on the results of searches made using both surface search engines and deep web search tools. Finally, it presents the deep web not as a substitute for surface search engines, but as a complement to them in a complete search approach that is highly relevant to academia and the general public.
