2,327 research outputs found

    Optimal Algorithms for Crawling a Hidden Database in the Web

    A hidden database refers to a dataset that an organization makes accessible on the web by allowing users to issue queries through a search interface. In other words, data acquisition from such a source does not proceed by following static hyperlinks. Instead, data are obtained by querying the interface and reading the dynamically generated result pages. This, together with other factors such as the interface answering a query only partially, has prevented hidden databases from being crawled effectively by existing search engines. This paper remedies the problem by giving algorithms to extract all the tuples from a hidden database. Our algorithms are provably efficient, namely, they accomplish the task by performing only a small number of queries, even in the worst case. We also establish theoretical results indicating that these algorithms are asymptotically optimal, i.e., it is impossible to improve their efficiency by more than a constant factor. The derivation of our upper and lower bound results reveals significant insight into the characteristics of the underlying problem. Extensive experiments confirm that the proposed techniques work very well on all the real datasets examined. Comment: VLDB201
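
    The abstract does not reproduce the paper's algorithms; the Python sketch below only illustrates the query-and-extract setting it describes. The search(query) helper is a hypothetical stand-in for the web form and is assumed to return at most k matching tuples per query; the naive refinement step is likewise an assumption, not the paper's method.

        # Minimal sketch of the query-and-extract loop behind hidden-database
        # crawling. `search` is a hypothetical stand-in for the search form; it
        # returns at most k matching tuples (rows of terms) per query. The
        # refinement below is naive; the paper's algorithms choose queries far
        # more carefully to stay within a provably small query budget.
        def crawl_hidden_database(search, seed_queries, k):
            """Collect the tuples reachable through a top-k search interface."""
            collected, issued = set(), set()
            queue = [tuple(q) for q in seed_queries]   # each query is a tuple of terms

            while queue:
                query = queue.pop()
                if query in issued:
                    continue
                issued.add(query)

                results = search(query)                # one call to the search form
                collected.update(results)

                if len(results) == k:
                    # The interface answered only partially (top-k overflow), so
                    # refine the query with extra terms drawn from the results.
                    for row in results:
                        for term in row:
                            queue.append(query + (term,))

            return collected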

    A Brief History of Web Crawlers

    Web crawlers visit internet applications, collect data, and learn about new web pages from visited pages. Web crawlers have a long and interesting history. Early web crawlers collected statistics about the web. In addition to collecting statistics about the web and indexing applications for search engines, modern crawlers can be used to perform accessibility and vulnerability checks on an application. The rapid expansion of the web and the complexity added to web applications have made the process of crawling very challenging. Throughout the history of web crawling, many researchers and industrial groups have addressed the different issues and challenges that web crawlers face. Different solutions have been proposed to reduce the time and cost of crawling. Performing an exhaustive crawl remains a challenging problem. Additionally, capturing the model of a modern web application and extracting data from it automatically is another open question. What follows is a brief history of the different techniques and algorithms used from the early days of crawling up to the present. We introduce criteria to evaluate the relative performance of web crawlers. Based on these criteria, we plot the evolution of web crawlers and compare their performance.

    Performance Evaluation of Weighted Greedy Algorithm in Resource Management

    Set covering is a well-studied classical problem with many applications across different fields. More recent work on this problem has taken into account parallel computing architectures, datasets at scale, the properties of the datasets, etc. Within the context of web crawling, where the data follow a lognormal distribution, a weighted greedy algorithm has been proposed in the literature and demonstrated to outperform the traditional one. In the present work, we evaluate the performance of the weighted greedy algorithm on an open-source dataset in the context of resource management. The data are sampled from a road map with 1.9 million nodes. Our research includes three different cost definitions, i.e., location cost, driving cost, and infrastructure cost. We also consider different coverage radii to model possible parameters of the application. Our experimental results show that the weighted greedy algorithm outperforms the greedy algorithm by 8% on average for all three cost definitions.
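
    The abstract does not spell out the weighting rule; a common weighted greedy variant picks, at each step, the subset with the lowest cost per newly covered element. The Python sketch below assumes that rule, and the universe, subsets, and costs in the usage example are illustrative, not data from the paper.

        # Minimal sketch of a cost-weighted greedy heuristic for set covering.
        # Assumption (not from the paper): at each step, pick the subset that
        # minimizes cost per newly covered element.
        def weighted_greedy_cover(universe, subsets, cost):
            """universe: set; subsets: dict name -> set; cost: dict name -> float."""
            uncovered = set(universe)
            chosen = []
            while uncovered:
                best = min(
                    (name for name in subsets if subsets[name] & uncovered),
                    key=lambda name: cost[name] / len(subsets[name] & uncovered),
                )
                chosen.append(best)
                uncovered -= subsets[best]
            return chosen

        # Illustrative usage with made-up data:
        universe = {1, 2, 3, 4, 5}
        subsets = {"a": {1, 2, 3}, "b": {2, 4}, "c": {4, 5}, "d": {5}}
        cost = {"a": 3.0, "b": 1.0, "c": 2.0, "d": 1.0}
        print(weighted_greedy_cover(universe, subsets, cost))  # ['b', 'd', 'a']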

    Crawling Deep Web using a GA-based set covering algorithm

    An ever-increasing amount of information on the web today is available only through search interfaces: users have to type a set of keywords into a search form in order to access the pages of certain web sites. These pages are often referred to as the Hidden Web or the Deep Web. According to recent studies, the content provided by hidden web sites is often of very high quality and can be extremely valuable to many users. This calls for deep web crawlers to excavate the data so that they can be reused, indexed, and searched in an integrated environment. Crawling the deep web is the process of collecting data from search interfaces by issuing queries. It often requires the selection of an appropriate set of queries so that they cover most of the documents in the data source at low cost. This can be modeled as a set covering problem, which has been extensively studied in graph theory. The conventional set covering algorithms, however, do not work well when applied to deep web crawling due to various special features of this application domain. Typically, most set covering algorithms do not take into account the distribution of the elements being covered. For deep web crawling, the sizes of the documents and the document frequencies of the queries follow a power-law distribution. A new GA-based algorithm is introduced in this thesis. It targets deep web crawling of a database with this power-law distribution. The experiments show that it outperforms the straightforward greedy algorithm previously introduced in the literature.
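
    The thesis's GA is not detailed in the abstract; the Python sketch below shows one generic way a genetic algorithm can be applied to query selection for deep web crawling, with a chromosome as a bit vector over candidate queries and a fitness that trades query cost against document coverage. The operators (tournament selection, one-point crossover, bit-flip mutation) and all parameter values are illustrative assumptions, not the algorithm from the thesis.

        import random

        # Generic GA sketch for query selection modeled as set covering.
        # Assumptions (not from the thesis): chromosome = bit vector over the
        # candidate queries; fitness = total query cost plus a penalty for each
        # uncovered document (lower is better). Requires at least two queries.
        def ga_query_selection(all_docs, query_docs, query_cost,
                               pop_size=50, generations=200,
                               mutation_rate=0.02, penalty=100.0, seed=0):
            """all_docs: set of doc ids; query_docs: dict query -> set of doc ids;
            query_cost: dict query -> float. Returns the selected queries."""
            rng = random.Random(seed)
            queries = list(query_docs)
            n = len(queries)

            def fitness(bits):
                covered, cost = set(), 0.0
                for bit, q in zip(bits, queries):
                    if bit:
                        covered |= query_docs[q]
                        cost += query_cost[q]
                return cost + penalty * len(all_docs - covered)

            def tournament(pop):               # best of three random members
                return min(rng.sample(pop, 3), key=fitness)

            pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
            for _ in range(generations):
                new_pop = []
                while len(new_pop) < pop_size:
                    p1, p2 = tournament(pop), tournament(pop)
                    cut = rng.randrange(1, n)  # one-point crossover
                    child = p1[:cut] + p2[cut:]
                    child = [b ^ (rng.random() < mutation_rate) for b in child]
                    new_pop.append(child)
                pop = new_pop
            best = min(pop, key=fitness)
            return [q for bit, q in zip(best, queries) if bit]

    In practice a GA for set covering usually adds elitism and a greedy repair step to keep chromosomes feasible; both are omitted here to keep the sketch short.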

    Variation In Greedy Approach To Set Covering Problem

    The weighted set covering problem is to choose a collection of subsets that covers all the elements of a universal set at the lowest cost. It is a well-studied classical problem with applications in various fields such as machine learning, planning, information retrieval, facility allocation, etc. Deep web crawling refers to the process of gathering documents that have been structured into a data source and can be retrieved through a search interface. Its query selection process calls for an efficient solution to the set covering problem.
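
    For reference, the "lowest cost" objective above is the standard integer program for weighted set covering. Writing U for the universal set, S_1, ..., S_m for the available subsets with costs c_1, ..., c_m, and x_j for the indicator that S_j is chosen, the problem is (in LaTeX notation):

        \min \sum_{j=1}^{m} c_j x_j
        \quad \text{s.t.} \quad \sum_{j \,:\, e \in S_j} x_j \ge 1 \quad \forall e \in U,
        \qquad x_j \in \{0, 1\}, \quad j = 1, \dots, m.

    In the deep web crawling setting described here, the elements e are the documents in the data source, each S_j is the set of documents matched by a candidate query, and c_j is the cost of issuing that query.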