394 research outputs found

    Lumbricus webis: a parallel and distributed crawling architecture for the Italian web

    Web crawlers have become popular tools for gathering large portions of the web for tasks ranging from statistics to structural analysis of the web. Because of the amount of data and the heterogeneity of the tasks to manage, it is essential for crawlers to have a modular and distributed architecture. In this paper we describe Lumbricus webis (L.webis for short), a modular crawling infrastructure built to mine data from the ccTLD .it web domain and the portions of the web reachable from this domain. Its purpose is to support the gathering of advanced statistics and analytic tools on the content of the Italian Web. This paper describes the architectural features of L.webis and its performance. L.webis can currently download a mid-sized ccTLD such as ".it" in about one week.
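
    The abstract describes the architecture only at a high level. As a rough, hypothetical sketch of one way a parallel and distributed crawler can split a national domain such as .it across workers, the Python snippet below hashes each host to a crawler node so that all URLs of one host (and its politeness state) stay on a single node; the names and node count are assumptions, not taken from L.webis.

        # Hypothetical sketch of host-based partitioning in a distributed crawler.
        # Not the actual L.webis design; it only illustrates keeping every URL of
        # one host on one node so per-host politeness is enforced locally.
        import hashlib
        from urllib.parse import urlparse

        NUM_NODES = 8  # assumed number of crawler nodes

        def assign_node(url, num_nodes=NUM_NODES):
            """Map a URL to a crawler node by hashing its host name."""
            host = urlparse(url).netloc.lower()
            digest = hashlib.sha1(host.encode("utf-8")).digest()
            return int.from_bytes(digest[:4], "big") % num_nodes

        # URLs from the same .it host always land on the same node:
        for url in ("http://www.example.it/", "http://www.example.it/p2", "http://altro.example.it/"):
            print(assign_node(url), url)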

    System RDC (Relevant Data Condenser): A Knowledge System for a Cloud-immersed Culture

    Knowledge representation is an area of artificial intelligence that manages knowledge by making it efficiently representable, accessible, and usable, both internally and externally. Among many other things, this management entails the acquisition, translation, longevity, and structural representation of the knowledge being represented. Our world has become immersed in its own knowledge through a plethora of physical repositories, sources, and subjects. This research addresses the ramifications of this immersion by asking how these new characteristics of knowledge affect our computer system specifications and implementations. In this work, we delineate the characteristics of knowledge in the 2010s and present our response to them. The system "Relevant Data Condenser" (RDC) is a cloud-savvy smartphone app that exemplifies the new kind of computer system that is globally informed, locally understandable, lean, and portable. This endeavor is part of an overall project on knowledge representation in the Cloud. In this paper, we present the development framework and system architecture of RDC, including the incremental steps of its development process.

    Incremental Web Crawling with the BUbiNG System

    This bachelor thesis deals with the modification of the BUbiNG system for incremental crawling. The thesis describes the main problems related to incremental crawling of the Internet and the use of other open-source systems for incremental crawling. As a result, the modified BUbiNG system supports re-visiting pages using two commonly used strategies. The first strategy always re-visits a page after the same fixed interval. The second strategy adjusts the interval between visits according to the frequency of page changes.
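
    The two revisit strategies above are described only in prose; the following Python sketch is a hypothetical illustration of both policies: a uniform one that reschedules every page after the same fixed interval, and an adaptive one that halves the interval when the page changed and doubles it otherwise, within fixed bounds. The function names and constants are invented here and are not BUbiNG code.

        # Hypothetical sketch of the two revisit policies described above.
        # Not BUbiNG code; constants and names are assumptions.

        FIXED_INTERVAL = 24 * 3600                         # uniform policy: revisit every 24 h
        MIN_INTERVAL, MAX_INTERVAL = 3600, 30 * 24 * 3600  # bounds for the adaptive policy

        def next_visit_fixed(last_visit: float) -> float:
            """Uniform policy: always revisit after the same interval."""
            return last_visit + FIXED_INTERVAL

        def next_visit_adaptive(last_visit: float, interval: float, changed: bool):
            """Adaptive policy: shorten the interval when the page changed since
            the last visit, lengthen it when it did not, within fixed bounds."""
            interval = interval / 2 if changed else interval * 2
            interval = max(MIN_INTERVAL, min(MAX_INTERVAL, interval))
            return last_visit + interval, interval

        # A frequently changing page ends up being revisited more often:
        t, interval = 0.0, float(FIXED_INTERVAL)
        for changed in (True, True, False):
            t, interval = next_visit_adaptive(t, interval, changed)
            print(t, interval)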

    Towards gathering and mining last.fm user-generated data

    Internship carried out at INESC. Integrated master's dissertation in Informatics and Computing Engineering, Faculdade de Engenharia, Universidade do Porto. 200

    Effective web crawlers

    Web crawlers are the component of a search engine that must traverse the Web, gathering documents in a local repository for indexing so that they can be ranked by their relevance to user queries. Whenever data is replicated in an autonomously updated environment, there are issues with maintaining up-to-date copies of documents. When documents are retrieved by a crawler and subsequently altered on the Web, the effect is an inconsistency in user search results. While the impact depends on the type and volume of change, many existing algorithms do not take the degree of change into consideration, instead using simple measures that consider any change as significant. Furthermore, many crawler evaluation metrics do not consider index freshness or the amount of impact that crawling algorithms have on user results. Most of the existing work makes assumptions about the change rate of documents on the Web, or relies on the availability of a long history of change. Our work investigates approaches to improving index consistency: detecting meaningful change, measuring the impact of a crawl on collection freshness from a user perspective, developing a framework for evaluating crawler performance, determining the effectiveness of stateless crawl ordering schemes, and proposing and evaluating the effectiveness of a dynamic crawl approach. Our work is concerned specifically with cases where there are few or no past change statistics with which predictions can be made. Our work analyses different measures of change and introduces a novel approach to measuring the impact of recrawl schemes on search engine users. Our schemes detect important changes that affect user results. Other well-known and widely used schemes have to retrieve around twice the data to achieve the same effectiveness as our schemes. Furthermore, while many studies have assumed that the Web changes according to a model, our experimental results are based on real web documents. We analyse various stateless crawl ordering schemes that have no past change statistics with which to predict which documents will change, none of which, to our knowledge, has been tested to determine effectiveness in crawling changed documents. We empirically show that the effectiveness of these schemes depends on the topology and dynamics of the domain crawled and that no single static crawl ordering scheme can effectively maintain freshness, motivating our work on dynamic approaches. We present our novel approach to maintaining freshness, which uses the anchor text linking documents to determine the likelihood of a document changing, based on statistics gathered during the current crawl. We show that this scheme is highly effective when combined with existing stateless schemes. When we combine our scheme with PageRank, our approach allows the crawler to improve both the freshness and the quality of a collection. Our scheme improves freshness regardless of which stateless scheme it is used in conjunction with, since it uses both positive and negative reinforcement to determine which document to retrieve. Finally, we present the design and implementation of Lara, our own distributed crawler, which we used to develop our testbed.
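
    The dynamic approach above is described at a high level: anchor text observed during the current crawl provides positive and negative evidence that a target document has changed, and this signal is combined with a stateless ordering such as PageRank. The Python sketch below is only an illustration of that idea; the evidence counting, the blending formula, and the parameter alpha are assumptions, not the thesis's actual scheme.

        # Hypothetical sketch: score uncrawled URLs by anchor-text evidence of
        # change gathered during the current crawl and blend it with a stateless
        # priority such as PageRank. Weights and formula are assumptions.
        from collections import defaultdict

        anchor_evidence = defaultdict(lambda: {"changed": 0, "unchanged": 0})

        def record_anchor(target_url, anchor_text, previous_anchor_text):
            """Positive reinforcement if the anchor text pointing at target_url
            differs from the one seen in the previous crawl, negative otherwise."""
            if previous_anchor_text is None:
                return  # no earlier observation, no evidence
            key = "changed" if anchor_text != previous_anchor_text else "unchanged"
            anchor_evidence[target_url][key] += 1

        def crawl_priority(url, static_score, alpha=0.5):
            """Blend the change likelihood estimated from anchor evidence with a
            static score (e.g. PageRank); with no evidence the estimate is neutral."""
            ev = anchor_evidence[url]
            total = ev["changed"] + ev["unchanged"]
            change_likelihood = ev["changed"] / total if total else 0.5
            return alpha * change_likelihood + (1 - alpha) * static_score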

    On URL and content persistence

    This report presents a study of URL and content persistence among 51 million pages from a national web harvested 8 times over almost 3 years. This study differs from previous ones because it describes the evolution of a large set of web pages over several years, studying in depth the characteristics of persistent data. We found that the persistence of URLs and contents follows a logarithmic distribution. We characterized persistent URLs and contents, and identified reasons for URL death. We found that lasting contents tend to be referenced by different URLs during their lifetime. On the other hand, half of the contents referenced by persistent URLs did not change.
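
    Assuming each harvest is available as a mapping from URL to a hash of the page content, the following Python sketch illustrates the kind of persistence measurement described above, i.e. counting in how many of the snapshots a given URL or a given content appears; the data layout is an assumption, not the report's.

        # Hypothetical sketch of measuring URL and content persistence across
        # harvests. Each snapshot is assumed to map URL -> content hash; this is
        # an assumed data layout, not the report's.
        from collections import Counter

        def url_persistence(snapshots):
            """Number of snapshots in which each URL appears."""
            counts = Counter()
            for snap in snapshots:
                counts.update(snap.keys())
            return counts

        def content_persistence(snapshots):
            """Number of snapshots in which each distinct content (hash) appears."""
            counts = Counter()
            for snap in snapshots:
                counts.update(set(snap.values()))  # count a content once per snapshot
            return counts

        snapshots = [
            {"http://a.example/": "h1", "http://b.example/": "h2"},
            {"http://a.example/": "h1", "http://c.example/": "h2"},  # same content, new URL
        ]
        print(url_persistence(snapshots))      # a.example appears twice, b and c once
        print(content_persistence(snapshots))  # h1 and h2 both persist across snapshots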

    Acquisition des contenus intelligents dans l’archivage du Web (Intelligent Content Acquisition in Web Archiving)

    Web sites are dynamic by nature, with content and structure changing over time; many pages on the Web are produced by content management systems (CMSs). Tools currently used by Web archivists to preserve the content of the Web blindly crawl and store Web pages, disregarding the CMS the site is based on and whatever structured content is contained in its pages. We first present an application-aware helper (AAH) that fits into an archiving crawl processing chain to perform intelligent and adaptive crawling of Web applications, given a knowledge base of common CMSs. The AAH has been integrated into two Web crawlers in the framework of the ARCOMEM project: the proprietary crawler of the Internet Memory Foundation and a customized version of Heritrix. We then propose an efficient unsupervised Web crawling system, ACEBot (Adaptive Crawler Bot for data Extraction), a structure-driven crawler that utilizes the inner structure of pages and guides the crawling process based on the importance of their content. ACEBot works in two phases: in the offline phase, it constructs a dynamic site map (limiting the number of URLs retrieved) and learns a traversal strategy based on the importance of navigation patterns (selecting those leading to valuable content); in the online phase, ACEBot performs massive downloading following the chosen navigation patterns. The AAH and ACEBot make, respectively, 7 and 5 times fewer HTTP requests than a generic crawler, without compromising effectiveness. We finally propose OWET (Open Web Extraction Toolkit), a free platform for semi-supervised data extraction. OWET allows a user to extract the data hidden behind Web forms.
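
    ACEBot's offline phase is described only in outline; the Python sketch below illustrates a simplified, hypothetical version of it: links sampled from the site map are grouped into coarse navigation patterns (here just a normalized URL path shape), each pattern is scored by the average content value of its pages, and only the top patterns are kept for the online bulk download. The pattern definition and the scoring are assumptions, not ACEBot's actual algorithm.

        # Hypothetical, simplified sketch of ACEBot-style offline pattern selection.
        # The pattern abstraction and scoring below are assumptions, not the
        # actual ACEBot algorithm.
        import re
        from collections import defaultdict
        from urllib.parse import urlparse

        def pattern_of(url):
            """Abstract a URL into a coarse navigation pattern, e.g. /article/<n>/<s>."""
            path = urlparse(url).path
            path = re.sub(r"/\d+", "/<n>", path)            # collapse numeric segments
            return re.sub(r"/[^/]+$", "/<s>", path) or "/"  # collapse the final segment

        def select_patterns(sampled_pages, top_k=3):
            """sampled_pages: (url, content_value) pairs from the offline site map.
            Return the top_k patterns ranked by average content value of their pages."""
            totals, counts = defaultdict(float), defaultdict(int)
            for url, value in sampled_pages:
                p = pattern_of(url)
                totals[p] += value
                counts[p] += 1
            return sorted(totals, key=lambda p: totals[p] / counts[p], reverse=True)[:top_k]

        # Example: article pages score higher than tag listings, so their pattern is kept.
        sample = [("http://site.example/article/2021/a-title", 0.9),
                  ("http://site.example/article/2020/b-title", 0.8),
                  ("http://site.example/tag/news", 0.1)]
        print(select_patterns(sample, top_k=1))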

    Political Advocacy on the Web: Issue Networks in Online Debate Over the USA Patriot Act

    This dissertation examines how people and organizations used the World Wide Web to discuss and debate a public policy in 2005, at a point in time when the Internet was viewed as a maturing medium for communication. Combining descriptive and quantitative frame analyses with an issue network analysis, the study evaluated the frames apparent in discourse concerning two key sections of the USA Patriot Act, while the issue network analysis probed hypertext linkages among Web pages where discussion was occurring. Sections 214 and 215 of the USA Patriot Act provided a contentious national issue with multiple stakeholders presumed to be attempting to frame issues connected to the two sections. The focus on two sections allowed frame and issue network contrasts to be made. The study sought evidence of an Internet effect to determine whether the Web, through the way people were using it, was having a polarizing, synthesizing, or fragmenting effect on discussion and debate. Frame overlap and hypertext linkage patterns among actors in the issue networks indicated an overall tendency toward synthesis. The study also probed the degree to which there is a joining, or symbiosis, of Web content and structure, evidenced in part by whether like-minded groups are coming together to form online communities through hypertext linkages. Evidence was found to support this conclusion among Web pages in several Internet domains, although questions remain about linking patterns among blogs due to limitations of the software used in the study. Organizational Web sites on average used a similar number of frames compared to other Web page types, including blogs; the organizational Web pages were found to be briefer in how they discussed issues, however. The study contributes to theory by offering the first known empirical study of online community formation and issue advocacy on a matter of public policy and through its finding of a linkage between Web content and Web structure. Methodologically, the study presents a flexible mixed-methods model of descriptive and quantitative approaches that appears well suited for Internet studies. The dissertation’s use of fuzzy clustering and discriminant analysis offers important improvements over existing approaches in factor-based frame analysis and frame mapping techniques.