
    Reducing query overhead through route learning in unstructured peer-to-peer network

    In unstructured peer-to-peer networks, such as Gnutella, peers propagate query messages towards the resource holders by flooding them through the network. This is, however, a costly operation, since it consumes node and link resources excessively and often unnecessarily. There is no reason, for example, for a peer to receive a query message if the peer has no matching resource or is not on the path to a peer holding a matching resource. In this paper, we present a solution to this problem, called Route Learning, which aims to reduce query traffic in unstructured peer-to-peer networks. In Route Learning, peers try to identify the neighbors through which replies to submitted queries are most likely to be obtained. In this way, a query is forwarded only to a subset of a peer's neighbors, or it is dropped if no neighbor likely to reply is found. The scheme also has mechanisms to cope with variations in user-submitted queries, such as changes in keywords, and it can evaluate routes for queries on which it has not been trained. We show through simulation results that, compared to a pure flooding-based querying approach, our scheme reduces bandwidth overhead significantly without sacrificing user satisfaction. © 2008 Elsevier Ltd. All rights reserved.
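    The abstract describes the routing idea only at a high level; the following minimal Python sketch shows one way it could look, assuming each peer keeps per-neighbor reply statistics per query keyword and forwards only to neighbors whose estimated reply score exceeds a threshold. Class and method names are illustrative, not the paper's.

    from collections import defaultdict

    class RouteLearningPeer:
        def __init__(self, neighbors, threshold=0.2):
            self.neighbors = neighbors            # hypothetical neighbor identifiers
            self.threshold = threshold            # minimum estimated reply score to forward
            # stats[neighbor][keyword] = [queries_sent, replies_seen]
            self.stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))

        def record(self, neighbor, keywords, replied):
            """Update statistics after a query forwarded to `neighbor` completes."""
            for kw in keywords:
                sent, hits = self.stats[neighbor][kw]
                self.stats[neighbor][kw] = [sent + 1, hits + (1 if replied else 0)]

        def score(self, neighbor, keywords):
            """Average observed reply rate of `neighbor` over the query's keywords."""
            rates = []
            for kw in keywords:
                sent, hits = self.stats[neighbor][kw]
                rates.append(hits / sent if sent else 0.0)
            return sum(rates) / len(rates) if rates else 0.0

        def route(self, keywords):
            """Return the subset of neighbors to forward to; an empty list means drop."""
            return [n for n in self.neighbors
                    if self.score(n, keywords) >= self.threshold]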

    Second chance: A hybrid approach for dynamic result caching and prefetching in search engines

    Web search engines are known to cache the results of previously issued queries. The stored results typically contain the document summaries and some data that is used to construct the final search result page returned to the user. An alternative strategy is to store in the cache only the result document IDs, which take much less space, allowing the results of more queries to be cached. These two strategies lead to an interesting trade-off between the hit rate and the average query response latency. In this work, to exploit this trade-off, we propose a hybrid result caching strategy in which a dynamic result cache is split into two sections: an HTML cache and a docID cache. Moreover, using a realistic cost model, we evaluate the performance of different result prefetching strategies for the proposed hybrid cache and the baseline HTML-only cache. Finally, we propose a machine learning approach to predict singleton queries, which occur only once in the query stream. We show that when the proposed hybrid result caching strategy is coupled with the singleton query predictor, the hit rate is further improved. © 2013 ACM.
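    A minimal sketch of the two-section result cache described above, assuming LRU eviction in both sections and a hypothetical backend object exposing evaluate() and build_page(); the capacities, cost constants, and names are illustrative and not the paper's cost model.

    from collections import OrderedDict

    SNIPPET_COST = 1      # assumed cost of rebuilding a result page from cached docIDs
    BACKEND_COST = 10     # assumed cost of evaluating the query on the backend

    class HybridResultCache:
        def __init__(self, html_capacity, docid_capacity):
            self.html = OrderedDict()    # query -> full result page (large entries)
            self.docid = OrderedDict()   # query -> list of document IDs (small entries)
            self.html_capacity = html_capacity
            self.docid_capacity = docid_capacity

        def _put(self, cache, capacity, key, value):
            cache[key] = value
            cache.move_to_end(key)
            while len(cache) > capacity:
                cache.popitem(last=False)        # evict the least recently used entry

        def lookup(self, query, backend):
            """Serve `query`, returning (result_page, cost)."""
            if query in self.html:               # HTML-cache hit: page served as is
                self.html.move_to_end(query)
                return self.html[query], 0
            if query in self.docid:              # docID-cache hit: rebuild the page
                self.docid.move_to_end(query)
                page = backend.build_page(self.docid[query])
                self._put(self.html, self.html_capacity, query, page)
                return page, SNIPPET_COST
            page, ids = backend.evaluate(query)  # miss: full backend evaluation
            self._put(self.docid, self.docid_capacity, query, ids)
            self._put(self.html, self.html_capacity, query, page)
            return page, BACKEND_COST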

    Food prosumption technologies: A symbiotic lens for a degrowth transition

    Prosumption is gaining momentum among the critical accounts of sustainable consumption that have thus far enriched the marketing discourse. Attention to prosumption is increasing whilst the degrowth movement is emerging to tackle the contradictions inherent in growth-driven, technology-fueled, and capitalist modes of sustainable production and consumption. In response to dominant critical voices that portray technology as counter to degrowth living, we propose an alternative symbiotic lens with which to reconsider the relations between technology, prosumption, and degrowth living, and assess how a degrowth transition in the context of food can be carried out at the intersection of human–nature–technology. We contribute to the critical debates on prosumption in marketing by analyzing the potentials and limits of technology-enabled food prosumption for a degrowth transition through the degrowth principles of conviviality and appropriateness. Finally, we consider the sociopolitical challenges involved in mobilizing such technologies to achieve symbiosis and propose a future research agenda. © 2023 Sage Publications.

    A search for pulsations in the HgMn star HD 45975 with CoRoT photometry and ground-based spectroscopy

    The existence of pulsations in HgMn stars is still being debated. To provide the first unambiguous observational detection of pulsations in this class of chemically peculiar objects, the bright star HD 45975 was monitored for nearly two months by the CoRoT satellite. Independent analyses of the light curve provide evidence of monoperiodic variations with a frequency of 0.7572 c/d and a peak-to-peak amplitude of ~2800 ppm. Multisite, ground-based spectroscopic observations overlapping the CoRoT observations show the star to be a long-period, single-lined binary. Furthermore, with the notable exception of mercury, they reveal the same periodicity as in photometry in the line moments of chemical species exhibiting strong overabundances (e.g., Mn and Y). In contrast, lines of other elements do not show significant variations. As found in other HgMn stars, the pattern of variability consists of an absorption bump moving redwards across the line profiles. We argue that the photometric and spectroscopic changes are more consistent with an interpretation in terms of rotational modulation of spots at the stellar surface than with pulsations. In this framework, the existence of pulsations producing photometric variations above the ~50 ppm level is unlikely in HD 45975. This provides strong constraints on the excitation and damping of pulsation modes in this HgMn star. Accepted for publication in A&A; 14 pages, 15 colour figures (revised version after language editing).

    Metadata-based modeling of information resources on the web

    This paper deals with the problem of modeling Web information resources using expert knowledge and personalized user information for improved Web searching capabilities. We propose a "Web information space" model, which is composed of Web-based information resources (HTML/XML [Hypertext Markup Language/Extensible Markup Language] documents on the Web), expert advice repositories (domain-expert-specified metadata for information resources), and personalized information about users (captured as user profiles that indicate users' preferences about experts as well as users' knowledge about topics). Expert advice, the heart of the Web information space model, is specified using topics and relationships among topics (called metalinks), along the lines of the recently proposed topic maps. Topics and metalinks constitute metadata that describe the contents of the underlying HTML/XML Web resources. The metadata specification process is semiautomated, and it exploits XML DTDs (Document Type Definitions) to allow domain-expert-guided mapping of DTD elements to topics and metalinks. The expert advice is stored in an object-relational database management system (DBMS). To demonstrate the practicality and usability of the proposed Web information space model, we created a prototype expert advice repository of more than one million topics/metalinks for the DBLP (Database and Logic Programming) Bibliography data set. We also present a query interface that provides sophisticated querying facilities for DBLP Bibliography resources using the expert advice repository.
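    As a rough illustration of the metadata layer described above, the following sketch models topics, metalinks, and a domain-expert-specified mapping from DTD element paths to topics; all class, field, and mapping names are hypothetical and not taken from the paper's schema.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Topic:
        name: str                                         # e.g. "XML query languages"
        sources: List[str] = field(default_factory=list)  # URLs of HTML/XML resources on this topic

    @dataclass
    class Metalink:
        kind: str      # relationship type between topics, e.g. "prerequisite-of"
        source: str    # name of the source topic
        target: str    # name of the target topic

    # Domain-expert-guided mapping from DTD element paths to topics (paths are made up).
    dtd_to_topic = {
        "article/title": Topic("publication title"),
        "article/author": Topic("researcher"),
        "article/journal": Topic("publication venue"),
    }

    def topic_for_element(element_path: str) -> Optional[Topic]:
        """Return the topic a DTD element maps to, if the expert specified one."""
        return dtd_to_topic.get(element_path)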

    Timestamp-based result cache invalidation for web search engines

    The result cache is a vital component for the efficiency of large-scale web search engines, and maintaining the freshness of cached query results is a current research challenge. As a remedy to this problem, our work proposes a new mechanism to identify queries whose cached results are stale. The basic idea behind our mechanism is to maintain and compare the generation times of cached query results with the update times of posting lists and documents to decide on the staleness of query results. The proposed technique is evaluated using a Wikipedia document collection with real update information and a real-life query log. We show that our technique has good prediction accuracy, relative to a baseline based on the time-to-live mechanism. Moreover, it is easy to implement and incurs less processing overhead on the system relative to a recently proposed, more sophisticated invalidation mechanism.
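    The staleness test described above can be illustrated with a small sketch that compares a cached result's generation time against the last update times of the query's posting lists and of the documents in the result; the function and dictionary names are assumptions, not the paper's implementation.

    def is_stale(query_terms, result_doc_ids, generation_time,
                 posting_list_update_time, document_update_time):
        """Flag a cached result as stale if any posting list of its query terms,
        or any document it contains, was updated after the result was generated."""
        for term in query_terms:
            if posting_list_update_time.get(term, 0) > generation_time:
                return True
        for doc_id in result_doc_ids:
            if document_update_time.get(doc_id, 0) > generation_time:
                return True
        return False

    # Example: the result was generated at t=100, and the posting list of "news"
    # was updated at t=120, so the cached entry is flagged as stale.
    stale = is_stale(["news"], [42, 99], 100, {"news": 120}, {42: 50, 99: 60})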

    Adaptive time-to-live strategies for query result caching in web search engines

    An important research problem that has recently started to receive attention is the freshness issue in search engine result caches. In current techniques in the literature, cached search result pages are associated with a fixed time-to-live (TTL) value in order to bound the staleness of search results presented to the users, potentially as part of a more complex cache refresh or invalidation mechanism. In this paper, we propose techniques where the TTL values are set in an adaptive manner, on a per-query basis. Our results show that the proposed techniques reduce the fraction of stale results served by the cache and also decrease the fraction of redundant query evaluations on the search engine backend, compared to a strategy using a fixed TTL value for all queries. © 2012 Springer-Verlag Berlin Heidelberg.
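    One possible form of such an adaptive, per-query TTL is sketched below, assuming the TTL is halved when a refresh reveals that the cached result has changed and doubled when it has not; the update rule and constants are illustrative and not taken from the paper.

    MIN_TTL, MAX_TTL = 60, 86_400    # TTL bounds in seconds (assumed)

    def adapt_ttl(current_ttl, result_changed):
        """Halve the TTL when a refresh shows the cached result changed (the TTL was
        too optimistic); double it when the result is unchanged (too pessimistic)."""
        if result_changed:
            return max(MIN_TTL, current_ttl // 2)
        return min(MAX_TTL, current_ttl * 2)

    # Example: a volatile query converges to a short TTL, a stable one to a long TTL.
    ttl = 3_600
    for changed in (True, True, False, True):
        ttl = adapt_ttl(ttl, changed)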