7 research outputs found

    An adaptive directed query dissemination scheme for wireless sensor networks

    Get PDF
    This paper describes a directed query dissemination scheme, DirQ, which routes queries to the appropriate source nodes based on both constant and dynamic-valued attributes such as sensor types and sensor values. Unlike certain other query dissemination schemes, location information is not essential for the operation of DirQ. DirQ uses only locally available information to route queries accurately. Nodes running DirQ adapt autonomously to changes in network topology thanks to cross-layer features that allow them to exchange information with the underlying MAC protocol. DirQ also lets nodes autonomously control the rate at which they send update messages in order to keep the routing information current. This rate depends on both the number of queries injected into the network and the rate of variation of the measured physical parameter. Our results show that DirQ incurs between 45% and 55% of the cost of flooding.
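    The adaptive update rate described above can be pictured with a small sketch. Everything below (the function name, the constants, the "pressure" formula) is a hypothetical illustration of the idea that update frequency should grow with query load and with how fast the sensed value changes; it is not taken from the paper.

# Hypothetical sketch of DirQ's adaptive update-rate idea: the interval between
# routing-update messages shrinks when the network sees many queries or when the
# sensed value changes quickly, and grows when the node is idle and stable.
# Constants and formula are illustrative assumptions, not the paper's algorithm.

def update_interval(queries_per_min: float,
                    value_change_per_min: float,
                    base_interval_s: float = 60.0,
                    min_interval_s: float = 5.0,
                    max_interval_s: float = 600.0) -> float:
    """Return how long (in seconds) a node waits before its next update message."""
    # More queries and faster-changing readings -> more frequent updates.
    pressure = 1.0 + queries_per_min + value_change_per_min
    interval = base_interval_s / pressure
    return max(min_interval_s, min(max_interval_s, interval))

# Example: a busy, volatile sensor updates far more often than an idle one.
print(update_interval(queries_per_min=10, value_change_per_min=4))  # clamped to 5 s
print(update_interval(queries_per_min=0, value_change_per_min=0))   # 60 s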

    Document distribution algorithm for load balancing on an extensible Web server architecture

    Get PDF
    Access latency and load balancing are the two main issues in the design of a clustered Web server architecture for achieving high performance. We propose a novel document distribution algorithm for load balancing on a cluster of distributed Web servers. We group Web pages that are likely to be accessed during a request session into a migrating unit, which is used as the basic unit of document placement. A modified binning algorithm is developed to distribute the migrating units among the Web servers to achieve load balancing. We also present a redirection mechanism, which exploits the properties of migrating units to reduce the cost of request redirections. The distribution of Web documents is recomputed periodically to adapt to changes in client request patterns and system configuration. Simulation results show that our solution reduces the amount of request redirection and document migration, and distributes the workload properly among the Web servers.
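    As a rough illustration of how migrating units might be spread across servers, the sketch below uses a greedy "heaviest unit to the currently least-loaded server" placement. The unit names, load figures and the heuristic itself are assumptions for illustration; the paper's modified binning algorithm may differ in detail.

# Greedy placement of "migrating units" (groups of pages likely accessed in one
# session) across servers so that accumulated load stays balanced. Illustrative
# sketch only; not the paper's exact binning algorithm.
import heapq

def place_units(unit_loads: dict[str, float], servers: list[str]) -> dict[str, str]:
    """Assign each migrating unit to a server, heaviest units first,
    always choosing the currently least-loaded server."""
    heap = [(0.0, s) for s in servers]      # (accumulated load, server)
    heapq.heapify(heap)
    placement = {}
    for unit, load in sorted(unit_loads.items(), key=lambda kv: -kv[1]):
        total, server = heapq.heappop(heap)
        placement[unit] = server
        heapq.heappush(heap, (total + load, server))
    return placement

units = {"news": 40.0, "sports": 25.0, "archive": 10.0, "home": 55.0}
print(place_units(units, ["web1", "web2"]))
# -> {'home': 'web1', 'news': 'web2', 'sports': 'web2', 'archive': 'web1'}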

    A taxonomy of web prediction algorithms

    Full text link
    Web prefetching techniques are an attractive solution to reduce user-perceived latency. These techniques are driven by a prediction engine or algorithm that guesses the next actions of web users. A large number of prediction algorithms have been proposed since the first prefetching approach was published, although only in the last two or three years have they begun to be successfully implemented in commercial products. These algorithms can be implemented in any element of the web architecture and can use a wide variety of information as input. This affects their structure, data system, computational resources and accuracy. Knowledge of the input information and an understanding of how it can be handled to make predictions can help to improve the design of current prediction engines, and consequently of prefetching techniques. This paper analyzes fifty of the most relevant algorithms proposed over 15 years of prefetching research and proposes a taxonomy in which the algorithms are classified according to the input data they use. For each group, the main advantages and shortcomings are highlighted. © 2012 Elsevier Ltd. All rights reserved. This work has been partially supported by the Spanish Ministry of Science and Innovation under Grant TIN2009-08201, Generalitat Valenciana under Grant GV/2011/002 and Universitat Politecnica de Valencia under Grant PAID-06-10/2424. Domenech, J.; De La Ossa Perez, B. A.; Sahuquillo Borrás, J.; Gil Salinas, J. A.; Pont Sanjuan, A. (2012). A taxonomy of web prediction algorithms. Expert Systems with Applications, 39(9):8496-8502. https://doi.org/10.1016/j.eswa.2012.01.140
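    To make "classified according to the input data they use" concrete, here is a toy predictor from the family whose only input is the sequence of URLs requested in a session (a first-order Markov style predictor). It is a sketch for illustration, not any specific algorithm surveyed in the paper, and all names in it are hypothetical.

# Toy first-order Markov predictor: the only input data is the per-session
# sequence of requested URLs; predictions are the most frequent successors.
from collections import defaultdict, Counter

class MarkovPredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)  # url -> Counter of next urls

    def train(self, session: list[str]) -> None:
        for cur, nxt in zip(session, session[1:]):
            self.transitions[cur][nxt] += 1

    def predict(self, url: str, top_k: int = 2) -> list[str]:
        """Return the most likely next URLs, i.e. candidates for prefetching."""
        return [u for u, _ in self.transitions[url].most_common(top_k)]

p = MarkovPredictor()
p.train(["/", "/news", "/news/today", "/sports"])
p.train(["/", "/news", "/weather"])
print(p.predict("/news"))  # ['/news/today', '/weather']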

    Evaluation, Analysis and adaptation of web prefetching techniques in current web

    Full text link
    This dissertation is focused on the study of the prefetching technique applied to the World Wide Web. This technique consists of processing (e.g., downloading) a Web request before the user actually makes it. By doing so, the waiting time perceived by the user can be reduced, which is the main goal of Web prefetching techniques. The study of the state of the art on Web prefetching showed the heterogeneity that exists in its performance evaluation. This heterogeneity mainly concerns four issues: i) there was no open framework to simulate and evaluate the already proposed prefetching techniques; ii) there was no uniform selection of the performance indexes to be maximized, or even of their definition; iii) there were no comparative studies of prediction algorithms taking into account the costs and benefits of web prefetching at the same time; and iv) techniques were evaluated under very different or scarcely significant workloads. During the research work, we have contributed to homogenizing the evaluation of prefetching performance by developing an open simulation framework that reproduces in detail all the aspects that impact prefetching performance. In addition, prefetching performance metrics have been analyzed in order to clarify their definition and to identify the most meaningful ones from the user's point of view. We also propose an evaluation methodology that considers the cost and the benefit of prefetching at the same time. Finally, the importance of using current workloads to evaluate prefetching techniques has been highlighted; otherwise, wrong conclusions could be drawn. The potential benefits of each web prefetching architecture were analyzed, finding that collaborative predictors could reduce almost all the latency perceived by users. The first step towards a collaborative predictor is to make predictions at the server, so this thesis focuses on an architecture with a server-located predictor. The environment conditions that can be found in the web are also… Doménech i de Soria, J. (2007). Evaluation, analysis and adaptation of web prefetching techniques in current web [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1841
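    For a flavour of the cost/benefit bookkeeping this kind of evaluation calls for, the sketch below computes precision (fraction of prefetched objects actually requested), recall (fraction of user requests covered by prefetching), and the bytes wasted on wrong guesses. These are common-usage metric definitions assumed for illustration, not necessarily the exact indexes defined in the thesis.

# Benefit vs. cost of a prefetching run: recall captures latency-saving
# potential, precision and wasted bytes capture the traffic overhead.
def prefetch_metrics(prefetched: set[str], requested: set[str],
                     sizes: dict[str, int]) -> dict[str, float]:
    useful = prefetched & requested
    precision = len(useful) / len(prefetched) if prefetched else 0.0
    recall = len(useful) / len(requested) if requested else 0.0
    wasted_bytes = sum(sizes.get(u, 0) for u in prefetched - requested)
    return {"precision": precision, "recall": recall, "wasted_bytes": wasted_bytes}

print(prefetch_metrics(
    prefetched={"/a", "/b", "/c"},
    requested={"/a", "/c", "/d"},
    sizes={"/a": 10_000, "/b": 25_000, "/c": 8_000, "/d": 5_000},
))
# {'precision': 0.67, 'recall': 0.67, 'wasted_bytes': 25000}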

    Efficient Algorithms for Predicting Requests to Web Servers

    No full text
    The log entries needed for our study were the IP address of the client (proxy), the accessed resource, and the time of the access. We converted the first two into hashed integers, and the ASCII time representation into an integer form (the number of seconds since Dec 31 1969). To generate volumes and compute counters, we used stable sorting on the hashed forms of the source IP addresses and processed the logs efficiently. We now look at the software we used to process and analyze the server logs.
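    A minimal sketch of that preprocessing step, assuming Common Log Format entries and a CRC32 hash (both of which are my assumptions, not details from the paper): keep only client, resource and timestamp, hash the first two to integers, convert the time to epoch seconds, and stable-sort by client.

# Reduce each access-log line to (hashed client IP, hashed resource, epoch time)
# and stable-sort by client so each client's accesses stay in time order.
import time
from zlib import crc32

def preprocess(log_lines: list[str]) -> list[tuple[int, int, int]]:
    records = []
    for line in log_lines:
        ip, rest = line.split(" ", 1)
        ts = rest[rest.index("[") + 1 : rest.index("]")]   # e.g. 01/Jul/1995:00:00:01 -0400
        resource = rest.split('"')[1].split()[1]            # path in the request line
        # Ignore the timezone offset for simplicity of the sketch.
        epoch = int(time.mktime(time.strptime(ts.split()[0], "%d/%b/%Y:%H:%M:%S")))
        records.append((crc32(ip.encode()), crc32(resource.encode()), epoch))
    # Python's sort is stable, so per-client access order is preserved.
    records.sort(key=lambda r: r[0])
    return records

line = '10.0.0.1 - - [01/Jul/1995:00:00:01 -0400] "GET /index.html HTTP/1.0" 200 1839'
print(preprocess([line]))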

    Efficient Algorithms for Predicting Requests to Web Servers

    No full text
    Internet traffic has grown significantly with the popularity of the Web. Consequently, user-perceived latency in retrieving web pages has increased. Caching and prefetching at the client side, aided by hints from the server, are attempts at solving this problem. We suggest techniques to group resources that are likely to be accessed together into volumes, which are used to generate hints tailored to individual applications, such as prefetching, cache replacement, and cache validation. We discuss theoretical aspects of optimal volume construction and develop efficient heuristics. Tunable parameters allow our algorithms to predict as many accesses as possible while reducing false predictions and limiting the size of hints. We analyze a collection of large server logs, extracting access patterns to construct and evaluate volumes. We examine sampling techniques to process only portions of the server logs while constructing equally good volumes. We show that it is possible to predict requests…
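    The sketch below illustrates the volume idea in the simplest possible way: resources that co-occur in the same session often enough are linked, and the connected components of that graph become volumes. The co-access counting, the threshold, and the component heuristic are my own illustrative assumptions, not the paper's optimal-construction results or heuristics.

# Group resources that are frequently requested in the same session into
# "volumes" that could later be sent as prefetch / cache hints.
from collections import defaultdict
from itertools import combinations

def build_volumes(sessions: list[list[str]], min_coaccess: int = 2) -> list[set[str]]:
    # Count how often each pair of resources appears in the same session.
    co = defaultdict(int)
    for s in sessions:
        for a, b in combinations(sorted(set(s)), 2):
            co[(a, b)] += 1
    # Link pairs whose co-access count passes the threshold...
    graph = defaultdict(set)
    for (a, b), n in co.items():
        if n >= min_coaccess:
            graph[a].add(b)
            graph[b].add(a)
    # ...and take the connected components of that graph as volumes.
    volumes, seen = [], set()
    for start in list(graph):
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            r = stack.pop()
            if r in comp:
                continue
            comp.add(r)
            stack.extend(graph[r] - comp)
        seen |= comp
        volumes.append(comp)
    return volumes

sessions = [["/a", "/b", "/c"], ["/a", "/b"], ["/x", "/y"]]
print(build_volumes(sessions))  # [{'/a', '/b'}]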

    Modeling and acceleration of content delivery in world wide web

    Get PDF
    Doctor of Philosophy (Ph.D.) thesis