10 research outputs found

    Stationary Distribution of a Generalized LRU-MRU Content Cache

    Full text link
    Many different caching mechanisms have been previously proposed, exploring different insertion and eviction policies and their performance both individually and as part of caching networks. We obtain a novel closed-form stationary (invariant) distribution for a generalization of LRU and MRU caching nodes under a reference Markov model. Numerical comparisons are made with the "Incremental Rank Progress" (IRP, a.k.a. CLIMB) and random-eviction (a.k.a. random replacement) methods under a steady-state Zipf popularity distribution. The range of cache hit probabilities is smaller under MRU and larger under IRP compared to LRU. We conclude with the invariant distribution for a special case of a random-eviction caching tree-network and associated discussion.
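    A minimal simulation sketch (not the paper's closed-form Markov analysis) can make the compared policies concrete: it generates Zipf-distributed requests and empirically estimates hit probabilities for LRU and random-eviction caches. The catalogue size, cache capacity, and Zipf exponent below are illustrative assumptions.

```python
# Illustrative comparison of LRU vs. random-eviction hit rates under a Zipf popularity law.
import random
from collections import OrderedDict

def zipf_requests(n_items, alpha, n_requests, rng):
    # Sample item indices with probability proportional to 1 / rank^alpha.
    weights = [1.0 / (i + 1) ** alpha for i in range(n_items)]
    return rng.choices(range(n_items), weights=weights, k=n_requests)

def lru_hit_rate(requests, capacity):
    cache, hits = OrderedDict(), 0
    for item in requests:
        if item in cache:
            hits += 1
            cache.move_to_end(item)            # refresh recency on a hit
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)      # evict the least recently used item
            cache[item] = True
    return hits / len(requests)

def random_eviction_hit_rate(requests, capacity, rng):
    cache, hits = set(), 0
    for item in requests:
        if item in cache:
            hits += 1
        else:
            if len(cache) >= capacity:
                cache.remove(rng.choice(tuple(cache)))   # evict a uniformly random resident item
            cache.add(item)
    return hits / len(requests)

rng = random.Random(0)
reqs = zipf_requests(n_items=1000, alpha=0.8, n_requests=200_000, rng=rng)
print("LRU    hit rate:", round(lru_hit_rate(reqs, capacity=100), 3))
print("Random hit rate:", round(random_eviction_hit_rate(reqs, capacity=100, rng=rng), 3))
```

    An IRP/CLIMB policy could be simulated analogously by promoting a hit item one rank toward the front instead of moving it all the way, which is what gives it its different hit-probability range.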

    Research in Mobile Database Query Optimization and Processing

    Get PDF

    Temperature, energy and performance: addressing embedded system challenges through fast cache simulation

    Full text link
    Temperature, energy and performance are essential design considerations during the conception of modern digital systems. The work presented in this thesis focuses on three aspects that can be used to address these challenges. First, the suitability of dynamic application adaptation is evaluated as a means of controlling the temperature of a Field Programmable Gate Array (FPGA) device. Despite the use of an extremely adaptive custom JPEG encoder, it was determined that application adaptation alone is ineffective for thermal management in an FPGA. Next, a study is performed which aims to assess which components are principally responsible for the rise in temperatures in FPGAs. It was found that the external memory interface is a significant heat source in FPGA-based embedded systems, and that device temperature correlates with CPU cache miss rate. The third and main aspect covered in this dissertation is the speeding up of CPU cache simulation. Single-pass cache simulation is a tool that can be employed at design time to select a cache yielding acceptable temperature, system performance and energy consumption. Three Multiple cAche Simulators in Hardware (MASH) or in Software (MASS) are proposed for three cache replacement policies: MASH{lru} for the Least Recently Used (LRU) cache algorithm, MASH{fifo} for First In First Out (FIFO) and MASS{plrut} for Pseudo Least Recently Used tree (PLRUt). The former two are novel in that they are implemented in hardware and are 53x and 11.10x faster, respectively, than their software counterparts. The PLRUt simulator presents for the first time an optimised hash table-based algorithm yielding a speedup of 1.93x over an unoptimised solution. All cache simulators employ cache properties specific to their replacement policies to improve simulator characteristics. Additionally, it is shown that the hardware (or MASH) simulators can be implemented in-system alongside an embedded system, allowing for direct trace extraction and cache simulation from within an FPGA. Using in-system simulation, large speedups can be achieved because trace generation and multiple cache simulation happen simultaneously at high frequencies.
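    As a rough illustration of what single-pass cache simulation buys, the sketch below (a plain software implementation, not the thesis's MASH/MASS designs) uses LRU stack distances so that one pass over an address trace yields hit rates for every fully associative LRU cache size at once; the toy trace and the cache sizes are made-up examples.

```python
# Single-pass LRU simulation via stack (reuse) distances: one pass over the trace
# gives hit rates for all fully associative LRU cache sizes, thanks to LRU's inclusion property.
from collections import Counter

def lru_stack_distances(trace):
    """One pass over the trace; returns a histogram of LRU stack distances plus cold misses."""
    stack = []                 # stack[0] is the most recently used address
    dist_hist = Counter()
    cold_misses = 0
    for addr in trace:
        if addr in stack:
            dist_hist[stack.index(addr)] += 1   # 0-based reuse (stack) distance
            stack.remove(addr)
        else:
            cold_misses += 1
        stack.insert(0, addr)                   # addr becomes most recently used
    return dist_hist, cold_misses

def lru_hit_rate(dist_hist, cold_misses, cache_lines):
    # A reference at stack distance d hits in any fully associative LRU cache with more than d lines.
    total = sum(dist_hist.values()) + cold_misses
    hits = sum(count for dist, count in dist_hist.items() if dist < cache_lines)
    return hits / total

trace = [0, 1, 2, 0, 3, 0, 1, 4, 2, 0]          # toy address trace
hist, cold = lru_stack_distances(trace)
for size in (1, 2, 4, 8):
    print(f"{size} lines -> hit rate {lru_hit_rate(hist, cold, size):.2f}")
```

    Set-associative caches and the FIFO and PLRU-tree policies need further bookkeeping, which is where the hardware acceleration described in the thesis becomes valuable.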

    Analysis, Modeling, and Algorithms for Scalable Web Crawling

    Get PDF
    This dissertation presents a modeling framework for the intermediate data generated by external-memory sorting algorithms (e.g., merge sort, bucket sort, hash sort, replacement selection) that are well known yet lack accurate models of the volume of data they produce. The motivation comes from the IRLbot crawl experience in June 2007, where a collection of scalable, high-performance external sorting methods was used to handle such problems as URL uniqueness checking, real-time frontier ranking, budget allocation, and spam avoidance, all monumental tasks, especially when limited to the resources of a single machine. We discuss this crawl experience in detail, use novel algorithms to collect data from the crawl image, and then advance to a broader problem – sorting arbitrarily large-scale data using limited resources and accurately capturing the required cost (e.g., time and disk usage). To solve these problems, we present an accurate model of uniqueness probability, i.e., the probability of encountering previously unseen data, and use it to analyze the amount of intermediate data generated by the above-mentioned sorting methods. We also demonstrate how the intermediate data volume and runtime vary based on the input properties (e.g., frequency distribution), hardware configuration (e.g., main memory size, CPU and disk speed) and the choice of sorting method, and that our proposed models accurately capture such variation. Furthermore, we propose a novel hash-based method for replacement selection sort and its model in the case of duplicate data, where existing literature is limited to random or mostly-unique data. Note that the classic replacement selection method can increase the length of sorted runs and reduce their number, both of which directly benefit the merge step of external sorting. However, because its priority-queue-assisted sort operation is inherently slow, replacement selection has seen limited application. Our hash-based design solves this problem by making the sort phase significantly faster than existing methods, making it a preferred choice. The presented models also enable exact analysis of Least-Recently-Used (LRU) and Random Replacement caches (i.e., their hit rate) that are used as part of the algorithms presented here. These cache models are more accurate than the ones in existing literature, since the existing ones mostly assume an infinite stream of data, while our models work accurately on finite streams (e.g., sampled web graphs, click streams) as well. In addition, we present accurate models for various crawl characteristics of random graphs, which can forecast a number of aspects of crawl experience based on the graph properties (e.g., degree distribution). All these models are presented under a unified umbrella to analyze a set of large-scale information processing algorithms that are streamlined for high performance and scalability.
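    For context, the following is a hedged sketch of classic priority-queue replacement selection, the baseline whose slow sort phase the dissertation's hash-based design addresses; the input stream and in-memory buffer size are illustrative, and the hash-based variant itself is not reproduced here.

```python
# Classic replacement selection: with an in-memory buffer of M keys it emits sorted runs,
# deferring keys that cannot extend the current run to the next one.
import heapq

def replacement_selection_runs(items, memory_size):
    """Yield sorted runs; each key is tagged with a run id so 'frozen' keys wait for the next run."""
    it = iter(items)
    heap, run = [], 0
    for _ in range(memory_size):                 # fill the in-memory buffer
        try:
            heap.append((run, next(it)))
        except StopIteration:
            break
    heapq.heapify(heap)
    current, last_out = [], None
    while heap:
        r, key = heapq.heappop(heap)
        if r != run:                             # smallest remaining key belongs to the next run
            yield current
            current, run, last_out = [], r, None
        current.append(key)
        last_out = key
        try:
            nxt = next(it)
        except StopIteration:
            continue
        # A key smaller than the last output cannot join the current run; defer it.
        heapq.heappush(heap, (run if nxt >= last_out else run + 1, nxt))
    if current:
        yield current

runs = list(replacement_selection_runs([5, 1, 8, 3, 9, 2, 7, 4, 6, 0], memory_size=3))
print(runs)   # each run is individually sorted; fewer, longer runs than fixed-size chunking
```

    On random input the runs average roughly twice the buffer size, which is why fewer merge passes are needed afterwards; the cost is the per-item priority-queue work that the hash-based design replaces.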

    Exploiting Fine-Grain Concurrency Analytical Insights in Superscalar Processor Design

    Get PDF
    This dissertation develops analytical models to provide insight into various design issues associated with superscalar-type processors, i.e., processors capable of executing multiple instructions per cycle. A survey of the existing machines and literature has been completed with a proposed classification of various approaches for exploiting fine-grain concurrency. Optimization of a single pipeline is discussed based on an analytical model. The model-predicted performance curves are found to be in close proximity to published results using simulation techniques. A model is also developed for comparing different branch strategies for single-pipeline processors in terms of their effectiveness in reducing branch delay. The additional instruction fetch traffic generated by certain branch strategies is also studied and is shown to be a useful criterion for choosing between equally well-performing strategies. Next, processors with multiple pipelines are modelled to study the tradeoffs associated with deeper pipelines versus multiple pipelines. The model developed can reveal the cause of a performance bottleneck: insufficient resources to exploit discovered parallelism, insufficient instruction stream parallelism, or insufficient scope of concurrency detection. The cost associated with speculative (i.e., beyond basic block) execution is examined via probability distributions that characterize the inherent parallelism in the instruction stream. The throughput prediction of the analytic model is shown, using a variety of benchmarks, to be close to the measured static throughput of the compiler output, under resource and scope constraints. Further experiments provide misprediction delay estimates for these benchmarks under scope constraints, assuming beyond-basic-block, out-of-order execution and run-time scheduling. These results were derived using traces generated by the Multiflow TRACE SCHEDULING™(*) compacting C and FORTRAN 77 compilers. A simplified extension to the model to include multiprocessors is also proposed. The extended model is used to analyze combined systems, such as superpipelined multiprocessors and superscalar multiprocessors, both with shared memory. It is shown that the number of pipelines (or processors) at which the maximum throughput is obtained is increasingly sensitive to the ratio of memory access time to network access delay, as memory access time increases. Further, as a function of inter-iteration dependency distance, optimum throughput is shown to vary nonlinearly, whereas the corresponding optimum number of processors varies linearly. The predictions from the analytical model agree with published results based on simulations. (*)TRACE SCHEDULING is a trademark of Multiflow Computer, Inc.
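    A hedged, textbook-style sketch (not the dissertation's actual analytical model) of how branch behaviour feeds into single-pipeline throughput; the branch fraction, per-strategy delays, and fetch-traffic figures below are purely illustrative placeholders.

```python
# Toy model: effective IPC of a scalar pipeline whose only stalls are branch delays,
# compared across hypothetical branch strategies (values are illustrative, not measured).
def pipeline_ipc(branch_fraction, avg_branch_delay_cycles):
    """Effective instructions per cycle when every branch adds an average delay."""
    cpi = 1.0 + branch_fraction * avg_branch_delay_cycles
    return 1.0 / cpi

# Hypothetical strategies: (average branch delay in cycles, extra fetches per branch).
strategies = {
    "predict-not-taken":    (1.2, 0.6),
    "delayed-branch":       (0.7, 0.0),
    "branch-target-buffer": (0.4, 0.3),
}
for name, (delay, extra_fetch) in strategies.items():
    ipc = pipeline_ipc(branch_fraction=0.2, avg_branch_delay_cycles=delay)
    print(f"{name:22s} IPC = {ipc:.2f}   extra fetch traffic/branch = {extra_fetch}")
```

    The second number per strategy mirrors the abstract's point that extra instruction fetch traffic can break ties between strategies that deliver similar throughput.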

    Mobile Ad Hoc Networks

    Get PDF
    Guiding readers through the basics of these rapidly emerging networks to more advanced concepts and future expectations, Mobile Ad hoc Networks: Current Status and Future Trends identifies and examines the most pressing research issues in Mobile Ad hoc Networks (MANETs). Containing the contributions of leading researchers, industry professionals, and academics, this forward-looking reference provides an authoritative perspective of the state of the art in MANETs. The book includes surveys of recent publications that investigate key areas of interest such as limited resources and the mobility of mobile nodes. It considers routing, multicast, energy, security, channel assignment, and ensuring quality of service. Also suitable as a text for graduate students, the book is organized into three sections: Fundamentals of MANET Modeling and Simulation—Describes how MANETs operate and perform through simulations and models Communication Protocols of MANETs—Presents cutting-edge research on key issues, including MAC layer issues and routing in high mobility Future Networks Inspired By MANETs—Tackles open research issues and emerging trends Illustrating the role MANETs are likely to play in future networks, this book supplies the foundation and insight you will need to make your own contributions to the field. It includes coverage of routing protocols, modeling and simulation tools, intelligent optimization techniques for multicriteria routing, security issues in FHAMIPv6, connecting moving smart objects to the Internet, underwater sensor networks, wireless mesh network architecture and protocols, adaptive routing provision using Bayesian inference, and adaptive flow control in the transport layer using genetic algorithms

    Geographic information extraction from texts

    Get PDF
    A large volume of unstructured texts, containing valuable geographic information, is available online. This information – provided implicitly or explicitly – is useful not only for scientific studies (e.g., spatial humanities) but also for many practical applications (e.g., geographic information retrieval). Although significant progress has been achieved in geographic information extraction from texts, there are still unsolved challenges and issues, ranging from methods, systems, and data to applications and privacy. Therefore, this workshop will provide a timely opportunity to discuss recent advances, new ideas, and concepts, and to identify research gaps in geographic information extraction.

    Annual Report of the Board of Regents of the Smithsonian Institution, showing the operations, expenditures, and condition of the Institution for the year ending June 30, 1888.

    Get PDF
    Annual Report of the Smithsonian Institution. 1 July. HMD 142 (pts. 1 and 2), 50-2, v14-15, 177 8p. [2668-2669] Research related to the North American Indian.