
    Data acquisition electronics and reconstruction software for directional detection of Dark Matter with MIMAC

    Directional detection of galactic Dark Matter requires 3D reconstruction of low-energy nuclear recoil tracks. Dedicated acquisition electronics with an auto-triggering feature and real-time track reconstruction software have been developed within the framework of the MIMAC detector project. The auto-triggered acquisition electronics uses embedded processing to reduce data transfer to its useful part only, i.e. the decoded coordinates of hit tracks and the corresponding energy measurements. An acquisition software with on-line monitoring and 3D track reconstruction is also presented. Comment: 17 pages, 12 figures
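    As a rough illustration of the data-reduction step described above (transferring only decoded hit coordinates and the energy measurement instead of raw samples), the Python sketch below reduces a per-time-slice strip readout to a compact event record. The frame layout, threshold and field names are assumptions made for illustration, not the actual MIMAC data format.

        # Hypothetical sketch of on-line data reduction: keep only decoded strip
        # coordinates (x, y, time slice) above threshold plus the measured energy.
        # The frame layout below is invented for illustration.

        from dataclasses import dataclass
        from typing import List, Tuple


        @dataclass
        class ReducedEvent:
            """What would actually be transferred: hit coordinates and energy."""
            hits: List[Tuple[int, int, int]]   # (x_strip, y_strip, time_slice)
            energy_adc: int                    # ionization energy, ADC units


        def decode_frame(raw_samples: List[Tuple[int, List[int], List[int]]],
                         energy_adc: int,
                         threshold: int = 10) -> ReducedEvent:
            """Reduce raw per-time-slice strip samples to a list of hits.

            raw_samples: one entry per time slice, as (slice_index, x_strip_values,
            y_strip_values). Only strips above `threshold` are kept, which is the
            data-reduction step performed by the embedded processing.
            """
            hits = []
            for t, x_vals, y_vals in raw_samples:
                x_hits = [i for i, v in enumerate(x_vals) if v > threshold]
                y_hits = [j for j, v in enumerate(y_vals) if v > threshold]
                # Pair every fired x strip with every fired y strip in this slice.
                hits.extend((x, y, t) for x in x_hits for y in y_hits)
            return ReducedEvent(hits=hits, energy_adc=energy_adc)


        if __name__ == "__main__":
            # Two time slices, a short diagonal track, most strips below threshold.
            frame = [
                (0, [0, 0, 42, 0], [0, 37, 0, 0]),
                (1, [0, 0, 0, 55], [0, 0, 48, 0]),
            ]
            event = decode_frame(frame, energy_adc=1234)
            print(event.hits)        # [(2, 1, 0), (3, 2, 1)]
            print(event.energy_adc)  # 1234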

    Using Latency to Evaluate Computer System Performance

    Building high-performance computer systems requires an understanding of how systems behave and what makes them fast or slow. In addition to our file system performance analysis, we have a number of projects in measuring, evaluating, and understanding system performance. The conventional methodology for system performance measurement, which relies primarily on throughput-sensitive benchmarks and throughput metrics, has major limitations when analyzing the behaviour and performance of interactive workloads. The increasingly interactive character of personal computing demands new ways of measuring and analyzing system performance. In this paper, we present a combination of measurement techniques and benchmark methodologies that address these problems. We use simple methods for making direct and precise measurements of event-handling latency in the context of a realistic interactive application, and we analyze how results from such measurements can be used to understand the detailed behaviour of latency-critical events. We demonstrate our techniques in an analysis of the performance of two releases of Windows 9x and Windows XP Professional. Our experience indicates that latency can be measured for a class of interactive workloads, providing a substantial improvement in the accuracy and detail of performance information over measurements based strictly on throughput.
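    The core measurement idea, stamping an interactive event when it is generated and again when its handler finishes so that per-event handling latency (and in particular its tail) can be reported, can be sketched as follows. The event loop and handler below are invented placeholders, not the instrumentation used in the paper.

        # Minimal sketch of event-handling latency measurement: timestamp each
        # event when it is generated and when its handler completes, then report
        # per-event latency. Handler behaviour here is a stand-in, not real code
        # from the paper.

        import time
        from statistics import mean


        def handle_event(event: str) -> None:
            """Stand-in for application event handling (e.g. keystroke, redraw)."""
            time.sleep(0.002 if event == "keystroke" else 0.010)


        def run_instrumented(events: list) -> list:
            """Process events, returning per-event handling latency in seconds."""
            latencies = []
            for event in events:
                t_generated = time.perf_counter()   # event enters the system
                handle_event(event)
                t_done = time.perf_counter()        # handler done, result visible
                latencies.append(t_done - t_generated)
            return latencies


        if __name__ == "__main__":
            lat = run_instrumented(["keystroke"] * 5 + ["redraw"] * 2)
            # Latency-critical analysis cares about the tail, not just the mean.
            print(f"mean = {mean(lat)*1e3:.1f} ms, max = {max(lat)*1e3:.1f} ms")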

    Admission Control and Scheduling for High-Performance WWW Servers

    In this paper we examine a number of admission control and scheduling protocols for high-performance web servers based on a 2-phase policy for serving HTTP requests. The first "registration" phase involves establishing the TCP connection for the HTTP request and parsing/interpreting its arguments, whereas the second "service" phase involves the service/transmission of data in response to the HTTP request. By introducing a delay between these two phases, we show that the performance of a web server can potentially be improved through the adoption of a number of scheduling policies that optimize the utilization of various system components (e.g. memory cache and I/O). In addition to its promise of improving the performance of a single web server, the delineation between the registration and service phases of an HTTP request may be useful for load balancing purposes on clusters of web servers. We are investigating the use of such a mechanism as part of the Commonwealth testbed being developed at Boston University.
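    One way to picture the two-phase policy is a server that registers requests (accepts the connection and parses the URL) immediately but defers the service phase to a scheduler; the sketch below batches pending requests by the resource they ask for, so repeated objects can be served back-to-back from cache. The class, method names and batching rule are illustrative assumptions, not the paper's actual policies.

        # Hypothetical two-phase server: phase 1 registers (parses) requests,
        # phase 2 serves them under a cache-friendly batching policy.

        from collections import defaultdict
        from dataclasses import dataclass, field
        from itertools import count

        _ids = count()


        @dataclass
        class Request:
            resource: str
            request_id: int = field(default_factory=lambda: next(_ids))


        class TwoPhaseServer:
            def __init__(self) -> None:
                self.pending = defaultdict(list)   # resource -> list of Requests

            def register(self, resource: str) -> Request:
                """Phase 1: accept the connection and parse the request."""
                req = Request(resource)
                self.pending[resource].append(req)
                return req

            def service_round(self) -> list:
                """Phase 2: serve the largest batch of same-resource requests."""
                if not self.pending:
                    return []
                resource = max(self.pending, key=lambda r: len(self.pending[r]))
                batch = self.pending.pop(resource)
                print(f"serving {len(batch)} request(s) for {resource}")
                return batch


        if __name__ == "__main__":
            srv = TwoPhaseServer()
            for url in ["/index.html", "/logo.png", "/index.html", "/index.html"]:
                srv.register(url)
            while srv.pending:
                srv.service_round()   # /index.html batch first, then /logo.png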

    A Big Data Analyzer for Large Trace Logs

    The current generation of Internet-based services is typically hosted in large data centers that take the form of warehouse-size structures housing tens of thousands of servers. Continued availability of a modern data center is the result of a complex orchestration among many internal and external actors, including computing hardware, multiple layers of intricate software, networking and storage devices, and electrical power and cooling plants. During the course of their operation, many of these components produce large amounts of data in the form of event and error logs that are essential not only for identifying and resolving problems but also for improving data center efficiency and management. Most of these activities would benefit significantly from data analytics techniques that exploit hidden statistical patterns and correlations that may be present in the data, but the sheer volume of data to be analyzed makes uncovering these correlations and patterns a challenging task. This paper presents BiDAl, a prototype Java tool for log-data analysis that incorporates several Big Data technologies in order to simplify the task of extracting information from data traces produced by large clusters and server farms. BiDAl provides the user with several analysis languages (SQL, R and Hadoop MapReduce) and storage backends (HDFS and SQLite) that can be freely mixed and matched, so that a custom tool for a specific task can be easily constructed. BiDAl has a modular architecture so that it can be extended with other backends and analysis languages in the future. In this paper we present the design of BiDAl and describe our experience using it to analyze publicly-available traces from Google data clusters, with the goal of building a realistic model of a complex data center. Comment: 26 pages, 10 figures
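    To make the mix-and-match idea concrete, the small sketch below loads trace records into a SQLite backend and runs a SQL analysis over them, the kind of aggregate one might feed into a statistical model of the data center. The table layout, event names and query are illustrative assumptions; they do not reproduce BiDAl's actual storage schema or the Google trace format.

        # Illustrative only: a SQLite-backed SQL analysis over a toy trace table.
        import sqlite3

        rows = [
            ("machine-1", "task-a", "FINISH"),
            ("machine-1", "task-b", "FAIL"),
            ("machine-2", "task-c", "FAIL"),
            ("machine-2", "task-d", "FAIL"),
        ]

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE task_events (machine TEXT, task TEXT, event TEXT)")
        conn.executemany("INSERT INTO task_events VALUES (?, ?, ?)", rows)

        # Analysis step: failures per machine.
        query = """
            SELECT machine, COUNT(*) AS failures
            FROM task_events
            WHERE event = 'FAIL'
            GROUP BY machine
            ORDER BY failures DESC
        """
        for machine, failures in conn.execute(query):
            print(machine, failures)   # machine-2 2, then machine-1 1
        conn.close()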

    A Survey on Delay-Aware Resource Control for Wireless Systems --- Large Deviation Theory, Stochastic Lyapunov Drift and Distributed Stochastic Learning

    In this tutorial paper, a comprehensive survey is given of several major systematic approaches to delay-aware control problems, namely the equivalent rate constraint approach, the Lyapunov stability drift approach and the approximate Markov Decision Process (MDP) approach using stochastic learning. These approaches embrace most of the existing literature on delay-aware resource control in wireless systems, and they have their relative pros and cons in terms of performance, complexity and implementation issues. For each of the approaches, the problem setup, the general solution and the design methodology are discussed. Applications of these approaches to delay-aware resource allocation are illustrated with examples in single-hop wireless networks. Furthermore, recent results on delay-aware multi-hop routing designs in general multi-hop networks are elaborated. Finally, the delay performance of the various approaches is compared through simulations using an example of uplink OFDMA systems. Comment: 58 pages, 8 figures; IEEE Transactions on Information Theory, 201
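    For the Lyapunov drift approach named above, the standard textbook formulation (whose notation may differ from the survey's own) uses a quadratic Lyapunov function of the queue backlogs and a drift-plus-penalty rule:

        % Standard quadratic Lyapunov function and one-slot conditional drift.
        \[
          L\bigl(\mathbf{Q}(t)\bigr) = \tfrac{1}{2}\sum_{k} Q_k(t)^2,
          \qquad
          \Delta\bigl(\mathbf{Q}(t)\bigr) =
          \mathbb{E}\!\left[ L\bigl(\mathbf{Q}(t+1)\bigr) - L\bigl(\mathbf{Q}(t)\bigr)
          \,\middle|\, \mathbf{Q}(t) \right].
        \]
        % At each slot the controller chooses the resource allocation minimizing
        \[
          \Delta\bigl(\mathbf{Q}(t)\bigr) + V\,
          \mathbb{E}\!\left[ P(t) \,\middle|\, \mathbf{Q}(t) \right],
        \]
        % where Q_k(t) is the backlog of queue k, P(t) the penalty (e.g. power)
        % and V >= 0 trades average delay against the penalty.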

    Performance Modelling and Measurements of TCP Transfer Throughput in 802.11-based WLANs

    The growing popularity of the 802.11 standard for building local wireless networks has generated an extensive literature on the performance modelling of its MAC protocol. However, most of the available studies focus on throughput analysis in saturation conditions, while very little has been done on investigating the interactions between the 802.11 MAC protocol and closed-loop transport protocols such as TCP. This paper addresses this issue by developing an analytical model to compute the stationary probability distribution of the number of backlogged nodes in a WLAN in the presence of persistent TCP-controlled download and upload data transfers. By embedding the network backlog distribution in the MAC protocol model, we can precisely estimate the throughput performance of TCP connections. A large set of experiments conducted in a real network validates the correctness of the model for a wide range of configurations. Particular emphasis is devoted to investigating and explaining the TCP fairness characteristics. Our analytical model and the supporting experimental outcomes demonstrate that using default settings for the capacity of devices' output queues provides a fair allocation of channel bandwidth to the TCP connections, independently of the number of downstream and upstream flows. Furthermore, we show that the total TCP throughput does not degrade as the number of wireless stations increases.
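    One way to express the embedding step described above (with notation assumed here rather than taken from the paper) is to average a per-contention-level MAC throughput over the stationary backlog distribution:

        % Assumed notation, not the paper's: average the MAC throughput obtained
        % with n contending stations over the stationary backlog distribution.
        \[
          \rho_{\mathrm{TCP}} = \sum_{n=0}^{N} \pi(n)\, \rho_{\mathrm{MAC}}(n),
        \]
        % where pi(n) is the stationary probability that n stations are backlogged
        % and rho_MAC(n) is the channel throughput delivered by the 802.11 MAC
        % when n stations contend for the medium.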