
    Energy Scaling of Minimum-Bias Tunes

    We propose that the flexibility offered by modern event-generator tuning tools allows for more than just obtaining "best fits" to a collection of data. In particular, we argue that the universality of the underlying physics model can be tested by performing several mutually independent optimizations of the generator parameters in different physical regions. For regions in which these optimizations return similar and self-consistent parameter values, the model can be considered universal. Deviations from this behavior can be associated with a breakdown of the modeling, with the nature of the deviations giving clues as to the nature of the breakdown. We apply this procedure to study the energy scaling of a class of minimum-bias models based on multiple parton interactions (MPI) and pT-ordered showers, implemented in the Pythia 6.4 generator. We find that a parameter controlling the strength of color reconnections in the final state is the most important source of non-universality in this model. Comment: 17 pages, 3 figures, 4 tables.
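    The universality test described in this abstract can be illustrated with a toy example: fit the same one-parameter model independently in two data regions and check whether the best-fit values agree. This is a minimal sketch on synthetic data, not the Pythia 6.4 tuning setup; all names, data, and tolerances below are illustrative assumptions.

    ```python
    import numpy as np

    def fit_slope(x, y):
        """Least-squares slope of the one-parameter model y = a*x."""
        return float(np.dot(x, y) / np.dot(x, x))

    # Two mock "physical regions" generated from the same underlying slope a = 2.0.
    rng = np.random.default_rng(0)
    x1 = np.linspace(1.0, 5.0, 50)
    x2 = np.linspace(5.0, 10.0, 50)
    y1 = 2.0 * x1 + rng.normal(0.0, 0.1, x1.size)
    y2 = 2.0 * x2 + rng.normal(0.0, 0.1, x2.size)

    # Mutually independent optimizations, one per region.
    a1, a2 = fit_slope(x1, y1), fit_slope(x2, y2)

    # If the independent fits return consistent parameter values, the model
    # is "universal" across the regions; a large discrepancy would signal a
    # breakdown of the modeling.
    universal = abs(a1 - a2) < 0.05
    ```

    In the real analysis each "fit" is a full generator tune and the comparison spans many parameters, but the logic is the same: disagreement between independently optimized parameter values localizes where the model fails.
    
    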

    Domestic energy management methodology for optimizing efficiency in Smart Grids

    Rising energy prices and the greenhouse effect are increasing awareness of the energy efficiency of electricity supply. In recent years, many domestic technologies have been developed to improve this efficiency. These technologies already improve efficiency on their own, but more can be gained through combined management. Multiple optimization objectives can be used to improve efficiency, ranging from peak shaving and Virtual Power Plant (VPP) operation to adapting to the fluctuating generation of wind turbines. In this paper a generic management methodology is proposed that is applicable to most domestic technologies, scenarios and optimization objectives. Both local-scale optimization objectives (a single house) and global-scale optimization objectives (multiple houses) can be used. Simulations of different scenarios show that both local and global objectives can be reached.
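    One of the optimization objectives named in this abstract, peak shaving, can be sketched in a few lines: shiftable appliance runs are greedily scheduled into the hours with the lowest current load so the daily peak is not raised. The load profile and appliance list below are illustrative assumptions, not the paper's methodology.

    ```python
    # Hourly base load of one house in kW (illustrative values).
    base_load = [3.0, 2.5, 2.0, 2.0, 3.5, 5.0, 6.5, 6.0]
    # Shiftable appliance runs (kW), each occupying one hour slot.
    shiftable = [1.5, 1.0, 1.0]

    load = list(base_load)
    for appliance in shiftable:
        # Greedily place each run in the currently emptiest hour.
        slot = min(range(len(load)), key=lambda h: load[h])
        load[slot] += appliance

    peak_before = max(base_load)
    peak_after = max(load)
    # Peak shaving succeeds when the shifted loads fill the valleys
    # without raising the peak.
    ```

    A real domestic energy manager would also respect appliance deadlines, multi-hour runs, and global (multi-house) objectives, but the valley-filling idea is the same.
    
    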

    Multicore-aware parallel temporal blocking of stencil codes for shared and distributed memory

    New algorithms and optimization techniques are needed to balance the accelerating trend towards bandwidth-starved multicore chips. It is well known that the performance of stencil codes can be improved by temporal blocking, lessening the pressure on the memory interface. We introduce a new pipelined approach that makes explicit use of shared caches in multicore environments and minimizes synchronization and boundary overhead. For clusters of shared-memory nodes we demonstrate how temporal blocking can be employed successfully in a hybrid shared/distributed-memory environment. Comment: 9 pages, 6 figures.
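    The core idea of temporal blocking can be shown on a 1D Jacobi stencil: instead of sweeping the whole array once per time step, each tile (plus a halo) is advanced several time steps while it is still cache-resident. This is a minimal overlap-style sketch, not the paper's pipelined multicore scheme; the stencil, sizes, and block width are illustrative.

    ```python
    import numpy as np

    def sweep(a):
        """One Jacobi sweep of a 3-point averaging stencil; boundaries stay fixed."""
        b = a.copy()
        b[1:-1] = (a[:-2] + a[1:-1] + a[2:]) / 3.0
        return b

    def naive(a, steps):
        """Reference version: one full-array sweep per time step."""
        for _ in range(steps):
            a = sweep(a)
        return a

    def temporally_blocked(a, steps, block=8):
        """Overlap-style temporal blocking: each tile is advanced `steps`
        time steps at once using a halo of width `steps`, so only points
        far enough from the tile edge are written back."""
        out = a.copy()
        n = a.size
        for start in range(1, n - 1, block):
            stop = min(start + block, n - 1)
            lo, hi = max(start - steps, 0), min(stop + steps, n)
            tile = a[lo:hi].copy()
            for _ in range(steps):
                tile = sweep(tile)
            out[start:stop] = tile[start - lo:stop - lo]
        return out

    a = np.sin(np.arange(32) * 0.3)
    reference = naive(a, 3)
    blocked = temporally_blocked(a, 3, block=8)
    # Both orderings compute the same result; the blocked version reuses
    # each tile across all three sweeps while it is still in cache.
    ```

    The halo work is the "boundary overhead" the abstract refers to; the paper's pipelined approach instead passes intermediate time levels between cores sharing a cache to avoid recomputing halos.
    
    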

    Comparing the Overhead of Topological and Concatenated Quantum Error Correction

    This work compares the overhead of quantum error correction with concatenated and topological quantum error-correcting codes. To perform a numerical analysis, we use the Quantum Resource Estimator Toolbox (QuRE) that we recently developed. We use QuRE to estimate the number of qubits, quantum gates, and amount of time needed to factor a 1024-bit number on several candidate quantum technologies that differ in their clock speed and reliability. We make several interesting observations. First, topological quantum error correction requires fewer resources when physical gate error rates are high, while concatenated codes have smaller overhead for physical gate error rates below approximately 10^-7. Consequently, we show that different error-correcting codes should be chosen for two of the studied physical quantum technologies - ion traps and superconducting qubits. Second, we observe that the composition of the elementary gate types occurring in a typical logical circuit, a fault-tolerant circuit protected by the surface code, and a fault-tolerant circuit protected by a concatenated code all differ. This also suggests that choosing the most appropriate error correction technique depends on the ability of the future technology to perform specific gates efficiently.
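    The crossover this abstract reports can be illustrated with textbook scaling approximations rather than QuRE's actual cost model: a level-l concatenated code suppresses errors as p_th*(p/p_th)^(2^l), a distance-d surface code roughly as p_th*(p/p_th)^((d+1)/2). The thresholds, target rate, and qubit-count formulas below are assumptions chosen for illustration only.

    ```python
    def concatenated_qubits(p, target, p_th=1e-4):
        """Physical qubits per logical qubit for a concatenated [[7,1,3]]
        code, using the textbook scaling p_th * (p/p_th)**(2**level)."""
        if p >= p_th:
            raise ValueError("physical error rate above code threshold")
        level, logical = 0, p
        while logical > target:
            level += 1
            logical = p_th * (p / p_th) ** (2 ** level)
        return 7 ** level  # 7 qubits per concatenation level

    def surface_qubits(p, target, p_th=1e-2):
        """Rough physical qubits per logical qubit for a distance-d surface
        code, using the scaling p_th * (p/p_th)**((d+1)//2)."""
        if p >= p_th:
            raise ValueError("physical error rate above code threshold")
        d = 3
        while p_th * (p / p_th) ** ((d + 1) // 2) > target:
            d += 2
        return 2 * d * d  # data plus measurement qubits, roughly

    # At a very low physical error rate the concatenated code is already
    # cheaper, while at a high error rate only the surface code (with its
    # much higher threshold) remains usable, echoing the crossover above.
    low_p = 1e-8
    cheaper_at_low_p = (concatenated_qubits(low_p, 1e-15)
                        < surface_qubits(low_p, 1e-15))
    ```

    QuRE's actual estimates additionally account for gate counts, clock speeds, and the per-technology gate sets mentioned in the abstract's second observation.
    
    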

    Tupleware: Redefining Modern Analytics

    There is a fundamental discrepancy between the targeted and actual users of current analytics frameworks. Most systems are designed for the data and infrastructure of the Googles and Facebooks of the world: petabytes of data distributed across large cloud deployments consisting of thousands of cheap commodity machines. Yet, the vast majority of users operate clusters ranging from a few to a few dozen nodes, analyze relatively small datasets of up to a few terabytes, and perform primarily compute-intensive operations. Targeting these users fundamentally changes the way we should build analytics systems. This paper describes the design of Tupleware, a new system specifically aimed at the challenges faced by the typical user. Tupleware's architecture brings together ideas from the database, compiler, and programming languages communities to create a powerful end-to-end solution for data analysis. We propose novel techniques that consider the data, computations, and hardware together to achieve maximum performance on a case-by-case basis. Our experimental evaluation quantifies the impact of our novel techniques and shows orders of magnitude performance improvement over alternative systems.