
    Dynamic Server Allocation at Parallel Queues

    We explore whether dynamically reassigning servers to parallel queues in response to queue imbalances can reduce average waiting time in those queues. We use approximate dynamic programming methods to determine when servers should be switched, and we compare the performance of such dynamic allocations to that of a pre-scheduled deterministic allocation. Testing our method on both synthetic data and data from airport security checkpoints at Boston Logan International Airport, we find that in situations where the uncertainty in customer arrival rates is significant, dynamically reallocating servers can substantially reduce waiting time. Moreover, we find that intuitive switching strategies that are optimal for queues with homogeneous entry rates are not optimal in this setting.
    Keywords: control of queues, fluid queues, approximate dynamic programming, dynamic server allocation, workforce management
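
    The sketch below is not the paper's approximate dynamic programming policy; it is a rough Python illustration of the setting, simulating two parallel queues that share a pool of servers and comparing a fixed split against a naive threshold rule for switching a server toward the longer queue. The arrival rates, service rate, threshold, and the policy itself are all placeholder assumptions.

```python
import random

def simulate(policy, lam=(0.8, 1.4), mu=0.5, servers=6, steps=20000, seed=0):
    rng = random.Random(seed)
    q = [0, 0]                                 # customers present at each queue
    alloc = [servers // 2, servers - servers // 2]
    area = 0                                   # running sum of queue lengths
    for _ in range(steps):
        if policy == "dynamic":
            gap = q[0] - q[1]
            if gap > 5 and alloc[1] > 1:       # queue 0 much longer: pull a server over
                alloc = [alloc[0] + 1, alloc[1] - 1]
            elif gap < -5 and alloc[0] > 1:    # queue 1 much longer
                alloc = [alloc[0] - 1, alloc[1] + 1]
        for i in range(2):
            if rng.random() < lam[i] * 0.5:    # Bernoulli arrival per half-unit time step
                q[i] += 1
            busy = min(alloc[i], q[i])
            q[i] -= sum(rng.random() < mu * 0.5 for _ in range(busy))  # service completions
            area += q[i]
    return area / steps                        # average number in system (proxy for delay)

print("static :", simulate("static"))
print("dynamic:", simulate("dynamic"))
```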

    Pairing Software-Managed Caching with Decay Techniques to Balance Reliability and Static Power in Next-Generation Caches

    Since array structures represent well over half the area and transistors on-chip, maintaining their ability to scale is crucial for overall technology scaling. Shrinking transistor sizes are resulting in increased probabilities of single events causing single- and multi-bit upsets, which require the adoption of more complex and power-hungry error detection and correction codes (ECC) in hardware. At the same time, SRAM leakage energy is increasing, partly due to technology trends and partly due to the increasing number of transistors present. This paper proposes and evaluates methods of reducing the static power requirements of caches while also maintaining high reliability. In particular, we propose methods of applying reduced ECC techniques to data that has been identified (by the programmer or compiler) as error-tolerant. This segregation, in turn, makes both the default data and the error-tolerant data more amenable to decay-based techniques for leakage control. We examine the potential of this split memory hierarchy along several dimensions. In particular, we consider the power and reliability issues inherent in the approach. Overall, we show that our approach allows the ECC requirements of future applications and caches to be met while also reducing leakage energy.
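
    The following back-of-the-envelope sketch only illustrates the kind of trade-off described above; it is not the paper's model or its results. It assumes an error-tolerant partition protected by parity instead of full SECDED and decayed more aggressively, and every size, ECC width, and awake fraction below is a made-up placeholder.

```python
# Relative leakage of a cache whose error-tolerant partition uses lighter ECC
# and a more aggressive decay interval, so fewer of its lines stay powered on.
def leakage(lines, bits_per_line, ecc_bits, awake_fraction, leak_per_bit=1.0):
    """Relative leakage: only lines kept awake leak, and their ECC bits leak too."""
    return lines * (bits_per_line + ecc_bits) * awake_fraction * leak_per_bit

LINES, BITS = 1024, 512
baseline = leakage(LINES, BITS, ecc_bits=11, awake_fraction=1.0)   # SECDED, no decay

# assumed split: 70% default data (full ECC, mild decay) and 30% error-tolerant
# data (parity only, aggressive decay keeps just 40% of its lines awake)
split = (leakage(int(0.7 * LINES), BITS, ecc_bits=11, awake_fraction=0.8) +
         leakage(int(0.3 * LINES), BITS, ecc_bits=1,  awake_fraction=0.4))

print(f"relative leakage, split vs. baseline: {split / baseline:.2f}")
```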

    QDB: From Quantum Algorithms Towards Correct Quantum Programs

    With the advent of small-scale prototype quantum computers, researchers can now code and run quantum algorithms that were previously proposed but not fully implemented. In support of this growing interest in quantum computing experimentation, programmers need new tools and techniques to write and debug QC code. In this work, we implement a range of QC algorithms and programs in order to discover what types of bugs occur and what defenses against those bugs are possible in QC programs. We conduct our study by running small QC programs in QC simulators in order to replicate published results in QC implementations. Where possible, we cross-validate results from programs written in different QC languages for the same problems and inputs. Drawing on this experience, we provide a taxonomy for QC bugs, and we propose QC language features that would aid in writing correct code.

    Noise-Adaptive Compiler Mappings for Noisy Intermediate-Scale Quantum Computers

    A massive gap exists between current quantum computing (QC) prototypes and the size and scale required for many proposed QC algorithms. Current QC implementations are prone to noise and variability, which affect their reliability, and yet with fewer than 80 quantum bits (qubits) in total, they are too resource-constrained to implement error correction. The term Noisy Intermediate-Scale Quantum (NISQ) refers to these current and near-term systems of 1,000 qubits or fewer. Given NISQ's severe resource constraints, low reliability, and high variability in physical characteristics such as coherence time or error rates, it is of pressing importance to map computations onto them in ways that use resources efficiently and maximize the likelihood of successful runs. This paper proposes and evaluates backend compiler approaches to map and optimize high-level QC programs to execute with high reliability on NISQ systems with diverse hardware characteristics. Our techniques all start from an LLVM intermediate representation of the quantum program (such as would be generated from high-level QC languages like Scaffold) and generate QC executables runnable on the IBM Q public QC machine. We then use this framework to implement and evaluate several optimal and heuristic mapping methods. These methods vary in how they account for the availability of dynamic machine calibration data, the relative importance of various noise parameters, the different possible routing strategies, and the relative importance of compile-time scalability versus runtime success. Using real-system measurements, we show that fine-grained spatial and temporal variations in hardware parameters can be exploited to obtain an average 2.9x (and up to 18x) improvement in program success rate over the industry-standard IBM Qiskit compiler. Comment: To appear in ASPLOS'1
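
    As a minimal illustration of the general idea (not the paper's optimal or heuristic mappers, and not the Qiskit or Scaffold toolchains), the sketch below brute-forces the placement of a tiny program's logical qubits onto physical qubits so that the estimated success probability, derived from assumed per-edge calibration data, is maximized.

```python
from itertools import permutations

# assumed calibration data: two-qubit error rate for each coupled physical pair
edge_error = {(0, 1): 0.02, (1, 2): 0.05, (2, 3): 0.01, (0, 3): 0.08}

def success(rates, mapping, program):
    """Estimated success probability of the program's two-qubit gates under a mapping."""
    p = 1.0
    for a, b in program:
        pair = tuple(sorted((mapping[a], mapping[b])))
        if pair not in rates:
            return 0.0          # logical pair not placed on coupled qubits: reject mapping
        p *= 1.0 - rates[pair]
    return p

program = [("q0", "q1"), ("q1", "q2"), ("q0", "q1")]   # toy circuit: two-qubit gates only
logical = ["q0", "q1", "q2"]

# brute force every placement of the three logical qubits onto four physical qubits
best = max(
    (dict(zip(logical, phys)) for phys in permutations(range(4), len(logical))),
    key=lambda m: success(edge_error, m, program),
)
print(best, success(edge_error, best, program))
```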

    How Effective Is Security Screening of Airline Passengers?

    With a simple mathematical model, we explored the antiterrorist effectiveness of airport passenger prescreening systems. Supporters of these systems often emphasize the need to identify the most suspicious passengers, but they ignore the point that such identification does little good unless dangerous items can actually be detected. Critics often focus on terrorists' ability to probe the system and thereby thwart it, but ignore the possibility that the very act of probing can deter attempts at sabotage that would have succeeded. Using the model to make some preliminary assessments about security policy, we find that an improved baseline level of screening for all passengers might lower the likelihood of attack more than would improved profiling of high-risk passengers.
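
    The short sketch below is a toy stand-in for the kind of model the abstract describes, not the authors' formulation: it computes the chance an attacker slips through when every passenger faces a baseline check and a flagged fraction also faces secondary screening. All probabilities are placeholders, and the paper's actual comparison rests on its own model and parameters.

```python
# A toy prescreening model: the attacker must escape the baseline check and,
# if flagged for secondary screening, must escape that check as well.
def p_slip_through(p_detect_baseline, p_flagged, p_detect_secondary):
    escape_baseline = 1.0 - p_detect_baseline
    escape_secondary = (1.0 - p_flagged) + p_flagged * (1.0 - p_detect_secondary)
    return escape_baseline * escape_secondary

# the policy question amounts to comparing the sensitivity of this quantity to
# p_detect_baseline (screen everyone better) versus p_flagged (profile better)
print(p_slip_through(p_detect_baseline=0.4, p_flagged=0.2, p_detect_secondary=0.8))
```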

    Magic-State Functional Units: Mapping and Scheduling Multi-Level Distillation Circuits for Fault-Tolerant Quantum Architectures

    Quantum computers have recently made great strides and are on a long-term path towards useful fault-tolerant computation. A dominant overhead in fault-tolerant quantum computation is the production of high-fidelity encoded qubits, called magic states, which enable reliable error-corrected computation. We present the first detailed designs of hardware functional units that implement space-time optimized magic-state factories for surface code error-corrected machines. Interactions among distant qubits require surface code braids (physical pathways on chip), which must be routed. Magic-state factories are circuits composed of a complex set of braids that is more difficult to route than the quantum circuits considered in previous work [1]. This paper explores the impact of scheduling techniques, such as gate reordering and qubit renaming, and we propose two novel mapping techniques: braid repulsion and dipole moment braid rotation. We combine these techniques with graph partitioning and community detection algorithms, and further introduce a stitching algorithm for mapping subgraphs onto a physical machine. Our results show a factor of 5.64 reduction in space-time volume compared to the best-known previous designs for magic-state factories. Comment: 13 pages, 10 figures
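
    As a generic sketch of just one ingredient mentioned above (community detection on an interaction graph prior to placement), and not the paper's mapper or its braid-specific techniques, the snippet below groups heavily interacting qubits with networkx so each group could be placed in a contiguous region; the example graph is made up.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# nodes are qubits; edge weights count how often two qubits interact (braid)
g = nx.Graph()
g.add_weighted_edges_from([
    ("a", "b", 5), ("b", "c", 4), ("a", "c", 3),   # tightly coupled cluster
    ("d", "e", 6), ("e", "f", 2),                  # second cluster
    ("c", "d", 1),                                 # weak link between the clusters
])

# qubits that interact heavily land in the same community, so each community
# can be placed in one contiguous region to keep its braids short
for i, community in enumerate(greedy_modularity_communities(g, weight="weight")):
    print(f"region {i}: {sorted(community)}")
```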

    Detecting Covert Members of Terrorist Networks

    Terrorism threatens international peace and security and is a national concern. It is believed that terrorist organizations rely heavily on a few key leaders and that destroying such an organization's leadership is essential to reducing its influence. Martonosi et al. (2011) argue that increasing the amount of communication through a key leader increases the likelihood of detection. Modeling a covert organization as a social network whose edges represent communication between members, we want to determine the subset of members whose removal maximizes the amount of communication through the key leader. We present a mixed-integer linear program representing this problem, as well as a decomposition of this optimization problem. As these approaches prove impractical for larger graphs, often running out of memory, the last section focuses on structural characteristics of vertices and subsets that increase communication. Future work should develop these structural properties as well as heuristics for solving this problem.
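
    The brute-force baseline below is not the paper's mixed-integer linear program or its decomposition; it only illustrates the underlying question by exhaustively searching, on a small stand-in network, for the subset of non-leader members whose removal most increases the shortest-path communication routed through an assumed key leader. The graph, leader, and removal budget are all placeholders.

```python
from itertools import combinations
import networkx as nx

g = nx.karate_club_graph()          # stand-in for a covert communication network
leader = 33                         # assumed key leader
budget = 2                          # how many members we may remove

def leader_centrality(graph):
    """Fraction of shortest-path communication passing through the leader."""
    return nx.betweenness_centrality(graph)[leader]

candidates = [v for v in g if v != leader]
best = max(
    combinations(candidates, budget),
    key=lambda subset: leader_centrality(g.subgraph(set(g) - set(subset))),
)
print("remove", best, "->", leader_centrality(g.subgraph(set(g) - set(best))))
```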