
    The Gross-Saccoman Conjecture is True

    Consider a graph with perfectly reliable nodes whose edges fail independently, each with probability ρ. The reliability of the graph is the probability that the resulting random graph is connected. A graph with n nodes and e edges is uniformly optimally reliable (UOR) if it has the greatest reliability among all graphs with the same number of nodes and edges, for all values of ρ. In 1997, Gross and Saccoman proved that the simple UOR graphs for e = n, e = n + 1 and e = n + 2 remain optimal when the classes are extended to include multigraphs [6]. The authors conjectured that the UOR simple graphs for e = n + 3 are optimal among multigraphs as well. A proof of the Gross-Saccoman conjecture is introduced.
    Funder: Agencia Nacional de Investigación e Innovación
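
    As a concrete illustration of the reliability notion above, here is a minimal brute-force sketch (not from the paper) that evaluates the all-terminal reliability of a small graph by summing over all surviving edge subsets; the example graph and failure probability are hypothetical.

        from itertools import combinations

        def all_terminal_reliability(nodes, edges, p_fail):
            """Probability that the graph stays connected when each edge
            fails independently with probability p_fail (brute force)."""
            def connected(edge_subset):
                # union-find over the surviving edges
                parent = {v: v for v in nodes}
                def find(v):
                    while parent[v] != v:
                        parent[v] = parent[parent[v]]
                        v = parent[v]
                    return v
                for u, v in edge_subset:
                    parent[find(u)] = find(v)
                return len({find(v) for v in nodes}) == 1

            reliability = 0.0
            m = len(edges)
            for k in range(m + 1):
                for surviving in combinations(edges, k):
                    if connected(surviving):
                        reliability += (1 - p_fail) ** k * p_fail ** (m - k)
            return reliability

        # Hypothetical example: n = 4 nodes, e = 5 edges (a cycle plus a chord)
        nodes = [1, 2, 3, 4]
        edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
        print(all_terminal_reliability(nodes, edges, p_fail=0.1))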

    Magic-State Functional Units: Mapping and Scheduling Multi-Level Distillation Circuits for Fault-Tolerant Quantum Architectures

    Quantum computers have recently made great strides and are on a long-term path towards useful fault-tolerant computation. A dominant overhead in fault-tolerant quantum computation is the production of high-fidelity encoded qubits, called magic states, which enable reliable error-corrected computation. We present the first detailed designs of hardware functional units that implement space-time optimized magic-state factories for surface code error-corrected machines. Interactions among distant qubits require surface code braids (physical pathways on chip) which must be routed. Magic-state factories are circuits composed of a complex set of braids that is more difficult to route than the quantum circuits considered in previous work [1]. This paper explores the impact of scheduling techniques, such as gate reordering and qubit renaming, and proposes two novel mapping techniques: braid repulsion and dipole moment braid rotation. We combine these techniques with graph partitioning and community detection algorithms, and further introduce a stitching algorithm for mapping subgraphs onto a physical machine. Our results show a factor of 5.64 reduction in space-time volume compared to the best-known previous designs for magic-state factories.
    Comment: 13 pages, 10 figures
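
    As a rough illustration of the graph-partitioning/community-detection step mentioned above (not the paper's actual mapper), the sketch below runs networkx community detection on a toy braid-interaction graph; all qubit names and edge weights are invented.

        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        # Hypothetical braid-interaction graph: vertices are logical qubits,
        # edge weights count how many braids join a pair of them.
        G = nx.Graph()
        G.add_weighted_edges_from([
            ("q0", "q1", 5), ("q1", "q2", 3), ("q2", "q0", 4),  # dense cluster
            ("q3", "q4", 6), ("q4", "q5", 2), ("q5", "q3", 5),  # dense cluster
            ("q2", "q3", 1),                                    # sparse cut
        ])

        # Keep heavily braided qubits together so their braids stay short
        # once the subgraphs are placed on the physical machine.
        for i, community in enumerate(greedy_modularity_communities(G, weight="weight")):
            print(f"tile {i}: {sorted(community)}")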

    Fault-Tolerant, but Paradoxical Path-Finding in Physical and Conceptual Systems

    We report our initial investigations into reliability- and path-finding-based models and propose future areas of interest. Inspired by broken sidewalks during on-campus construction projects, we develop two models for navigating such an "unreliable network." Both are based on the concept of "accumulating risk" backward from the destination, and both operate on directed acyclic graphs with a probability of failure associated with each edge. The first model serves as an introduction; its faults are addressed by the second, more conservative model. Next, we show a paradox that arises when these models are used to construct polynomials on conceptual networks, such as design processes and software development life cycles. When the risk of a network increases uniformly, the most reliable path changes from wider and longer to shorter and narrower. If we let professional inexperience--such as that of entry-level cooks and software developers--represent the probability of edge failure, does this change in path imply that the novice should follow instructions with fewer "back-up" plans, while instructions with alternative routes should be left to the expert?
    Comment: 8 pages
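
    A minimal sketch of the "accumulating risk backward" idea, under the assumption that the model keeps, at every node, the best probability of reaching the destination (the paper's actual models may differ); the example network is hypothetical.

        from graphlib import TopologicalSorter

        def best_reach_probability(edge_fail, dest):
            """edge_fail maps a directed edge (u, v) to its failure probability.
            Returns, for every node, the largest probability of reaching dest,
            accumulated backward from the destination over the DAG."""
            succ = {}
            for (u, v) in edge_fail:
                succ.setdefault(u, set()).add(v)
            # Treating successors as dependencies yields reverse topological order,
            # so each node is processed after everything downstream of it.
            order = TopologicalSorter(succ).static_order()
            best = {dest: 1.0}
            for u in order:
                if u == dest or u not in succ:
                    continue
                best[u] = max((1.0 - edge_fail[(u, v)]) * best.get(v, 0.0)
                              for v in succ[u])
            return best

        # Two routes from s to t: a longer, safer one and a short, riskier one.
        fail = {("s", "a"): 0.1, ("a", "t"): 0.1, ("s", "t"): 0.25}
        print(best_reach_probability(fail, dest="t"))  # best from s ~ 0.81, via a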

    Adaptive multiscale detection of filamentary structures in a background of uniform random points

    We are given a set of $n$ points that might be uniformly distributed in the unit square $[0,1]^2$. We wish to test whether the set, although mostly consisting of uniformly scattered points, also contains a small fraction of points sampled from some (a priori unknown) curve with $C^{\alpha}$-norm bounded by $\beta$. An asymptotic detection threshold exists in this problem; for a constant $T_-(\alpha,\beta)>0$, if the number of points sampled from the curve is smaller than $T_-(\alpha,\beta)\,n^{1/(1+\alpha)}$, reliable detection is not possible for large $n$. We describe a multiscale significant-runs algorithm that can reliably detect concentration of data near a smooth curve, without knowing the smoothness information $\alpha$ or $\beta$ in advance, provided that the number of points on the curve exceeds $T_*(\alpha,\beta)\,n^{1/(1+\alpha)}$. This algorithm therefore has an optimal detection threshold, up to a factor $T_*/T_-$. At the heart of our approach is an analysis of the data by counting membership in multiscale multianisotropic strips. The strips will have area $2/n$ and exhibit a variety of lengths, orientations and anisotropies. The strips are partitioned into anisotropy classes; each class is organized as a directed graph whose vertices all are strips of the same anisotropy and whose edges link such strips to their "good continuations." The point-cloud data are reduced to counts that measure membership in strips. Each anisotropy graph is reduced to a subgraph that consists of strips with significant counts. The algorithm rejects $\mathbf{H}_0$ whenever some such subgraph contains a path that connects many consecutive significant counts.
    Comment: Published at http://dx.doi.org/10.1214/009053605000000787 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
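
    The following toy sketch (a simplification, not the paper's multiscale procedure) counts how many points fall inside a single strip of area 2/n and flags it when the count is far above the uniform-background expectation; the strip parameters and threshold are hypothetical.

        import numpy as np

        def strip_count(points, center, angle, length, n, z=3.0):
            """Count points in a strip of area 2/n (so width = (2/n)/length)
            centered at `center` and rotated by `angle`; flag significance
            when the count exceeds the uniform expectation by z sigmas."""
            width = (2.0 / n) / length
            c, s = np.cos(angle), np.sin(angle)
            rel = points - center
            along = rel @ np.array([c, s])      # coordinate along the strip
            across = rel @ np.array([-s, c])    # coordinate across the strip
            count = int(np.sum((np.abs(along) <= length / 2)
                               & (np.abs(across) <= width / 2)))
            expected = 2.0                      # n * (2/n) points under uniformity
            return count, count > expected + z * np.sqrt(expected)

        rng = np.random.default_rng(0)
        n = 5000
        pts = rng.uniform(size=(n, 2))          # uniform background only
        print(strip_count(pts, center=np.array([0.5, 0.5]),
                          angle=0.3, length=0.2, n=n))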

    Decentralized Erasure Codes for Distributed Networked Storage

    We consider the problem of constructing an erasure code for storage over a network when the data sources are distributed. Specifically, we assume that there are n storage nodes with limited memory and k < n sources generating the data. We want a data collector, who can appear anywhere in the network, to query any k storage nodes and be able to retrieve the data. We introduce Decentralized Erasure Codes, which are linear codes with a specific randomized structure inspired by network coding on random bipartite graphs. We show that decentralized erasure codes are optimally sparse, and lead to reduced communication, storage and computation cost compared with random linear coding.
    Comment: to appear in IEEE Transactions on Information Theory, Special Issue: Networking and Information Theory
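
    A dense random-linear-coding sketch of the "query any k of n nodes" property (decentralized erasure codes are additionally sparse; that structure is not reproduced here). Arithmetic is over a hypothetical prime field GF(P), and decoding succeeds with high probability because a random k x k coefficient matrix over GF(P) is almost surely invertible.

        import numpy as np

        P = 7919  # a prime; all arithmetic is over GF(P)
        rng = np.random.default_rng(1)

        def encode(data, n):
            """Each of n storage nodes keeps a random linear combination of the
            k source symbols (coefficients drawn uniformly from GF(P))."""
            k = len(data)
            coeffs = rng.integers(0, P, size=(n, k))
            stored = coeffs @ data % P
            return coeffs, stored

        def decode(coeffs, stored):
            """Recover the k source symbols from any k stored symbols by solving
            the k x k linear system over GF(P) with Gaussian elimination."""
            A, b = coeffs.copy() % P, stored.copy() % P
            k = len(b)
            for col in range(k):
                pivot = next(r for r in range(col, k) if A[r, col] != 0)
                A[[col, pivot]], b[[col, pivot]] = A[[pivot, col]], b[[pivot, col]]
                inv = pow(int(A[col, col]), P - 2, P)      # modular inverse
                A[col], b[col] = A[col] * inv % P, b[col] * inv % P
                for r in range(k):
                    if r != col and A[r, col]:
                        factor = A[r, col]
                        A[r] = (A[r] - factor * A[col]) % P
                        b[r] = (b[r] - factor * b[col]) % P
            return b

        data = np.array([42, 7, 1000])                 # k = 3 source symbols
        coeffs, stored = encode(data, n=6)             # n = 6 storage nodes
        picked = rng.choice(6, size=3, replace=False)  # collector queries any 3 nodes
        print(decode(coeffs[picked], stored[picked]))  # recovers [42 7 1000] w.h.p.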