
    Complexity, parallel computation and statistical physics

    The intuition that a long history is required for the emergence of complexity in natural systems is formalized using the notion of depth. The depth of a system is defined in terms of the number of parallel computational steps needed to simulate it. Depth provides an objective, irreducible measure of history applicable to systems of the kind studied in statistical physics. It is argued that physical complexity cannot occur in the absence of substantial depth and that depth is a useful proxy for physical complexity. The ideas are illustrated for a variety of systems in statistical physics. Comment: 21 pages, 7 figures
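    Since depth counts the parallel update steps needed to reproduce a system's state, a minimal sketch can make the notion concrete. The toy below simulates a 1D elementary cellular automaton in which every cell is updated simultaneously each generation, so the parallel depth grows with the number of generations T and not with the lattice width; the rule number and sizes are arbitrary choices for illustration, not examples taken from the paper.

        import numpy as np

        def step(cells: np.ndarray, rule: int = 110) -> np.ndarray:
            """One fully parallel update of a 1D elementary CA (periodic boundary)."""
            left, right = np.roll(cells, 1), np.roll(cells, -1)
            idx = 4 * left + 2 * cells + right      # neighborhood code 0..7
            table = (rule >> np.arange(8)) & 1      # rule's output lookup table
            return table[idx]

        # Depth ~ sequential parallel steps: T generations cost T steps
        # on a parallel machine, regardless of how wide the lattice is.
        cells = np.zeros(64, dtype=int)
        cells[32] = 1                               # single seed cell
        T = 100
        for _ in range(T):
            cells = step(cells)
        print(f"parallel depth ~ {T} steps for a width-{len(cells)} system")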

    Introduction to Quantum Information Processing

    As a result of the capabilities of quantum information, the science of quantum information processing is now a prospering, interdisciplinary field focused on better understanding the possibilities and limitations of the underlying theory, on developing new applications of quantum information, and on physically realizing controllable quantum devices. The purpose of this primer is to provide an elementary introduction to quantum information processing, and then to briefly explain how we hope to exploit the advantages of quantum information. These two sections can be read independently. For reference, we have included a glossary of the main terms of quantum information. Comment: 48 pages, to appear in LA Science. Hyperlinked PDF at http://www.c3.lanl.gov/~knill/qip/prhtml/prpdf.pdf, HTML at http://www.c3.lanl.gov/~knill/qip/prhtm

    Concrete resource analysis of the quantum linear system algorithm used to compute the electromagnetic scattering cross section of a 2D target

    We provide a detailed estimate of the logical resource requirements of the quantum linear system algorithm (QLSA) [Phys. Rev. Lett. 103, 150502 (2009)], including the recently described elaborations [Phys. Rev. Lett. 110, 250504 (2013)]. Our resource estimates are based on the standard quantum-circuit model of quantum computation; they comprise circuit width, circuit depth, the number of qubits and ancilla qubits employed, and the overall number of elementary quantum gate operations, as well as more specific gate counts for each elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}. To perform these estimates, we used an approach that combines manual analysis with automated estimates generated via the Quipper quantum programming language and compiler. Our estimates pertain to the example problem size N = 332,020,680, beyond which, according to a crude big-O complexity comparison, QLSA is expected to run faster than the best known classical linear-system solving algorithm. For this problem size, a desired calculation accuracy of 0.01 requires an approximate circuit width of 340 and a circuit depth of order 10^{25} if oracle costs are excluded, and a circuit width and depth of order 10^8 and 10^{29}, respectively, if oracle costs are included, indicating that the commonly ignored oracle resources are considerable. In addition to providing detailed logical resource estimates, the purpose of this paper is to demonstrate explicitly how these impressively large numbers arise in an actual circuit implementation of a quantum algorithm. While our estimates may prove to be conservative as more efficient advanced quantum-computation techniques are developed, they nevertheless provide a valid baseline for research targeting a reduction of the resource requirements, implying that a reduction by many orders of magnitude is necessary for the algorithm to become practical. Comment: 37 pages, 40 figures
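    The bookkeeping behind such estimates can be sketched in a few lines: walk the circuit, tally per-gate counts over the fault-tolerant set {X, Y, Z, H, S, T, CNOT}, take width as the number of qubits touched, and take depth from an as-soon-as-possible layering in which each gate starts one layer after the latest gate sharing any of its qubits. The toy circuit below is an assumption for illustration; it is not the paper's Quipper-generated QLSA circuit.

        from collections import Counter

        # A circuit as (gate_name, qubit_indices) pairs in program order.
        circuit = [
            ("H", (0,)), ("CNOT", (0, 1)), ("T", (1,)),
            ("CNOT", (1, 2)), ("S", (2,)), ("H", (2,)),
        ]

        gate_counts = Counter(name for name, _ in circuit)
        width = 1 + max(q for _, qubits in circuit for q in qubits)

        # ASAP layering: each gate lands one layer after the latest
        # layer already occupied on any of its qubits.
        next_free = [0] * width
        depth = 0
        for _, qubits in circuit:
            layer = max(next_free[q] for q in qubits) + 1
            for q in qubits:
                next_free[q] = layer
            depth = max(depth, layer)

        print(f"width={width} depth={depth} counts={dict(gate_counts)}")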

    Advanced software techniques for space shuttle data management systems: Final report

    Airborne/spaceborne computer design and techniques for the space shuttle data management system.

    Identifying Security-Critical Cyber-Physical Components in Industrial Control Systems

    In recent years, Industrial Control Systems (ICS) have become an appealing target for cyber attacks with massive destructive consequences. Security metrics are therefore essential to assess their security posture. In this paper, we present a novel ICS security metric based on AND/OR graphs that represent cyber-physical dependencies among network components. Our metric efficiently identifies sets of critical cyber-physical components, of minimal cost to an attacker, such that if compromised they would put the system into a non-operational state. We address this problem by efficiently transforming the input AND/OR graph-based model into a weighted logical formula that is then used to build and solve a Weighted Partial MAX-SAT problem. Our tool, META4ICS, leverages state-of-the-art techniques from the field of logical satisfiability optimisation in order to achieve efficient computation times. Our experimental results indicate that the proposed security metric can efficiently scale to networks with thousands of nodes and be computed in seconds. In addition, we present a case study in which we used our system to analyse the security posture of a realistic water transport network. We discuss our findings on the plant as well as further security applications of our metric. Comment: Keywords: Security metrics, industrial control systems, cyber-physical systems, AND-OR graphs, MAX-SAT resolution
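    The core reduction can be illustrated in miniature: AND/OR dependencies determine when the plant stays operational, each component carries a compromise cost, and the metric is the cheapest set of components whose loss makes the plant non-operational. The three-component plant and brute-force search below are assumptions for illustration; META4ICS encodes the same objective as a Weighted Partial MAX-SAT instance and hands it to a real solver rather than enumerating subsets.

        from itertools import combinations

        # Hypothetical components and attacker costs to compromise each.
        cost = {"sensor": 6, "pump1": 2, "pump2": 3}

        def operational(down: set) -> bool:
            """AND/OR dependency: the plant runs iff the sensor works
            AND at least one of the two redundant pumps works."""
            return "sensor" not in down and not {"pump1", "pump2"} <= down

        # Minimal-cost critical set: the Weighted Partial MAX-SAT objective
        # in miniature, solved here by exhaustive search over subsets.
        best = None
        names = list(cost)
        for r in range(1, len(names) + 1):
            for subset in combinations(names, r):
                if not operational(set(subset)):
                    total = sum(cost[c] for c in subset)
                    if best is None or total < best[0]:
                        best = (total, subset)

        print(f"critical set {best[1]} at cost {best[0]}")
        # -> critical set ('pump1', 'pump2') at cost 5: cheaper for the
        #    attacker than taking out the sensor directly.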

    Training-free Measures Based on Algorithmic Probability Identify High Nucleosome Occupancy in DNA Sequences

    We introduce and study a set of training-free methods, of information-theoretic and algorithmic-complexity nature, applied to DNA sequences to identify their potential to determine nucleosomal binding sites. We test our measures on well-studied genomic sequences of different sizes drawn from different sources. The measures reveal the known in vivo versus in vitro predictive discrepancies and uncover their potential to pinpoint (high) nucleosome occupancy. We explore different possible signals within and beyond the nucleosome length and find that complexity indices are informative of nucleosome occupancy. We compare against the gold standard (Kaplan model) and find similar and complementary results, with the main difference that our sequence-complexity approach requires no training. For example, for high occupancy, complexity-based scores outperform the Kaplan model at predicting binding, representing a significant advance in predicting the highest nucleosome occupancy with a training-free approach. Comment: 8 pages main text (4 figures), 12 total with Supplementary (1 figure)
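    As a crude illustration of a training-free, complexity-style score (a stand-in, not the paper's algorithmic-probability measures), the sketch below slides a nucleosome-length window along a DNA sequence and uses compressed size per base as the complexity index; the 147 bp window matches the canonical nucleosome length, while the toy sequence, the step size, and the use of zlib are assumptions made for the example.

        import random
        import zlib

        def complexity_score(seq: str) -> float:
            """Crude complexity index: compressed bytes per input byte."""
            raw = seq.encode()
            return len(zlib.compress(raw, 9)) / len(raw)

        def windowed_scores(dna: str, window: int = 147, step: int = 20):
            """Score nucleosome-length windows along the sequence."""
            return [(i, complexity_score(dna[i:i + window]))
                    for i in range(0, len(dna) - window + 1, step)]

        random.seed(0)
        varied = "".join(random.choice("ACGT") for _ in range(300))
        dna = varied + "A" * 300          # varied half, then repetitive half
        for pos, score in windowed_scores(dna):
            print(f"window@{pos:4d}: complexity {score:.2f}")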