48 research outputs found

    Locking classical information

    It is known that the maximum classical mutual information that can be achieved between measurements on a pair of quantum systems can drastically underestimate the quantum mutual information between those systems. In this article, we quantify this distinction between classical and quantum information by demonstrating that after removing a logarithmic-sized quantum system from one half of a pair of perfectly correlated bitstrings, even the most sensitive pair of measurements might only yield outcomes essentially independent of each other. This effect is a form of information locking, but the definition we use is strictly stronger than those used previously. Moreover, we find that this property is generic, in the sense that it occurs when removing a random subsystem. As such, the effect might be relevant to statistical mechanics or black hole physics. Previous work on information locking had always assumed a uniform message. In this article, we assume only a min-entropy bound on the message and also explore the effect of entanglement. We find that classical information is strongly locked almost until it can be completely decoded. As a cryptographic application of these results, we exhibit a quantum key distribution protocol that is "secure" if the eavesdropper's information about the secret key is measured using the accessible information, but in which leakage of even a logarithmic number of key bits compromises the secrecy of all the others.
    Comment: 32 pages, 2 figures.
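
    For context, the two quantities being contrasted can be written out (standard definitions, assumed rather than quoted from the article): the accessible information is the largest classical mutual information obtainable from local measurements, while the quantum mutual information is the entropic quantity it can drastically underestimate.

        \[
        I_{\mathrm{acc}}(\rho_{AB}) \;=\; \max_{M_A,\,M_B} I(X;Y),
        \qquad
        I(A;B)_\rho \;=\; S(A)_\rho + S(B)_\rho - S(AB)_\rho,
        \]
        % where the maximization runs over POVMs M_A and M_B with outcomes X
        % and Y. "Locking" means that discarding a subsystem of only O(log n)
        % qubits can collapse I_acc from nearly maximal to nearly zero, even
        % though I(A;B) itself changes by at most a logarithmic amount.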

    Speed-up via Quantum Sampling

    The Markov Chain Monte Carlo method is at the heart of efficient approximation schemes for a wide range of problems in combinatorial enumeration and statistical physics. It is therefore very natural and important to determine whether quantum computers can speed up classical mixing processes based on Markov chains. To this end, we present a new quantum algorithm, making it possible to prepare a quantum sample, i.e., a coherent version of the stationary distribution of a reversible Markov chain. Our algorithm has a significantly better running time than that of a previous algorithm based on adiabatic state generation. We also show that our methods provide a speed-up over a recently proposed method for obtaining ground states of (classical) Hamiltonians.
    Comment: 8 pages, fixed some minor typos.
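
    For orientation, the "quantum sample" being prepared can be written explicitly (standard notation, assumed rather than quoted from the paper):

        \[
        |\pi\rangle \;=\; \sum_{x} \sqrt{\pi_x}\,|x\rangle,
        \]
        % where \pi is the stationary distribution of the reversible Markov
        % chain. Quantum-walk-based preparation of such states typically runs
        % in time scaling as 1/sqrt(delta), with delta the spectral gap of the
        % chain, versus the classical mixing time of order 1/delta: a
        % quadratic speed-up, up to logarithmic factors.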

    Development and initial validation of a sensory threshold examination protocol (STEP) for phenotyping canine pain syndromes

    Objective: To study the feasibility and test-retest repeatability of a sensory threshold examination protocol (STEP) and report quantitative sensory threshold distributions in healthy dogs.
    Study design: Prospective, observational, cohort study.
    Animals: Twenty-five healthy client-owned dogs.
    Methods: Tactile sensitivity (TST; von Frey filaments), mechanical thresholds (MT; 2, 4 and 8 mm probes), heat thresholds (HT) and responsiveness to a cold stimulus (CT; 0 °C) were quantitatively assessed for five body areas (BA: tibias, humeri, neck, thoracolumbar region and abdomen) in a randomized order on three different occasions. Linear Mixed Models and Generalised Linear Mixed Models were used to evaluate the effects of body weight category, age, sex, BA, occasion, feasibility score and investigator experience. Test-retest repeatability was evaluated with the intra-class correlation coefficient (ICC).
    Results: The STEP lasted 90 minutes without side effects. BA affected most tests (p = 0.001). Higher thresholds and longer cold latencies were scored in the neck (p = 0.024) compared with other BAs. Weight category affected all thresholds (p = 0.037): small dogs had lower MT (~1.4 N mean difference) and HT (1.1 °C mean difference) than other dogs (p = 0.029). Young dogs had higher HT than adults (2.2 °C mean difference; p = 0.035). Sex also affected TST, MT and HT (p < 0.05) (females versus males: TST OR = 0.5; MT = 1.3 N mean difference; HT = 2.2 °C mean difference). Repeatability was substantial to moderate for all tests, but poor for TST. There was no difference in thresholds between occasions, except for CT. Test-retest repeatability was slightly better with the 2 mm MT probe compared with other diameters and improved with operator experience.
    Conclusions and clinical relevance: The STEP was feasible, well tolerated and showed substantial test-retest repeatability in healthy dogs. Further validation is needed in dogs suffering from pain.
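
    To illustrate the style of analysis described above, here is a minimal Python sketch (not the authors' code; the data file and the column names "dog", "body_area", "occasion" and "mt" are hypothetical) of a random-intercept mixed model and a variance-components ICC:

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical long-format data: one row per dog x body area x occasion.
        df = pd.read_csv("step_thresholds.csv")

        # Random intercept per dog; fixed effects for body area and occasion,
        # mirroring the Linear Mixed Models described in the abstract.
        model = smf.mixedlm("mt ~ C(body_area) + C(occasion)", df,
                            groups=df["dog"])
        fit = model.fit()

        # One-way random-effects ICC: between-dog variance as a fraction of
        # total (between-dog + residual) variance.
        between_var = fit.cov_re.iloc[0, 0]
        within_var = fit.scale
        icc = between_var / (between_var + within_var)
        print(f"ICC (test-retest repeatability of MT): {icc:.2f}")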

    Prioritisation of companion dog welfare issues using expert consensus

    Resources for tackling animal welfare issues are often limited. Obtaining a consensus of expert opinion on the most pressing issues to address is a valuable approach to try to ensure that resources are wisely spent. In this study, seven independent experts in a range of disciplines (including veterinary medicine, animal behaviour and welfare science, and ethics) were consulted on the relative prioritisation of welfare issues impacting companion dogs in Great Britain. Experts first anonymously ranked the priority of 37 welfare issues, pre-defined from a literature review and an earlier published survey. In a subsequent two-day panel workshop, experts refined these issues into 25 composite groups and used specific criteria to agree their relative priorities as a Welfare Problem (WP; incorporating numbers of dogs affected, severity, duration and counter-balancing benefits) and a Strategic Priority (SP; a combination of WP and tractability). Other criteria (anthropogenicity, ethical significance and confidence in the issue-relevant evidence) were also discussed by the panel. Issues that scored highly for both WP and SP were: inappropriate husbandry, lack of owner knowledge, undesirable behaviours, inherited disease, inappropriate socialisation and habituation, and conformation-related disorders. Other welfare issues, such as obese and overweight dogs, were judged as being important for welfare (WP) but not strategic priorities (SP), due to the expert-perceived difficulties in their management and resolution. This information can inform decisions on where future resources can most cost-effectively be targeted, to bring about the greatest improvement in companion dog welfare in Great Britain.

    Pregabalin for the treatment of syringomyelia-associated neuropathic pain in dogs: A randomised, placebo-controlled, double-masked clinical trial

    Pregabalin is the first-line treatment for neuropathic pain (NeP) in humans. Dogs with Chiari-like malformation and syringomyelia (CM/SM) associated with NeP could benefit from pregabalin. The aim of this study was to evaluate the efficacy of pregabalin for NeP in dogs with CM/SM. Eight dogs with symptomatic CM/SM were included in a randomised, placebo-controlled, double-masked crossover clinical trial. All dogs received anti-inflammatory drugs as baseline treatment during the placebo and pregabalin phases of 14 ± 4 days each. Analgesic efficacy was assessed with a daily numerical rating scale (NRS) recorded by dog owners (0–10, 10 = worst pain) and with quantitative sensory testing at baseline and during the placebo and pregabalin phases. Blood samples were collected to measure pregabalin exposure and to assess renal function. Daily NRS scores recorded by dog owners in the pregabalin group were lower than in the placebo group (P = 0.006). Mechanical thresholds were higher with pregabalin compared with baseline or placebo (P = 0.037, P < 0.001). Cold latency at 15 °C was prolonged on the neck and humeri with pregabalin compared with baseline (P < 0.001 for both) or placebo (P = 0.02, P = 0.0001). Cold latency at 0 °C was longer on pregabalin compared with baseline and placebo (P = 0.001, P = 0.004). There was no pregabalin accumulation between the first and last doses. This study demonstrates the efficacy of pregabalin for the treatment of NeP due to CM/SM on daily pain scores recorded by dog owners. Pregabalin significantly reduced mechanical hyperalgesia, cold hyperalgesia (0 °C) and allodynia (15 °C) compared with placebo. Pregabalin was non-cumulative and well tolerated, with occasional mild sedation.
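
    As a purely illustrative sketch of how the paired phase comparison in a crossover trial can be tested (not the authors' analysis; the NRS values below are hypothetical placeholders for eight dogs), one standard non-parametric choice is the Wilcoxon signed-rank test:

        import numpy as np
        from scipy.stats import wilcoxon

        # Per-dog mean daily NRS over each phase (hypothetical values;
        # 0-10 scale, 10 = worst pain).
        nrs_placebo    = np.array([5.1, 4.8, 6.0, 3.9, 5.5, 4.2, 6.3, 5.0])
        nrs_pregabalin = np.array([3.2, 3.9, 4.1, 2.8, 3.6, 3.0, 4.4, 3.5])

        # Paired comparison of the two phases within each dog.
        stat, p_value = wilcoxon(nrs_placebo, nrs_pregabalin)
        print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p_value:.3f}")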

    Black holes as mirrors: quantum information in random subsystems

    We study information retrieval from evaporating black holes, assuming that the internal dynamics of a black hole is unitary and rapidly mixing, and assuming that the retriever has unlimited control over the emitted Hawking radiation. If the evaporation of the black hole has already proceeded past the "half-way" point, where half of the initial entropy has been radiated away, then additional quantum information deposited in the black hole is revealed in the Hawking radiation very rapidly. Information deposited prior to the half-way point remains concealed until the half-way point, and then emerges quickly. These conclusions hold because typical local quantum circuits are efficient encoders for quantum error-correcting codes that nearly achieve the capacity of the quantum erasure channel. Our estimate of a black hole's information retention time, based on speculative dynamical assumptions, is just barely compatible with the black hole complementarity hypothesis.
    Comment: 18 pages, 2 figures. (v2): discussion of decoding complexity clarified.
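
    The coding-theoretic fact behind the "half-way" point is the capacity of the quantum erasure channel (a standard result, quoted here for context rather than from the paper): for erasure probability p,

        \[
        Q(\mathcal{E}_p) \;=\; \max\{0,\;1 - 2p\},
        \]
        % so quantum information can be protected only while fewer than half
        % of the qubits are lost; once more than half of the black hole's
        % qubits have been radiated away, newly deposited information can
        % emerge in the radiation essentially as fast as the dynamics
        % scrambles it.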

    Generalized remote state preparation: Trading cbits, qubits and ebits in quantum communication

    We consider the problem of communicating quantum states by simultaneously making use of a noiseless classical channel, a noiseless quantum channel and shared entanglement. We specifically study the version of the problem in which the sender is given knowledge of the state to be communicated. In this setting, a trade-off arises between the three resources, some portions of which have been investigated previously in the contexts of the quantum-classical trade-off in data compression, remote state preparation and superdense coding of quantum states, each of which amounts to allowing just two out of these three resources. We present a formula for the triple resource trade-off that reduces its calculation to evaluating the data compression trade-off formula. In the process, we also construct protocols achieving all the optimal points. These turn out to be achievable by trade-off coding and suitable time-sharing between optimal protocols for cases involving two of the three resources mentioned above.
    Comment: 15 pages, 2 figures, 1 table.
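
    For context, the two-resource corner points of this trade-off are the familiar protocols, written in the standard resource notation where [c -> c] denotes a cbit, [q -> q] a qubit and [qq] an ebit (these inequalities are textbook facts, not results of this paper):

        \[
        2\,[c \to c] + 1\,[qq] \;\geq\; 1\,[q \to q] \quad\text{(teleportation)},
        \]
        \[
        1\,[q \to q] + 1\,[qq] \;\geq\; 2\,[c \to c] \quad\text{(superdense coding)}.
        \]
        % The paper's trade-off surface interpolates between such extreme
        % points via trade-off coding and time-sharing.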

    The Quantum Reverse Shannon Theorem based on One-Shot Information Theory

    The Quantum Reverse Shannon Theorem states that any quantum channel can be simulated by an unlimited amount of shared entanglement and an amount of classical communication equal to the channel's entanglement-assisted classical capacity. In this paper, we provide a new proof of this theorem, which was previously proved by Bennett, Devetak, Harrow, Shor, and Winter. Our proof has a clear structure, being based on two recent information-theoretic results: one-shot Quantum State Merging and the Post-Selection Technique for quantum channels.
    Comment: 30 pages, 4 figures, published version.
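
    The rate appearing in the theorem is the entanglement-assisted classical capacity, whose standard formula (due to Bennett, Shor, Smolin and Thapliyal) is

        \[
        C_E(\mathcal{N}) \;=\; \max_{\rho}\, I(A;B)_{(\mathrm{id}_A \otimes \mathcal{N})(\phi_\rho)},
        \]
        % where \phi_\rho is a purification of the input state \rho and
        % I(A;B) = S(A) + S(B) - S(AB). The theorem states that C_E classical
        % bits per channel use, plus unlimited shared entanglement, suffice
        % to simulate \mathcal{N}.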

    Exponential Decay of Correlations Implies Area Law

    We prove that a finite correlation length, i.e., exponential decay of correlations, implies an area law for the entanglement entropy of quantum states defined on a line. The entropy bound is exponential in the correlation length of the state, thus reproducing as a particular case Hastings' proof of an area law for ground states of 1D gapped Hamiltonians. As a consequence, we show that 1D quantum states with exponential decay of correlations have an efficient classical approximate description as a matrix product state of polynomial bond dimension, thus giving an equivalence between injective matrix product states and states with a finite correlation length. The result can be seen as a rigorous justification, in one dimension, of the intuition that states with exponential decay of correlations, usually associated with non-critical phases of matter, are simple to describe. It also has implications for quantum computing: it shows that unless a pure-state quantum computation involves states with long-range correlations, decaying at most algebraically with the distance, it can be efficiently simulated classically. The proof relies on several previous tools from quantum information theory, including entanglement distillation protocols achieving the hashing bound, properties of single-shot smooth entropies and the quantum substate theorem, as well as some newly developed ones. In particular, we derive a new bound on correlations established by local random measurements, and we give a generalization to the max-entropy of a result of Hastings concerning the saturation of mutual information in multiparticle systems. The proof can also be interpreted as providing a limitation on the phenomenon of data hiding in quantum states.
    Comment: 35 pages, 6 figures; v2 minor corrections; v3 published version.
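
    Schematically (constants suppressed; the precise statement is in the paper), the main bound says that for a state on a line with correlation length \xi and any contiguous region X,

        \[
        S(\rho_X) \;\leq\; \exp\!\big(O(\xi)\big),
        \]
        % independent of the size of X. A bound on S(\rho_X) that does not
        % grow with |X| is exactly what permits an efficient matrix product
        % state description with polynomial bond dimension.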