
    Quantum information can be negative

    Given an unknown quantum state distributed over two systems, we determine how much quantum communication is needed to transfer the full state to one system. This communication measures the "partial information" one system needs, conditioned on its prior information. It turns out to be given by an extremely simple formula, the conditional entropy. In the classical case, partial information must always be positive, but we find that in the quantum world this physical quantity can be negative. If the partial information is positive, its sender needs to communicate this number of quantum bits to the receiver; if it is negative, the sender and receiver instead gain the corresponding potential for future quantum communication. We introduce a primitive, "quantum state merging", which optimally transfers partial information. We show how it enables a systematic understanding of quantum network theory, and discuss several important applications including distributed compression, multiple access channels and multipartite assisted entanglement distillation (localizable entanglement). Negative channel capacities also receive a natural interpretation.
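
    As an illustration of the sign of this "partial information" (not part of the abstract), the conditional entropy $S(A|B) = S(AB) - S(B)$ can be evaluated for simple two-qubit states: it is $-1$ for a maximally entangled Bell pair and non-negative for unentangled states. A minimal NumPy sketch along these lines (function names are mine):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log2 rho], in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]                  # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

def conditional_entropy(rho_ab, dim_a, dim_b):
    """S(A|B) = S(AB) - S(B) for a state on A (x) B."""
    rho = rho_ab.reshape(dim_a, dim_b, dim_a, dim_b)
    rho_b = np.einsum('ijik->jk', rho)            # partial trace over A
    return von_neumann_entropy(rho_ab) - von_neumann_entropy(rho_b)

# Maximally entangled Bell pair |phi+> = (|00> + |11>)/sqrt(2): S(A|B) = -1
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(conditional_entropy(np.outer(phi, phi), 2, 2))   # -> -1.0

# Product state (maximally mixed A, pure B): S(A|B) = S(A) = +1
prod = np.kron(np.eye(2) / 2, np.diag([1.0, 0.0]))
print(conditional_entropy(prod, 2, 2))                 # -> 1.0
```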

    Quantum state merging and negative information

    We consider a quantum state shared between many distant locations, and define a quantum information processing primitive, state merging, that optimally merges the state into one location. As announced in [Horodecki, Oppenheim, Winter, Nature 436, 673 (2005)], the optimal entanglement cost of this task is the conditional entropy if classical communication is free. Since this quantity can be negative, and the state merging rate measures partial quantum information, we find that quantum information can be negative. The classical communication cost also has a minimum rate: a certain quantum mutual information. State merging enables us to solve a number of open problems: distributed quantum data compression, quantum coding with side information at the decoder and sender, multi-party entanglement of assistance, and the capacity of the quantum multiple access channel. It also provides an operational proof of strong subadditivity. Here, we give precise definitions and prove these results rigorously. Comment: 23 pages, 3 figures.
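
    For reference, the two rates described above can be written out for a state $\rho_{AB}$ with purifying reference system $R$; identifying the mutual information as the one with the reference is my reading of the result, not a quote from the abstract:

    $$ q = S(A|B) = S(AB) - S(B) \ \text{ebits}, \qquad c \ge I(A:R) = S(A) + S(R) - S(AR) \ \text{classical bits}. $$

    The operational proof of strong subadditivity mentioned above then follows from the fact that extra side information cannot increase the merging cost, $S(A|BC) \le S(A|B)$, which rearranges to $S(B) + S(ABC) \le S(AB) + S(BC)$.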

    A new global GPS dataset for testing and improving modelled GIA uplift rates

    We have produced a global dataset of ~4000 GPS vertical velocities that can be used as observational estimates of glacial isostatic adjustment (GIA) uplift rates. GIA is the response of the solid Earth to past ice loading, primarily since the Last Glacial Maximum, about 20 kyr BP. Modelling GIA is challenging because of large uncertainties in ice loading history and also the viscosity of the upper and lower mantle. GPS data contain the signature of GIA in their uplift rates, but these also contain other sources of vertical land motion (VLM), such as tectonics and human and natural influences on water storage, that can mask the underlying GIA signal. A novel fully-automatic strategy was developed to post-process the GPS time series and to correct for non-GIA artefacts. Before estimating vertical velocities and uncertainties, we detected outliers and jumps and corrected for atmospheric mass loading displacements. We corrected the resulting velocities for the elastic response of the solid Earth to global changes in ice sheets, glaciers, and ocean loading, as well as for changes in the Earth's rotational pole relative to the 20th century average. We then applied a spatial median filter to remove sites where local effects were dominant, leaving approximately 4000 GPS sites. The resulting novel global GPS dataset shows a clean GIA signal at all post-processed stations and is suitable for investigating the behaviour of global GIA forward models. The results are transformed from a frame with its origin in the centre of mass of the total Earth system (CM) into a frame with its origin in the centre of mass of the solid Earth (CE) before comparison with 13 global GIA forward model solutions, with the best fits obtained for the Pur-6-VM5 and ICE-6G predictions. The largest discrepancies for all models were identified for Antarctica and Greenland, which may be due to uncertain mantle rheology, ice loading history/magnitude, and/or GPS errors.
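
    As a purely illustrative sketch of the kind of spatial median screening described above (the actual radius, threshold, and implementation used by the authors are not given in the abstract, so the function below and its parameter values are placeholders):

```python
import numpy as np

def spatial_median_filter(lat, lon, vel, radius_km=150.0, max_dev_mm_yr=1.0):
    """Flag GPS sites whose vertical velocity (mm/yr) deviates strongly from the
    median of nearby sites, a rough proxy for dominant local (non-GIA) effects.
    radius_km and max_dev_mm_yr are illustrative placeholders, not the paper's values."""
    lat, lon, vel = map(np.asarray, (lat, lon, vel))
    keep = np.ones(vel.size, dtype=bool)
    earth_radius_km = 6371.0
    for i in range(vel.size):
        # Haversine great-circle distance from site i to all other sites
        dlat = np.radians(lat - lat[i])
        dlon = np.radians(lon - lon[i])
        a = (np.sin(dlat / 2) ** 2
             + np.cos(np.radians(lat[i])) * np.cos(np.radians(lat)) * np.sin(dlon / 2) ** 2)
        dist_km = 2 * earth_radius_km * np.arcsin(np.sqrt(a))
        nbr = (dist_km < radius_km) & (dist_km > 0)
        if nbr.sum() < 3:                      # too few neighbours to judge
            continue
        if abs(vel[i] - np.median(vel[nbr])) > max_dev_mm_yr:
            keep[i] = False                    # local signal dominates; drop the site
    return keep
```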

    Characterizing mixing and measurement in quantum mechanics

    What fundamental constraints characterize the relationship between a mixture $\rho = \sum_i p_i \rho_i$ of quantum states, the states $\rho_i$ being mixed, and the probabilities $p_i$? What fundamental constraints characterize the relationship between prior and posterior states in a quantum measurement? In this paper we show that there are many surprisingly strong constraints on these mixing and measurement processes that can be expressed simply in terms of the eigenvalues of the quantum states involved. These constraints capture in a succinct fashion what it means to say that a quantum measurement acquires information about the system being measured, and they considerably simplify the proofs of many results about entanglement transformation. Comment: 12 pages.
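
    One constraint of this eigenvalue type, as I understand the result (so treat the exact statement as an assumption here rather than a quote), is a majorization relation between the spectrum of the mixture and the probability-weighted spectra of its components; it is easy to probe numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(d):
    """Random d x d density matrix, for illustration only."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def majorizes(x, y, tol=1e-9):
    """True if vector x majorizes y (equal totals assumed)."""
    xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
    return bool(np.all(np.cumsum(xs) + tol >= np.cumsum(ys)))

# Mixture rho = sum_i p_i rho_i of two random qutrit states
p = np.array([0.3, 0.7])
rhos = [random_density_matrix(3) for _ in p]
rho = sum(pi * ri for pi, ri in zip(p, rhos))

lam_mix = np.linalg.eigvalsh(rho)
lam_avg = sum(pi * np.sort(np.linalg.eigvalsh(ri))[::-1] for pi, ri in zip(p, rhos))

# Check: the weighted, sorted component spectra majorize the mixture's spectrum.
print(majorizes(lam_avg, lam_mix))   # expected: True
```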

    Entanglement Creation Using Quantum Interrogation

    We present some applications of high-efficiency quantum interrogation ("interaction-free measurement") for the creation of entangled states of separate atoms and of separate photons. The quantum interrogation of a quantum object in a superposition of object-in and object-out leaves the object and probe in an entangled state. The probe can then be further entangled with other objects in subsequent quantum interrogations. By then projecting out those cases where the probe is left in a particular final state, the quantum objects can themselves be left in various entangled states. In this way we show how to generate two-, three-, and higher-qubit entanglement between atoms and between photons. The effect of finite efficiency for the quantum interrogation is delineated for the various schemes. Comment: 7 pages, 13 figures, submitted to PR.
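
    Schematically, in the idealized unit-efficiency case (the probe labels $|p_0\rangle$, $|p_{\text{out}}\rangle$, $|p_{\text{in}}\rangle$ and amplitudes $\alpha$, $\beta$ are mine, and losses are ignored), interrogating an object prepared in a superposition of object-in and object-out correlates probe and object:

    $$ |p_0\rangle \otimes \big(\alpha|\text{out}\rangle + \beta|\text{in}\rangle\big) \;\longmapsto\; \alpha\,|p_{\text{out}}\rangle|\text{out}\rangle + \beta\,|p_{\text{in}}\rangle|\text{in}\rangle , $$

    so that a subsequent projection of the probe onto a chosen final state leaves the interrogated objects in an entangled state, as described above.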

    Mitochondrial genomics reveals the evolutionary history of the porpoises (Phocoenidae) across the speciation continuum

    Historical variation in food resources is expected to be a major driver of cetacean evolution, especially for the smallest species like porpoises. Despite major conservation issues among porpoise species (e.g., the vaquita and finless porpoises), their evolutionary history remains understudied. Here, we reconstructed their evolutionary history across the speciation continuum. Phylogenetic analyses of 63 mitochondrial genomes suggest that porpoises radiated during the deep environmental changes of the Pliocene. However, all intra-specific subdivisions were shaped during the Quaternary glaciations. We observed analogous evolutionary patterns in both hemispheres associated with convergent evolution to coastal versus oceanic environments. This suggests that similar mechanisms drive species diversification in northern (harbor and Dall's) and southern (spectacled and Burmeister's) species. In contrast to previous studies, spectacled and Burmeister's porpoises share a more recent common ancestor with each other than with the vaquita, which diverged from the southern species during the Pliocene. The low genetic diversity observed in the vaquita carries signatures of a very low population size over the last 5,000 years. Cryptic lineages within Dall's, spectacled and Pacific harbor porpoises suggest a richer evolutionary history than previously suspected. These results provide a new perspective on the mechanisms driving diversification in porpoises and an evolutionary framework for their conservation.