
    An equality between entanglement and uncertainty

    Heisenberg's uncertainty principle implies that if one party (Alice) prepares a system and randomly measures one of two incompatible observables, then another party (Bob) cannot perfectly predict the measurement outcomes. This implication assumes that Bob does not possess an additional system that is entangled with the measured one; indeed, the seminal paper of Einstein, Podolsky and Rosen (EPR) showed that maximal entanglement allows Bob to win this guessing game perfectly. Although not in contradiction, the observations made by EPR and Heisenberg illustrate two extreme cases of the interplay between entanglement and uncertainty: no entanglement means that Bob's predictions must display some uncertainty, while maximal entanglement means that there is no uncertainty at all. Here we follow an operational approach and give an exact relation - an equality - between the amount of uncertainty as measured by the guessing probability and the amount of entanglement as measured by the recoverable entanglement fidelity. From this equality we deduce a simple criterion for witnessing bipartite entanglement and a novel entanglement monogamy equality. Comment: v2: published as "Entanglement-assisted guessing of complementary measurement outcomes", 11 pages, 1 figure
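    For reference, the two quantities named in this abstract are usually formalized along the following lines (a sketch of standard definitions; the paper's exact normalizations and the form of the equality may differ):

    ```latex
    % Guessing probability: Bob, holding system B of the classical-quantum state
    % \rho_{XB} = \sum_x p_x |x\rangle\langle x| \otimes \rho_B^x, measures with the best POVM \{M_x\}.
    p_{\mathrm{guess}}(X|B) \;=\; \max_{\{M_x\}} \sum_x p_x \,\mathrm{Tr}\!\left[M_x\,\rho_B^x\right]

    % Recoverable entanglement fidelity (assumed form): the best overlap with a maximally
    % entangled state \Phi_{AB} achievable by a local recovery map \Lambda_B on Bob's side.
    F_{\mathrm{rec}}(A|B) \;=\; \max_{\Lambda_B} \; F\!\bigl((\mathcal{I}_A \otimes \Lambda_B)(\rho_{AB}),\, \Phi_{AB}\bigr)
    ```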

    Quantile-based bias correction and uncertainty quantification of extreme event attribution statements

    Extreme event attribution characterizes how anthropogenic climate change may have influenced the probability and magnitude of selected individual extreme weather and climate events. Attribution statements often involve quantification of the fraction of attributable risk (FAR) or the risk ratio (RR) and associated confidence intervals. Many such analyses use climate model output to characterize extreme event behavior with and without anthropogenic influence. However, such climate models may have biases in their representation of extreme events. To account for discrepancies in the probabilities of extreme events between observational and model datasets, we demonstrate an appropriate rescaling of the model output based on the quantiles of the datasets to estimate an adjusted risk ratio. Our methodology accounts for various components of uncertainty in estimation of the risk ratio. In particular, we present an approach to constructing a one-sided confidence interval on the lower bound of the risk ratio when the estimated risk ratio is infinite. We demonstrate the methodology using the summer 2011 central US heatwave and output from the Community Earth System Model. In this example, we find that the lower bound of the risk ratio is relatively insensitive to the magnitude and probability of the actual event. Comment: 28 pages, 4 figures, 3 tables
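    As a rough illustration of the quantile-based rescaling idea, the sketch below maps the observed exceedance probability of the event onto the factual model distribution to obtain a bias-adjusted threshold, and then estimates the risk ratio from the two model ensembles (variable and function names are hypothetical; this is a simplified sketch, not the authors' estimator):

    ```python
    import numpy as np

    def adjusted_risk_ratio(obs, model_factual, model_counterfactual, event_value):
        """Quantile-based bias correction of a risk ratio (illustrative sketch)."""
        # Exceedance probability of the observed event in the observational record.
        p_factual = np.mean(obs > event_value)
        # Bias correction: pick the model-world threshold whose factual-scenario
        # exceedance probability matches the observed one.
        threshold = np.quantile(model_factual, 1.0 - p_factual)
        # Probability of exceeding that threshold without anthropogenic influence.
        p_counterfactual = np.mean(model_counterfactual > threshold)
        return np.inf if p_counterfactual == 0 else p_factual / p_counterfactual

    # Synthetic example with made-up numbers.
    rng = np.random.default_rng(0)
    rr = adjusted_risk_ratio(obs=rng.normal(25, 2, 200),
                             model_factual=rng.normal(27, 2, 400),
                             model_counterfactual=rng.normal(26, 2, 400),
                             event_value=28.0)
    print(rr)
    ```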

    Investigation of exciton properties in organic materials via many-body perturbation theory


    Quantum preparation uncertainty and lack of information

    The quantum uncertainty principle famously predicts that there exist measurements that are inherently incompatible, in the sense that their outcomes cannot be predicted simultaneously. In contrast, no such uncertainty exists in the classical domain, where all uncertainty results from ignorance about the exact state of the physical system. Here, we critically examine the concept of preparation uncertainty and ask whether, similarly, some of the uncertainty we observe in the quantum regime can also be understood as a lack of information (LOI), albeit a lack of quantum information. We answer this question affirmatively by showing that for the well-known measurements employed in BB84 quantum key distribution, the amount of uncertainty can indeed be related to the amount of available information about additional registers determining the choice of the measurement. We proceed to show that for other measurements, too, the amount of uncertainty is in part connected to a LOI. Finally, we discuss the conceptual implications of our observation for the security of cryptographic protocols that make use of BB84 states. Comment: 7+15 pages, 4 figures. v2: expanded "Discussion" section, "Methods" section moved before "Results" section, published version
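    For concreteness, the BB84 measurements are the qubit Z and X bases, for which the standard Maassen-Uffink preparation uncertainty relation guarantees H(X) + H(Z) >= 1 bit for every input state. The minimal numpy check below illustrates that background fact only (it is not the paper's information-theoretic decomposition; names are hypothetical):

    ```python
    import numpy as np

    def shannon(p):
        p = p[p > 1e-12]
        return -np.sum(p * np.log2(p))

    def bb84_uncertainty(psi):
        """H(Z) + H(X) for measuring the qubit state psi in the Z and X bases."""
        psi = psi / np.linalg.norm(psi)
        z_basis = np.eye(2, dtype=complex)                                  # |0>, |1>
        x_basis = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # |+>, |->
        pz = np.abs(z_basis.conj().T @ psi) ** 2
        px = np.abs(x_basis.conj().T @ psi) ** 2
        return shannon(pz) + shannon(px)

    # The sum is at least 1 bit for every state; basis eigenstates such as
    # |0> (theta = 0) and |+> (theta = pi/4) saturate the bound.
    for theta in np.linspace(0, np.pi / 2, 5):
        psi = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
        print(round(theta, 3), round(bb84_uncertainty(psi), 4))
    ```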

    Quantifying statistical uncertainty in the attribution of human influence on severe weather

    Event attribution in the context of climate change seeks to understand the role of anthropogenic greenhouse gas emissions in extreme weather events, either specific events or classes of events. A common approach to event attribution uses climate model output under factual (real-world) and counterfactual (a world that might have been without anthropogenic greenhouse gas emissions) scenarios to estimate the probabilities of the event of interest under the two scenarios. Event attribution is then quantified by the ratio of the two probabilities. While this approach has been applied many times in the last 15 years, the statistical techniques used to estimate the risk ratio based on climate model ensembles have not drawn on the full set of methods available in the statistical literature and have in some cases used and interpreted the bootstrap method in non-standard ways. We present a precise frequentist statistical framework for quantifying the effect of sampling uncertainty on estimation of the risk ratio, propose the use of statistical methods that are new to event attribution, and evaluate a variety of methods using statistical simulations. We conclude that existing statistical methods not yet in use for event attribution have several advantages over the widely used bootstrap, including better statistical performance in repeated samples and robustness to small estimated probabilities. Software implementing the methods is available through the climextRemes package for R and Python. While we focus on frequentist statistical methods, Bayesian methods are likely to be particularly useful when considering sources of uncertainty beyond sampling uncertainty. Comment: 41 pages, 11 figures, 1 table
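    For orientation, one textbook frequentist baseline for this problem is a Wald-type interval on the log risk ratio computed from binomial event counts in the two ensembles. The sketch below is illustrative only; it breaks down when the counterfactual count is zero (estimated risk ratio infinite), which is exactly the situation the methods evaluated in the paper are designed to handle, and the function name is hypothetical:

    ```python
    import numpy as np
    from scipy import stats

    def risk_ratio_wald_ci(x1, n1, x0, n0, conf=0.95):
        """Wald-type confidence interval for the risk ratio p1/p0 on the log scale.

        x1, n1 -- event count and ensemble size under the factual scenario
        x0, n0 -- event count and ensemble size under the counterfactual scenario
        """
        p1, p0 = x1 / n1, x0 / n0
        rr = p1 / p0
        # Delta-method standard error of log(RR) for independent binomial counts.
        se = np.sqrt(1 / x1 - 1 / n1 + 1 / x0 - 1 / n0)
        z = stats.norm.ppf(0.5 + conf / 2)
        return rr, rr * np.exp(-z * se), rr * np.exp(z * se)

    # Example: 40 of 400 factual ensemble members exceed the event threshold,
    # versus 10 of 400 counterfactual members.
    print(risk_ratio_wald_ci(40, 400, 10, 400))
    ```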

    A time-dependent Tsirelson's bound from limits on the rate of information gain in quantum systems

    We consider the problem of distinguishing between a set of arbitrary quantum states in a setting in which the time available to perform the measurement is limited. We provide simple upper bounds on how well state discrimination can be performed in a given time, as a function of either the average energy or the range of energies available during the measurement. We exhibit a specific strategy that nearly attains this bound. Finally, we consider several applications of our result. First, we obtain a time-dependent Tsirelson's bound that limits the extent of the Bell inequality violation that can in principle be demonstrated in a given time t. Second, we obtain a Margolus-Levitin type bound for the special case of distinguishing orthogonal pure states. Comment: 15 pages, revtex, 1 figure
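    For context, the Margolus-Levitin bound mentioned here is usually stated as a limit on how quickly a state can evolve to an orthogonal one under a Hamiltonian whose mean energy above the ground state is E (a standard textbook form, quoted for background rather than taken from the paper):

    ```latex
    % Margolus-Levitin bound: minimum time to reach an orthogonal state,
    % with E the mean energy measured relative to the ground-state energy.
    t_{\perp} \;\ge\; \frac{\pi \hbar}{2E}
    ```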

    UCD Candidates in the Hydra Cluster

    NGC 3311, the giant cD galaxy in the Hydra cluster (A1060), has one of the largest globular cluster systems known. We describe new Gemini GMOS (g',i') photometry of the NGC 3311 field, which reveals that the red, metal-rich side of its globular cluster population extends smoothly upward into the mass range associated with the new class of Ultra-Compact Dwarfs (UCDs). We identify 29 UCD candidates with estimated masses > 6x10^6 solar masses and discuss their characteristics. This UCD-like sequence is the best defined one yet seen, and it reinforces current ideas that the high-mass end of the globular cluster sequence merges continuously into the UCD sequence, which connects in turn to the elliptical (E) galaxy structural sequence. Comment: 5 pages, 3 figures. Accepted for publication in ApJ Letters

    Validation, Optimization, and Image Processing of Spiral Cine DENSE Magnetic Resonance Imaging for the Quantification of Left and Right Ventricular Mechanics

    Recent evidence suggests that cardiac mechanics (e.g., cardiac strains) are better measures of heart function than common clinical metrics such as ejection fraction. However, commonly used parameters of cardiac mechanics remain limited to just a few measurements averaged over the whole left ventricle. We hypothesized that recent advances in cardiac magnetic resonance imaging (MRI) could be extended to provide measures of cardiac mechanics throughout the left and right ventricles (LV and RV, respectively). Displacement Encoding with Stimulated Echoes (DENSE) is a cardiac MRI technique that has been validated for measuring LV mechanics at a magnetic field strength of 1.5 T but not at higher field strengths such as 3.0 T. However, it is desirable to perform DENSE at 3.0 T, which would yield a better signal-to-noise ratio for imaging the thin RV wall. Results in Chapter 2 support the hypothesis that DENSE has similar accuracy at 1.5 and 3.0 T. Compared to standard clinical cardiac MRI, DENSE requires more expertise to perform and is not as widely used. If accurate mechanics could be measured from standard MRI, the need for DENSE would be reduced. However, results from Chapter 3 support the hypothesis that cardiac mechanics measured from standard MRI do not agree with, and thus cannot be used in place of, measurements from DENSE. Imaging the thin RV wall with its complex contraction pattern requires both three-dimensional (3D) measures of myocardial motion and higher-resolution imaging. Results from Chapter 4 support the hypothesis that a lower displacement-encoding frequency can be used to allow for easier processing of 3D DENSE images. Results from Chapter 5 support the hypothesis that images with higher resolution (decreased blurring) can be achieved by using more spiral interleaves during the DENSE image acquisition. Finally, processing DENSE images to yield measures of cardiac mechanics in the LV is relatively simple due to the LV's mostly cylindrical geometry. Results from Chapter 6 support the hypothesis that a local coordinate system can be adapted to the geometry of the RV to quantify mechanics in a manner equivalent to the LV. In summary, cardiac mechanics can now be quantified throughout the left and right ventricles using DENSE cardiac MRI.
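    As background on what quantifying cardiac mechanics from DENSE typically involves: DENSE encodes tissue displacement, and strain is then derived from the spatial gradient of the displacement field. The sketch below shows a generic 2D Green-Lagrange strain computation from a displacement field (standard post-processing illustrated under assumed names, not the dissertation's specific pipeline):

    ```python
    import numpy as np

    def green_lagrange_strain(ux, uy, dx=1.0, dy=1.0):
        """2D Green-Lagrange strain tensor field from displacement components.

        ux, uy -- displacement fields (same 2D shape), e.g. decoded from DENSE phase data
        dx, dy -- pixel spacing in the reference configuration
        """
        # Displacement gradients (axis 0 is y, axis 1 is x).
        dux_dy, dux_dx = np.gradient(ux, dy, dx)
        duy_dy, duy_dx = np.gradient(uy, dy, dx)
        # Deformation gradient F = I + du/dX at every pixel.
        F = np.empty(ux.shape + (2, 2))
        F[..., 0, 0] = 1 + dux_dx
        F[..., 0, 1] = dux_dy
        F[..., 1, 0] = duy_dx
        F[..., 1, 1] = 1 + duy_dy
        # Green-Lagrange strain E = (F^T F - I) / 2.
        C = np.einsum('...ki,...kj->...ij', F, F)
        return (C - np.eye(2)) / 2

    # Example: a uniform 10% stretch in x gives Exx = 0.105 and Eyy = 0.
    y, x = np.mgrid[0:32, 0:32].astype(float)
    E = green_lagrange_strain(0.1 * x, np.zeros_like(x))
    print(E[16, 16])
    ```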

    Multi-qubit Randomized Benchmarking Using Few Samples

    Randomized benchmarking (RB) is an efficient and robust method to characterize gate errors in quantum circuits. Averaging over random sequences of gates leads to estimates of gate errors in terms of the average fidelity. These estimates are isolated from the state preparation and measurement errors that plague other methods such as channel tomography and direct fidelity estimation. A decisive factor in the feasibility of randomized benchmarking is the number of sampled sequences required to obtain rigorous confidence intervals. Previous bounds were either prohibitively loose or required the number of sampled sequences to scale exponentially with the number of qubits in order to obtain a fixed confidence interval at a fixed error rate. Here we show that, with a small adaptation to the randomized benchmarking procedure, the number of sampled sequences required for a fixed confidence interval is dramatically smaller than could previously be justified. In particular, we show that the number of sampled sequences required is essentially independent of the number of qubits and scales favorably with the average error rate of the system under investigation. We also show that the number of samples required for long sequence lengths can be made substantially smaller than in previous rigorous results (even for single qubits), as long as the noise process under investigation is not unitary. Our results bring rigorous randomized benchmarking on systems with many qubits into the realm of experimental feasibility. Comment: v3: Added discussion of the impact of variance heteroskedasticity on the RB fitting procedure. Close to published version
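    For orientation, the core of any RB analysis is fitting the exponential decay of the average sequence survival probability against sequence length. The minimal single-qubit sketch below fits synthetic data to the standard decay model (it does not implement the paper's multi-qubit sampling-complexity bounds; names and numbers are made up):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def rb_decay(m, a, p, b):
        """Standard RB model: average survival probability after m random Cliffords."""
        return a * p**m + b

    # Synthetic survival probabilities for a decay parameter p = 0.99.
    rng = np.random.default_rng(1)
    lengths = np.array([2, 4, 8, 16, 32, 64, 128, 256])
    data = rb_decay(lengths, 0.5, 0.99, 0.5) + rng.normal(0, 0.01, size=lengths.shape)

    (a, p, b), _ = curve_fit(rb_decay, lengths, data, p0=[0.5, 0.95, 0.5])
    r = (1 - p) / 2   # average gate error for a single qubit: r = (1 - p)(d - 1)/d with d = 2
    print(f"fitted p = {p:.4f}, average gate error r = {r:.2e}")
    ```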

    A strong converse for classical channel coding using entangled inputs

    A fully general strong converse for channel coding states that when the rate of sending classical information exceeds the capacity of a quantum channel, the probability of correctly decoding goes to zero exponentially in the number of channel uses, even when we allow code states that are entangled across several uses of the channel. Such a statement was previously known only for classical channels and the quantum identity channel. By relating the problem to the additivity of minimum output entropies, we show that a strong converse holds for a large class of channels, including all unital qubit channels, the d-dimensional depolarizing channel and the Werner-Holevo channel. This further justifies the interpretation of the classical capacity as a sharp threshold for information transmission. Comment: 9 pages, revtex
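    Schematically, the strong converse property established here can be stated as follows (notation assumed for illustration; the exponent is not made explicit in the abstract):

    ```latex
    % Strong converse (schematic): for any code of rate R > C(\mathcal{N}), the optimal
    % probability of correct decoding over n channel uses decays exponentially,
    p_{\mathrm{succ}}(n, 2^{nR}) \;\le\; 2^{-n\,\gamma(R)}, \qquad \gamma(R) > 0,
    % even when the code states are entangled across the n uses of the channel.
    ```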