
    The conditions for quantum violation of macroscopic realism

    Why do we not experience a violation of macroscopic realism in everyday life? Normally, no violation can be seen either because of decoherence or because of the restriction to coarse-grained measurements, both of which transform the time evolution of any quantum state into a classical time evolution of a statistical mixture. We find a sufficient condition for such classical evolutions in spin systems under coarse-grained measurements. We then demonstrate that there exist "non-classical" Hamiltonians whose time evolution cannot be understood classically, even though at every instant of time the quantum spin state appears as a classical mixture. We suggest that such Hamiltonians are unlikely to be realized in nature because of their high computational complexity. Comment: 4 pages, 2 figures, revised version, journal reference added
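    As an illustration of the coarse-graining referred to above (a minimal sketch in standard notation, not a formula taken from the paper), consider a spin-j system whose magnetic quantum number m is read out only in slots much wider than the intrinsic quantum uncertainty:

        P(\bar{m}) = \sum_{m \in \mathrm{slot}(\bar{m})} |\langle j, m | \psi \rangle|^{2}, \qquad \Delta m_{\mathrm{slot}} \gg \sqrt{j}.

    Under such fuzzy measurements the statistics of many quantum states become indistinguishable from those of a classical mixture, which is the regime the abstract refers to.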

    Addressing the clumsiness loophole in a Leggett-Garg test of macrorealism

    The rise of quantum information theory has lent new relevance to experimental tests for non-classicality, particularly in controversial cases such as adiabatic quantum computing with superconducting circuits. The Leggett-Garg inequality is a "Bell inequality in time" designed to indicate whether a single quantum system behaves in a macrorealistic fashion. Unfortunately, a violation of the inequality can only show that the system is either (i) non-macrorealistic or (ii) macrorealistic but subjected to a measurement technique that happens to disturb the system. The "clumsiness" loophole (ii) provides reliable refuge for the stubborn macrorealist, who can invoke it to brand recent experimental and theoretical work on the Leggett-Garg test inconclusive. Here, we present a revised Leggett-Garg protocol that permits one to conclude that a system is either (i) non-macrorealistic or (ii) macrorealistic but with the property that two seemingly non-invasive measurements can somehow collude and strongly disturb the system. By providing an explicit check of the invasiveness of the measurements, the protocol replaces the clumsiness loophole with a significantly smaller "collusion" loophole. Comment: 7 pages, 3 figures
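    For context, the simplest form of the Leggett-Garg inequality (the textbook three-time version, not necessarily the exact inequality of the revised protocol above) constrains the two-time correlators C_{ij} = <Q(t_i) Q(t_j)> of a dichotomic observable Q = +/-1 measured at times t_1 < t_2 < t_3:

        K = C_{12} + C_{23} - C_{13} \le 1,

    which any macrorealistic, non-invasively measurable system must satisfy, while quantum mechanics allows K to reach 3/2 for a two-level system.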

    Entanglement between smeared field operators in the Klein-Gordon vacuum

    Quantum field theory is the application of quantum physics to fields. It provides a theoretical framework widely used in particle physics and condensed matter physics. One of the most distinctive features of quantum physics with respect to classical physics is entanglement, i.e. the existence of strong correlations between subsystems that can even be spacelike separated. In quantum fields, the observables restricted to a region of space define a subsystem. While there are proofs of the existence of local observables that would allow a violation of Bell's inequalities in the vacuum states of quantum fields, as well as some explicit but technically demanding schemes requiring an extreme fine-tuning of the interaction between the fields and detectors, an experimentally accessible entanglement witness for quantum fields is still missing. Here we introduce smeared field operators that allow the vacuum to be reduced to a system of two effective bosonic modes. The introduction of such collective observables is motivated by the fact that no physical probe has access to fields at single (mathematical) points in space, but only to fields smeared over finite volumes. We first give explicit collective observables whose correlations reveal vacuum entanglement in the Klein-Gordon field. We then show that the critical distance between the two regions of space above which the two effective bosonic modes become separable is of the order of the Compton wavelength of the particle corresponding to the massive Klein-Gordon field. Comment: 21 pages, 11 figures
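    As an illustrative definition (standard notation; the specific smearing functions used in the paper may differ), a smeared field operator pairs the field and its conjugate momentum with test functions supported on a finite region R:

        \hat{X}_f = \int_R d^3x \, f(\mathbf{x}) \, \hat{\phi}(\mathbf{x}), \qquad \hat{P}_g = \int_R d^3x \, g(\mathbf{x}) \, \hat{\pi}(\mathbf{x}).

    A suitable pair (f, g) localized in each of the two regions then defines one effective bosonic mode per region, and it is the vacuum correlations between these two modes that are tested for entanglement.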

    Logical independence and quantum randomness

    We propose a link between logical independence and quantum physics. We demonstrate that quantum systems in the eigenstates of Pauli group operators are capable of encoding mathematical axioms, and we show that Pauli group quantum measurements are capable of revealing whether or not a given proposition is logically dependent on the axiomatic system. Whenever a mathematical proposition is logically independent of the axioms encoded in the measured state, the measurement associated with the proposition gives random outcomes. This allows for an experimental test of logical independence. Conversely, it also allows the probabilities of the random outcomes observed in Pauli group measurements to be explained by logical independence, without invoking quantum theory. The axiomatic systems we study can be completed and are therefore not subject to Gödel's incompleteness theorem. Comment: 9 pages, 4 figures, published version plus additional experimental appendix
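    A minimal numerical sketch of the idea (purely illustrative; the encoding used in the paper is more general): a qubit prepared in the +1 eigenstate of Pauli Z "encodes the axiom" Z = +1. Measuring Z, whose value follows from that axiom, gives a deterministic outcome, whereas measuring X, whose value is logically independent of it, gives random outcomes.

        import numpy as np

        rng = np.random.default_rng(0)

        # Qubit state encoding the "axiom" Z = +1 (the eigenstate |0>).
        state = np.array([1.0, 0.0], dtype=complex)

        Z = np.array([[1, 0], [0, -1]], dtype=complex)
        X = np.array([[0, 1], [1, 0]], dtype=complex)

        def measure(op, psi, shots=1000):
            """Simulate projective measurements of a Pauli operator on |psi>."""
            eigvals, eigvecs = np.linalg.eigh(op)
            probs = np.abs(eigvecs.conj().T @ psi) ** 2
            return rng.choice(eigvals.real, size=shots, p=probs / probs.sum())

        print(measure(Z, state).mean())  # ~ +1: outcome fixed by the encoded axiom
        print(measure(X, state).mean())  # ~  0: logically independent, hence random +/-1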

    Quantum Optical Experiments Modeled by Long Short-Term Memory

    We demonstrate how machine learning is able to model experiments in quantum physics. Quantum entanglement is a cornerstone for upcoming quantum technologies such as quantum computation and quantum cryptography. Of particular interest are complex quantum states with more than two particles and a large number of entangled quantum levels. Given such a multiparticle high-dimensional quantum state, it is usually impossible to reconstruct an experimental setup that produces it. To search for interesting experiments, one thus has to randomly create millions of setups on a computer and calculate the respective output states. In this work, we show that machine learning models can provide a significant improvement over random search. We demonstrate that a long short-term memory (LSTM) neural network can successfully learn to model quantum experiments by correctly predicting output state characteristics for given setups, without the need to compute the states themselves. This approach not only allows for a faster search but is also an essential step towards the automated design of multiparticle high-dimensional quantum experiments using generative machine learning models.
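    A minimal PyTorch sketch of this kind of model (the vocabulary of optical elements, the predicted property, and all dimensions are assumptions for illustration, not the authors' implementation): an LSTM reads a setup as a sequence of element tokens and predicts a binary property of the output state, so candidate setups can be screened without simulating them.

        import torch
        import torch.nn as nn

        class SetupLSTM(nn.Module):
            """Predict an output-state property from a tokenized optical setup."""
            def __init__(self, n_elements=32, embed_dim=16, hidden_dim=64):
                super().__init__()
                self.embed = nn.Embedding(n_elements, embed_dim)  # one token per optical element
                self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
                self.head = nn.Linear(hidden_dim, 1)              # e.g. "does the setup yield an interesting state?"

            def forward(self, tokens):                            # tokens: (batch, seq_len) int64
                h, _ = self.lstm(self.embed(tokens))
                return torch.sigmoid(self.head(h[:, -1]))        # classify from the last hidden state

        # Toy usage with random data standing in for real setups and labels.
        model = SetupLSTM()
        setups = torch.randint(0, 32, (8, 10))                    # 8 setups, 10 elements each
        labels = torch.randint(0, 2, (8, 1)).float()
        loss = nn.functional.binary_cross_entropy(model(setups), labels)
        loss.backward()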

    Neural networks-based regularization for large-scale medical image reconstruction

    In this paper we present a generalized Deep Learning-based approach for solving ill-posed large-scale inverse problems occurring in medical image reconstruction. Recently, Deep Learning methods using iterative neural networks (NNs) and cascaded NNs have been reported to achieve state-of-the-art results with respect to various quantitative quality measures such as PSNR, NRMSE and SSIM across different imaging modalities. However, because these approaches apply the forward and adjoint operators repeatedly within the network architecture, the network has to process whole images or volumes at once, which is computationally infeasible for some applications. In this work, we follow a different reconstruction strategy by strictly separating the application of the NN, the regularization of the solution and the consistency with the measured data. The regularization is given in the form of an image prior, obtained as the output of a previously trained NN, which is used within a Tikhonov regularization framework. By doing so, more complex and sophisticated network architectures can be used for the removal of artefacts or noise than is usually the case in iterative NNs. Due to the large scale of the considered problems and the resulting computational complexity of the employed networks, the priors are obtained by processing the images or volumes as patches or slices. We evaluated the method for 3D cone-beam low-dose CT and undersampled 2D radial cine MRI and compared it to a total-variation-minimization-based reconstruction algorithm as well as to a method with regularization based on learned overcomplete dictionaries. The proposed method outperformed all the compared methods with respect to all chosen quantitative measures and further accelerates the regularization step of the reconstruction by several orders of magnitude.
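    A minimal sketch of the Tikhonov step described above (the forward operator, the prior, and the weighting are placeholders, not the paper's implementation): given a forward operator A, measured data y, and a prior image x_prior produced by a previously trained network, one solves the normal equations (A^T A + lambda I) x = A^T y + lambda x_prior, for example with conjugate gradients.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        def tikhonov_with_nn_prior(A, AT, y, x_prior, lam=0.5, maxiter=50):
            """Solve min_x ||A x - y||^2 + lam ||x - x_prior||^2 via CG on the normal equations."""
            n = x_prior.size
            normal_op = LinearOperator((n, n), matvec=lambda x: AT(A(x)) + lam * x)
            rhs = AT(y) + lam * x_prior
            x, _info = cg(normal_op, rhs, x0=x_prior, maxiter=maxiter)
            return x

        # Toy usage: a subsampling "measurement" of a 1D signal; a smoothed noisy copy of the
        # signal stands in for the image prior that would come from the trained network.
        rng = np.random.default_rng(0)
        x_true = np.sin(np.linspace(0, 4 * np.pi, 200))
        mask = rng.random(200) < 0.4                      # keep roughly 40% of the samples

        def A(x):                                         # forward operator: subsampling
            return x[mask]

        def AT(r):                                        # adjoint: zero-filled embedding
            out = np.zeros(200)
            out[mask] = r
            return out

        y = A(x_true) + 0.05 * rng.standard_normal(mask.sum())
        x_prior = np.convolve(x_true + 0.1 * rng.standard_normal(200), np.ones(5) / 5, mode="same")
        x_rec = tikhonov_with_nn_prior(A, AT, y, x_prior)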