
    Quantum Nonlocal Boxes Exhibit Stronger Distillability

    The hypothetical nonlocal box (\textsf{NLB}) proposed by Popescu and Rohrlich allows two spatially separated parties, Alice and Bob, to exhibit correlations stronger than those allowed by quantum mechanics. If the generated correlations are weak, they can sometimes be distilled into a stronger correlation by repeated applications of the \textsf{NLB}. Motivated by the limited distillability of \textsf{NLB}s, we initiate here a study of the distillation of correlations for nonlocal boxes that output quantum states rather than classical bits (\textsf{qNLB}s). We propose a new protocol for distillation and show that it asymptotically distills a class of correlated quantum nonlocal boxes to the value $\frac{1}{2}(3\sqrt{3}+1) \approx 3.098076$, whereas the optimal non-adaptive parity protocol for classical nonlocal boxes asymptotically distills only to the value 3.0. We show that our protocol is an optimal non-adaptive protocol for 1, 2 and 3 \textsf{qNLB} copies by constructing a matching dual solution for the associated primal semidefinite program (SDP). We conclude that \textsf{qNLB}s are a stronger resource for nonlocality than \textsf{NLB}s. The main consequence of this conclusion is that the \textsf{NLB} model is not the strongest resource with which to investigate the fundamental principles that limit quantum nonlocality. As such, our work provides strong motivation to reconsider the principles known to limit nonlocal correlations under the framework of \textsf{qNLB}s rather than \textsf{NLB}s. Comment: 25 pages, 7 figures
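
    The two asymptotic values quoted above can be checked with a few lines of arithmetic; the sketch below (plain Python, with names chosen only for this illustration) evaluates them.

        from math import sqrt

        # Asymptotic value reached by the qNLB distillation protocol described above.
        qnlb_limit = 0.5 * (3 * sqrt(3) + 1)

        # Asymptotic value of the optimal non-adaptive parity protocol for classical NLBs.
        nlb_parity_limit = 3.0

        print(f"qNLB protocol limit   : {qnlb_limit:.6f}")   # prints ~3.098076
        print(f"classical parity limit: {nlb_parity_limit:.6f}")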

    Higher Order Decompositions of Ordered Operator Exponentials

    We present a decomposition scheme based on Lie-Trotter-Suzuki product formulae to represent an ordered operator exponential as a product of ordinary operator exponentials. We provide a rigorous proof that does not use a time-displacement superoperator and that applies to non-analytic functions. Our proof provides explicit bounds on the error and includes cases where the functions are not infinitely differentiable. We show that Lie-Trotter-Suzuki product formulae can still be used for functions that are not infinitely differentiable, but that arbitrary-order scaling may not be achieved. Comment: 16 pages, 1 figure
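
    As an illustration of the general idea (not the paper's specific construction), the sketch below approximates an ordered operator exponential for a time-dependent Hermitian matrix by a product of short-time midpoint exponentials and compares it against a fine-grained reference; the Hamiltonian H(t) and the step counts are invented for the example.

        import numpy as np
        from scipy.linalg import expm

        def ordered_exp(H, t0, t1, steps):
            """Approximate the ordered (time-ordered) exponential of -i*H(t) on [t0, t1]
            by a product of midpoint-rule exponentials over `steps` slices."""
            dt = (t1 - t0) / steps
            dim = H(t0).shape[0]
            U = np.eye(dim, dtype=complex)
            for k in range(steps):
                tm = t0 + (k + 0.5) * dt          # midpoint of the k-th time slice
                U = expm(-1j * H(tm) * dt) @ U    # later times act on the left
            return U

        # Invented time-dependent Hamiltonian for the demonstration.
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Z = np.array([[1, 0], [0, -1]], dtype=complex)
        H = lambda t: np.cos(t) * X + np.sin(t) * Z

        U_coarse = ordered_exp(H, 0.0, 1.0, 10)
        U_ref = ordered_exp(H, 0.0, 1.0, 10000)   # fine-grained reference product
        print("operator-norm error:", np.linalg.norm(U_coarse - U_ref, 2))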

    On the Minimum Degree up to Local Complementation: Bounds and Complexity

    The local minimum degree of a graph is the minimum degree reached by means of a series of local complementations. In this paper, we investigate this quantity, which plays an important role in quantum computation and quantum error-correcting codes. First, we show that the local minimum degree of the Paley graph of order p is greater than $\sqrt{p} - 3/2$, which is, to our knowledge, the highest known bound for an explicit family of graphs. Probabilistic methods allow us to derive the existence of an infinite number of graphs whose local minimum degree is linear in their order, with constant 0.189 for graphs in general and 0.110 for bipartite graphs. As regards the computational complexity of the decision problem associated with the local minimum degree, we show that it is NP-complete and that there exists no k-approximation algorithm for this problem for any constant k unless P = NP. Comment: 11 pages
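
    For readers unfamiliar with the operation, the sketch below implements local complementation with networkx and uses a crude random search to find a low minimum degree reachable from a small random graph; the graph, the number of trials and the search depth are arbitrary choices for illustration, and the search only yields an upper bound on the local minimum degree.

        import itertools
        import random
        import networkx as nx

        def local_complement(G, v):
            """Return a copy of G in which the subgraph induced by the neighbours of v is complemented."""
            H = G.copy()
            for a, b in itertools.combinations(list(G.neighbors(v)), 2):
                if H.has_edge(a, b):
                    H.remove_edge(a, b)
                else:
                    H.add_edge(a, b)
            return H

        def search_min_degree(G, trials=200, depth=10, seed=0):
            """Random search over sequences of local complementations; returns the smallest
            minimum degree encountered, an upper bound on the local minimum degree."""
            rng = random.Random(seed)
            best = min(d for _, d in G.degree())
            for _ in range(trials):
                H = G
                for _ in range(depth):
                    H = local_complement(H, rng.choice(list(H.nodes())))
                    best = min(best, min(d for _, d in H.degree()))
            return best

        G = nx.gnp_random_graph(12, 0.5, seed=1)
        print("minimum degree of G            :", min(d for _, d in G.degree()))
        print("best degree found by the search:", search_min_degree(G))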

    Necessary Condition for the Quantum Adiabatic Approximation

    A gapped quantum system that is adiabatically perturbed remains approximately in its eigenstate after the evolution. We prove that, for constant gap, general quantum processes that approximately prepare the final eigenstate require a minimum time proportional to the ratio of the length of the eigenstate path to the gap. Thus, no rigorous adiabatic condition can yield a smaller cost. We also give a necessary condition for the adiabatic approximation that depends on local properties of the path, which is appropriate when the gap varies. Comment: 5 pages, 1 figure
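
    In symbols, writing $L$ for the length of the eigenstate path and $\Delta$ for the (constant) gap, the lower bound stated above takes the following form; the notation here is chosen for this summary and need not match the paper's.

        % minimum time for any process that approximately prepares the final eigenstate
        T = \Omega\!\left( \frac{L}{\Delta} \right)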

    Multipartite Nonlocal Quantum Correlations Resistant to Imperfections

    We use techniques for lower bounds on communication to derive necessary conditions, in terms of detector efficiency or the amount of superluminal communication, for classical local hidden-variable theories to be able to reproduce the quantum correlations occurring in EPR-type experiments in the presence of noise. We apply our method to an example involving n parties sharing a GHZ-type state on which they carry out measurements, and show that for local hidden-variable theories the amount of superluminal classical communication c and the detector efficiency $\eta$ are constrained by $\eta\, 2^{-c/n} = O(n^{-1/6})$, even for constant general error probability $\epsilon = O(1)$.

    Minimum Degree up to Local Complementation: Bounds, Parameterized Complexity, and Exact Algorithms

    The local minimum degree of a graph is the minimum degree that can be reached by means of local complementation. For any n, there exist graphs of order n whose local minimum degree is at least 0.189n, or at least 0.110n when restricted to bipartite graphs. Regarding the upper bound, we show that for any graph of order n the local minimum degree is at most 3n/8 + o(n), and at most n/4 + o(n) for bipartite graphs, improving the known n/2 upper bound. We also prove that the local minimum degree is smaller than half of the vertex cover number (up to a logarithmic term). The local minimum degree problem is NP-complete and hard to approximate. We show that this problem, even when restricted to bipartite graphs, is in W[2] and is FPT-equivalent to the EvenSet problem, whose W[1]-hardness is a long-standing open question. Finally, we show that the local minimum degree can be computed by an O*(1.938^n) algorithm, and by an O*(1.466^n) algorithm for bipartite graphs.

    Quantum Network Coding

    Since quantum information is continuous, handling it is sometimes surprisingly harder than handling its classical counterpart. A typical example is cloning: making a copy of digital information is straightforward, but it is not possible exactly for quantum information. The question in this paper is whether or not quantum network coding is possible. Its classical counterpart is another good example showing that digital information flow can be done much more efficiently than conventional (say, liquid) flow. Our answer to the question is similar to the case of cloning, namely, we show that quantum network coding is possible if approximation is allowed, by using a simple network model called the Butterfly. In this network there are two flow paths, s_1 to t_1 and s_2 to t_2, which share a single bottleneck channel of capacity one. In the classical case, we can send two bits simultaneously, one for each path, in spite of the bottleneck. Our results for quantum network coding include: (i) We can send any quantum state |psi_1> from s_1 to t_1 and |psi_2> from s_2 to t_2 simultaneously with a fidelity strictly greater than 1/2. (ii) If one of |psi_1> and |psi_2> is classical, then the fidelity can be improved to 2/3. (iii) A similar improvement is also possible if |psi_1> and |psi_2> are restricted to a finite number of (previously known) states. (iv) Several impossibility results, including a general upper bound on the fidelity, are also given. Comment: 27 pages, 11 figures. The 12-page version will appear in the 24th International Symposium on Theoretical Aspects of Computer Science (STACS 2007).
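
    To make the classical comparison concrete, the sketch below routes two bits through the butterfly network by sending their XOR over the capacity-one bottleneck, so that each sink recovers the bit destined for it; this is the textbook classical construction, written out here only for illustration.

        def classical_butterfly(b1: int, b2: int):
            """Classical XOR network coding on the butterfly: b1 travels from s1 to t1 and
            b2 from s2 to t2, even though the middle channel can carry only one bit."""
            side_to_t1 = b2            # direct side edge s2 -> t1
            side_to_t2 = b1            # direct side edge s1 -> t2
            bottleneck = b1 ^ b2       # the capacity-one bottleneck, forwarded to both sinks
            out_t1 = bottleneck ^ side_to_t1   # (b1 ^ b2) ^ b2 = b1
            out_t2 = bottleneck ^ side_to_t2   # (b1 ^ b2) ^ b1 = b2
            return out_t1, out_t2

        # Exhaustive check over all four input pairs.
        for b1 in (0, 1):
            for b2 in (0, 1):
                assert classical_butterfly(b1, b2) == (b1, b2)
        print("t1 always recovers b1 and t2 always recovers b2")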

    Large-scale 3-D modeling by integration of resistivity models and borehole data through inversion

    We present an automatic method for parameterization of a 3-D model of the subsurface, integrating lithological information from boreholes with resistivity models through an inverse optimization, with the objective of further detailing geological models or providing direct input into groundwater models. The parameter of interest is the clay fraction, expressed as the relative length of clay units in a depth interval. The clay fraction is obtained from lithological logs, and the clay fraction from resistivity is obtained by establishing a simple petrophysical relationship, a translator function, between resistivity and clay fraction. Through inversion we use the lithological data and the resistivity data to determine the optimum spatially distributed translator function. Applying the translator function we obtain a 3-D clay fraction model, which holds information from the resistivity data set and the borehole data set in one variable. Finally, we use k-means clustering to generate a 3-D model of the subsurface structures. We apply the procedure to the Norsminde survey in Denmark, integrating approximately 700 boreholes and more than 100 000 resistivity models from an airborne survey in the parameterization of the 3-D model covering 156 km². The final five-cluster 3-D model differentiates between clay materials and different high-resistivity materials from information held in the resistivity model and borehole observations, respectively.
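
    The sketch below illustrates the two computational ingredients mentioned above in toy form: a logistic translator function mapping resistivity to clay fraction, and k-means clustering of the resulting values; the functional form, its cut-off parameters and the synthetic resistivities are assumptions made only for this illustration, not the spatially distributed function obtained by the inversion.

        import numpy as np
        from sklearn.cluster import KMeans

        def translator(resistivity, low=30.0, high=70.0):
            """Toy translator function: maps resistivity (ohm-m) to a clay fraction in [0, 1].
            Low resistivity maps to a high clay fraction; `low` and `high` are assumed
            cut-offs standing in for the spatially varying parameters found by inversion."""
            mid = 0.5 * (low + high)
            width = (high - low) / 4.0
            return 1.0 / (1.0 + np.exp((resistivity - mid) / width))

        # Synthetic resistivity values standing in for the airborne resistivity models.
        rng = np.random.default_rng(0)
        resistivity = np.concatenate([rng.normal(20, 5, 500),     # clay-rich units
                                      rng.normal(120, 30, 500)])  # sandy, high-resistivity units
        clay_fraction = translator(resistivity)

        # Cluster the clay-fraction values into a small number of structural units.
        labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(
            clay_fraction.reshape(-1, 1))
        print("samples per cluster:", np.bincount(labels))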

    Multiple-point statistical simulation for hydrogeological models: 3D training image development and conditioning strategies

    Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and the optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2D or quasi-3D training images. In the present study, we demonstrate a novel strategy for 3D MPS modelling characterized by: (i) realistic 3D training images, and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km² in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments), which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels with a size of 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3D geological model. We apply a series of different simulation strategies based on data quality, and develop a novel method to effectively reproduce observed sand/clay spatial trends. The training image is constructed as a small 3D voxel model covering an area of 90 km². We use an iterative training-image development strategy and find that even slight modifications of the training image create significant changes in the simulations. Thus, the study underlines that it is important to consider both the geological environment and the type and quality of the input information in order to achieve optimal results from MPS modelling. In this study we present a possible workflow to build the training image and to effectively handle different types of input information to perform large-scale geostatistical modelling.
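
    The statistics scanned from the training image are the core input to an MPS simulation; the sketch below counts the frequencies of small 3-D patterns in a synthetic categorical voxel model, which conveys that idea without reproducing the actual simulation workflow (the array dimensions, template size and categories are invented for the example).

        import numpy as np
        from collections import Counter

        # Synthetic categorical training image: 0 = clay, 1 = sand (invented values).
        rng = np.random.default_rng(1)
        ti = (rng.random((30, 30, 12)) < 0.4).astype(np.int8)

        def pattern_counts(ti, template=(3, 3, 3)):
            """Count the occurrences of every categorical pattern of the given template size."""
            tx, ty, tz = template
            counts = Counter()
            for i in range(ti.shape[0] - tx + 1):
                for j in range(ti.shape[1] - ty + 1):
                    for k in range(ti.shape[2] - tz + 1):
                        counts[ti[i:i + tx, j:j + ty, k:k + tz].tobytes()] += 1
            return counts

        counts = pattern_counts(ti)
        print("distinct 3x3x3 patterns    :", len(counts))
        print("most frequent pattern count:", counts.most_common(1)[0][1])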

    Fetching marked items from an unsorted database in NMR ensemble computing

    Searching for a marked item, or several marked items, in an unsorted database is a very difficult mathematical problem. Using a classical computer, it requires $O(N=2^n)$ steps to find the target. Using a quantum computer, Grover's algorithm uses $O(\sqrt{N})$ steps. In NMR ensemble computing, Brüschweiler's algorithm uses $\log N$ steps. In this Letter, we propose an algorithm that fetches marked items from an unsorted database directly. It requires only a single query. It can find a single marked item or multiple marked items. Comment: 4 pages and 1 figure
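
    The scalings quoted above can be tabulated directly; the snippet below prints the approximate query counts for a database of N = 2^n items under each approach (the choice of n is arbitrary).

        from math import isqrt

        n = 20                  # number of bits; database size N = 2^n
        N = 2 ** n

        print(f"classical exhaustive search    : ~{N} steps")        # O(N)
        print(f"Grover's algorithm             : ~{isqrt(N)} steps")  # O(sqrt(N))
        print(f"Brüschweiler-type NMR search   : ~{n} steps")         # O(log N) = n
        print("proposed single-query algorithm: 1 query")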