9,782 research outputs found

    Sharp Total Variation Bounds for Finitely Exchangeable Arrays

    In this article we demonstrate the relationship between finitely exchangeable arrays and finitely exchangeable sequences. We then derive sharp bounds on the total variation distance between distributions of finitely and infinitely exchangeable arrays.

    Genuinely multipartite entangled states and orthogonal arrays

    A pure quantum state of N subsystems with d levels each is called a k-multipartite maximally entangled state, written k-uniform, if all of its reductions to k qudits are maximally mixed. These states form a natural generalization of N-qudit GHZ states, which belong to the class of 1-uniform states. We establish a link between the combinatorial notion of orthogonal arrays and k-uniform states and prove the existence of several new classes of such states for N-qudit systems. In particular, known Hadamard matrices allow us to explicitly construct 2-uniform states for an arbitrary number N>5 of qubits. We show that finding a different class of 2-uniform states would imply the Hadamard conjecture, so the full classification of 2-uniform states seems to be currently out of reach. Additionally, single vectors of another class of 2-uniform states are in one-to-one correspondence with maximal sets of mutually unbiased bases. Furthermore, we establish links between the existence of k-uniform states and classical and quantum error-correcting codes, and provide a novel graph representation for such states.
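
    To make the definition of k-uniformity concrete (an illustrative sketch, not code from the paper; the helper names reduced_density_matrix and is_k_uniform are hypothetical), the following checks whether every reduction of a pure N-qubit state to k qubits equals the maximally mixed state, and applies the test to the 3-qubit GHZ state, which is 1-uniform but not 2-uniform.

```python
import itertools
import numpy as np

def reduced_density_matrix(state, keep, n):
    """Partial trace of |state><state| onto the qubits listed in `keep`."""
    psi = state.reshape([2] * n)
    traced = [q for q in range(n) if q not in keep]
    rho = np.tensordot(psi, psi.conj(), axes=(traced, traced))
    dim = 2 ** len(keep)
    return rho.reshape(dim, dim)

def is_k_uniform(state, n, k, tol=1e-10):
    """A pure n-qubit state is k-uniform if every k-qubit reduction is maximally mixed."""
    target = np.eye(2 ** k) / 2 ** k
    return all(
        np.allclose(reduced_density_matrix(state, list(keep), n), target, atol=tol)
        for keep in itertools.combinations(range(n), k)
    )

# 3-qubit GHZ state (|000> + |111>)/sqrt(2): 1-uniform but not 2-uniform.
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
print(is_k_uniform(ghz, n=3, k=1))  # True
print(is_k_uniform(ghz, n=3, k=2))  # False
```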

    Simulating Hamiltonians in Quantum Networks: Efficient Schemes and Complexity Bounds

    We address the problem of simulating pair-interaction Hamiltonians in n-node quantum networks where the subsystems have arbitrary, possibly different, dimensions. We show that any pair-interaction can be used to simulate any other by applying appropriate sequences of local control operations. Efficient schemes for decoupling and time reversal can be constructed from orthogonal arrays. Conditions for time-optimal simulation are formulated in terms of spectral majorization of matrices characterizing the coupling parameters. Moreover, we consider a specific system of n harmonic oscillators with bilinear interaction. In this case, decoupling can be achieved efficiently using the combinatorial concept of difference schemes. For this type of interaction we present optimal schemes for inversion.
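
    To give a rough sense of what decoupling means here (a generic numerical sketch, not the orthogonal-array or difference-scheme constructions of the paper), the code below averages a two-qubit coupling Hamiltonian over conjugation by all 16 pairs of local Pauli operators; the coupling terms cancel exactly. The point of the paper's orthogonal-array constructions is to obtain much shorter averaging schemes for many-node networks.

```python
import numpy as np

# Single-qubit Pauli matrices.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

# A generic two-qubit pair interaction, e.g. a Heisenberg coupling.
H = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)

# Average H over conjugation by all local Pauli pairs P_a (x) P_b.
avg = np.zeros((4, 4), dtype=complex)
for Pa in paulis:
    for Pb in paulis:
        U = np.kron(Pa, Pb)
        avg += U @ H @ U.conj().T
avg /= 16

print(np.allclose(avg, np.zeros((4, 4))))  # True: the coupling averages away
```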

    Parity of Sets of Mutually Orthogonal Latin Squares

    Every Latin square has three attributes that can be even or odd, but any two of these attributes determine the third. Hence the parity of a Latin square has an information content of 2 bits. We extend the definition of parity from Latin squares to sets of mutually orthogonal Latin squares (MOLS) and the corresponding orthogonal arrays (OA). Suppose the parity of an $\mathrm{OA}(k,n)$ has an information content of $\dim(k,n)$ bits. We show that $\dim(k,n) \leq {k \choose 2}-1$. For the case corresponding to projective planes we prove a tighter bound, namely $\dim(n+1,n) \leq {n \choose 2}$ when $n$ is odd and $\dim(n+1,n) \leq {n \choose 2}-1$ when $n$ is even. Using the existence of MOLS with subMOLS, we prove that if $\dim(k,n)={k \choose 2}-1$ then $\dim(k,N) = {k \choose 2}-1$ for all sufficiently large $N$. Let the ensemble of an $\mathrm{OA}$ be the set of Latin squares derived by interpreting any three columns of the OA as a Latin square. We demonstrate many restrictions on the number of Latin squares of each parity that the ensemble of an $\mathrm{OA}(k,n)$ can contain. These restrictions depend on $n \bmod 4$ and give some insight as to why it is harder to build projective planes of order $n \not\equiv 2 \pmod 4$ than of order $n \equiv 2 \pmod 4$. For example, we prove that when $n \not\equiv 2 \pmod 4$ it is impossible to build an $\mathrm{OA}(n+1,n)$ for which all Latin squares in the ensemble are isotopic (equivalent to each other up to permutation of the rows, columns and symbols).
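
    To make the notion of parity concrete (an illustrative sketch using the standard definitions, not code from the paper; the helper names are hypothetical): each row of a Latin square of order n is a permutation of the n symbols, and the row parity is the product of the signs of those permutations; column and symbol parities are defined analogously. The sketch below computes all three parities for the cyclic Latin square of order 5.

```python
import numpy as np

def perm_sign(p):
    """Sign of a permutation given as the images of 0..n-1."""
    p = list(p)
    sign, seen = 1, [False] * len(p)
    for i in range(len(p)):
        if not seen[i]:
            # Each cycle of length L contributes (-1)^(L-1).
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j = p[j]
                length += 1
            sign *= (-1) ** (length - 1)
    return sign

def latin_square_parities(L):
    """Row, column and symbol parities of a Latin square L (array over symbols 0..n-1)."""
    n = L.shape[0]
    row_par = np.prod([perm_sign(L[r, :]) for r in range(n)])
    col_par = np.prod([perm_sign(L[:, c]) for c in range(n)])
    # Symbol s defines the permutation mapping each row r to the column holding s.
    sym_par = np.prod([perm_sign([np.where(L[r, :] == s)[0][0] for r in range(n)])
                       for s in range(n)])
    return int(row_par), int(col_par), int(sym_par)

n = 5
cyclic = np.fromfunction(lambda r, c: (r + c) % n, (n, n), dtype=int)
print(latin_square_parities(cyclic))  # (1, 1, 1) for this square
```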

    Validating Sample Average Approximation Solutions with Negatively Dependent Batches

    Sample-average approximations (SAA) are a practical means of finding approximate solutions of stochastic programming problems involving an extremely large (or infinite) number of scenarios. SAA can also be used to find estimates of a lower bound on the optimal objective value of the true problem which, when coupled with an upper bound, provide confidence intervals for the true optimal objective value and valuable information about the quality of the approximate solutions. Specifically, the lower bound can be estimated by solving multiple SAA problems (each obtained using a particular sampling method) and averaging the resulting objective values. State-of-the-art methods for lower-bound estimation generate the batches of scenarios for the SAA problems independently. In this paper, we describe sampling methods that produce negatively dependent batches, thus reducing the variance of the sample-averaged lower bound estimator and increasing its usefulness in defining a confidence interval for the optimal objective value. We provide conditions under which the new sampling methods can reduce the variance of the lower bound estimator, and present computational results to verify that our scheme can reduce the variance significantly, by comparison with the traditional Latin hypercube approach.
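
    As a small illustration of the variance-reduction idea this work builds on (a generic sketch of the baseline Latin hypercube approach mentioned above, not the paper's negatively dependent batch schemes; the integrand and helper names are hypothetical), the code below compares the variance of a sample-mean estimator under iid sampling and under Latin hypercube sampling for a simple expectation over the unit square.

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, d, rng):
    """One Latin hypercube sample of n points in [0,1]^d: one point per stratum per coordinate."""
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (strata + rng.random((n, d))) / n

def estimator_variance(sampler, n, reps):
    """Empirical variance of the sample-mean estimator of E[f(U)] over `reps` replications."""
    f = lambda u: np.exp(u.sum(axis=1))          # a smooth test integrand on [0,1]^2
    means = [f(sampler(n)).mean() for _ in range(reps)]
    return np.var(means)

n, d, reps = 64, 2, 2000
iid_var = estimator_variance(lambda m: rng.random((m, d)), n, reps)
lhs_var = estimator_variance(lambda m: latin_hypercube(m, d, rng), n, reps)
print(f"iid variance: {iid_var:.2e}, LHS variance: {lhs_var:.2e}")  # LHS is markedly smaller
```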

    Conditional Lower Bounds for Space/Time Tradeoffs

    In recent years much effort has been concentrated on proving polynomial time lower bounds for algorithms solving various well-known problems. A useful technique for showing such lower bounds is to prove them conditionally, based on well-studied hardness assumptions such as 3SUM, APSP, SETH, etc. This line of research helps to obtain a better understanding of the complexity inside P. A related question asks to prove conditional space lower bounds on data structures that are constructed to solve certain algorithmic tasks after an initial preprocessing stage. This question has received little attention in previous research even though it has potentially strong impact. In this paper we address this question and show that, surprisingly, many of the well-studied hard problems that are known to have conditional polynomial time lower bounds are also hard with respect to space. This hardness is shown as a tradeoff between the space consumed by the data structure and the time needed to answer queries. The tradeoff may be either smooth or admit one or more singularity points. We reveal interesting connections between different space hardness conjectures and present matching upper bounds. We also apply these hardness conjectures to both static and dynamic problems and prove their conditional space hardness. We believe that this novel framework of polynomial space conjectures can play an important role in expressing polynomial space lower bounds for many important algorithmic problems. Moreover, it seems that it can also help in achieving a better understanding of the hardness of the corresponding problems in terms of time.