
    Entanglement verification with finite data

    Suppose an experimentalist wishes to verify that his apparatus produces entangled quantum states. A finite amount of data cannot conclusively demonstrate entanglement, so drawing conclusions from real-world data requires statistical reasoning. We propose a reliable method to quantify the weight of evidence for (or against) entanglement, based on a likelihood ratio test. Our method is universal in that it can be applied to any sort of measurement. We demonstrate the method by applying it to two simulated experiments on two qubits. The first measures a single entanglement witness, while the second performs a tomographically complete measurement.
    Comment: 4 pages, 3 pretty pictures
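
    As a loose illustration of the kind of likelihood ratio test described above (not the authors' actual procedure), the sketch below compares the best-fit likelihood over all states against the best fit restricted to the separable region, for simulated measurements of a single dichotomic entanglement witness. The binomial model, the sign convention <W> >= 0 for separable states, and all numbers are assumptions made for this example.

        import numpy as np

        def loglik(k, n, p):
            # Log-likelihood of k "+1" outcomes in n shots with outcome probability p.
            return k * np.log(p) + (n - k) * np.log(1 - p)

        # Simulated data: n shots of a dichotomic witness W with <W> = 2p - 1;
        # separable states are assumed here to satisfy <W> >= 0, i.e. p >= 0.5.
        n, k = 1000, 430

        p_mle = k / n                    # unconstrained maximum-likelihood estimate
        p_sep = max(p_mle, 0.5)          # MLE constrained to the separable region
        lam = 2 * (loglik(k, n, p_mle) - loglik(k, n, p_sep))
        print(f"log-likelihood ratio statistic: {lam:.2f}")

    Large values of the statistic indicate evidence for entanglement; turning them into calibrated p-values takes care, since the separable hypothesis is a composite, boundary-constrained set.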

    A Smooth Transition from Powerlessness to Absolute Power

    We study the phase transition of the coalitional manipulation problem for generalized scoring rules. Previously it has been shown that, under some conditions on the distribution of votes, if the number of manipulators is o(√n), where n is the number of voters, then the probability that a random profile is manipulable by the coalition goes to zero as the number of voters goes to infinity, whereas if the number of manipulators is ω(√n), then the probability that a random profile is manipulable goes to one. Here we consider the critical window, where a coalition has size c√n, and we show that as c goes from zero to infinity, the limiting probability that a random profile is manipulable goes from zero to one in a smooth fashion, i.e., there is a smooth phase transition between the two regimes. This result analytically validates recent empirical results, and suggests that deciding the coalitional manipulation problem may be of limited computational hardness in practice.
    Comment: 22 pages; v2 contains minor changes and corrections; v3 contains minor changes after reviewer comments
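
    The smooth transition in c is easy to observe empirically. The sketch below is an assumption-laden toy, not the paper's setting: it uses plurality as the scoring rule, "a coalition of k extra voters can change the winner" as a crude proxy for manipulability, and arbitrary parameters.

        import numpy as np

        rng = np.random.default_rng(0)

        def p_changeable(n, k, m=3, trials=1000):
            # Fraction of random plurality profiles (n voters, m candidates)
            # in which k additional coordinated voters can change the winner.
            hits = 0
            for _ in range(trials):
                scores = np.bincount(rng.integers(m, size=n), minlength=m)
                ranked = np.sort(scores)
                # the coalition can push the runner-up past the current leader
                if ranked[-2] + k >= ranked[-1]:
                    hits += 1
            return hits / trials

        n = 10_000
        for c in (0.25, 0.5, 1.0, 2.0, 4.0):
            k = int(c * np.sqrt(n))
            print(f"c = {c:4.2f}  ->  P(winner changeable) ~ {p_changeable(n, k):.3f}")

    The printed probabilities should climb gradually from near zero to near one as c grows, matching the critical-window picture.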

    Distribution-Sensitive Bounds on Relative Approximations of Geometric Ranges

    A family R of ranges and a set X of points, all in R^d, together define a range space (X, R|_X), where R|_X = {X cap h | h in R}. We want to find a structure to estimate the quantity |X cap h|/|X| for any range h in R with the (rho, epsilon)-guarantee: (i) if |X cap h|/|X| > rho, the estimate must have relative error at most epsilon; (ii) otherwise, the estimate must have absolute error at most rho * epsilon. The objective is to minimize the size of the structure. Currently, the dominant solution is to compute a relative (rho, epsilon)-approximation, which is a subset of X with O~(lambda/(rho epsilon^2)) points, where lambda is the VC-dimension of (X, R|_X), and O~ hides polylog factors. This paper shows a more general bound sensitive to the content of X. We give a structure that stores O(log(1/rho)) integers plus O~(theta * (lambda/epsilon^2)) points of X, where theta, called the disagreement coefficient, measures how much the ranges differ from each other in their intersections with X. The value of theta is between 1 and 1/rho, such that our space bound is never worse than that of relative (rho, epsilon)-approximations, but we improve the latter's 1/rho term whenever theta = o(1/(rho log(1/rho))). We also prove that, in the worst case, summaries with the (rho, 1/2)-guarantee must consume Omega(theta) words even for d = 2 and lambda <= 3. We then constrain R to be the set of halfspaces in R^d for a constant d, and prove the existence of structures with o(1/(rho epsilon^2)) size offering (rho, epsilon)-guarantees when X is generated from various stochastic distributions. This is the first formal justification of why the term 1/rho is not compulsory for "realistic" inputs.
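
    For intuition on the (rho, epsilon)-guarantee itself, the sketch below builds the classical baseline, a uniform random sample of size roughly lambda/(rho epsilon^2), and checks the guarantee empirically on random halfplanes (for which lambda <= 3). This is the baseline structure, not the paper's disagreement-coefficient-sensitive one; the constants and the Gaussian input are arbitrary choices for the example.

        import numpy as np

        rng = np.random.default_rng(1)

        # Point set X in R^2; a uniform sample S is the estimator:
        # |X cap h| / |X| is approximated by |S cap h| / |S|.
        X = rng.standard_normal((100_000, 2))
        rho, eps = 0.05, 0.2
        s = int(8 / (rho * eps**2))      # ~ lambda/(rho eps^2), constants ad hoc
        S = X[rng.choice(len(X), size=s, replace=False)]

        def frac_in(P, a, b):
            # Fraction of the points of P in the halfplane {(x, y): y <= a*x + b}.
            return np.mean(P[:, 1] <= a * P[:, 0] + b)

        worst = 0.0
        for _ in range(200):             # random halfplane ranges
            a, b = rng.standard_normal(2)
            true, est = frac_in(X, a, b), frac_in(S, a, b)
            # (rho, eps)-guarantee: relative error above rho, absolute rho*eps below
            err = abs(est - true) / true if true > rho else abs(est - true) / rho
            worst = max(worst, err)
        print(f"sample size {s}, worst normalized error {worst:.3f} (target {eps})")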

    Contrastive Moments: Unsupervised Halfspace Learning in Polynomial Time

    We give a polynomial-time algorithm for learning high-dimensional halfspaces with margins in d-dimensional space to within desired TV distance when the ambient distribution is an unknown affine transformation of the d-fold product of an (unknown) symmetric one-dimensional logconcave distribution, and the halfspace is introduced by deleting at least an ϵ fraction of the data in one of the component distributions. Notably, our algorithm does not need labels and establishes the unique (and efficient) identifiability of the hidden halfspace under this distributional assumption. The sample and time complexity of the algorithm are polynomial in the dimension and 1/ϵ. The algorithm uses only the first two moments of suitable re-weightings of the empirical distribution, which we call contrastive moments; its analysis uses classical facts about generalized Dirichlet polynomials and relies crucially on a new monotonicity property of the moment ratio of truncations of logconcave distributions. Such algorithms, based only on first and second moments, were suggested in earlier work, but hitherto eluded rigorous guarantees. Prior work addressed the special case when the underlying distribution is Gaussian via Non-Gaussian Component Analysis. We improve on this by providing polynomial-time guarantees based on Total Variation (TV) distance, in place of existing moment-bound guarantees that can be super-polynomial. Our work is also the first to go beyond Gaussians in this setting.
    Comment: Preliminary version in NeurIPS 202
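
    The following toy conveys the flavor of a moment-based approach in this setting; it illustrates only the general idea (after putting the sample in isotropic position, a re-weighted first moment singles out the truncated direction, since the symmetric directions average out), not the authors' algorithm. The logistic marginals, the Gaussian weight, and all parameters are assumptions made for the example.

        import numpy as np

        rng = np.random.default_rng(2)
        d, n, eps = 10, 200_000, 0.3

        # Product of symmetric logconcave marginals (logistic); delete the
        # bottom eps-fraction of one component, then hide it with a rotation.
        Z = rng.logistic(size=(4 * n, d))
        t = np.quantile(Z[:, 0], eps)
        Z = Z[Z[:, 0] > t][:n]                 # hidden halfspace: z_0 > t
        Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
        X = Z @ Q.T                            # observed, rotated sample

        # Put the sample in isotropic position.
        X = X - X.mean(axis=0)
        L = np.linalg.cholesky(np.linalg.inv(np.cov(X.T)))
        Y = X @ L                              # cov(Y) ~ identity

        # Re-weighted first moment: the symmetric (untruncated) directions
        # contribute ~0 in expectation, so the weighted mean aligns with the
        # hidden normal.
        w = np.exp(-0.5 * np.sum(Y**2, axis=1) / d)
        v = (Y * w[:, None]).mean(axis=0)
        v /= np.linalg.norm(v)

        normal = np.linalg.solve(L, Q[:, 0])   # hidden normal in Y-coordinates
        normal /= np.linalg.norm(normal)
        print("|cosine with hidden normal|:", abs(v @ normal))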

    Explicit Optimal Hardness via Gaussian stability results

    The results of Raghavendra (2008) show that, assuming Khot's Unique Games Conjecture (2002), for every constraint satisfaction problem there exists a generic semi-definite program that achieves the optimal approximation factor. This result is existential, as it does not provide an explicit optimal rounding procedure, nor does it allow one to calculate exactly the Unique Games hardness of the problem. Obtaining an explicit optimal approximation scheme and the corresponding approximation factor is a difficult challenge for each specific approximation problem. An approach for determining the exact approximation factor and the corresponding optimal rounding was established in the analysis of MAX-CUT (KKMO 2004) and the use of the Invariance Principle (MOO 2005). However, this approach crucially relies on results explicitly proving optimal partitions in Gaussian space. Until recently, Borell's result (Borell 1985) was the only non-trivial Gaussian partition result known. In this paper we derive the first explicit optimal approximation algorithm and the corresponding approximation factor using a new result on Gaussian partitions due to Isaksson and Mossel (2012). This Gaussian result allows us to determine exactly the Unique Games hardness of MAX-3-EQUAL. In particular, our results show that Zwick's algorithm for this problem achieves the optimal approximation factor, and prove that the approximation achieved by the algorithm is ≈ 0.796, as conjectured by Zwick. We further use the previously known optimal Gaussian partition results to obtain a new Unique Games hardness factor for MAX-k-CSP: using the well-known fact that jointly normal pairwise independent random variables are fully independent, we show that the UGC hardness of MAX-k-CSP is ⌈(k+1)/2⌉ / 2^(k-1), improving on results of Austrin and Mossel (2009).
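
    To spell out the well-known fact invoked in the last step (a standard property of Gaussians, not specific to this paper): pairwise independence forces Cov(X_i, X_j) = 0 for all i ≠ j, so the covariance matrix of a jointly normal vector (X_1, ..., X_k) is diagonal and its density factorizes,

        f(x) = \prod_{i=1}^{k} \frac{1}{\sqrt{2\pi\sigma_i^2}} \exp\left( -\frac{(x_i - \mu_i)^2}{2\sigma_i^2} \right),

    which is precisely full independence. As an arithmetic check of the stated bound, at k = 4 the hardness factor reads ⌈(4+1)/2⌉ / 2^3 = 3/8.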