
    Leibniz International Proceedings in Informatics (LIPIcs)

    Smallest enclosing spheres of finite point sets are central to methods in topological data analysis. Focusing on Bregman divergences to measure dissimilarity, we prove bounds on the location of the center of a smallest enclosing sphere. These bounds depend on the range of radii for which Bregman balls are convex.
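
    As background for this and several of the abstracts below, it may help to recall the standard definitions (not specific to this paper) of the Bregman divergence and the two types of Bregman balls; the distinction explains why convexity can fail for only one of the two ball types:

    % Bregman divergence of a strictly convex, differentiable generator \phi:
    D_\phi(x, y) \;=\; \phi(x) - \phi(y) - \langle \nabla\phi(y),\, x - y \rangle

    % Bregman balls of center c and radius r come in two types:
    %   first type (divergence measured *to* the center):
    B_\phi(c, r)  \;=\; \{\, x : D_\phi(x, c) \le r \,\}
    %   always convex, since D_\phi(\cdot, c) is \phi minus an affine function;
    %   second type (divergence measured *from* the center):
    B'_\phi(c, r) \;=\; \{\, x : D_\phi(c, x) \le r \,\}
    %   convex only for certain radii and generators.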

    Algorithmic Superactivation of Asymptotic Quantum Capacity of Zero-Capacity Quantum Channels

    The superactivation of zero-capacity quantum channels makes it possible to combine two quantum channels, each with zero capacity on its own, into a joint channel with positive capacity. Currently, we have no theoretical background that describes all possible combinations of superactive zero-capacity channels; hence, there may be many other possible combinations. In practice, to discover such superactive zero-capacity channel pairs, we must analyze an extremely large set of possible quantum states, channel models, and channel probabilities, and no efficient algorithmic tool yet exists for this purpose. This paper presents an efficient algorithmic method for finding such combinations. Our method can be a very valuable tool for improving the results of fault-tolerant quantum computation and for communication over very noisy quantum channels.
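
    The abstract does not spell out the search procedure, but the brute-force baseline it improves on is easy to sketch. Below is a minimal, hypothetical illustration (not the paper's algorithm; all function names are ours): channels are given as lists of Kraus operators, the one-shot coherent information, which lower-bounds quantum capacity, is evaluated on sampled joint inputs, and channel pairs whose joint coherent information is positive are flagged.

    import numpy as np
    from itertools import combinations

    def von_neumann_entropy(rho):
        """S(rho) = -Tr(rho log2 rho), from the eigenvalues of a density matrix."""
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]
        return float(-np.sum(evals * np.log2(evals)))

    def coherent_information(rho, kraus):
        """One-shot coherent information I_c(rho, N) = S(N(rho)) - S_E, where the
        environment entropy S_E is the entropy of the exchange matrix
        W[i, j] = Tr(K_i rho K_j^dagger) (Schumacher's entropy exchange)."""
        out = sum(K @ rho @ K.conj().T for K in kraus)
        W = np.array([[np.trace(Ki @ rho @ Kj.conj().T) for Kj in kraus]
                      for Ki in kraus])
        return von_neumann_entropy(out) - von_neumann_entropy(W)

    def joint_kraus(kraus_a, kraus_b):
        """Kraus operators of the product channel N_A (tensor) N_B."""
        return [np.kron(Ka, Kb) for Ka in kraus_a for Kb in kraus_b]

    def random_state(dim, rng):
        """A random mixed state rho = G G^dagger / Tr(G G^dagger)."""
        G = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
        rho = G @ G.conj().T
        return rho / np.trace(rho)

    def search_superactive_pairs(channels, n_samples=200, seed=0):
        """For each channel pair, sample joint input states; a strictly positive
        coherent information certifies positive joint quantum capacity."""
        rng = np.random.default_rng(seed)
        hits = []
        for (i, ka), (j, kb) in combinations(enumerate(channels), 2):
            kraus = joint_kraus(ka, kb)
            dim = kraus[0].shape[1]
            best = max(coherent_information(random_state(dim, rng), kraus)
                       for _ in range(n_samples))
            if best > 1e-6:
                hits.append((i, j, best))
        return hits

    In practice positive values are rare and require carefully structured inputs (e.g., states entangled across the two channels), which is precisely the combinatorial blow-up the paper's method addresses.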

    Klee sets and Chebyshev centers for the right Bregman distance

    We systematically investigate the farthest distance function, farthest points, Klee sets, and Chebyshev centers with respect to Bregman distances induced by Legendre functions. These objects are of considerable interest in information geometry and machine learning; when the Legendre function is specialized to the energy, one recovers classical notions from approximation theory and convex analysis. The contribution of this paper is twofold. First, we provide an affirmative answer to a recently posed question on whether every Klee set with respect to the right Bregman distance is a singleton. Second, we prove uniqueness of the Chebyshev center and present a characterization that relates to previous works by Garkavi, by Klee, and by Nielsen and Nock.
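
    For orientation, the objects studied here can be written down compactly; the slot convention below (set in the left argument of $D_\phi$) is our assumption for illustration, as left/right conventions vary across papers:

    % Farthest-distance function of a closed bounded set C under D_\phi:
    F_C(y) \;=\; \sup_{x \in C} D_\phi(x, y)

    % Chebyshev center: the point minimizing the farthest distance,
    \mathrm{cheb}(C) \;=\; \operatorname*{argmin}_{y} F_C(y)

    % C is a Klee set when the farthest-point map
    % Q_C(y) = \operatorname*{argmax}_{x \in C} D_\phi(x, y)
    % is a singleton for every y.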

    A directed isoperimetric inequality with application to Bregman near neighbor lower bounds

    Bregman divergences $D_\phi$ are a class of divergences parametrized by a convex function $\phi$ and include well-known distance functions like $\ell_2^2$ and the Kullback-Leibler divergence. There has been extensive research on algorithms for problems like clustering and near neighbor search with respect to Bregman divergences. In all cases, the algorithms depend not just on the data size $n$ and dimensionality $d$, but also on a structure constant $\mu \ge 1$ that depends solely on $\phi$ and can grow without bound independently of $n$ and $d$. In this paper, we provide the first evidence that this dependence on $\mu$ might be intrinsic. We focus on the problem of approximate near neighbor search for Bregman divergences. We show that under the cell probe model, any non-adaptive data structure (like locality-sensitive hashing) for $c$-approximate near-neighbor search that admits $r$ probes must use space $\Omega(n^{1 + \frac{\mu}{cr}})$. In contrast, for LSH under $\ell_1$ the best bound is $\Omega(n^{1 + \frac{1}{cr}})$. Our new tool is a directed variant of the standard boolean noise operator. We show that a generalization of the Bonami-Beckner hypercontractivity inequality holds "in expectation" or upon restriction to certain subsets of the Hamming cube, and that this is sufficient to prove the desired isoperimetric inequality that we use in our data structure lower bound. We also present a structural result reducing the Hamming cube to a Bregman cube. This structure allows us to obtain lower bounds for problems under Bregman divergences from their $\ell_1$ analogs. In particular, we get a (weaker) lower bound for approximate near neighbor search of the form $\Omega(n^{1 + \frac{1}{cr}})$ for an $r$-query non-adaptive data structure, and new cell probe lower bounds for a number of other near neighbor questions in Bregman space.
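
    For reference, the undirected objects that the paper's directed variant generalizes are standard; this block states the classical definitions, not the paper's new operator:

    % Boolean noise operator acting on f : \{-1,+1\}^n \to \mathbb{R}:
    (T_\rho f)(x) \;=\; \mathbb{E}_{y}\,[f(y)],
    % where each coordinate y_i independently equals x_i with probability
    % (1+\rho)/2 and -x_i with probability (1-\rho)/2.

    % Bonami--Beckner hypercontractivity: for 1 \le p \le q and
    % 0 \le \rho \le \sqrt{(p-1)/(q-1)},
    \| T_\rho f \|_q \;\le\; \| f \|_p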

    Revisiting Chernoff Information with Likelihood Ratio Exponential Families

    The Chernoff information between two probability measures is a statistical divergence measuring their deviation, defined as their maximally skewed Bhattacharyya distance. Although the Chernoff information was originally introduced for bounding the Bayes error in statistical hypothesis testing, the divergence has since found many other applications, owing to its empirical robustness, in domains ranging from information fusion to quantum information. From the viewpoint of information theory, the Chernoff information can also be interpreted as a minmax symmetrization of the Kullback--Leibler divergence. In this paper, we first revisit the Chernoff information between two densities of a measurable Lebesgue space by considering the exponential families induced by their geometric mixtures: the so-called likelihood ratio exponential families. Second, we show how to (i) solve exactly the Chernoff information between any two univariate Gaussian distributions, or get a closed-form formula using symbolic computing, (ii) report a closed-form formula for the Chernoff information of centered Gaussians with scaled covariance matrices, and (iii) use a fast numerical scheme to approximate the Chernoff information between any two multivariate Gaussian distributions.
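
    A simple numerical scheme in the spirit of (iii) is easy to sketch for univariate densities: compute the skewed Bhattacharyya distance $B_\alpha(p, q) = -\log \int p^\alpha q^{1-\alpha}\,\mathrm{d}x$ by quadrature and maximize over $\alpha \in (0, 1)$; the integral is log-convex in $\alpha$ by Holder's inequality, so $B_\alpha$ is concave and the maximization is well behaved. This is our illustrative sketch, not the paper's scheme:

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm

    def skewed_bhattacharyya(p, q, alpha):
        """B_alpha(p, q) = -log( integral of p(x)^alpha * q(x)^(1-alpha) dx )."""
        integrand = lambda x: p.pdf(x) ** alpha * q.pdf(x) ** (1.0 - alpha)
        value, _ = quad(integrand, -np.inf, np.inf)
        return -np.log(value)

    def chernoff_information(p, q):
        """Maximally skewed Bhattacharyya distance: maximize B_alpha over (0, 1)."""
        result = minimize_scalar(lambda a: -skewed_bhattacharyya(p, q, a),
                                 bounds=(1e-6, 1.0 - 1e-6), method="bounded")
        return -result.fun, result.x  # (Chernoff information, optimal alpha)

    p, q = norm(0.0, 1.0), norm(2.0, 1.5)
    ci, alpha_star = chernoff_information(p, q)
    print(f"Chernoff information ~ {ci:.4f} attained at alpha* ~ {alpha_star:.3f}")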

    Approximate Bregman near neighbors in sublinear time: beyond the triangle inequality

    Bregman divergences are important distance measures used extensively in data-driven applications such as computer vision, text mining, and speech processing, and are a key focus of interest in machine learning. Answering nearest neighbor (NN) queries under these measures is very important in these applications and has been the subject of extensive study, but is problematic because these distance measures lack metric properties like symmetry and the triangle inequality. In this paper, we present the first provably approximate nearest-neighbor (ANN) algorithms. These process queries in $O(\log n)$ time for Bregman divergences in fixed-dimensional spaces. We also obtain $\mathrm{polylog}\, n$ bounds for a more abstract class of distance measures (containing Bregman divergences) which satisfy certain structural properties. Both of these bounds apply to the regular asymmetric Bregman divergences as well as their symmetrized versions. To do so, we develop two geometric properties vital to our analysis: a reverse triangle inequality (RTI) and a relaxed triangle inequality called $\mu$-defectiveness, where $\mu$ is a domain-dependent parameter. Bregman divergences satisfy the RTI but not $\mu$-defectiveness. However, we show that the square root of a Bregman divergence does satisfy $\mu$-defectiveness. This allows us to utilize both properties in an efficient search data structure that follows the general two-stage paradigm of a ring-tree decomposition followed by a quadtree search, used in previous near-neighbor algorithms for Euclidean space and spaces of bounded doubling dimension. Our first algorithm resolves a query for a $d$-dimensional $(1+\varepsilon)$-ANN in $O\!\left(\frac{\log n}{\varepsilon}\right)^{O(d)}$ time and $O(n \log^{d-1} n)$ space, and holds for generic $\mu$-defective distance measures satisfying a RTI. Our second algorithm is more specific in analysis to the Bregman divergences and uses a further structural constant, the maximum ratio of second derivatives over each dimension of our domain ($c_0$). This allows us to locate a $(1+\varepsilon)$-ANN in $O(\log n)$ time and $O(n)$ space, where there is a further $(c_0)^d$ factor in the big-O for the query time.
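
    The key structural claim, that the square root of a Bregman divergence is $\mu$-defective on a suitably bounded domain, can be probed empirically. The sketch below assumes one common form of the definition, $|d(x, q) - d(y, q)| \le \mu\, d(x, y)$, and estimates the smallest feasible $\mu$ for $\sqrt{D_\phi}$ with the generalized Kullback-Leibler divergence by sampling random triples (our illustration, not the paper's code):

    import numpy as np

    rng = np.random.default_rng(0)

    def gen_kl(x, y):
        """Bregman divergence of phi(x) = sum_i x_i log x_i:
        the generalized Kullback-Leibler divergence."""
        return float(np.sum(x * np.log(x / y) - x + y))

    def estimate_mu(dist, n_triples=20_000, dim=3, lo=0.1, hi=1.0):
        """Smallest mu consistent with |d(x,q) - d(y,q)| <= mu * d(x,y)
        over randomly sampled triples in the box [lo, hi]^dim."""
        mu = 0.0
        for _ in range(n_triples):
            x, y, q = rng.uniform(lo, hi, size=(3, dim))
            d_xy = dist(x, y)
            if d_xy > 1e-12:
                mu = max(mu, abs(dist(x, q) - dist(y, q)) / d_xy)
        return mu

    sqrt_kl = lambda x, y: np.sqrt(gen_kl(x, y))
    print("estimated mu for sqrt(generalized KL):", estimate_mu(sqrt_kl))
    print("estimated mu for generalized KL (comparison):", estimate_mu(gen_kl))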

    On Approximating the Riemannian 1-Center

    In this paper, we generalize the simple Euclidean 1-center approximation algorithm of Badoiu and Clarkson (2003) to Riemannian geometries and study the corresponding convergence rate. We then show how to instantiate this generic algorithm in two particular cases: (1) hyperbolic geometry, and (2) the Riemannian manifold of symmetric positive definite matrices.
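
    The Euclidean algorithm being generalized is short enough to state in full: start at any input point and repeatedly move a 1/(t+1) fraction of the way toward the current farthest point. A minimal sketch of the Euclidean case follows; the Riemannian version in the paper replaces the straight-line step with a step along the geodesic toward the farthest point.

    import numpy as np

    def one_center(points, iterations=1000):
        """Badoiu-Clarkson 1-center approximation in Euclidean space: step
        from the current center toward the farthest input point, with step
        size shrinking as 1/(t+1)."""
        c = points[0].astype(float).copy()
        for t in range(1, iterations + 1):
            farthest = points[np.argmax(np.linalg.norm(points - c, axis=1))]
            c += (farthest - c) / (t + 1)
        return c

    pts = np.random.default_rng(0).normal(size=(100, 3))
    c = one_center(pts)
    print("center:", c, "radius:", np.linalg.norm(pts - c, axis=1).max())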