1,612 research outputs found

    Vector-Quantization by density matching in the minimum Kullback-Leibler divergence sense

    Representation of a large set of high-dimensional data is a fundamental problem in many applications such as communications and biomedical systems. The problem has been tackled by encoding the data with a compact set of code-vectors called processing elements. In this study, we propose a vector quantization technique that encodes the information in the data using concepts derived from information theoretic learning. The algorithm minimizes a cost function based on the Kullback-Leibler divergence to match the distribution of the processing elements with the distribution of the data. The performance of this algorithm is demonstrated on synthetic data as well as on an edge-image of a face. Comparisons are provided with existing algorithms such as LBG and SOM.
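    Reading the abstract's cost function literally suggests a simple sketch: the data-density term of D_KL(p_data || p_code) does not depend on the codebook, so matching the two densities reduces to maximizing the average log-likelihood of the data under a Gaussian Parzen estimate centered at the processing elements. The Python sketch below implements that reading by gradient ascent; it is an illustration under stated assumptions, not the paper's algorithm, and the name kl_vq and all hyperparameters (sigma, lr, n_iters) are invented for the example.

```python
import numpy as np

def kl_vq(data, n_codes=16, sigma=0.2, lr=0.05, n_iters=200, seed=0):
    """Hypothetical sketch: place code vectors by minimizing
    D_KL(p_data || p_code), with p_code a Gaussian Parzen density
    centered on the codebook.  Only the cross-entropy term depends
    on the codebook, so this is gradient ascent on the mean
    log-likelihood of the data.  Assumes `data` is standardized."""
    rng = np.random.default_rng(seed)
    # Initialize the codebook from randomly chosen data samples.
    codes = data[rng.choice(len(data), n_codes, replace=False)].copy()
    for _ in range(n_iters):
        diff = data[:, None, :] - codes[None, :, :]        # (N, M, D)
        logk = -np.sum(diff ** 2, axis=2) / (2 * sigma ** 2)
        # Responsibilities r_ij of code j for sample i (softmax over codes).
        r = np.exp(logk - logk.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # Gradient of the mean log-likelihood w.r.t. each code vector.
        grad = np.einsum('ij,ijd->jd', r, diff) / (len(data) * sigma ** 2)
        codes += lr * grad
    return codes
```

    As sigma shrinks, the responsibilities harden and each update moves a code toward the mean of its nearest samples, i.e. a soft version of LBG's centroid step, which is one way to see why the paper compares against LBG and SOM.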

    Quantum State Tomography of a Single Qubit: Comparison of Methods

    The tomographic reconstruction of the state of a quantum-mechanical system is an essential component in the development of quantum technologies. We present an overview of different tomographic methods for determining the quantum-mechanical density matrix of a single qubit: (scaled) direct inversion, maximum likelihood estimation (MLE), minimum Fisher information distance, and Bayesian mean estimation (BME). We discuss the different prior densities in the space of density matrices, on which both MLE and BME depend, as well as ways of including experimental errors and of estimating tomography errors. As a measure of the accuracy of these methods we average the trace distance between a given density matrix and the tomographic density matrices it can give rise to through experimental measurements. We find that the BME provides the most accurate estimate of the density matrix, and suggest using either the pure-state prior, if the system is known to be in a rather pure state, or the Bures prior if any state is possible. The MLE is found to be slightly less accurate. We comment on the extrapolation of these results to larger systems.
    Comment: 15 pages, 4 figures, 2 tables; replaced previous figure 5 by new table I. In Journal of Modern Optics, 201
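    Among the methods listed, (scaled) direct inversion is compact enough to sketch: a single-qubit density matrix is determined by its Bloch vector of Pauli expectation values, rho = (I + r.sigma)/2, and a noisy estimate whose Bloch vector falls outside the unit sphere is rescaled back onto it. A minimal numpy sketch, together with the trace-distance accuracy measure the abstract uses for comparison, is given below; the example expectation values are invented for illustration.

```python
import numpy as np

# Pauli matrices used throughout single-qubit tomography.
I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def scaled_direct_inversion(ex, ey, ez):
    """Sketch of (scaled) direct inversion: build the Bloch vector
    from measured Pauli expectation values and, if statistical noise
    pushes it outside the Bloch sphere, rescale it onto the surface
    so the result is a valid density matrix."""
    r = np.array([ex, ey, ez])
    norm = np.linalg.norm(r)
    if norm > 1.0:  # unphysical estimate -> project back onto the sphere
        r = r / norm
    return 0.5 * (I2 + r[0] * SX + r[1] * SY + r[2] * SZ)

def trace_distance(rho, sigma):
    """T(rho, sigma) = 1/2 * sum of |eigenvalues| of (rho - sigma)."""
    eigvals = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(eigvals))

# Example: noisy estimates of a state near |0><0| (made-up numbers).
rho_hat = scaled_direct_inversion(0.05, -0.02, 1.03)
rho_true = 0.5 * (I2 + SZ)
print(trace_distance(rho_hat, rho_true))
```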

    Reduction of Markov Chains using a Value-of-Information-Based Approach

    In this paper, we propose an approach to obtain reduced-order models of Markov chains. Our approach is composed of two information-theoretic processes. The first is a means of comparing pairs of stationary chains on different state spaces, which is done via the negative Kullback-Leibler divergence defined on a model joint space. Model reduction is achieved by solving a value-of-information criterion with respect to this divergence. Optimizing the criterion leads to a probabilistic partitioning of the states in the high-order Markov chain. A single free parameter that emerges through the optimization process dictates both the partition uncertainty and the number of state groups. We provide a data-driven means of choosing the `optimal' value of this free parameter, which sidesteps the need to know a priori the number of state groups in an arbitrary chain.
    Comment: Submitted to Entropy
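    The abstract does not spell out the value-of-information criterion, so the following is only a generic illustration of the kind of probabilistic state partitioning it describes: a deterministic-annealing-style scheme that softly assigns each state's transition row to a group whose aggregate row it matches in the KL sense, with a single inverse-temperature parameter beta controlling partition uncertainty, as the abstract's free parameter does. The function soft_partition and all of its defaults are assumptions for the sketch, not the authors' algorithm.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two discrete distributions."""
    p, q = p + eps, q + eps
    return np.sum(p * np.log(p / q))

def soft_partition(P, n_groups, beta, n_iters=100, seed=0):
    """Hedged sketch: soft partition of an n-state chain with
    transition matrix P.  Each state i gets membership probabilities
    q(k|i); small beta gives diffuse groups, large beta gives
    near-hard assignments."""
    n = P.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.dirichlet(np.ones(n_groups), size=n)      # q(k|i), shape (n, K)
    for _ in range(n_iters):
        # Aggregate transition row (centroid) for each group.
        w = q / (q.sum(axis=0, keepdims=True) + 1e-12)
        centroids = w.T @ P                           # (K, n), each a distribution
        # Distortion of assigning state i to group k.
        d = np.array([[kl(P[i], centroids[k]) for k in range(n_groups)]
                      for i in range(n)])
        # Gibbs update: soft assignments at inverse temperature beta.
        logits = -beta * d
        q = np.exp(logits - logits.max(axis=1, keepdims=True))
        q /= q.sum(axis=1, keepdims=True)
    return q
```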

    Stochastic Attraction-Repulsion Embedding for Large Scale Image Localization

    This paper tackles the problem of large-scale image-based localization (IBL), where the spatial location of a query image is determined by finding the most similar reference images in a large database. A critical task in solving this problem is to learn a discriminative image representation that captures the information relevant for localization. We propose a novel representation learning method with higher location-discriminating power. It makes the following contributions: 1) we represent a place (location) as a set of exemplar images depicting the same landmarks, and aim to maximize similarities among intra-place images while minimizing similarities among inter-place images; 2) we model a similarity measure as a probability distribution on L_2-metric distances between intra-place and inter-place image representations; 3) we propose a new Stochastic Attraction and Repulsion Embedding (SARE) loss function that minimizes the KL divergence between the learned and the actual probability distributions; 4) we give theoretical comparisons between the SARE, triplet-ranking, and contrastive losses, analyzing their gradients to provide insight into why SARE works better. Our SARE loss is easy to implement and pluggable into any CNN. Experiments show that our proposed method improves localization performance on standard benchmarks by a large margin. Demonstrating the broad applicability of our method, we placed third out of 209 teams in the 2018 Google Landmark Retrieval Challenge. Our code and model are available at https://github.com/Liumouliu/deepIBL.
    Comment: ICCV
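    The abstract gives enough detail to sketch the SARE loss for the single-positive case: the negated squared L_2 distances between the query and its candidate references act as logits of a matching distribution, and the KL divergence to the ground-truth distribution, which puts all its mass on the positive, reduces to a cross-entropy. The numpy sketch below only evaluates that quantity for given descriptors; sare_loss and its argument layout are illustrative, and in training the same expression would live in an autodiff framework (the authors provide a CNN-pluggable implementation at the linked repository) so its gradient can drive the embedding.

```python
import numpy as np

def sare_loss(q, pos, negs):
    """Sketch of the SARE idea (single positive, several negatives):
    softmax over negated squared L2 distances gives the matched
    probability distribution; KL to the one-hot ground truth
    (all mass on the positive) reduces to -log p(positive | query)."""
    cands = np.vstack([pos[None, :], negs])       # positive is row 0
    logits = -np.sum((cands - q) ** 2, axis=1)    # -||q - c||^2
    logits -= logits.max()                        # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])
```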