
    On Generalized Stam Inequalities and Fisher–Rényi Complexity Measures

    Information-theoretic inequalities play a fundamental role in numerous scientific and technological areas (e.g., estimation and communication theories, signal and information processing, quantum physics, …), as they generally express the impossibility of obtaining a complete description of a system via a finite number of information measures. In particular, they have given rise to the design of various quantifiers (statistical complexity measures) of the internal complexity of a (quantum) system. In this paper, we introduce a three-parametric Fisher–Rényi complexity, named the ( p , β , λ )-Fisher–Rényi complexity, based on both a two-parametric extension of the Fisher information and the Rényi entropies of a probability density function ρ characteristic of the system. This complexity measure quantifies the combined balance of the spreading and the gradient contents of ρ, and has the three main properties of a statistical complexity: invariance under translation transformations, invariance under scaling transformations, and a universal lower bound. The latter is proved by generalizing the Stam inequality, which bounds from below the product of the Shannon entropy power and the Fisher information of a probability density function. Extensions of this inequality were previously proposed by Bercher and Lutwak; these correspond to a particular case of the general inequality in which the three parameters are linked, allowing the sharp lower bound and the associated minimal-complexity probability density to be determined. Using the notion of differential-escort deformation, we are able to determine the sharp bound of the complexity measure even when the three parameters are decoupled (in a certain range). We also determine the distribution that saturates the inequality: the ( p , β , λ )-Gaussian distribution, which involves an inverse incomplete beta function.
Finally, the complexity measure is calculated for various quantum-mechanical states of the harmonic and hydrogenic systems, which are the two main prototypes of physical systems subject to a central potential. The authors are very grateful to the CNRS (Steeve Zozor) and to the Junta de Andalucía and the MINECO–FEDER under grants FIS2014–54497 and FIS2014–59311P (Jesús Sánchez-Dehesa) for partial financial support.
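As context for the abstract above, the classical Stam inequality that the ( p , β , λ ) case generalizes can be written, for an n-dimensional density ρ with differential Shannon entropy h(ρ), as:

```latex
N(\rho)\, F(\rho) \;\ge\; n,
\qquad
N(\rho) = \frac{1}{2\pi e}\, e^{\frac{2}{n} h(\rho)},
\qquad
F(\rho) = \int_{\mathbb{R}^n} \frac{\lVert \nabla \rho(x) \rVert^2}{\rho(x)}\, \mathrm{d}x,
```

where N is the Shannon entropy power and F the (translation) Fisher information; equality holds exactly for Gaussian densities. The paper's generalization replaces these by the Rényi entropy power and the two-parametric Fisher extension; that three-parameter form is not reproduced here.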

    Toward AUV Survey Design for Optimal Coverage and Localization Using the Cramer Rao Lower Bound

    This paper discusses an approach to using the Cramer Rao Lower Bound (CRLB) as a trajectory design tool for autonomous underwater vehicle (AUV) visual navigation. We begin with a discussion of Fisher information as a measure of the lower bound of uncertainty in a simultaneous localization and mapping (SLAM) pose-graph. Treating the AUV trajectory as a non-random parameter, the Fisher information is calculated from the CRLB derivation and depends only upon path geometry and sensor noise. The effects of the trajectory design parameters are evaluated by calculating the CRLB with different parameter sets. Next, optimal survey parameters are selected to improve the overall coverage rate while maintaining an acceptable level of localization precision for a fixed number of pose samples. The utility of the CRLB as a design tool in pre-planning an AUV survey is demonstrated using a synthetic data set for a boustrophedon survey. In this demonstration, we compare the CRLB of the improved survey plan with that of an actual previous hull-inspection survey plan of the USS Saratoga. Survey optimality is evaluated by measuring the overall coverage area and CRLB localization precision for a fixed number of nodes in the graph. We also examine how to exploit prior knowledge of environmental feature distribution in the survey plan.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/86049/1/akim-10.pd
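A minimal sketch of the core computation described above, for a hypothetical one-dimensional pose chain (the function names, measurement model, and noise parameters are illustrative, not taken from the paper): the Fisher information matrix is accumulated from measurement Jacobians, and the CRLB on per-pose variance is the diagonal of its inverse.

```python
import numpy as np

def pose_graph_fim(n_poses, sigma_odo, loop_closures=(), sigma_lc=0.1,
                   sigma_prior=0.01):
    # Fisher information matrix for a linear-Gaussian 1-D pose chain,
    # treating the trajectory as a non-random parameter: each measurement
    # with Jacobian J and noise sigma contributes J^T J / sigma^2.
    F = np.zeros((n_poses, n_poses))
    F[0, 0] += 1.0 / sigma_prior**2          # prior anchoring the first pose
    for k in range(n_poses - 1):             # odometry measures x_{k+1} - x_k
        J = np.zeros(n_poses)
        J[k], J[k + 1] = -1.0, 1.0
        F += np.outer(J, J) / sigma_odo**2
    for (i, j) in loop_closures:             # revisit constraint x_j - x_i
        J = np.zeros(n_poses)
        J[i], J[j] = -1.0, 1.0
        F += np.outer(J, J) / sigma_lc**2
    return F

def crlb_diag(F):
    # CRLB: per-pose lower bound on estimator variance.
    return np.diag(np.linalg.inv(F))
```

Comparing `crlb_diag` with and without a loop closure mirrors the design-tool idea: without one, the bound grows along the chain (dead reckoning); a single revisit constraint collapses the bound at the revisited pose.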

    Asymmetry and tighter uncertainty relations for Rényi entropies via quantum-classical decompositions of resource measures

    It is known that the variance and entropy of quantum observables decompose into intrinsically quantum and classical contributions. Here a general method of constructing quantum-classical decompositions of resources such as uncertainty is discussed, with the quantum contribution specified by a measure of the noncommutativity of a given set of operators relative to the quantum state, and the classical contribution generated by the mixedness of the state. Suitable measures of noncommutativity or 'quantumness' include quantum Fisher information and the asymmetry of a given set, group or algebra of operators, and are generalised to nonprojective observables and quantum channels. Strong entropic uncertainty relations and lower bounds for Rényi entropies are obtained, valid for arbitrary discrete observables, that take the mixedness of the state into account via a classical contribution to the lower bound. These relations can also be interpreted without reference to quantum-classical decompositions, as tradeoff relations that bound the asymmetry of one observable in terms of the entropy of another.
    Comment: v1: 12+3 pages, 0 figures, 1 bazinga! v2: comparison with classical lower bound for Rényi entropy added (including 1 Figure), plus minor corrections/clarifications
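For reference, the Rényi entropies bounded in the abstract above reduce, for a discrete probability distribution, to a one-line formula; a minimal sketch (the helper name is illustrative):

```python
import numpy as np

def renyi_entropy(p, alpha):
    # H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha) for alpha > 0,
    # alpha != 1; the alpha -> 1 limit recovers the Shannon entropy.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # drop zero-probability outcomes
    if np.isclose(alpha, 1.0):
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))
```

For a uniform distribution all Rényi entropies coincide (log of the alphabet size); for non-uniform distributions the family is non-increasing in alpha, which is why bounds at different orders carry different information.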

    Quantifying non-Gaussianity for quantum information

    We address the quantification of the non-Gaussianity of states and operations in continuous-variable systems and its use in quantum information. We start by illustrating in detail the properties and relationships of two recently proposed measures of non-Gaussianity, based on the Hilbert-Schmidt (HS) distance and the quantum relative entropy (QRE) between the state under examination and a reference Gaussian state. We then evaluate the non-Gaussianities of several families of non-Gaussian quantum states and show that the two measures have the same basic properties and also share the same qualitative behaviour in most of the examples considered. However, we also show that they induce different orderings, i.e. neither is a monotone function of the other. We exploit the non-Gaussianity measures for states to introduce a measure of non-Gaussianity for quantum operations, to assess Gaussification and de-Gaussification protocols, and to investigate in detail the role played by non-Gaussianity in entanglement distillation protocols. In addition, we exploit the QRE-based non-Gaussianity measure to provide new insight into the extremality of Gaussian states for entropic quantities such as conditional entropy, mutual information and the Holevo bound. We also deal with parameter estimation and present a theorem connecting the QRE-based non-Gaussianity to the quantum Fisher information. Finally, since evaluation of the QRE-based measure requires knowledge of the full density matrix, we derive some experimentally friendly lower bounds on non-Gaussianity for some classes of states, obtained by considering the possibility of performing only certain efficient or inefficient measurements on the states.
    Comment: 22 pages, 13 figures, comments welcome. v2: typos corrected and references added. v3: minor corrections (more similar to published version)
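The QRE-based measure discussed above is built on the quantum relative entropy between two density matrices. A minimal numerical sketch of that primitive follows (not the paper's full non-Gaussianity measure, which compares a state against a reference Gaussian state with matched first and second moments; the function name and the eigenvalue clipping are illustrative choices):

```python
import numpy as np

def qre(rho, sigma, eps=1e-12):
    # Quantum relative entropy S(rho || sigma) = Tr[rho (log rho - log sigma)],
    # computed via eigendecompositions of the (Hermitian) density matrices.
    # Zero eigenvalues are clipped at eps; the 0*log(0) terms then vanish
    # because rho annihilates its own null eigenvectors.
    w, V = np.linalg.eigh(rho)
    u, U = np.linalg.eigh(sigma)
    log_rho = V @ np.diag(np.log(np.clip(w, eps, None))) @ V.conj().T
    log_sig = U @ np.diag(np.log(np.clip(u, eps, None))) @ U.conj().T
    return float(np.real(np.trace(rho @ (log_rho - log_sig))))
```

As a sanity check, for a pure qubit state against the maximally mixed state the relative entropy equals log 2, and any state has zero relative entropy to itself.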

    Information-theoretic analysis of Hierarchical Temporal Memory-Spatial Pooler algorithm with a new upper bound for the standard information bottleneck method

    Hierarchical Temporal Memory (HTM) is an unsupervised machine-learning algorithm that models several fundamental neocortical computational principles. The Spatial Pooler (SP) is one of the main components of HTM; it continuously encodes streams of binary input from various layers and regions into sparse distributed representations. In this paper, the goal is to evaluate the sparsification in the SP algorithm from the perspective of information theory, using the information bottleneck (IB), the Cramér–Rao lower bound, and the Fisher information matrix. This paper makes two main contributions. First, we introduce a new upper bound for the standard information bottleneck relation, which we refer to as the modified IB. This measure is used to evaluate the performance of the SP algorithm at different sparsity levels and under various amounts of noise. The MNIST, Fashion-MNIST and NYC-Taxi datasets were fed to the SP algorithm separately. The SP algorithm with learning was found to be resistant to noise: adding up to 40% noise to the input resulted in no discernible change in the output. Using a probabilistic mapping method and a Hidden Markov Model, the sparse SP output representation was reconstructed in the input space. In the modified-IB relation, it is numerically shown that a lower noise level and a higher sparsity level in the SP algorithm lead to a more effective reconstruction, and SP with 2% sparsity produces the best results. Our second contribution is to prove mathematically that more sparsity leads to better performance of the SP algorithm. The data distribution was modeled as a Cauchy distribution, and the Cramér–Rao lower bound was analyzed to estimate the SP's output at different sparsity levels.
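As a small illustration of the Cramér–Rao ingredient mentioned in the abstract above: the Fisher information of the location parameter of a Cauchy(θ, γ) distribution has the closed form I(θ) = 1/(2γ²), which a direct numerical integration reproduces. The function name, grid limits, and resolution below are illustrative choices, not from the paper.

```python
import numpy as np

def cauchy_fisher_info(gamma=1.0, theta=0.0, half_width=200.0, n=400001):
    # Numerical Fisher information for the location parameter of a
    # Cauchy(theta, gamma) density: I(theta) = E[(d/dtheta log f(X))^2].
    x = np.linspace(theta - half_width, theta + half_width, n)
    f = 1.0 / (np.pi * gamma * (1.0 + ((x - theta) / gamma) ** 2))
    # Score function: d/dtheta log f = 2(x - theta) / (gamma^2 + (x - theta)^2)
    score = 2.0 * (x - theta) / (gamma**2 + (x - theta) ** 2)
    return float(np.sum(score**2 * f) * (x[1] - x[0]))  # Riemann sum
```

The integrand decays like x⁻⁴, so a truncated grid already matches the closed form 1/(2γ²) to three decimal places; the CRLB on any unbiased location estimator from m samples is then 2γ²/m.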