1,536 research outputs found

    Information Entropy in Cosmology

    The effective evolution of an inhomogeneous cosmological model may be described in terms of spatially averaged variables. We point out that in this context a measure arises quite naturally which is identical to a fluid model of the `Kullback-Leibler Relative Information Entropy', expressing the distinguishability of the local inhomogeneous mass density field from its spatial average on arbitrary compact domains. We discuss the time-evolution of `effective information' and explore some implications. We conjecture that the information content of the Universe -- measured by the Relative Information Entropy of a cosmological model containing dust matter -- is increasing. Comment: LaTeX, PRL style, 4 pages; to appear in PR
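    For concreteness, the measure in question can be written as follows. This is a standard form of the Kullback-Leibler relative information entropy for a density field on a compact domain D; the notation here is assumed rather than taken from the paper.

```latex
% Relative information entropy of the local density field rho with
% respect to its spatial average over a compact domain D (notation
% assumed): it vanishes iff rho is homogeneous on D.
S\{\rho \,\|\, \langle\rho\rangle_D\}
  = \int_D \rho \,\ln\!\frac{\rho}{\langle\rho\rangle_D}\, dV,
\qquad
\langle\rho\rangle_D = \frac{1}{V_D}\int_D \rho \, dV .
```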

    How `hot' are mixed quantum states?

    Given a mixed quantum state ρ of a qudit, we consider any observable M as a kind of `thermometer' in the following sense. Given a source which emits pure states with one or another distribution, we select those distributions for which the average value of the observable M equals the average Tr Mρ of M in the state ρ. Among those distributions we find the most typical one, namely the one with the highest differential entropy. We call this distribution the conditional Gibbs ensemble, as it turns out to be a Gibbs distribution characterized by a temperature-like parameter β. The expressions relating the density operator ρ to its temperature parameter β are provided. Within this approach, the uniform mixed state has the highest `temperature', which tends to zero as the state in question approaches a pure state. Comment: Contribution to Quantum 2006: III workshop ad memoriam of Carlo Novero: Advances in Foundations of Quantum Mechanics and Quantum Information with atoms and photons. 2-5 May 2006 - Turin, Italy
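    As an illustration of the construction described above (the exact conventions are not given in this abstract, so the form below is assumed from the standard maximum-entropy argument), the most typical distribution over pure states subject to a fixed mean of M takes the familiar Gibbs form:

```latex
% Conditional Gibbs ensemble (assumed form): beta is the Lagrange
% multiplier enforcing that the ensemble mean of M reproduces Tr(M rho).
p(\psi) = \frac{e^{-\beta \langle\psi|M|\psi\rangle}}{Z(\beta)},
\qquad
Z(\beta) = \int e^{-\beta \langle\psi|M|\psi\rangle}\, d\psi,
\qquad
\int \langle\psi|M|\psi\rangle\, p(\psi)\, d\psi = \operatorname{Tr}(M\rho).
```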

    Universality of optimal measurements

    We present optimal and minimal measurements on identical copies of an unknown state of a qubit when the quality of measuring strategies is quantified by the gain of information (the Kullback-Leibler divergence between probability distributions). We also show that the maximal gain of information occurs, among isotropic priors, when the state is known to be pure. Universality of optimal measurements follows from our results: using the fidelity or the gain of information, two different figures of merit, leads to exactly the same conclusions. We finally investigate the optimal capacity of N copies of an unknown state as a quantum channel of information. Comment: Revtex, 5 pages, no figure
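    For reference, a common way to write the gain of information is as the average Kullback-Leibler divergence of the posterior from the prior over measurement outcomes; this is a standard definition and is only assumed to coincide with the paper's figure of merit.

```latex
% Average information gain of a measurement with outcomes k on a prior
% p(psi) (standard definition, assumed): the expected KL divergence of
% the posterior from the prior.
K = \sum_{k} p(k) \int d\psi\; p(\psi \mid k)\,
    \ln\frac{p(\psi \mid k)}{p(\psi)} .
```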

    Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence

    Incremental learning (IL) has received a lot of attention recently; however, the literature lacks a precise problem definition, proper evaluation settings, and metrics tailored specifically for the IL problem. One of the main objectives of this work is to fill these gaps so as to provide a common ground for a better understanding of IL. The main challenge for an IL algorithm is to update the classifier whilst preserving existing knowledge. We observe that, in addition to forgetting, a known issue when preserving knowledge, IL also suffers from a problem we call intransigence, the inability of a model to update its knowledge. We introduce two metrics to quantify forgetting and intransigence that allow us to understand, analyse, and gain better insights into the behaviour of IL algorithms. We present RWalk, a generalization of EWC++ (our efficient version of EWC [Kirkpatrick2016EWC]) and Path Integral [Zenke2017Continual], with a theoretically grounded KL-divergence based perspective. We provide a thorough analysis of various IL algorithms on the MNIST and CIFAR-100 datasets. In these experiments, RWalk obtains superior results in terms of accuracy and also provides a better trade-off between forgetting and intransigence.
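    A minimal sketch of the family of regularizers this line of work builds on is given below, assuming PyTorch; the names fisher, theta_star, and lam are illustrative rather than from the paper, and RWalk's path-integral score accumulation is omitted.

```python
# Sketch of a KL-motivated quadratic penalty in the spirit of EWC-style
# incremental-learning regularizers (illustrative, not the paper's code).
import torch

def il_regularized_loss(task_loss, model, fisher, theta_star, lam=1.0):
    """task_loss: loss on the current task's batch.
    fisher: parameter name -> diagonal Fisher estimate (importance).
    theta_star: parameter name -> parameter values after the last task.
    The penalty keeps the new parameters close to the old optimum in
    the Fisher metric, which locally approximates a KL divergence."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - theta_star[name]) ** 2).sum()
    return task_loss + lam * penalty
```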

    Comparing compact binary parameter distributions I: Methods

    Being able to measure each merger's sky location, distance, component masses, and conceivably spins, ground-based gravitational-wave detectors will provide an extensive and detailed sample of coalescing compact binaries (CCBs) in the local and, with third-generation detectors, distant universe. These measurements will distinguish between competing progenitor formation models. In this paper we develop practical tools to characterize the amount of experimentally accessible information available to distinguish between two a priori progenitor models. Using a simple time-independent model, we demonstrate that the information content scales strongly with the number of observations. The exact scaling depends on how significantly mass distributions change between similar models. We develop phenomenological diagnostics to estimate how many models can be distinguished, using first-generation and future instruments. Finally, we emphasize that multi-observable distributions can be fully exploited only with very precisely calibrated detectors, search pipelines, parameter estimation, and Bayesian model inference.
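    To illustrate the scaling claim (a sketch under simplifying assumptions, not the paper's machinery): for independent events drawn from model p, the expected log Bayes factor against an alternative q grows linearly in the number of observations, at a rate set by their KL divergence.

```python
# Illustrative: expected log Bayes factor ~ N * D_KL(p || q) for N
# independent events drawn from p (function name is hypothetical).
import numpy as np

def expected_log_bayes_factor(p, q, n_events):
    """p, q: two discretized mass-distribution models, given as 1-D
    arrays of probabilities over the same bins."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0  # bins with p = 0 contribute nothing to D_KL
    d_kl = np.sum(p[mask] * np.log(p[mask] / q[mask]))
    return n_events * d_kl
```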

    Quantifying the complexity of random Boolean networks

    We study two measures of the complexity of heterogeneous extended systems, taking random Boolean networks as prototypical cases. A measure defined by Shalizi et al. for cellular automata, based on a criterion for optimal statistical prediction [Shalizi et al., Phys. Rev. Lett. 93, 118701 (2004)], does not distinguish between the spatial inhomogeneity of the ordered phase and the dynamical inhomogeneity of the disordered phase. A modification in which complexities of individual nodes are calculated yields vanishing complexity values for networks in the ordered and critical regimes and for highly disordered networks, peaking somewhere in the disordered regime. Individual nodes with high complexity are the ones that pass the most information from the past to the future, a quantity that depends in a nontrivial way on both the Boolean function of a given node and its location within the network. Comment: 8 pages, 4 figures
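    The per-node quantity described above can be illustrated, very roughly, as the mutual information between a node's recent past and its future, estimated from block frequencies; the paper's causal-state construction is more refined, so the sketch below captures only the underlying idea.

```python
# Rough sketch: past-future mutual information of one node's binary
# time series, from empirical block statistics (illustrative only).
from collections import Counter
import math

def past_future_information(series, k=3):
    """series: 0/1 time series of a single node; k: block length.
    Returns an estimate of I(past_k ; future_k) in bits."""
    pairs = [(tuple(series[i - k:i]), tuple(series[i:i + k]))
             for i in range(k, len(series) - k + 1)]
    joint = Counter(pairs)
    past = Counter(p for p, _ in pairs)
    future = Counter(f for _, f in pairs)
    n = len(pairs)
    return sum((c / n) * math.log2(c * n / (past[p] * future[f]))
               for (p, f), c in joint.items())
```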

    Quantum estimation via minimum Kullback entropy principle

    We address quantum estimation in situations where one has at one's disposal data from the measurement of an incomplete set of observables, together with some a priori information on the state itself. By expressing the a priori information as a bias toward a given state, the problem may be faced by minimizing the quantum relative entropy (Kullback entropy) under the constraint of reproducing the data. We exploit the resulting minimum Kullback entropy principle for the estimation of a quantum state from the measurement of a single observable, either from the sole mean value or from the complete probability distribution, and apply it as a tool for the estimation of weak Hamiltonian processes. Qubit and harmonic oscillator systems are analyzed in some detail. Comment: 7 pages, slightly revised version, no figure
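    Schematically, the principle amounts to the following constrained minimization, written here for a single measured observable X with mean value x̄ (notation assumed):

```latex
% Minimum Kullback entropy principle (schematic): minimize the quantum
% relative entropy to the bias state rho_0 subject to reproducing the
% data, here a single mean value.
\min_{\rho}\; S(\rho\,\|\,\rho_0)
  = \operatorname{Tr}\!\big[\rho\,(\ln\rho - \ln\rho_0)\big]
\quad\text{subject to}\quad
\operatorname{Tr}(\rho X) = \bar{x}, \qquad \operatorname{Tr}\rho = 1 .
```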

    Complexity Measures from Interaction Structures

    We evaluate new complexity measures on the symbolic dynamics of coupled tent maps and cellular automata. These measures quantify complexity in terms of k-th order statistical dependencies that cannot be reduced to interactions between k−1 units. We demonstrate that these measures are able to identify complex dynamical regimes. Comment: 11 pages, figures improved, minor changes to the text
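    One standard way to formalize such measures, assumed here to match the paper's construction only in spirit, is as the divergence of the joint distribution from its best approximation by distributions with interactions of order at most k−1 (sometimes called connected information):

```latex
% k-th order interaction measure (schematic): divergence of p from the
% family E_{k-1} of distributions with interactions among at most k-1
% units (notation assumed).
I^{(k)}(p) = \min_{q \,\in\, \mathcal{E}_{k-1}} D(p\,\|\,q)
           = \min_{q \,\in\, \mathcal{E}_{k-1}} \sum_{x} p(x)\,
             \ln\frac{p(x)}{q(x)} .
```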

    Pairwise Confusion for Fine-Grained Visual Classification

    Fine-Grained Visual Classification (FGVC) datasets contain small sample sizes, along with significant intra-class variation and inter-class similarity. While prior work has addressed intra-class variation using localization and segmentation techniques, inter-class similarity may also affect feature learning and reduce classification performance. In this work, we address this problem using a novel optimization procedure for end-to-end neural network training on FGVC tasks. Our procedure, called Pairwise Confusion (PC), reduces overfitting by intentionally introducing confusion in the activations. With PC regularization, we obtain state-of-the-art performance on six of the most widely-used FGVC datasets and demonstrate improved localization ability. PC is easy to implement, does not need excessive hyperparameter tuning during training, and does not add significant overhead at test time. Comment: Camera-Ready version for ECCV 2018
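    A minimal sketch of a pairwise-confusion style penalty is given below, assuming PyTorch; the exact form of the paper's regularizer may differ, and the weighting against the cross-entropy loss is left to the caller.

```python
# Sketch of a pairwise-confusion regularizer (illustrative): penalize
# the distance between predicted class distributions for pairs of
# samples, nudging the network toward mild confusion.
import torch.nn.functional as F

def pairwise_confusion_loss(logits_a, logits_b):
    """logits_a, logits_b: [batch, classes] logits for the two halves
    of a shuffled batch; add the result (with a small weight) to the
    usual cross-entropy loss."""
    p_a = F.softmax(logits_a, dim=1)
    p_b = F.softmax(logits_b, dim=1)
    return ((p_a - p_b) ** 2).sum(dim=1).mean()
```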