Information Entropy in Cosmology
The effective evolution of an inhomogeneous cosmological model may be
described in terms of spatially averaged variables. We point out that in this
context, quite naturally, a measure arises which is identical to a fluid model
of the `Kullback-Leibler Relative Information Entropy', expressing the
distinguishability of the local inhomogeneous mass density field from its
spatial average on arbitrary compact domains. We discuss the time-evolution of
`effective information' and explore some implications. We conjecture that the
information content of the Universe -- measured by Relative Information Entropy
of a cosmological model containing dust matter -- is increasing.
Comment: LaTeX, PRL style, 4 pages; to appear in PR
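As an illustrative classical sketch (not the authors' relativistic formalism), the Kullback-Leibler relative information entropy of a discretised density field against its spatial average can be computed as follows; the field values are hypothetical:

```python
import math

def relative_information_entropy(rho):
    """Discrete stand-in for the relative information entropy of a
    density field rho_i on equal-volume cells against its spatial
    average <rho>: sum_i rho_i * ln(rho_i / <rho>)."""
    rho_bar = sum(rho) / len(rho)
    return sum(r * math.log(r / rho_bar) for r in rho if r > 0)

# A homogeneous field is indistinguishable from its average:
print(relative_information_entropy([1.0, 1.0, 1.0, 1.0]))  # 0.0
# Growing inhomogeneity increases the information content:
print(relative_information_entropy([0.2, 0.4, 1.4, 2.0]) >
      relative_information_entropy([0.8, 0.9, 1.1, 1.2]))  # True
```

By Jensen's inequality the measure is non-negative and vanishes only for the homogeneous field, which is what makes it a distinguishability measure.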
How `hot' are mixed quantum states?
Given a mixed quantum state $\rho$ of a qudit, we consider any observable $A$
as a kind of `thermometer' in the following sense. Given a source which emits
pure states with one or another distribution, we select those distributions
for which the average value of the observable $A$ is equal to the average
$\mathrm{Tr}(\rho A)$ of $A$ in the state $\rho$. Among those distributions we
find the most typical one, namely, the one having the highest differential
entropy. We call this distribution the conditional Gibbs ensemble, as it turns
out to be a Gibbs distribution characterized by a temperature-like parameter
$\beta$. The expressions establishing the relations between the density
operator $\rho$ and its temperature parameter $\beta$ are provided. Within this
approach, the uniform mixed state has the highest `temperature', which tends to
zero as the state in question approaches a pure state.
Comment: Contribution to Quantum 2006: III workshop ad memoriam of Carlo
Novero: Advances in Foundations of Quantum Mechanics and Quantum Information
with atoms and photons, 2-5 May 2006, Turin, Italy
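A classical analogue of the construction can be sketched as follows: given eigenvalues of the observable and a target average, bisection recovers the temperature-like parameter of the matching Gibbs distribution. The eigenvalues and search bracket below are assumptions for illustration, not taken from the paper:

```python
import math

def gibbs_distribution(eigvals, beta):
    """p_k proportional to exp(-beta * a_k): the maximum-entropy
    distribution over eigenvalues a_k with a fixed observable average."""
    w = [math.exp(-beta * a) for a in eigvals]
    z = sum(w)
    return [x / z for x in w]

def solve_beta(eigvals, target_avg, lo=-50.0, hi=50.0, tol=1e-10):
    """Bisect for the temperature-like parameter beta whose Gibbs
    average matches the target (assumes min(a) < target < max(a))."""
    def avg(beta):
        p = gibbs_distribution(eigvals, beta)
        return sum(pi * a for pi, a in zip(p, eigvals))
    # avg(beta) is strictly decreasing in beta, so bisection applies.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if avg(mid) > target_avg:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a = [0.0, 1.0, 2.0]        # hypothetical eigenvalues of the observable
beta = solve_beta(a, 1.0)
print(abs(beta) < 1e-6)    # True: the uniform average gives beta = 0
```

Consistent with the abstract, the uniform ("hottest") case corresponds to the extreme of the temperature-like parameter, while averages near an extremal eigenvalue push beta towards large magnitudes.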
zCap: a zero configuration adaptive paging and mobility management mechanism
Today, cellular networks rely on fixed collections of cells (tracking areas) for user equipment localisation. Locating users within these areas involves broadcast search (paging), which consumes radio bandwidth but reduces the user equipment signalling required for mobility management. Tracking areas are today manually configured, are hard to adapt to local mobility, and influence the load on several key resources in the network. We propose a decentralised and self-adaptive approach to mobility management based on a probabilistic model of local mobility. By estimating the parameters of this model from observations of user mobility collected online, we obtain a dynamic model from which we construct local neighbourhoods of cells where we are most likely to locate user equipment. We propose to replace the static tracking areas of current systems with neighbourhoods local to each cell. The model is also used to derive a multi-phase paging scheme, where the division of neighbourhood cells into consecutive phases balances response times and paging cost. The complete mechanism requires no manual tracking area configuration and performs localisation efficiently in terms of signalling and response times. Detailed simulations show that significant potential gains in localisation efficiency are possible while eliminating manual configuration of mobility management parameters. Variants of the proposal can be implemented within current (LTE) standards.
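A minimal sketch of the multi-phase idea, assuming hypothetical per-cell location probabilities (the paper's phase division is optimised against the mobility model; the equal-size split here is a simplification):

```python
def plan_phases(cell_probs, num_phases):
    """Greedy sketch of multi-phase paging: page cells in descending
    estimated location probability, split into equal-sized phases."""
    ranked = sorted(cell_probs.items(), key=lambda kv: -kv[1])
    size = -(-len(ranked) // num_phases)        # ceiling division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

def expected_cost(phases):
    """Expected number of cells paged and expected number of phases,
    assuming the user responds in the first phase containing it."""
    cells = exp_cells = exp_phases = 0.0
    for k, phase in enumerate(phases, start=1):
        cells += len(phase)                     # cumulative cells paged
        p_phase = sum(p for _, p in phase)
        exp_cells += p_phase * cells
        exp_phases += p_phase * k
    return exp_cells, exp_phases

# Hypothetical per-cell probabilities from the local mobility model.
probs = {"c1": 0.45, "c2": 0.25, "c3": 0.15, "c4": 0.10, "c5": 0.05}
print(expected_cost(plan_phases(probs, 2)))
```

More phases lower the expected paging cost (fewer cells broadcast on average) at the price of a longer expected response time, which is exactly the balance the abstract describes.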
Gradient Flows in Filtering and Fisher-Rao Geometry
Uncertainty propagation and filtering can be interpreted as gradient flows
with respect to suitable metrics in the infinite dimensional manifold of
probability density functions. Such a viewpoint has been put forth in recent
literature, and a systematic way to formulate and solve the same for linear
Gaussian systems has appeared in our previous work where the gradient flows
were realized via proximal operators with respect to the Wasserstein metric
arising in optimal mass transport. In this paper, we derive the evolution
equations as proximal operators with respect to the Fisher-Rao metric arising
in information geometry. We develop the linear Gaussian case in detail and show
that a template two-step optimization procedure proposed earlier by the authors
still applies. Our objective is to provide new geometric interpretations of
known equations in filtering, and to clarify the implications of different
choices of metric.
Compressing Probability Distributions
We show how to store good approximations of probability distributions in
small space.
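The paper's construction is not reproduced here; a naive baseline that quantises each probability to a k-bit grid and renormalises illustrates the space/accuracy trade-off the title refers to:

```python
import math

def quantize(p, bits):
    """Naive baseline: round each probability to a k-bit grid, keeping
    at least one level on the support, then renormalise. (The paper's
    scheme is more space-efficient than per-entry quantisation.)"""
    levels = (1 << bits) - 1
    q = [max(round(x * levels), 1) / levels if x > 0 else 0.0 for x in p]
    s = sum(q)
    return [x / s for x in q]

def kl(p, q):
    """Approximation error measured as KL divergence from the original."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.62, 0.23, 0.11, 0.04]        # hypothetical distribution
for bits in (3, 4, 8):
    print(bits, "bits ->", kl(p, quantize(p, bits)))
```

More bits per entry shrink the divergence from the original distribution, making the storage-versus-fidelity trade-off explicit.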
Comparing compact binary parameter distributions I: Methods
Being able to measure each merger's sky location, distance, component masses,
and conceivably spins, ground-based gravitational-wave detectors will provide an
extensive and detailed sample of coalescing compact binaries (CCBs) in the
local and, with third-generation detectors, distant universe. These
measurements will distinguish between competing progenitor formation models. In
this paper we develop practical tools to characterize the amount of
experimentally accessible information available, to distinguish between two a
priori progenitor models. Using a simple time-independent model, we demonstrate
that the information content scales strongly with the number of observations. The
exact scaling depends on how significantly mass distributions change between
similar models. We develop phenomenological diagnostics to estimate how many
models can be distinguished, using first-generation and future instruments.
Finally, we emphasize that multi-observable distributions can be fully
exploited only with very precisely calibrated detectors, search pipelines,
parameter estimation, and Bayesian model inference
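The claimed scaling can be illustrated with a toy calculation, assuming two hypothetical binned mass distributions: the expected log-likelihood ratio accumulated over observations grows linearly, as n times the KL divergence between the models:

```python
import math
import random

random.seed(0)

def kl_discrete(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Two hypothetical chirp-mass bins under competing formation models.
model_a = [0.5, 0.3, 0.2]
model_b = [0.4, 0.35, 0.25]

def log_likelihood_ratio(n_obs):
    """Draw n_obs events from model A and accumulate log p_A / p_B."""
    bins = random.choices(range(3), weights=model_a, k=n_obs)
    return sum(math.log(model_a[i] / model_b[i]) for i in bins)

# The expected ratio is E[LLR] = n_obs * KL(A || B).
for n in (10, 100, 1000):
    print(n, log_likelihood_ratio(n), "expected:",
          n * kl_discrete(model_a, model_b))
```

The closer the two models, the smaller the per-event divergence, and the more observations are needed to distinguish them, matching the abstract's remark that the scaling depends on how much the mass distributions differ.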
Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence
Incremental learning (IL) has received a lot of attention recently; however,
the literature lacks a precise problem definition, proper evaluation settings,
and metrics tailored specifically for the IL problem. One of the main
objectives of this work is to fill these gaps so as to provide a common ground
for better understanding of IL. The main challenge for an IL algorithm is to
update the classifier whilst preserving existing knowledge. We observe that, in
addition to forgetting, a known issue while preserving knowledge, IL also
suffers from a problem we call intransigence, the inability of a model to update
its knowledge. We introduce two metrics to quantify forgetting and
intransigence that allow us to understand, analyse, and gain better insights
into the behaviour of IL algorithms. We present RWalk, a generalization of
EWC++ (our efficient version of EWC [Kirkpatrick2016EWC]) and Path Integral
[Zenke2017Continual] with a theoretically grounded KL-divergence based
perspective. We provide a thorough analysis of various IL algorithms on MNIST
and CIFAR-100 datasets. In these experiments, RWalk obtains superior results in
terms of accuracy, and also provides a better trade-off between forgetting and
intransigence.
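A common way to quantify forgetting from a task-accuracy matrix can be sketched as follows (a simplified version of such metrics with hypothetical numbers, not the paper's exact definition):

```python
def forgetting(acc):
    """acc[k][j]: accuracy on task j after training through task k.
    Forgetting for task j = best accuracy ever achieved on j minus the
    final accuracy on j, averaged over all but the last task."""
    last = acc[-1]
    n = len(acc)
    drops = []
    for j in range(n - 1):
        best = max(acc[k][j] for k in range(j, n - 1))
        drops.append(best - last[j])
    return sum(drops) / len(drops)

# Hypothetical accuracy matrix for 3 sequential tasks.
acc = [
    [0.95, 0.0,  0.0 ],   # after task 1
    [0.80, 0.92, 0.0 ],   # after task 2
    [0.70, 0.85, 0.90],   # after task 3
]
print(forgetting(acc))    # ((0.95-0.70) + (0.92-0.85)) / 2
```

Intransigence points the other way: it would compare the final accuracy on each new task against what a model trained jointly on all data could achieve, so the two metrics together expose the trade-off the abstract describes.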
On accuracy of PDF divergence estimators and their applicability to representative data sampling
Generalisation error estimation is an important issue in machine learning. Cross-validation, traditionally used for this purpose, requires building multiple models and repeating the whole procedure many times in order to produce reliable error estimates. It is, however, possible to accurately estimate the error using only a single model, if the training and test data are chosen appropriately. This paper investigates the possibility of using various probability density function divergence measures for the purpose of representative data sampling. As it turns out, the first difficulty one needs to deal with is estimation of the divergence itself. In contrast to other publications on this subject, the experimental results provided in this study show that in many cases accurate estimation is not possible unless samples consisting of thousands of instances are used. Exhaustive experiments on divergence-guided representative data sampling have been performed using 26 publicly available benchmark datasets and 70 PDF divergence estimators, and their results have been analysed and discussed.
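A toy version of the estimation difficulty: a naive histogram plug-in estimator of the KL divergence between two Gaussian samples, compared against the closed form. The estimator choice and bin settings are assumptions for illustration, not the paper's 70 estimators:

```python
import math
import random

random.seed(1)

def kl_gauss(m1, s1, m2, s2):
    """Closed-form KL divergence between two 1-D normal distributions."""
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def kl_histogram(xs, ys, bins=30, lo=-6.0, hi=6.0, eps=1e-6):
    """Naive plug-in estimate: histogram both samples (with a small
    smoothing constant eps to avoid empty bins) and compare."""
    def hist(data):
        h = [eps] * bins
        w = (hi - lo) / bins
        for x in data:
            if lo <= x < hi:
                h[int((x - lo) / w)] += 1
        s = sum(h)
        return [c / s for c in h]
    p, q = hist(xs), hist(ys)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

true_kl = kl_gauss(0.0, 1.0, 0.5, 1.0)      # = 0.125 exactly
for n in (100, 10_000):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    ys = [random.gauss(0.5, 1.0) for _ in range(n)]
    print(n, "samples ->", kl_histogram(xs, ys), "true:", true_kl)
```

With only hundreds of samples the plug-in estimate is dominated by binning and sampling bias; it stabilises near the true value only for sample sizes in the thousands, in line with the paper's observation.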
Universality of optimal measurements
We present optimal and minimal measurements on $N$ identical copies of an
unknown state of a qubit when the quality of measuring strategies is quantified
with the gain of information (Kullback divergence of probability
distributions). We also show that the maximal gain of information occurs, among
isotropic priors, when the state is known to be pure. Universality of optimal
measurements follows from our results: using the fidelity or the gain of
information, two different figures of merit, leads to exactly the same
conclusions. We finally investigate the optimal capacity of $N$ copies of an
unknown state as a quantum channel of information.
Comment: RevTeX, 5 pages, no figures
Quantum estimation via minimum Kullback entropy principle
We address quantum estimation in situations where one has at one's disposal data
from the measurement of an incomplete set of observables and some a priori
information on the state itself. By expressing the a priori information in
terms of a bias toward a given state the problem may be faced by minimizing the
quantum relative entropy (Kullback entropy) with the constraint of reproducing
the data. We exploit the resulting minimum Kullback entropy principle for the
estimation of a quantum state from the measurement of a single observable,
either from the sole mean value or from the complete probability distribution,
and apply it as a tool for the estimation of weak Hamiltonian processes. Qubit
and harmonic oscillator systems are analyzed in some detail.
Comment: 7 pages, slightly revised version, no figures
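A classical sketch of the principle, assuming a discrete prior (the bias state) and an observable spectrum chosen for illustration: the minimum relative entropy distribution reproducing a measured mean is an exponential tilt of the prior, found here by bisection:

```python
import math

def min_kl_state(prior, values, target, lo=-50.0, hi=50.0, tol=1e-10):
    """Among distributions reproducing a measured mean of an observable
    with eigenvalues `values`, the one closest in relative (Kullback)
    entropy to `prior` is the exponential tilt
    p_k proportional to prior_k * exp(lam * a_k); bisect for lam."""
    def tilted(lam):
        w = [p * math.exp(lam * a) for p, a in zip(prior, values)]
        z = sum(w)
        return [x / z for x in w]
    def mean(lam):
        p = tilted(lam)
        return sum(pi * a for pi, a in zip(p, values))
    while hi - lo > tol:    # mean(lam) is strictly increasing in lam
        mid = 0.5 * (lo + hi)
        if mean(mid) < target:
            lo = mid
        else:
            hi = mid
    return tilted(0.5 * (lo + hi))

# Hypothetical uniform prior and observable eigenvalues.
prior = [0.25, 0.25, 0.25, 0.25]
values = [0.0, 1.0, 2.0, 3.0]
p = min_kl_state(prior, values, target=2.0)
print([round(x, 4) for x in p])
```

When only a mean value is measured, this tilt is the unique reconstruction; with a full probability distribution as constraint, the same variational principle fixes one multiplier per outcome, mirroring the two cases treated in the paper.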