Uniform test of algorithmic randomness over a general space
The algorithmic theory of randomness is well developed when the underlying
space is the set of finite or infinite sequences and the underlying probability
distribution is the uniform distribution or a computable distribution. These
restrictions seem artificial. Some progress has been made to extend the theory
to arbitrary Bernoulli distributions (by Martin-Loef), and to arbitrary
distributions (by Levin). We recall the main ideas and problems of Levin's
theory, and report further progress in the same framework.
- We allow non-compact spaces (like the space of continuous functions,
underlying the Brownian motion).
- The uniform test (deficiency of randomness) d_P(x) (depending both on the
outcome x and the measure P) should be defined in a general and natural way.
- We see which of the old results survive: existence of universal tests,
conservation of randomness, expression of tests in terms of description
complexity, existence of a universal measure, expression of mutual information
as "deficiency of independence".
- The negative of the new randomness test is shown to be a generalization of
complexity in continuous spaces; we show that the addition theorem survives.
The paper's main contribution is introducing an appropriate framework for
studying these questions and related ones (like statistics for a general family
of distributions).
Comment: 40 pages. Journal reference and a slight correction in the proof of
Theorem 7 added.
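As a rough illustration of the deficiency-of-randomness idea (not the paper's uniform test): for a finite binary string x under the uniform measure, the deficiency is, up to an additive constant, len(x) − K(x), where K is the uncomputable Kolmogorov complexity. A crude, computable stand-in upper-bounds K by the length of a zlib compression, so the score below only witnesses compressibility; the function name `deficiency_estimate` is our own.

```python
# Hypothetical sketch: approximate the randomness deficiency of a finite
# string under the uniform measure as (bit length) - (compressed bit length).
# zlib only upper-bounds Kolmogorov complexity, so this is a heuristic score,
# not a true universal test.
import os
import zlib

def deficiency_estimate(data: bytes) -> int:
    compressed = zlib.compress(data, 9)
    # Bits "saved" by compression; large positive = far from random.
    return 8 * len(data) - 8 * len(compressed)

print(deficiency_estimate(b"ab" * 500))      # highly regular: large positive
print(deficiency_estimate(os.urandom(1000))) # incompressible: near zero or negative
```

A regular string compresses well and scores high, while random bytes gain nothing (zlib's framing overhead can even make the score slightly negative), mirroring the intuition that random outcomes have no short description.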
Kullback--Leibler approximation for probability measures on infinite dimensional spaces
In a variety of applications it is important to extract information from a probability measure μ on an infinite dimensional space. Examples include the Bayesian approach to inverse problems and (possibly conditioned) continuous time Markov processes. It may then be of interest to find a measure ν, from within a simple class of measures, which approximates μ. This problem is studied in the case where the Kullback--Leibler divergence is employed to measure the quality of the approximation. A calculus of variations viewpoint is adopted, and the particular case where ν is chosen from the set of Gaussian measures is studied in detail. Basic existence and uniqueness theorems are established, together with properties of minimizing sequences. Furthermore, parameterization of the class of Gaussians through the mean and inverse covariance is introduced, the need for regularization is explained, and a regularized minimization is studied in detail. The calculus of variations framework resulting from this work provides the appropriate underpinning for computational algorithms.
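A finite-dimensional toy version of this minimization can be sketched as follows (not the paper's algorithm, which works on infinite dimensional spaces): take a one-dimensional bimodal target density standing in for μ, and find the Gaussian N(m, s²) minimizing KL(ν‖μ) by quadrature and brute-force grid search. The names `mu_pdf` and `kl` and the grid ranges are our own assumptions.

```python
# Illustrative sketch: best Gaussian approximation (in KL(nu||mu)) to a
# 1-D bimodal target, via quadrature on a grid and grid search over (m, s).
import numpy as np

xs = np.linspace(-8.0, 8.0, 2001)
dx = xs[1] - xs[0]

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def mu_pdf(x):
    # Target density: equal mixture of two Gaussians (stands in for mu).
    return 0.5 * gauss(x, -1.5, 0.7) + 0.5 * gauss(x, 1.5, 0.7)

def kl(m, s):
    # KL(N(m, s^2) || mu) approximated by quadrature on the fixed grid.
    nu = gauss(xs, m, s)
    mu = mu_pdf(xs)
    integrand = np.where(nu > 1e-12, nu * np.log(nu / np.maximum(mu, 1e-300)), 0.0)
    return float(np.sum(integrand) * dx)

ms = np.linspace(-3.0, 3.0, 61)   # candidate means
ss = np.linspace(0.2, 3.0, 57)    # candidate standard deviations
best = min((kl(m, s), m, s) for m in ms for s in ss)
print("best KL %.4f at m=%.2f, s=%.2f" % best)
```

Because KL(ν‖μ) heavily penalizes ν putting mass where μ is small, the minimizing Gaussian tends to lock onto one mode rather than spread over both, one reason the direction of the divergence matters in such approximation problems.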