Scanning and Sequential Decision Making for Multi-Dimensional Data - Part I: the Noiseless Case
We investigate the problem of scanning and prediction ("scandiction", for
short) of multidimensional data arrays. This problem arises in several aspects
of image and video processing, such as predictive coding, for example, where an
image is compressed by coding the error sequence resulting from scandicting it.
Thus, it is natural to ask what is the optimal method to scan and predict a
given image, what is the resulting minimum prediction loss, and whether there
exist specific scandiction schemes which are universal in some sense.
Specifically, we investigate the following problems: First, modeling the data
array as a random field, we wish to examine whether there exists a scandiction
scheme which is independent of the field's distribution, yet asymptotically
achieves the same performance as if this distribution were known. This question
is answered in the affirmative for the set of all spatially stationary random
fields and under mild conditions on the loss function. We then discuss the
scenario where a non-optimal scanning order is used, yet accompanied by an
optimal predictor, and derive bounds on the excess loss compared to optimal
scanning and prediction.
This paper is the first part of a two-part paper on sequential decision
making for multi-dimensional data. It deals with clean, noiseless data arrays.
The second part deals with noisy data arrays, namely, with the case where the
decision maker observes only a noisy version of the data, yet it is judged with
respect to the original, clean data.
Comment: 46 pages, 2 figures. Revised version: title changed, section 1
revised, section 3.1 added, a few minor/technical corrections made
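As a toy illustration of scandiction (not the paper's scheme), the sketch below raster-scans a 2D array, predicts each entry from its causal neighbours with a simple average, and accumulates squared-error loss; the predictor, loss function, and all names are invented for this example:

```python
import numpy as np

def raster_scandict(image):
    """Scan a 2D array in raster order, predicting each entry from its
    causal neighbors (left and above) and accumulating squared-error
    loss.  A toy stand-in for a scandiction scheme."""
    h, w = image.shape
    total_loss = 0.0
    errors = np.zeros_like(image, dtype=float)
    for i in range(h):
        for j in range(w):
            neighbors = []
            if j > 0:
                neighbors.append(image[i, j - 1])   # left neighbor
            if i > 0:
                neighbors.append(image[i - 1, j])   # neighbor above
            pred = float(np.mean(neighbors)) if neighbors else 0.0
            err = float(image[i, j]) - pred
            errors[i, j] = err   # the error sequence a coder would compress
            total_loss += err ** 2
    return errors, total_loss
```

On a constant image only the first pixel contributes loss, so the error sequence is almost entirely zeros and compresses trivially; better scan/predictor pairs shrink this error sequence on richer fields, which is exactly the trade-off the paper quantifies.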
Universal Compression of Power-Law Distributions
English words and the outputs of many other natural processes are well-known
to follow a Zipf distribution. Yet this thoroughly-established property has
never been shown to help compress or predict these important processes. We show
that the expected redundancy of Zipf distributions of order α > 1 is
roughly the 1/α power of the expected redundancy of unrestricted
distributions. Hence for these orders, Zipf distributions can be better
compressed and predicted than was previously known. Unlike the expected case,
we show that worst-case redundancy is roughly the same for Zipf and for
unrestricted distributions. Hence Zipf distributions have significantly
different worst-case and expected redundancies, making them the first natural
distribution class shown to have such a difference.
Comment: 20 pages
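A quick way to see why the Zipf restriction helps in expectation is to look at how skewed such a distribution is; the sketch below (entropy rather than redundancy, purely illustrative, with invented function names) builds a truncated Zipf pmf of order α and compares its entropy to the uniform case:

```python
import math

def zipf_pmf(k, alpha):
    """Truncated Zipf distribution of order alpha over symbols 1..k:
    p_i is proportional to i ** (-alpha)."""
    weights = [i ** (-alpha) for i in range(1, k + 1)]
    z = sum(weights)
    return [w / z for w in weights]

def entropy_bits(p):
    """Shannon entropy of a pmf, in bits."""
    return -sum(q * math.log2(q) for q in p if q > 0)

p = zipf_pmf(1000, alpha=1.5)
h_zipf = entropy_bits(p)        # well below the uniform entropy
h_uniform = math.log2(1000)     # about 9.97 bits
```

The mass concentrated on low-index symbols is what a compressor tuned to the Zipf class can exploit on average, while the worst-case sequences remain as hard as in the unrestricted case.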
Universal Coding and Prediction on Martin-L\"of Random Points
We perform an effectivization of classical results concerning universal
coding and prediction for stationary ergodic processes over an arbitrary finite
alphabet. That is, we lift the well-known almost sure statements to statements
about Martin-L\"of random sequences. Most of this work is quite mechanical but,
by the way, we complete a result of Ryabko from 2008 by showing that each
universal probability measure in the sense of universal coding induces a
universal predictor in the prequential sense. Surprisingly, the effectivization
of this implication holds true provided the universal measure does not ascribe
too low conditional probabilities to individual symbols. As an example, we show
that the Prediction by Partial Matching (PPM) measure satisfies this
requirement. In the almost sure setting, the requirement is superfluous.
Comment: 12 pages
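The "not too low conditional probabilities" condition can be illustrated with the classical Krichevsky-Trofimov (KT) estimator, a standard universal measure for binary alphabets that is much simpler than PPM; this is a sketch of KT, not of the paper's construction:

```python
def kt_conditional(zeros, ones, symbol):
    """Krichevsky-Trofimov conditional probability of the next binary
    symbol given the counts seen so far.  With n = zeros + ones, every
    conditional probability is at least 1 / (2 * (n + 1)), so the
    measure never ascribes vanishingly small probability to a symbol
    faster than O(1/n)."""
    n = zeros + ones
    p_one = (ones + 0.5) / (n + 1)
    return p_one if symbol == 1 else 1.0 - p_one

def kt_sequence_prob(bits):
    """Probability the KT measure assigns to a whole binary string,
    built up as a product of conditionals (the prequential view)."""
    prob, zeros, ones = 1.0, 0, 0
    for b in bits:
        prob *= kt_conditional(zeros, ones, b)
        if b == 1:
            ones += 1
        else:
            zeros += 1
    return prob
```

The lower bound on the conditionals is exactly the kind of requirement the abstract imposes on a universal measure before its coding-theoretic universality transfers to prequential prediction on Martin-L\"of random sequences.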
Minimum Description Length Induction, Bayesianism, and Kolmogorov Complexity
The relationship between the Bayesian approach and the minimum description
length approach is established. We sharpen and clarify the general modeling
principles MDL and MML, abstracted as the ideal MDL principle and defined from
Bayes's rule by means of Kolmogorov complexity. The basic condition under which
the ideal principle should be applied is encapsulated as the Fundamental
Inequality, which in broad terms states that the principle is valid when the
data are random, relative to every contemplated hypothesis and also these
hypotheses are random relative to the (universal) prior. Basically, the ideal
principle states that the prior probability associated with the hypothesis
should be given by the algorithmic universal probability, and the sum of the
log universal probability of the model plus the log of the probability of the
data given the model should be minimized. If we restrict the model class to the
finite sets then application of the ideal principle turns into Kolmogorov's
minimal sufficient statistic. In general we show that data compression is
almost always the best strategy, both in hypothesis identification and
prediction.
Comment: 35 pages, LaTeX. Submitted to IEEE Trans. Inform. Theory
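The ideal principle uses the uncomputable algorithmic universal probability, but its two-part shape can be sketched with an ordinary prior over a finite hypothesis class; here is a minimal computable stand-in (Bernoulli hypotheses, uniform prior, invented function names), not the ideal MDL itself:

```python
import math

def two_part_code_length(data, theta, prior):
    """Two-part description length in bits: -log2(prior) bits to name
    the hypothesis, plus -log2 P(data | theta) bits for the data given
    the hypothesis."""
    ones = sum(data)
    zeros = len(data) - ones
    if (theta == 0 and ones) or (theta == 1 and zeros):
        return math.inf   # hypothesis assigns the data probability zero
    ll = 0.0
    if ones:
        ll += ones * math.log2(theta)
    if zeros:
        ll += zeros * math.log2(1 - theta)
    return -math.log2(prior) - ll

def mdl_select(data, thetas):
    """Pick the hypothesis with the shortest two-part code, using a
    uniform prior over the finite hypothesis class."""
    prior = 1.0 / len(thetas)
    return min(thetas, key=lambda t: two_part_code_length(data, t, prior))
```

Minimising the sum of the two code lengths is precisely the "log universal probability of the model plus log probability of the data given the model" quantity above, with the universal prior replaced by a computable one.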
Universal Codes from Switching Strategies
We discuss algorithms for combining sequential prediction strategies, a task
which can be viewed as a natural generalisation of the concept of universal
coding. We describe a graphical language based on Hidden Markov Models for
defining prediction strategies, and we provide both existing and new models as
examples. The models include efficient, parameterless models for switching
between the input strategies over time, including a model for the case where
switches tend to occur in clusters, and finally a new model for the scenario
where the prediction strategies have a known relationship, and where jumps are
typically between strongly related ones. This last model is relevant for coding
time series data where parameter drift is expected. As theoretical contributions
we introduce an interpolation construction that is useful in the development
and analysis of new algorithms, and we establish a new sophisticated lemma for
analysing the individual sequence regret of parameterised models.
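A minimal instance of a switching strategy is the classical fixed-share rule, which redistributes a small fraction of the weight after every Bayes update so the mixture can follow jumps between experts; this sketch (constant-probability binary experts, invented names) illustrates the idea, not the paper's HMM-based models:

```python
import math

def fixed_share_predict(bits, experts, alpha=0.05):
    """Combine binary experts (each a constant probability of emitting
    a 1) with the fixed-share switching rule.  After each Bayes step,
    a fraction alpha of the total weight is shared equally among all
    experts, so no expert's weight ever decays to zero and the mixture
    can track switches.  Returns the cumulative log-loss in bits."""
    k = len(experts)
    w = [1.0 / k] * k
    loss = 0.0
    for b in bits:
        likes = [e if b == 1 else 1.0 - e for e in experts]
        p = sum(wi * li for wi, li in zip(w, likes))
        loss += -math.log2(p)
        w = [wi * li / p for wi, li in zip(w, likes)]     # Bayes update
        w = [(1.0 - alpha) * wi + alpha / k for wi in w]  # share step
    return loss
```

On a sequence whose best expert changes partway through, the switching mixture (alpha > 0) incurs noticeably less log-loss than the static Bayesian mixture (alpha = 0), which is the behaviour the HMM-based switching models generalise.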