On the optimality of universal classifiers for finite-length individual test sequences
We consider pairs of finite-length individual sequences that are realizations
of unknown, finite alphabet, stationary sources in a class M of sources with
vanishing memory (e.g. stationary Markov sources).
The task of a universal classifier is to decide whether the two sequences are
emerging from the same source or are emerging from two distinct sources in M,
and it has to carry this task without any prior knowledge of the two underlying
probability measures.
Given a fidelity function and a fidelity criterion, the probability of
classification error for a given universal classifier is defined.
Two universal classifiers are defined for pairs of N-sequences: a
"classical" fixed-length (FL) universal classifier and an alternative
variable-length (VL) universal classifier.
Following Wyner and Ziv (1996), it is demonstrated that if the length N of the
individual sequences is smaller than a cut-off value determined by the
properties of the class M, any universal classifier will fail with high
probability.
It is demonstrated that for values of N larger than this cut-off value, the
classification error relative to either one of the two classifiers tends to
zero as the length of the sequences tends to infinity.
However, the probability of classification error associated with the
variable-length universal classifier is uniformly smaller than (or equal to)
the one associated with the "classical" fixed-length universal classifier, for
any finite length N.
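A rough intuition for compression-based same-source testing can be sketched with the normalized compression distance (NCD), using zlib as a stand-in universal compressor. This is a toy proxy, not the FL or VL classifiers defined in the paper, and the threshold below is an arbitrary illustrative choice:

```python
import zlib


def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte sequences,
    with zlib standing in for a universal compressor."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)


def same_source(x: bytes, y: bytes, threshold: float = 0.5) -> bool:
    """Declare 'same source' when the NCD falls below a threshold.

    The threshold 0.5 is an illustrative choice, not a value from the paper."""
    return ncd(x, y) < threshold
```

Sequences drawn from the same low-entropy pattern compress well jointly and yield a small NCD, while unrelated sequences contribute nearly independent compressed lengths and push the NCD toward one.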
On Finite Memory Universal Data Compression and Classification of Individual Sequences
Consider the case where consecutive blocks of N letters of a semi-infinite
individual sequence X over a finite-alphabet are being compressed into binary
sequences by some one-to-one mapping. No a-priori information about X is
available at the encoder, which must therefore adopt a universal
data-compression algorithm. It is known that if the universal LZ77 data
compression algorithm is successively applied to N-blocks then the best
error-free compression for the particular individual sequence X is achieved, as
tends to infinity. The best possible compression that may be achieved by
any universal data compression algorithm for finite N-blocks is discussed. It
is demonstrated that context tree coding essentially achieves it. Next,
consider a device called classifier (or discriminator) that observes an
individual training sequence X. The classifier's task is to examine individual
test sequences of length N and decide whether the test N-sequence has the same
features as those that are captured by the training sequence X, or is
sufficiently different, according to some appropriate criterion. Here again, it
is demonstrated that a particular universal context classifier with a
storage-space complexity that is linear in N is essentially optimal. This may
contribute a theoretical "individual sequence" justification for the
Probabilistic Suffix Tree (PST) approach in learning theory and in
computational biology.
Comment: The manuscript was erroneously replaced by a different one on a
different topic, thus erasing the original manuscript.
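The N-block scheme described in this abstract can be illustrated with a minimal sketch that compresses consecutive N-letter blocks independently. Here zlib (a DEFLATE/LZ77-family compressor) stands in for the LZ77 and context-tree coders discussed in the paper; this is an assumption for illustration, not the paper's construction:

```python
import zlib


def compress_in_blocks(x: bytes, n: int) -> list[bytes]:
    """Compress consecutive n-letter blocks of x independently,
    using zlib as a stand-in universal lossless compressor."""
    return [zlib.compress(x[i:i + n]) for i in range(0, len(x), n)]


def blockwise_ratio(x: bytes, n: int) -> float:
    """Total compressed length divided by original length for n-block coding."""
    blocks = compress_in_blocks(x, n)
    return sum(len(b) for b in blocks) / len(x)
```

Because each block is coded with no memory of earlier blocks, the per-block overhead shrinks relative to the data as n grows, which mirrors why performance for finite N-blocks is the interesting regime.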