Hidden Markov models (HMMs), in use since the early days of digital communication, are now also routinely applied in speech recognition, natural language processing, image analysis, and bioinformatics. In an HMM $(X_i, Y_i)_{i \ge 1}$, the observations $X_1, X_2, \ldots$ are assumed to be conditionally independent given an ``explanatory'' Markov process $Y_1, Y_2, \ldots$, which itself is not observed; moreover, the conditional distribution of $X_i$ depends solely on $Y_i$.
Central to the theory and applications of HMMs is the Viterbi algorithm, which computes a {\em maximum a posteriori} (MAP) estimate $q_{1:n} = (q_1, q_2, \ldots, q_n)$ of $Y_{1:n}$ given observed data $x_{1:n}$. Maximum {\em a posteriori} paths are also known as Viterbi paths or Viterbi alignments.
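For concreteness, the finite-$n$ Viterbi algorithm referred to above can be sketched as a standard dynamic program over log-probabilities. This is a minimal illustration, not the paper's construction; the names (`viterbi`, `pi`, `A`, `B`) and the discrete-emission setup are our own assumptions.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """MAP state path q_{1:n} for a discrete-emission HMM.

    obs : sequence of observation indices x_1, ..., x_n
    pi  : (K,) initial distribution over hidden states
    A   : (K, K) transitions, A[i, j] = P(Y_{t+1}=j | Y_t=i)
    B   : (K, M) emissions,   B[j, x] = P(X_t=x | Y_t=j)
    """
    n, K = len(obs), len(pi)
    # Work in log space; log(0) = -inf is harmless for max/argmax.
    with np.errstate(divide="ignore"):
        log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)
    # delta[t, j]: log-prob of the best partial path ending in state j at time t
    delta = np.full((n, K), -np.inf)
    psi = np.zeros((n, K), dtype=int)  # back-pointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, n):
        scores = delta[t - 1][:, None] + log_A  # scores[i, j]: come from i, go to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    # Backtrack to recover the alignment
    path = np.zeros(n, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(n - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path
```

The question studied in this paper is, informally, whether the prefix of such a path stabilizes as $n$ grows, so that a well-defined infinite alignment exists.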
Recently, attempts have been made to study the behavior of Viterbi alignments as $n \to \infty$. In particular, it has been shown that in some special cases a well-defined limiting Viterbi alignment exists. While innovative, these attempts have relied on rather strong assumptions, and the proofs have been existential. This work proves the existence of infinite Viterbi alignments in a more constructive manner and for a very general class of HMMs.

Comment: Submitted to the IEEE Transactions on Information Theory; focuses on the proofs of the results presented in arXiv:0709.2317 and arXiv:0803.239