
### Efficient computation of updated lower expectations for imprecise continuous-time hidden Markov chains

We consider the problem of performing inference with imprecise
continuous-time hidden Markov chains, that is, imprecise continuous-time Markov
chains that are augmented with random output variables whose distribution
depends on the hidden state of the chain. The prefix 'imprecise' refers to the
fact that we do not consider a classical continuous-time Markov chain, but
replace it with a robust extension that allows us to represent various types of
model uncertainty, using the theory of imprecise probabilities. The inference
problem amounts to computing lower expectations of functions on the state-space
of the chain, given observations of the output variables. We develop and
investigate this problem with very few assumptions on the output variables; in
particular, they can be chosen to be either discrete or continuous random
variables. Our main result is a polynomial runtime algorithm to compute the
lower expectation of functions on the state-space at any given time-point,
given a collection of observations of the output variables.
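The abstract's central object, a lower expectation, can be illustrated with a minimal sketch. The paper's actual algorithm propagates lower expectations through an imprecise continuous-time chain; the snippet below only shows the basic notion: the minimum of ordinary expectations over a set of candidate distributions (a credal set). The states, the function `f`, and the distributions are invented for illustration.

```python
# Illustrative only: a lower expectation is the worst-case (minimum)
# expectation over a set of plausible distributions.

states = ["healthy", "ill"]
f = {"healthy": 1.0, "ill": 0.0}  # function on the state space

# Imprecision: instead of one distribution, a set of them,
# e.g. P(healthy) anywhere in {0.6, 0.7, 0.8}.
credal_set = [
    {"healthy": 0.6, "ill": 0.4},
    {"healthy": 0.7, "ill": 0.3},
    {"healthy": 0.8, "ill": 0.2},
]

def expectation(p, f):
    """Ordinary expectation of f under a single distribution p."""
    return sum(p[s] * f[s] for s in states)

lower = min(expectation(p, f) for p in credal_set)
print(lower)  # 0.6: the most pessimistic expectation in the set
```

The paper's contribution is doing this kind of minimisation efficiently, in polynomial time, for functions on the state space conditioned on output observations; the sketch above ignores dynamics and conditioning entirely.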

### A scaling analysis of a cat and mouse Markov chain

Motivated by an original on-line page-ranking algorithm, starting from an arbitrary Markov chain $(C_n)$ on a discrete state space ${\cal S}$, a Markov chain $(C_n,M_n)$ on the product space ${\cal S}^2$, the cat and mouse Markov chain, is constructed. The first coordinate of this Markov chain behaves like the original Markov chain and the second component changes only when both coordinates are equal. The asymptotic properties of this Markov chain are investigated. In particular, a representation of its invariant measure is obtained. When the state space is infinite, it is shown that this Markov chain is in fact null recurrent if the initial Markov chain $(C_n)$ is positive recurrent and reversible. In this context, the scaling properties of the location of the second component, the mouse, are investigated in various situations: simple random walks in $\mathbb{Z}$ and $\mathbb{Z}^2$, reflected simple random walk in $\mathbb{N}$, and also in a continuous time setting. For several of these processes, a time scaling with rapid growth gives an interesting asymptotic behavior related to limit results for occupation times and rare events of Markov processes.
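The construction in the abstract is easy to simulate. The sketch below takes the cat $(C_n)$ to be a simple random walk on $\mathbb{Z}$ (one of the cases the paper studies) and moves the mouse $(M_n)$ one step of the same walk only when the two coordinates coincide. The step order within an update is an implementation choice, not something the abstract specifies.

```python
import random

def step(x):
    """One step of a simple random walk on the integers."""
    return x + random.choice([-1, 1])

def cat_and_mouse(n_steps, seed=0):
    """Simulate the cat-and-mouse chain (C_n, M_n) started at (0, 0).

    The mouse moves only when the cat currently occupies its site;
    the cat moves at every step.
    """
    random.seed(seed)
    cat, mouse = 0, 0
    path = [(cat, mouse)]
    for _ in range(n_steps):
        if cat == mouse:      # second coordinate changes only here
            mouse = step(mouse)
        cat = step(cat)       # first coordinate is the original chain
        path.append((cat, mouse))
    return path

path = cat_and_mouse(10)
```

Long runs of such a simulation make the paper's null-recurrence phenomenon plausible: the mouse is frozen for the (heavy-tailed) excursion times the cat needs to return to it.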

### Integrating Specialized Classifiers Based on Continuous Time Markov Chain

Specialized classifiers, namely those dedicated to a subset of classes, are
often adopted in real-world recognition systems. However, integrating such
classifiers is nontrivial. Existing methods, e.g. weighted average, usually
implicitly assume that all constituents of an ensemble cover the same set of
classes. Such methods can produce misleading predictions when used to combine
specialized classifiers. This work explores a novel approach. Instead of
combining predictions from individual classifiers directly, it first decomposes
the predictions into sets of pairwise preferences, treating them as transition
channels between classes, and thereon constructs a continuous-time Markov
chain, and uses the equilibrium distribution of this chain as the final
prediction. This way allows us to form a coherent picture over all specialized
predictions. On large public datasets, the proposed method obtains considerable
improvement compared to mainstream ensemble methods, especially when the
classifier coverage is highly unbalanced.

Comment: Published at IJCAI-17, typo fixed
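The final step of the approach, reading off the equilibrium distribution of a continuous-time Markov chain, can be sketched generically. The rate matrix below is invented (in the paper it would be built from pairwise class preferences), and the equilibrium is found here by uniformization plus power iteration rather than by the authors' exact procedure.

```python
# Illustrative only: Q is a made-up 3-class transition-rate matrix
# (off-diagonal entries are rates, each row sums to zero).
Q = [
    [-3.0,  2.0,  1.0],
    [ 1.0, -2.0,  1.0],
    [ 1.0,  1.0, -2.0],
]

def equilibrium(Q, iters=10_000):
    """Equilibrium distribution of a CTMC with rate matrix Q.

    Uniformization: P = I + Q / rate is a stochastic matrix with the
    same equilibrium as Q; power iteration on P converges to it.
    """
    n = len(Q)
    rate = max(-Q[i][i] for i in range(n)) * 1.1
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / rate for j in range(n)]
         for i in range(n)]
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = equilibrium(Q)
# pi approximately satisfies pi @ Q = 0 and sums to 1; in the paper's
# setting its entries would be the fused class probabilities.
```

The appeal of the equilibrium view is that classes never ranked by any specialized classifier still receive mass through chains of pairwise transitions, which direct averaging cannot provide.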
