Hidden Markov Model Identifiability via Tensors
The prevalence of hidden Markov models (HMMs) in various applications of
statistical signal processing and communications is a testament to the power
and flexibility of the model. In this paper, we link the identifiability
problem with tensor decomposition, in particular, the Canonical Polyadic
decomposition. Using recent results in deriving uniqueness conditions for
tensor decomposition, we are able to provide a necessary and sufficient
condition for identifying the parameters of discrete-time, finite-alphabet
HMMs. This result resolves a long-standing open problem regarding the
derivation of a necessary and sufficient condition for uniquely identifying an
HMM. We then extend recent preliminary work on the identification of
HMMs with multiple observers by deriving necessary and sufficient conditions
for identifiability in that setting.
Comment: Accepted to ISIT 2013. 5 pages, no figures.
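The link between HMMs and the Canonical Polyadic decomposition can be illustrated with a standard construction (not the paper's own derivation): conditioned on the middle hidden state, three consecutive observations are independent, so their joint probability tensor is a sum of rank-one terms, one per hidden state. A minimal numerical sketch with randomly generated parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
K, M = 3, 4  # number of hidden states, observation alphabet size

def random_stochastic(rows, cols):
    """Row-stochastic matrix (each row sums to 1)."""
    P = rng.random((rows, cols))
    return P / P.sum(axis=1, keepdims=True)

A = random_stochastic(K, K)   # transition probabilities P(h' | h)
B = random_stochastic(K, M)   # emission probabilities  P(x  | h)

# Stationary distribution of A (left eigenvector for eigenvalue 1).
w, v = np.linalg.eig(A.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

# Joint probability tensor of three consecutive observations,
# T[i, j, k] = P(x1 = i, x2 = j, x3 = k), summed over hidden paths.
T = np.einsum('a,ai,ab,bj,bc,ck->ijk', pi, B, A, B, A, B)

# CP structure: given the middle hidden state h2, the three observations
# are conditionally independent, so T is a sum of K rank-one terms.
p_h2 = pi @ A                                   # P(h2)
F3 = A @ B                                      # P(x3 | h2)
joint_h1_h2 = pi[:, None] * A                   # P(h1, h2)
P_h1_given_h2 = joint_h1_h2 / joint_h1_h2.sum(axis=0, keepdims=True)
F1 = P_h1_given_h2.T @ B                        # P(x1 | h2) via Bayes' rule

T_cp = np.einsum('a,ai,aj,ak->ijk', p_h2, F1, B, F3)
print(np.allclose(T, T_cp))  # the two tensors agree exactly
```

Identifiability then reduces to asking when this rank-K CP decomposition is unique, which is where the uniqueness conditions from the tensor literature enter.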
Nonuniform Markov models
A statistical language model assigns probability to strings of arbitrary
length. Unfortunately, it is not possible to gather reliable statistics on
strings of arbitrary length from a finite corpus. Therefore, a statistical
language model must decide that each symbol in a string depends on at most a
small, finite number of other symbols in the string. In this report we propose
a new way to model conditional independence in Markov models. The central
feature of our nonuniform Markov model is that it makes predictions of varying
lengths using contexts of varying lengths. Experiments on the Wall Street
Journal reveal that the nonuniform model performs slightly better than the
classic interpolated Markov model. This result is somewhat remarkable because
both models contain identical numbers of parameters whose values are estimated
in a similar manner. The only difference between the two models is how they
combine the statistics of longer and shorter strings.
Keywords: nonuniform Markov model, interpolated Markov model, conditional
independence, statistical language model, discrete time series.
Comment: 17 pages.
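The interpolated baseline that the nonuniform model is compared against can be sketched as follows: the probability of the next symbol blends the statistics of the longest observed context with recursively shorter ones. This toy character-level version (the corpus, maximum order, and interpolation weight are illustrative choices, not the paper's) shows the mechanism:

```python
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate"
MAX_ORDER = 2   # longest context length (assumed for illustration)
LAMBDA = 0.7    # interpolation weight toward the longer context (assumed)

# Count symbol occurrences after every context of length 0..MAX_ORDER.
counts = defaultdict(lambda: defaultdict(int))
for order in range(MAX_ORDER + 1):
    for i in range(order, len(corpus)):
        counts[corpus[i - order:i]][corpus[i]] += 1

vocab = sorted(set(corpus))

def prob(symbol, context):
    """Interpolated estimate: blend the longest-context statistics with
    recursively shorter ones, down to an add-one-smoothed unigram."""
    if not context:
        total = sum(counts[""].values())
        return (counts[""][symbol] + 1) / (total + len(vocab))
    shorter = prob(symbol, context[1:])
    ctx = counts.get(context)
    if ctx is None:          # unseen context: back off entirely
        return shorter
    total = sum(ctx.values())
    return LAMBDA * ctx[symbol] / total + (1 - LAMBDA) * shorter

# Usage: truncate the history to the model's maximum order.
history = "the ca"
p = prob("t", history[-MAX_ORDER:])
```

The nonuniform model described in the abstract differs precisely in this last step: instead of a fixed blending scheme over all context lengths, it chooses contexts and prediction lengths of varying size.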
Extending dynamic segmentation with lead generation: A latent class Markov analysis of financial product portfolios
A recent development in marketing research concerns the incorporation of dynamics in consumer segmentation. This paper extends the latent class Markov model, a suitable technique for conducting dynamic segmentation, in order to facilitate lead generation. We demonstrate the application of the latent Markov model for these purposes using a database containing information on the ownership of twelve financial products, together with demographics for explaining (changes in) consumer product portfolios. Data were collected in four bi-yearly measurement waves in which a total of 7676 households participated. The proposed latent class Markov model defines dynamic segments on the basis of consumer product portfolios and shows the relationship between the dynamic segments and demographics. The paper demonstrates that the dynamic segmentation resulting from the latent class Markov model is applicable for lead generation.
Keywords: market segmentation, Markov chains, marketing, demography, measurement.
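The core computation behind such a latent class Markov segmentation can be sketched generically: latent segments evolve between measurement waves according to a transition matrix, and each wave's observed product portfolio is scored against segment-specific ownership probabilities. All parameters below are randomly generated stand-ins, not estimates from the study; only the dimensions (twelve products, four waves) follow the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)
S, P, T = 3, 12, 4  # latent segments (assumed), products, measurement waves

init = np.full(S, 1 / S)                       # initial segment shares (assumed)
trans = rng.random((S, S))
trans /= trans.sum(axis=1, keepdims=True)      # segment transitions between waves
own_prob = rng.random((S, P))                  # P(own product p | segment s)

# One household's observed portfolios: T waves of 12 binary ownership flags.
y = rng.integers(0, 2, size=(T, P))

def emission(obs):
    """Multivariate-Bernoulli likelihood of one portfolio under each segment."""
    return np.prod(np.where(obs, own_prob, 1 - own_prob), axis=1)

# Forward recursion: filtered posterior over the household's segment per wave.
alpha = init * emission(y[0])
alpha /= alpha.sum()
for t in range(1, T):
    alpha = (alpha @ trans) * emission(y[t])
    alpha /= alpha.sum()
print(alpha)  # segment membership probabilities at the final wave
```

Lead generation in this framing amounts to flagging households whose filtered segment membership is shifting toward segments with high ownership probability for a product they do not yet hold.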
A semi-Markov model for price returns
We study the high-frequency price dynamics of traded stocks with a model of
returns based on a semi-Markov approach. More precisely, we assume that
intraday returns are described by a discrete-time homogeneous semi-Markov
process and that overnight returns are modeled by a Markov chain. Based on
these assumptions, we derive equations for the first-passage-time distribution
and the volatility autocorrelation function. Theoretical results are compared
with empirical findings from real data. In particular, we analyze
high-frequency data from the Italian stock market from 1 January 2007 to the
end of December 2010. The semi-Markov hypothesis is also tested through a
nonparametric hypothesis test.
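The two-part structure described in the abstract can be sketched with a toy simulator: intraday returns follow a discrete-time semi-Markov process (an embedded jump chain plus state-dependent sojourn times), while overnight returns follow an ordinary Markov chain. The three-state return coding, the sojourn distributions, and all probabilities below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(2)
states = np.array([-1, 0, 1])   # discretised return states: down / flat / up (assumed)
S = len(states)

# Embedded jump chain of the semi-Markov process: jumps change the state.
jump = rng.random((S, S))
np.fill_diagonal(jump, 0)
jump /= jump.sum(axis=1, keepdims=True)

# Sojourn-time distributions: P(holding time = 1..max_hold | state).
max_hold = 5
sojourn = rng.random((S, max_hold))
sojourn /= sojourn.sum(axis=1, keepdims=True)

def simulate_intraday(n_ticks, s0):
    """Discrete-time homogeneous semi-Markov path of length n_ticks."""
    path, s = [], s0
    while len(path) < n_ticks:
        hold = rng.choice(max_hold, p=sojourn[s]) + 1  # state-dependent duration
        path.extend([states[s]] * hold)
        s = rng.choice(S, p=jump[s])                   # then jump to a new state
    return np.array(path[:n_ticks])

# Overnight returns: a plain Markov chain over the same states.
overnight = rng.random((S, S))
overnight /= overnight.sum(axis=1, keepdims=True)

day = simulate_intraday(100, s0=1)
```

The difference from a plain Markov chain is the `sojourn` component: how long the process stays in a state depends on the state itself, which is exactly the feature the nonparametric test in the paper probes.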