Risk-sensitive filtering and smoothing for hidden Markov models
In this paper, we address the problem of risk-sensitive filtering and smoothing for discrete-time hidden Markov models (HMMs) with finite-discrete states. The objective of risk-sensitive filtering is to minimise the expectation of the exponential of the squared estimation error weighted by a risk-sensitive parameter. We use the so-called Reference Probability Method to solve this problem. We obtain finite-dimensional linear recursions in the information state, and thereby the state estimate that minimises the risk-sensitive cost index. Fixed-interval smoothing results are also derived. We show that L2, or risk-neutral, filtering for HMMs can be recovered as a limiting case of the risk-sensitive filtering problem as the risk-sensitive parameter approaches zero.
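The risk-neutral limit described above can be checked numerically: the risk-sensitive criterion (1/theta) * log E[exp(theta * e^2)] always dominates the mean squared error (by Jensen's inequality) and converges to it as theta tends to zero. A minimal Monte Carlo sketch on a hypothetical Gaussian error sample (all numbers illustrative, not from the paper):

```python
import math
import random

def risk_sensitive_cost(errors, theta):
    """(1/theta) * log of the mean of exp(theta * e^2): the exponential-of-
    squared-error criterion; large errors are penalised more as theta grows."""
    m = sum(math.exp(theta * e * e) for e in errors) / len(errors)
    return math.log(m) / theta

def risk_neutral_cost(errors):
    """Plain mean squared error (the L2 criterion)."""
    return sum(e * e for e in errors) / len(errors)

random.seed(0)
errors = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# As theta approaches zero, the risk-sensitive cost approaches the L2 cost.
for theta in (0.5, 0.1, 0.001):
    print(theta, risk_sensitive_cost(errors, theta))
print("L2:", risk_neutral_cost(errors))
```

For unit-variance errors both costs are close to 1 at small theta, while larger theta inflates the risk-sensitive cost, which is exactly the risk-averse behaviour the criterion encodes.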
HMM based scenario generation for an investment optimisation problem
This is the post-print version of the article; the official published version can be accessed from the link below. Copyright @ 2012 Springer-Verlag. Geometric Brownian motion (GBM) is a standard method for modelling financial time series. An important criticism of this method is that the parameters of the GBM are assumed to be constant; as a result, important features of the time series, such as extreme behaviour or volatility clustering, cannot be captured. We propose an approach in which the parameters of the GBM are able to switch between regimes; more precisely, they are governed by a hidden Markov chain. Thus, we model the financial time series via a hidden Markov model (HMM) with a GBM in each state. Using this approach, we generate scenarios for a financial portfolio optimisation problem in which the portfolio CVaR is minimised. Numerical results are presented. This study was funded by NET ACE at OptiRisk Systems.
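The regime-switching GBM described above can be sketched in a few lines: simulate the hidden Markov chain over the regimes, then apply the GBM step with that regime's drift and volatility. All parameters below (two regimes, the transition matrix `P`, `MU`, `SIGMA`) are illustrative assumptions, not calibrated values from the paper:

```python
import math
import random

# Hypothetical two-regime parameters (annualised drift and volatility); the
# regime path is a hidden Markov chain with transition matrix P.
MU    = [0.08, -0.05]   # calm-regime drift, stressed-regime drift
SIGMA = [0.12,  0.35]   # calm-regime vol,   stressed-regime vol
P     = [[0.99, 0.01],  # P[i][j] = probability of moving from regime i to j
         [0.05, 0.95]]
DT = 1.0 / 252          # one trading day

def simulate_path(s0, n_steps, rng):
    """One price scenario from a GBM whose (mu, sigma) switch with the regime."""
    s, regime, path = s0, 0, [s0]
    for _ in range(n_steps):
        regime = 0 if rng.random() < P[regime][0] else 1
        z = rng.gauss(0.0, 1.0)
        s *= math.exp((MU[regime] - 0.5 * SIGMA[regime] ** 2) * DT
                      + SIGMA[regime] * math.sqrt(DT) * z)
        path.append(s)
    return path

rng = random.Random(42)
scenarios = [simulate_path(100.0, 252, rng) for _ in range(5)]
```

Each simulated path is one scenario for the CVaR optimisation; estimating the regime parameters and transition matrix from historical data (e.g. by EM) is the fitting step the approach presumes.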
Handling non-ignorable dropouts in longitudinal data: A conditional model based on a latent Markov heterogeneity structure
We illustrate a class of conditional models for the analysis of longitudinal data suffering attrition, within a random effects model framework in which the subject-specific random effects are assumed to be discrete and to follow a time-dependent latent process. The latent process accounts, in a dynamic fashion, for unobserved heterogeneity and correlation between individuals, and for dependence between the observed process and the missing data mechanism. Of particular interest is the case where the missing data mechanism is non-ignorable. To deal with this, we introduce a model that is conditional on dropout. A shape change in the random effects distribution is considered by directly modeling the effect of the missing data process on the evolution of the latent structure. To estimate the resulting model, we rely on the conditional maximum likelihood approach, for which we outline an EM algorithm. The proposal is illustrated via simulations and then applied to a dataset concerning skin cancers. Comparisons with other well-established methods are provided as well.
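The E-step of an EM algorithm for a discrete, time-dependent latent process rests on forward-backward recursions. A generic sketch on a hypothetical two-state latent chain (all parameters and the toy observation sequence are illustrative; the paper's conditional-on-dropout structure is omitted):

```python
# Forward-backward recursions: the core E-step ingredient of EM for a
# discrete latent Markov process. Toy two-state example, all numbers illustrative.
pi  = [0.6, 0.4]                     # initial latent-state distribution
A   = [[0.7, 0.3], [0.2, 0.8]]      # latent transition matrix
B   = [[0.9, 0.1], [0.3, 0.7]]      # B[state][symbol]: emission probabilities
obs = [0, 0, 1, 1, 0]               # observed sequence (toy data)

def forward_backward(obs):
    """Return the posteriors P(state_t | obs) needed by the E-step."""
    n, S = len(obs), len(pi)
    alpha = [[0.0] * S for _ in range(n)]
    beta  = [[1.0] * S for _ in range(n)]
    for s in range(S):                              # forward pass
        alpha[0][s] = pi[s] * B[s][obs[0]]
    for t in range(1, n):
        for s in range(S):
            alpha[t][s] = B[s][obs[t]] * sum(alpha[t-1][r] * A[r][s]
                                             for r in range(S))
    for t in range(n - 2, -1, -1):                  # backward pass
        for s in range(S):
            beta[t][s] = sum(A[s][r] * B[r][obs[t+1]] * beta[t+1][r]
                             for r in range(S))
    post = []
    for t in range(n):                              # normalised posteriors
        w = [alpha[t][s] * beta[t][s] for s in range(S)]
        z = sum(w)
        post.append([x / z for x in w])
    return post

gamma = forward_backward(obs)
```

The M-step would then re-estimate `pi`, `A` and `B` (and, in the paper's setting, the dropout-dependent parameters) from these posteriors.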
A shared-parameter continuous-time hidden Markov and survival model for longitudinal data with informative dropout
A shared-parameter approach for jointly modeling longitudinal and survival data is proposed. With respect to available approaches, it allows for time-varying random effects that affect both the longitudinal and the survival processes. The distribution of these random effects is modeled according to a continuous-time hidden Markov chain, so that transitions may occur at any time point. For maximum likelihood estimation, we propose an algorithm based on a discretization of time until censoring into an arbitrary number of time windows. The observed information matrix is used to obtain standard errors. We illustrate the approach by simulation, also with respect to the effect of the number of time windows on the precision of the estimates, and by an application to data on patients suffering from mildly dilated cardiomyopathy.
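The time-discretization step above amounts to evaluating the transition matrix exp(Q * dt) of the continuous-time chain over each window. A minimal sketch for a hypothetical two-state chain, using the closed form of the 2x2 matrix exponential (the generator rates `q01`, `q10` are illustrative, not values from the paper):

```python
import math

def two_state_transition(q01, q10, dt):
    """Transition matrix exp(Q * dt) of a two-state continuous-time Markov
    chain with generator Q = [[-q01, q01], [q10, -q10]], via its closed form.
    This is the per-window discretisation step."""
    lam = q01 + q10
    decay = math.exp(-lam * dt)
    p01 = (q01 / lam) * (1.0 - decay)
    p10 = (q10 / lam) * (1.0 - decay)
    return [[1.0 - p01, p01], [p10, 1.0 - p10]]

# Short windows stay near the identity; long windows approach the stationary law.
for dt in (0.1, 1.0, 10.0):
    print(dt, two_state_transition(0.3, 0.6, dt))
```

Because this is the exact matrix exponential, the windows compose consistently: the matrix for a window of length 2*dt equals the product of two matrices for length dt, so the discretization only affects where transitions are allowed, not the marginal law.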
A generalized risk approach to path inference based on hidden Markov models
Motivated by the unceasing interest in hidden Markov models (HMMs), this paper re-examines hidden path inference in these models, using primarily a risk-based framework. While the most common maximum a posteriori (MAP), or Viterbi, path estimator and the minimum-error, or Posterior Decoder (PD), estimator have long been around, other path estimators, or decoders, have been either only hinted at or applied more recently and in dedicated applications generally unfamiliar to the statistical learning community. Over a decade ago, however, a family of algorithmically defined decoders aiming to hybridize the two standard ones was proposed (Brushe et al., 1998). The present paper gives a careful analysis of this hybridization approach, identifies several problems and issues with it and other previously proposed approaches, and proposes practical resolutions of these. Furthermore, simple modifications of the classical criteria for hidden path recognition are shown to lead to a new class of decoders. Dynamic programming algorithms to compute these decoders in the usual forward-backward manner are presented. A particularly interesting subclass of such estimators can also be viewed as hybrids of the MAP and PD estimators. Like previously proposed MAP-PD hybrids, the new class is parameterized by a small number of tunable parameters. Unlike their algorithmic predecessors, the new risk-based decoders are more clearly interpretable and, most importantly, work "out of the box" in practice, which is demonstrated on real bioinformatics tasks and data. Some further generalizations and applications are discussed in conclusion.
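For concreteness, the two standard decoders being hybridized can be sketched on a toy two-state HMM. Viterbi maximizes the probability of the whole path; posterior decoding maximizes each marginal P(state_t | obs) separately. All parameters below are illustrative:

```python
import math

pi  = [0.5, 0.5]
A   = [[0.9, 0.1], [0.1, 0.9]]   # sticky transitions
B   = [[0.8, 0.2], [0.2, 0.8]]   # B[state][symbol]
obs = [0, 1, 0, 0, 1, 1]

def viterbi(obs):
    """MAP path: the jointly most probable state sequence (log domain)."""
    n, S = len(obs), len(pi)
    delta = [math.log(pi[s]) + math.log(B[s][obs[0]]) for s in range(S)]
    back = []
    for t in range(1, n):
        back.append([])
        new = []
        for s in range(S):
            best = max(range(S), key=lambda r: delta[r] + math.log(A[r][s]))
            back[-1].append(best)
            new.append(delta[best] + math.log(A[best][s]) + math.log(B[s][obs[t]]))
        delta = new
    path = [max(range(S), key=lambda s: delta[s])]
    for ptr in reversed(back):               # backtrack
        path.append(ptr[path[-1]])
    return path[::-1]

def posterior_decode(obs):
    """PD path: maximise each marginal P(state_t | obs) independently."""
    n, S = len(obs), len(pi)
    alpha = [[0.0] * S for _ in range(n)]
    beta  = [[1.0] * S for _ in range(n)]
    for s in range(S):
        alpha[0][s] = pi[s] * B[s][obs[0]]
    for t in range(1, n):
        for s in range(S):
            alpha[t][s] = B[s][obs[t]] * sum(alpha[t-1][r] * A[r][s]
                                             for r in range(S))
    for t in range(n - 2, -1, -1):
        for s in range(S):
            beta[t][s] = sum(A[s][r] * B[r][obs[t+1]] * beta[t+1][r]
                             for r in range(S))
    return [max(range(S), key=lambda s: alpha[t][s] * beta[t][s])
            for t in range(n)]
```

The PD path can have higher expected pointwise accuracy yet contain transitions of probability zero, which is precisely the tension the hybrid and risk-based decoders discussed above are designed to manage.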
State-Observation Sampling and the Econometrics of Learning Models
In nonlinear state-space models, sequential learning about the hidden state can proceed by particle filtering when the density of the observation conditional on the state is available analytically (e.g. Gordon et al., 1993). This condition need not hold in complex environments, such as the incomplete-information equilibrium models considered in financial economics. In this paper, we make two contributions to the learning literature. First, we introduce a new filtering method, the state-observation sampling (SOS) filter, for general state-space models with intractable observation densities. Second, we develop an indirect inference-based estimator for a large class of incomplete-information economies. We demonstrate the good performance of these techniques on an asset pricing model with investor learning applied to over 80 years of daily equity returns.
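The core idea behind an SOS-style filter, replacing an intractable observation density with simulated (state, observation) pairs and kernel weights, can be sketched as follows. The latent dynamics, kernel choice, and bandwidth `h` are illustrative assumptions, not the paper's model:

```python
import math
import random

def transition(x, rng):
    """Toy latent AR(1) dynamics (illustrative)."""
    return 0.9 * x + 0.3 * rng.gauss(0.0, 1.0)

def sample_obs(x, rng):
    """Simulate y given x; its density is treated as unavailable."""
    return x + 0.2 * rng.gauss(0.0, 1.0)

def sos_filter(ys, n_particles=500, h=0.2, seed=1):
    """For each particle, draw a pseudo-observation and weight it by a
    Gaussian kernel on its distance to the actual datum, then resample."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in ys:
        xs = [transition(x, rng) for x in xs]
        pseudo = [sample_obs(x, rng) for x in xs]
        w = [math.exp(-0.5 * ((y - py) / h) ** 2) for py in pseudo]
        z = sum(w) or 1.0                       # guard against total underflow
        w = [wi / z for wi in w]
        means.append(sum(wi * xi for wi, xi in zip(w, xs)))
        xs = rng.choices(xs, weights=w, k=n_particles)   # multinomial resampling
    return means

est = sos_filter([0.5, 0.7, 0.6, 0.4])
```

When the observation density is in fact available, shrinking the bandwidth recovers behaviour close to a standard bootstrap particle filter; the kernel step is what removes the need to evaluate p(y | x) analytically.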