Particle Learning and Smoothing
Particle learning (PL) provides state filtering, sequential parameter
learning and smoothing in a general class of state space models. Our approach
extends existing particle methods by incorporating the estimation of static
parameters via a fully-adapted filter that utilizes conditional sufficient
statistics for parameters and/or states as particles. State smoothing in the
presence of parameter uncertainty is also solved as a by-product of PL. In a
number of examples, we show that PL outperforms existing particle filtering
alternatives and proves to be a competitor to MCMC.
Comment: Published at http://dx.doi.org/10.1214/10-STS325 in Statistical
Science (http://www.imstat.org/sts/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
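A minimal bootstrap particle filter gives a feel for the particle approach, though it is not the fully-adapted PL algorithm with conditional sufficient statistics described in the abstract. The local-level model, the noise variances `q` and `r`, and the function name are illustrative assumptions, not from the paper:

```python
import numpy as np

def bootstrap_pf(y, n_particles=500, q=0.1, r=0.5, seed=0):
    """Minimal bootstrap particle filter for a local-level model:
       x_t = x_{t-1} + N(0, q),   y_t = x_t + N(0, r).
    Returns the filtered state means E[x_t | y_{1:t}]."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)  # initial particle cloud (assumed prior)
    means = []
    for obs in y:
        x = x + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate through state equation
        logw = -0.5 * (obs - x) ** 2 / r                   # log-likelihood weights
        w = np.exp(logw - logw.max())                      # stabilize before normalizing
        w /= w.sum()
        x = x[rng.choice(n_particles, n_particles, p=w)]   # multinomial resampling
        means.append(x.mean())
    return np.array(means)
```

Fully-adapted filters improve on this sketch by proposing particles using the current observation rather than blindly from the transition density.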
Using VARs and TVP-VARs with many macroeconomic variables
This paper discusses the challenges faced by the empirical macroeconomist and methods for surmounting them. These challenges arise because macroeconometric models potentially include a large number of variables and allow for time variation in parameters. These considerations lead to models which have a large number of parameters to estimate relative to the number of observations. A wide range of approaches which aim to overcome the resulting problems are surveyed. We stress the related themes of prior shrinkage, model averaging and model selection. Subsequently, we consider a particular modelling approach in detail, involving the use of dynamic model selection methods with large TVP-VARs. A forecasting exercise involving a large US macroeconomic data set illustrates the practicality and empirical success of our approach.
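Prior shrinkage, one of the themes stressed above, can be sketched in miniature as a ridge-style prior on VAR(1) coefficients; the model, the prior precision `lam`, and the function name are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def shrinkage_var_ols(Y, lam=1.0):
    """Shrinkage estimate of a VAR(1), Y_t = B Y_{t-1} + e_t.
    Under an (assumed) N(0, lam^{-1} I) prior on each coefficient column,
    the posterior mean is the ridge solution; larger lam shrinks B toward 0,
    taming the parameters-vs-observations problem the abstract describes."""
    X, Z = Y[:-1], Y[1:]          # lagged regressors and targets
    k = X.shape[1]
    B = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ Z)
    return B.T                    # (k x k) coefficient matrix
```

Minnesota-type priors used with large VARs refine this idea by shrinking own lags and cross-variable lags by different amounts.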
A unified approach to mortality modelling using state-space framework: characterisation, identification, estimation and forecasting
This paper explores and develops alternative statistical representations and
estimation approaches for dynamic mortality models. The framework we adopt is
to reinterpret popular mortality models such as the Lee-Carter class of models
in a general state-space modelling methodology, which allows modelling,
estimation and forecasting of mortality under a unified framework. Furthermore,
we propose an alternative class of model identification constraints which is
more suited to statistical inference in filtering and parameter estimation
settings based on maximization of the marginalized likelihood or in Bayesian
inference. We then develop a novel class of Bayesian state-space models which
incorporate a priori beliefs about the mortality model characteristics as well
as more flexible and appropriate assumptions about the heteroscedasticity
present in observed mortality data. We show that multiple period and cohort
effects can be cast under a state-space structure. To study long-term
mortality dynamics, we introduce stochastic volatility to the period effect.
The estimation of the resulting stochastic volatility model of mortality is
performed using a recent class of Monte Carlo procedures specifically designed
for state and parameter estimation in Bayesian state-space models, known as
particle Markov chain Monte Carlo methods. We illustrate the framework
we have developed using Danish male mortality data, and show that incorporating
heteroscedasticity and stochastic volatility markedly improves model fit
despite the increase in model complexity. Forecasting properties of the
enhanced models are examined with long-term and short-term calibration periods
on the reconstruction of life tables.
Comment: 46 pages
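As a rough sketch of the state-space reading of Lee-Carter (without the stochastic volatility or particle MCMC machinery of the paper), a scalar Kalman filter can track the period effect k_t; the parameter values, diagonal noise assumption, and function name below are illustrative assumptions:

```python
import numpy as np

def lee_carter_kalman(logm, a, b, drift=0.0, q=0.1, r=0.01):
    """Scalar Kalman filter for the Lee-Carter period effect k_t:
       log m_{x,t} = a_x + b_x k_t + N(0, r)   (observation, one per age x)
       k_t = k_{t-1} + drift + N(0, q)         (random-walk state)
    logm is a (T, n_ages) array; a and b are (n_ages,) arrays.
    Returns the filtered estimates of k_t."""
    k, P = 0.0, 10.0                       # diffuse-ish initial state (assumption)
    ks = []
    for y in logm:
        k, P = k + drift, P + q            # predict step
        for yi, ai, bi in zip(y, a, b):    # sequential scalar updates (valid for diagonal R)
            S = bi * P * bi + r            # innovation variance
            K = P * bi / S                 # Kalman gain
            k = k + K * (yi - ai - bi * k)
            P = (1 - K * bi) * P
        ks.append(k)
    return np.array(ks)
```

Casting the model this way is what makes the likelihood-based and Bayesian estimation described in the abstract tractable.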
A Nonparametric Adaptive Nonlinear Statistical Filter
We use statistical learning methods to construct an adaptive state estimator
for nonlinear stochastic systems. Optimal state estimation, in the form of a
Kalman filter, requires knowledge of the system's process and measurement
uncertainty. We propose that these uncertainties can be estimated from
(conditioned on) past observed data, without making any assumptions about the
system's prior distribution. The system's prior distribution at each time step
is constructed from an ensemble of least-squares estimates on sub-sampled sets
of the data via jackknife sampling. As new data is acquired, the state
estimates, process uncertainty, and measurement uncertainty are updated
accordingly, as described in this manuscript.
Comment: Accepted at the 2014 IEEE Conference on Decision and Control
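One hypothetical way to realize the idea of estimating uncertainties from past data is to re-estimate the measurement variance from a sliding window of filter innovations via a jackknife average. This toy scheme stands in for, and is not, the paper's sub-sampled least-squares procedure; the window size, process noise `q`, and function names are assumptions:

```python
import numpy as np

def jackknife_variance(residuals):
    """Jackknife estimate of residual variance: the average of the
    leave-one-out sample variances."""
    n = len(residuals)
    loo = [np.var(np.delete(residuals, i), ddof=1) for i in range(n)]
    return float(np.mean(loo))

def adaptive_kalman(y, q=0.05, window=20):
    """Random-walk Kalman filter whose measurement variance r is
    re-estimated online from a sliding window of innovations."""
    x, P, r = y[0], 1.0, 1.0       # initial state, variance, and noise guess
    innovations, xs = [], []
    for obs in y:
        P_pred = P + q             # predict step
        innov = obs - x            # one-step-ahead forecast error
        innovations.append(innov)
        if len(innovations) >= 5:  # wait for enough data, then adapt r
            r = max(jackknife_variance(np.array(innovations[-window:])), 1e-6)
        K = P_pred / (P_pred + r)  # gain with the adapted noise estimate
        x = x + K * innov
        P = (1 - K) * P_pred
        xs.append(x)
    return np.array(xs)
```

The appeal of such schemes is exactly the one the abstract claims: no prior knowledge of the measurement uncertainty is required.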
Covariance estimation for multivariate conditionally Gaussian dynamic linear models
In multivariate time series, the estimation of the covariance matrix of the
observation innovations plays an important role in forecasting, as it enables
the computation of standardized forecast error vectors as well as confidence
bounds for the forecasts. We develop an
on-line, non-iterative Bayesian algorithm for estimation and forecasting. It is
empirically found that, for a range of simulated time series, the proposed
covariance estimator has good performance converging to the true values of the
unknown observation covariance matrix. Over a simulated time series, the new
method approximates the correct estimates, produced by a non-sequential Monte
Carlo simulation procedure, which is used here as the gold standard. The
special, but important, vector autoregressive (VAR) and time-varying VAR models
are illustrated by considering London metal exchange data consisting of spot
prices of aluminium, copper, lead and zinc.
Comment: 21 pages, 2 figures, 6 tables
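A minimal sketch of an on-line, non-iterative covariance estimator is a conjugate-style running update built from forecast error outer products; the recursion, the prior weight `n0`, and the function name below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def online_covariance(errors, S0=None, n0=1.0):
    """Sequential update of an observation covariance estimate from
    forecast errors e_1, e_2, ...: after t errors,
        S_t = (n0 * S0 + sum_{s<=t} e_s e_s^T) / (n0 + t),
    i.e. a prior guess S0 blended with the empirical scatter.
    Each step is a cheap rank-one update -- no iteration needed."""
    p = errors.shape[1]
    S = np.eye(p) if S0 is None else S0.copy()
    num, den = n0 * S, n0
    for e in errors:
        num = num + np.outer(e, e)   # accumulate scatter
        den += 1.0
        S = num / den                # current covariance estimate
    return S
```

As the sample grows, the prior's influence vanishes and the estimate converges to the empirical covariance, mirroring the convergence behaviour reported in the abstract.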
Parameter Identification in a Probabilistic Setting
Parameter identification problems are formulated in a probabilistic language,
where the randomness reflects the uncertainty about the knowledge of the true
values. This setting makes it conceptually easy to incorporate new
information, e.g. through a measurement, via Bayes's theorem. The unknown
quantity is modelled as a (possibly high-dimensional) random variable. Such a
description has two constituents, the measurable function and the measure. One
group of methods is identified as updating the measure, the other group changes
the measurable function. We connect both groups with the relatively recent
methods of functional approximation of stochastic problems and, especially in
combination with the second group of methods, introduce a new procedure which
does not need any sampling and hence works completely deterministically. It
also appears to be the fastest and most reliable when compared with other
methods. We show by example that it also works for highly nonlinear,
non-smooth problems with non-Gaussian measures.
Comment: 29 pages, 16 figures
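For the Gaussian-linear special case, a sampling-free (hence completely deterministic) update of the parameter's distribution can be written in closed form as a Kalman-type formula; this is an illustrative sketch of that special case, not the functional-approximation procedure of the paper:

```python
import numpy as np

def gauss_linear_update(m, C, H, y, R):
    """Deterministic Bayesian update of a Gaussian prior N(m, C) on the
    unknown parameter q, given a linear measurement y = H q + noise with
    noise ~ N(0, R). No sampling: the posterior is again Gaussian and its
    mean and covariance follow in closed form."""
    S = H @ C @ H.T + R              # predictive covariance of the measurement
    K = C @ H.T @ np.linalg.inv(S)   # gain mapping surprise to parameter space
    m_post = m + K @ (y - H @ m)     # posterior mean
    C_post = C - K @ H @ C           # posterior covariance (uncertainty shrinks)
    return m_post, C_post
```

For example, a unit-variance scalar prior at 0, an identity observation and a measurement of 2 with unit noise variance yield a posterior mean of 1 with variance 0.5, splitting the difference between prior and data.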