Quasi maximum likelihood estimation for strongly mixing state space models and multivariate Lévy-driven CARMA processes
We consider quasi maximum likelihood (QML) estimation for general
non-Gaussian discrete-time linear state space models and equidistantly observed
multivariate Lévy-driven continuous-time autoregressive moving average
(MCARMA) processes. In the discrete-time setting, we prove strong consistency
and asymptotic normality of the QML estimator under standard moment assumptions
and a strong-mixing condition on the output process of the state space model.
In the second part of the paper, we investigate probabilistic and analytical
properties of equidistantly sampled continuous-time state space models and
apply our results from the discrete-time setting to derive the asymptotic
properties of the QML estimator of discretely recorded MCARMA processes. Under
natural identifiability conditions, the estimators are again consistent and
asymptotically normally distributed for any sampling frequency. We also
demonstrate the practical applicability of our method through a simulation
study and a data example from econometrics.
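The QML idea in this abstract can be illustrated with a toy sketch: fit a scalar linear state space model by maximizing the Gaussian likelihood computed with a Kalman filter. The model, parameter values, and grid search below are illustrative assumptions, not the paper's multivariate Lévy-driven setup.

```python
import numpy as np

# Toy scalar state space model (hypothetical parameters, not the paper's):
#   x_{t+1} = a x_t + w_t,   y_t = x_t + v_t,
# with w_t, v_t white noise.  QML maximizes the Gaussian likelihood
# produced by the Kalman filter, even if the true noise is non-Gaussian.

rng = np.random.default_rng(0)

def simulate(a, n, sw=1.0, sv=0.5):
    x = 0.0
    ys = np.empty(n)
    for t in range(n):
        x = a * x + sw * rng.standard_normal()
        ys[t] = x + sv * rng.standard_normal()
    return ys

def gaussian_quasi_loglik(a, ys, sw2=1.0, sv2=0.25):
    # Kalman filter recursion for the scalar model above.
    xhat, P, ll = 0.0, 1.0, 0.0
    for y in ys:
        xpred = a * xhat                 # predict state
        Ppred = a * a * P + sw2          # predict variance
        S = Ppred + sv2                  # innovation variance
        e = y - xpred                    # innovation
        ll += -0.5 * (np.log(2 * np.pi * S) + e * e / S)
        K = Ppred / S                    # Kalman gain
        xhat = xpred + K * e             # update state
        P = (1 - K) * Ppred              # update variance
    return ll

ys = simulate(a=0.7, n=1000)
grid = np.linspace(-0.9, 0.9, 37)
a_hat = grid[np.argmax([gaussian_quasi_loglik(a, ys) for a in grid])]
print(a_hat)
```

A coarse grid stands in for a numerical optimizer; the point is only that the Gaussian quasi-likelihood concentrates near the true parameter.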
A Random Matrix Approach to Dynamic Factors in macroeconomic data
We show how random matrix theory can be applied to develop new algorithms to
extract dynamic factors from macroeconomic time series. In particular, we
consider a limit where the number of random variables N and the number of
consecutive time measurements T are large but the ratio N / T is fixed. In this
regime the underlying random matrices are asymptotically equivalent to Free
Random Variables (FRV). An application of these methods to macroeconomic
indicators for the Polish economy is also presented.
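A standard illustration of the random-matrix regime described above (not the paper's exact algorithm): with N series and T observations and q = N/T fixed, noise eigenvalues of the sample correlation matrix fall inside the Marchenko-Pastur bulk, so eigenvalues above its upper edge are candidate factors. Sizes and factor strength below are hypothetical.

```python
import numpy as np

# In the N, T -> infinity regime with q = N/T fixed, eigenvalues of a
# pure-noise sample correlation matrix lie in the Marchenko-Pastur bulk
# [(1 - sqrt(q))^2, (1 + sqrt(q))^2].  Eigenvalues above the upper edge
# signal genuine (dynamic) factors.

rng = np.random.default_rng(1)
N, T, k = 50, 500, 3                 # hypothetical sizes, k true factors
q = N / T

factors = rng.standard_normal((T, k))
loadings = rng.standard_normal((k, N))
data = factors @ loadings * 2.0 + rng.standard_normal((T, N))

data = (data - data.mean(0)) / data.std(0)   # standardize each series
corr = data.T @ data / T                     # sample correlation matrix
eigs = np.linalg.eigvalsh(corr)

mp_edge = (1 + np.sqrt(q)) ** 2              # Marchenko-Pastur upper edge
n_factors = int(np.sum(eigs > mp_edge))
print(n_factors)
```

In this synthetic example the three planted factors produce eigenvalues well above the noise edge, so counting edge-crossings recovers the factor number.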
Sparse Identification and Estimation of Large-Scale Vector AutoRegressive Moving Averages
The Vector AutoRegressive Moving Average (VARMA) model is fundamental to the
theory of multivariate time series; however, in practice, identifiability
issues have led many authors to abandon VARMA modeling in favor of the simpler
Vector AutoRegressive (VAR) model. Such a practice is unfortunate since even
very simple VARMA models can have quite complicated VAR representations. We
narrow this gap with a new optimization-based approach to VARMA identification
that is built upon the principle of parsimony. Among all equivalent
data-generating models, we seek the parameterization that is "simplest" in a
certain sense. A user-specified strongly convex penalty is used to measure
model simplicity, and that same penalty is then used to define an estimator
that can be efficiently computed. We show that our estimator converges to a
parsimonious element in the set of all equivalent data-generating models, in a
double asymptotic regime where the number of component time series is allowed
to grow with sample size. Further, we derive non-asymptotic upper bounds on the
estimation error of our method relative to our specially identified target.
Novel theoretical machinery includes non-asymptotic analysis of infinite-order
VAR, elastic net estimation under a singular covariance structure of
regressors, and new concentration inequalities for quadratic forms of random
variables from Gaussian time series. We illustrate the competitive performance
of our methods in simulation and several application domains, including
macroeconomic forecasting, demand forecasting, and volatility forecasting.
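One ingredient the abstract mentions is estimating a long (truncated infinite-order) VAR under a strongly convex penalty. As a minimal stand-in, the sketch below fits a ridge-penalized VAR(p) by its closed form; the paper's user-specified penalty and VARMA identification step are not reproduced, and all dimensions are assumed for illustration.

```python
import numpy as np

# Ridge-penalized long VAR as a toy stand-in for strongly convex
# penalized estimation (the paper's penalty is user-specified; ridge is
# chosen here only because it has a closed-form solution).

rng = np.random.default_rng(2)
d, T, p = 3, 400, 5              # hypothetical: 3 series, lag order 5

# Simulate a stable VAR(1) as the data-generating process.
A = 0.5 * np.eye(d)
Y = np.zeros((T, d))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A.T + rng.standard_normal(d)

# Stack lags 1..p into the design matrix for rows t = p..T-1.
X = np.hstack([Y[p - j - 1:T - j - 1] for j in range(p)])
Z = Y[p:]

lam = 1.0                        # ridge penalty strength (assumed)
B = np.linalg.solve(X.T @ X + lam * np.eye(d * p), X.T @ Z)

A1_hat = B[:d].T                 # estimated lag-1 coefficient matrix
print(np.diag(A1_hat))
```

The recovered lag-1 block approximates the true coefficient matrix 0.5 I, while higher-lag blocks are shrunk toward zero by the penalty.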
Joint Covariance Estimation with Mutual Linear Structure
We consider the problem of joint estimation of structured covariance
matrices. Assuming the structure is unknown, estimation is achieved using
heterogeneous training sets. Namely, given groups of measurements coming from
centered populations with different covariances, our aim is to determine the
mutual structure of these covariance matrices and estimate them. Supposing that
the covariances span a low dimensional affine subspace in the space of
symmetric matrices, we develop a new efficient algorithm discovering the
structure and using it to improve the estimation. Our technique is based on the
application of principal component analysis in the matrix space. We also derive
an upper performance bound of the proposed algorithm in the Gaussian scenario
and compare it with the Cramér-Rao lower bound. Numerical simulations are
presented to illustrate the performance benefits of the proposed method.
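The matrix-space PCA idea above admits a short sketch: vectorize the group sample covariances, run PCA on them, and re-project each onto the recovered low-dimensional affine subspace. Dimensions, group count, and subspace rank below are hypothetical, and this is only a schematic of the approach, not the paper's algorithm.

```python
import numpy as np

# Several covariance matrices sharing an unknown affine structure in the
# space of symmetric matrices: denoise the individual sample covariances
# by projecting their vectorizations onto the top-r principal subspace.

rng = np.random.default_rng(3)
p, K, r, n = 6, 8, 2, 200        # dim, groups, subspace rank, samples

# Ground truth: covariances lie in an r-dimensional affine subspace.
base = np.eye(p)
dirs = [np.diag(rng.uniform(0.2, 1.0, p)) for _ in range(r)]
coeffs = rng.uniform(0.0, 1.0, (K, r))
true_covs = [base + sum(c[i] * dirs[i] for i in range(r)) for c in coeffs]

# Per-group sample covariances (centered populations).
samp = []
for S in true_covs:
    X = rng.multivariate_normal(np.zeros(p), S, size=n)
    samp.append(X.T @ X / n)

# PCA in matrix space: center the vectorized covariances, keep top r.
V = np.array([S.ravel() for S in samp])
mean = V.mean(0)
U, s, Vt = np.linalg.svd(V - mean, full_matrices=False)
proj = mean + (V - mean) @ Vt[:r].T @ Vt[:r]
denoised = [v.reshape(p, p) for v in proj]

err_raw = np.mean([np.linalg.norm(samp[k] - true_covs[k]) for k in range(K)])
err_new = np.mean([np.linalg.norm(denoised[k] - true_covs[k]) for k in range(K)])
print(err_new < err_raw)
```

Projecting onto the shared subspace discards noise components orthogonal to it, so the joint estimate typically beats the per-group sample covariances.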