
    Continuous invertibility and stable QML estimation of the EGARCH(1,1) model

    We introduce the notion of continuous invertibility on a compact set for volatility models driven by a Stochastic Recurrence Equation (SRE). We prove the strong consistency of the Quasi Maximum Likelihood Estimator (QMLE) when the optimization procedure is done on a continuously invertible domain. This approach gives for the first time the strong consistency of the QMLE used by Nelson (1991) for the EGARCH(1,1) model, under explicit but non-observable conditions. In practice, we propose to stabilize the QMLE by constraining the optimization procedure to an empirical continuously invertible domain. The new method, called Stable QMLE (SQMLE), is strongly consistent when the observations follow an invertible EGARCH(1,1) model. We also give the asymptotic normality of the SQMLE under additional minimal assumptions.
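    As a rough illustration of the object being stabilized, here is a minimal sketch of a Gaussian QMLE for an EGARCH(1,1)-type recursion; the parametrization, the initial log-variance, and the fixed box constraint standing in for the paper's empirical continuously invertible domain are all illustrative assumptions, not the authors' exact construction.

```python
import numpy as np
from scipy.optimize import minimize

def egarch_quasi_loglik(params, x, log_sigma2_init=0.0):
    """Gaussian quasi-log-likelihood of an EGARCH(1,1)-type recursion.

    One common parametrization (a sketch, not necessarily the paper's):
        log sigma_t^2 = omega + beta * log sigma_{t-1}^2
                        + gamma * eps_{t-1} + delta * |eps_{t-1}|,
        eps_t = x_t / sigma_t.
    Filtering log sigma_t^2 from the data inverts this recursion,
    which is exactly where continuous invertibility matters.
    """
    omega, beta, gamma, delta = params
    log_s2 = log_sigma2_init
    loglik = 0.0
    for xt in x:
        s2 = np.exp(log_s2)
        loglik += -0.5 * (np.log(2 * np.pi) + log_s2 + xt**2 / s2)
        eps = xt / np.sqrt(s2)
        log_s2 = omega + beta * log_s2 + gamma * eps + delta * abs(eps)
    return loglik

def stable_qmle(x):
    """QMLE with the search restricted to a stability region.

    The fixed box |beta| <= 0.99 is only a crude stand-in for the
    paper's data-driven empirical continuously invertible domain.
    """
    result = minimize(
        lambda p: -egarch_quasi_loglik(p, x),
        x0=np.array([0.0, 0.5, 0.0, 0.1]),
        bounds=[(-5.0, 5.0), (-0.99, 0.99), (-2.0, 2.0), (-2.0, 2.0)],
    )
    return result.x  # (omega, beta, gamma, delta) estimates
```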

    Deviation inequalities for sums of weakly dependent time series

    In this paper we give new deviation inequalities of Bernstein's type for the partial sums of weakly dependent time series. The loss incurred relative to the independent case is studied carefully. We give non-mixing examples, such as dynamical systems and Bernoulli shifts, for which our deviation inequalities hold. The proofs are based on the blocking technique and different coupling arguments.
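    For context, the independent-case benchmark from which these inequalities measure a loss is the classical Bernstein bound: for independent centered random variables $X_1,\dots,X_n$ with $|X_i|\le c$,

```latex
\[
  \mathbb{P}\!\left(\Big|\sum_{i=1}^n X_i\Big| \ge t\right)
  \;\le\; 2\exp\!\left(-\frac{t^2}{2\,(v + ct/3)}\right),
  \qquad v = \sum_{i=1}^n \operatorname{Var}(X_i).
\]
```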

    Exponential inequalities for unbounded functions of geometrically ergodic Markov chains. Applications to quantitative error bounds for regenerative Metropolis algorithms

    The aim of this note is to investigate the concentration properties of unbounded functions of geometrically ergodic Markov chains. We derive concentration properties of centered functions with respect to the square of the Lyapunov function appearing in the drift condition satisfied by the Markov chain. We apply the new exponential inequalities to derive confidence intervals for MCMC algorithms. Quantitative error bounds are provided for the regenerative Metropolis algorithm of [5].
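    For reference, the geometric drift condition in which this Lyapunov function appears is usually stated in its standard Meyn–Tweedie form (the paper's exact conventions may differ slightly): there exist $V \ge 1$, constants $\lambda < 1$, $b < \infty$, and a small set $C$ such that

```latex
\[
  PV(x) \;:=\; \int V(y)\, P(x,\mathrm{d}y)
  \;\le\; \lambda\, V(x) + b\,\mathbf{1}_C(x)
  \qquad \text{for all } x.
\]
```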

    Model selection for weakly dependent time series forecasting

    Observing a stationary time series, we propose a two-step procedure for the prediction of its next value. The first step follows the machine learning theory paradigm and consists in determining a set of possible predictors as randomized estimators in (possibly numerous) different predictive models. The second step follows the model selection paradigm and consists in choosing one predictor with good properties among all the predictors of the first step. We study our procedure for two different types of observations: causal Bernoulli shifts and bounded weakly dependent processes. In both cases, we give oracle inequalities: the risk of the chosen predictor is close to the best prediction risk over all predictive models that we consider. We apply our procedure to predictive models such as linear predictors, neural network predictors and non-parametric autoregressive predictors.
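    A minimal sketch of such a two-step scheme, assuming generic scikit-learn-style regressors as candidate predictors and plain empirical risk for the selection step (the paper's randomized estimators and penalized selection are more refined):

```python
import numpy as np

def make_lagged(x, lag):
    """Turn a univariate series into (lagged features, next value) pairs."""
    x = np.asarray(x, dtype=float)
    X = np.column_stack([x[i:len(x) - lag + i] for i in range(lag)])
    return X, x[lag:]

def two_step_forecast(x, models, lag=3, split=0.7):
    """Step 1: fit each candidate model on the first part of the series.
    Step 2: select the one with the smallest empirical risk on the rest."""
    X, y = make_lagged(x, lag)
    n_fit = int(split * len(y))
    risks = []
    for model in models:
        model.fit(X[:n_fit], y[:n_fit])
        pred = model.predict(X[n_fit:])
        risks.append(np.mean((pred - y[n_fit:]) ** 2))
    best = models[int(np.argmin(risks))]
    # Forecast the next value from the last lag window.
    return best.predict(np.asarray(x, dtype=float)[-lag:].reshape(1, -1))[0]
```

    Here `models` could be, e.g., `[LinearRegression(), MLPRegressor()]` from scikit-learn; the point is only the split between fitting the candidates and selecting among them.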

    The cluster index of regularly varying sequences with applications to limit theory for functions of multivariate Markov chains

    We introduce the cluster index of a multivariate regularly varying stationary sequence and characterize the index in terms of the spectral tail process. This index plays a major role in limit theory for partial sums of regularly varying sequences. We illustrate the use of the cluster index by characterizing infinite-variance stable limit distributions and precise large deviation results for sums of multivariate functions acting on a stationary Markov chain under a drift condition.
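    For orientation, the spectral tail process used in this characterization is that of Basrak and Segers (2009): for a stationary regularly varying sequence $(X_t)$, there exists a process $(\Theta_t)$ with $|\Theta_0| = 1$ such that, for every $h \ge 0$,

```latex
\[
  \mathcal{L}\!\left( \frac{(X_0, \dots, X_h)}{|X_0|} \,\Big|\, |X_0| > x \right)
  \;\xrightarrow[x \to \infty]{w}\;
  \mathcal{L}\big( (\Theta_0, \dots, \Theta_h) \big).
\]
```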

    Convergence rates for density estimators of weakly dependent time series

    Assuming that $(X_t)_{t\in\mathbb{Z}}$ is a vector-valued time series with a common marginal distribution admitting a density $f$, our aim is to provide a wide range of consistent estimators of $f$. We consider different methods of density estimation, such as kernel, projection or wavelet estimators. Various cases of weakly dependent series are investigated, including the $\eta$-weak dependence condition of Doukhan & Louhichi (1999) and the $\tilde\phi$-dependence of Dedecker & Prieur (2005). We thus obtain results for Markov chains, dynamical systems, bilinear models, non-causal moving averages... From a moment inequality of Doukhan & Louhichi (1999), we provide convergence rates for the error term of the estimators, in $\mathbb{L}^q$ loss or almost surely, uniformly on compact subsets.
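    As a concrete instance of the simplest of these estimators, here is a minimal kernel density estimator with a Gaussian kernel; the Silverman-type bandwidth below is an iid heuristic used only as a default, while the paper's rates concern the behavior of such estimators under weak dependence.

```python
import numpy as np

def kernel_density_estimate(sample, grid, bandwidth=None):
    """Gaussian kernel density estimator:
    f_hat(x) = (1 / (n h)) * sum_t K((x - X_t) / h)."""
    sample = np.asarray(sample, dtype=float)
    grid = np.asarray(grid, dtype=float)
    n = len(sample)
    if bandwidth is None:
        # Silverman-style rule of thumb (an iid heuristic, only a default).
        bandwidth = 1.06 * sample.std() * n ** (-1 / 5)
    u = (grid[:, None] - sample[None, :]) / bandwidth
    kernel = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return kernel.mean(axis=1) / bandwidth
```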

    Precise large deviations for dependent regularly varying sequences

    We study a precise large deviation principle for a stationary regularly varying sequence of random variables. This principle extends the classical results of A.V. Nagaev (1969) and S.V. Nagaev (1979) for iid regularly varying sequences. The proof uses an idea of Jakubowski (1993, 1997) from the context of central limit theorems with infinite-variance stable limits. We illustrate the principle for stochastic volatility models, functions of a Markov chain satisfying a polynomial drift condition, and solutions of linear and non-linear stochastic recurrence equations.
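    The iid benchmark being extended can be stated schematically as follows, for $S_n = X_1 + \cdots + X_n$ and a suitable threshold sequence $c_n \to \infty$ depending on the tail index $\alpha$ (e.g. $c_n$ of order $\sqrt{n \log n}$ when $\alpha > 2$):

```latex
\[
  \sup_{x \ge c_n}
  \left| \frac{\mathbb{P}(S_n - \mathbb{E} S_n > x)}{n\,\mathbb{P}(X_1 > x)} - 1 \right|
  \;\longrightarrow\; 0
  \qquad (n \to \infty).
\]
```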

    Fast rates in learning with dependent observations

    In this paper we tackle the problem of fast rates in time series forecasting from a statistical learning perspective. In a series of papers (e.g. Meir 2000, Modha and Masry 1998, Alquier and Wintenberger 2012) it is shown that the main tools used in learning theory with iid observations can be extended to the prediction of time series. The main message of these papers is that, given a family of predictors, we are able to build a new predictor that predicts the series as well as the best predictor in the family, up to a remainder of order $1/\sqrt{n}$. It is known that this rate cannot be improved in general. In this paper, we show that in the particular case of the least square loss, and under a strong assumption on the time series ($\phi$-mixing), the remainder is actually of order $1/n$. Thus, the optimal rate for iid variables, see e.g. Tsybakov 2003, and for individual sequences, see \cite{lugosi}, is, for the first time, achieved for uniformly mixing processes. We also show that our method is optimal for aggregating sparse linear combinations of predictors.
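    Schematically, for a finite family of $M$ predictors the two regimes of oracle inequality read as follows (a generic statement with constants suppressed, not the paper's exact bounds):

```latex
\[
  R(\hat f) \;\le\; \min_{1 \le j \le M} R(f_j) + C\sqrt{\frac{\log M}{n}}
  \qquad \text{(slow rate)},
\]
\[
  R(\hat f) \;\le\; (1+\varepsilon) \min_{1 \le j \le M} R(f_j)
  + \frac{C_\varepsilon \log M}{n}
  \qquad \text{(fast rate)}.
\]
```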