
    Bootstrapping Autoregressions with Conditional Heteroskedasticity of Unknown Form

    Conditional heteroskedasticity is an important feature of many macroeconomic and financial time series. Standard residual-based bootstrap procedures for dynamic regression models treat the regression error as i.i.d. These procedures are invalid in the presence of conditional heteroskedasticity. We establish the asymptotic validity of three easy-to-implement alternative bootstrap proposals for stationary autoregressive processes with m.d.s. errors subject to possible conditional heteroskedasticity of unknown form. These proposals are the fixed-design wild bootstrap, the recursive-design wild bootstrap and the pairwise bootstrap. In a simulation study all three procedures tend to be more accurate in small samples than the conventional large-sample approximation based on robust standard errors. In contrast, standard residual-based bootstrap methods for models with i.i.d. errors may be very inaccurate if the i.i.d. assumption is violated. We conclude that in many empirical applications the proposed robust bootstrap procedures should routinely replace conventional bootstrap procedures based on the i.i.d. error assumption.
    Keywords: wild bootstrap, pairwise bootstrap, robust inference
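    To make the proposals concrete, here is a minimal NumPy sketch of the fixed-design wild bootstrap for an AR(1) with ARCH-type errors. The Rademacher multipliers, the AR(1)/ARCH(1) parameters, and all function names are illustrative assumptions, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        def ar1_ols(y):
            """OLS estimate of rho in y_t = rho * y_{t-1} + u_t, plus residuals."""
            x, z = y[:-1], y[1:]
            rho = x @ z / (x @ x)
            return rho, z - rho * x

        def fixed_design_wild_bootstrap(y, n_boot=999):
            """Fixed-design wild bootstrap: the lagged regressor is held at its
            sample values and the residuals are scaled by i.i.d. Rademacher draws."""
            rho_hat, resid = ar1_ols(y)
            x = y[:-1]
            rho_star = np.empty(n_boot)
            for b in range(n_boot):
                eta = rng.choice([-1.0, 1.0], size=resid.size)  # Rademacher multipliers
                y_star = rho_hat * x + resid * eta              # regenerate on the fixed design
                rho_star[b] = x @ y_star / (x @ x)
            return rho_hat, rho_star

        # Simulate an AR(1) whose errors follow an ARCH(1) scheme, so the error
        # is an m.d.s. with conditional heteroskedasticity of the kind at issue.
        T = 200
        u = rng.standard_normal(T)
        y = np.zeros(T)
        for t in range(1, T):
            u[t] *= np.sqrt(0.2 + 0.7 * u[t - 1] ** 2)
            y[t] = 0.5 * y[t - 1] + u[t]

        rho_hat, rho_star = fixed_design_wild_bootstrap(y)
        lo, hi = np.percentile(rho_star, [2.5, 97.5])
        print(f"rho_hat = {rho_hat:.3f}, 95% bootstrap interval = [{lo:.3f}, {hi:.3f}]")

    A recursive-design variant would instead rebuild y* recursively from the scaled residuals rather than conditioning on the observed lags.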

    Limit Theorems for Stochastic Approximation Algorithms With Application to General Urn Models

    In the present paper we study multidimensional stochastic approximation algorithms where the drift function h is smooth and its Jacobian matrix is diagonalizable over C, under the assumption that all the eigenvalues of this matrix lie in the region Re(z) > 0. We give results on the fluctuation of the process around the stable equilibrium point of h. We extend the limit theorem for the one-dimensional Robbins-Monro algorithm [MR73]. We also apply these limit theorems to a class of urn models, demonstrating the efficiency of the method.
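    For orientation, the following is a generic sketch of a stochastic approximation recursion of this type, written in the Robbins-Monro form x_{n+1} = x_n - gamma_n (h(x_n) + noise). The drift h, its Jacobian, and the step sizes are illustrative assumptions, not those of the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def robbins_monro(h, x0, n_steps=50_000, gamma0=0.5):
            """Stochastic approximation x_{n+1} = x_n - gamma_n * (h(x_n) + noise),
            with the classical step sizes gamma_n = gamma0 / (n + 1)."""
            x = np.asarray(x0, dtype=float)
            for n in range(n_steps):
                gamma = gamma0 / (n + 1)
                noise = rng.standard_normal(x.shape)   # martingale-difference noise
                x = x - gamma * (h(x) + noise)
            return x

        # Illustrative drift with a stable equilibrium at x* = (1, -1); the
        # Jacobian A is diagonalizable and its eigenvalues (1 and 2) satisfy
        # Re(z) > 0, matching the stability condition in the abstract.
        A = np.array([[1.0, 0.5],
                      [0.0, 2.0]])
        x_star = np.array([1.0, -1.0])
        h = lambda x: A @ (x - x_star)

        print(robbins_monro(h, x0=[0.0, 0.0]))   # approaches x* = (1, -1)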

    Fluctuations, stability and instability of a distributed particle filter with local exchange

    We study a distributed particle filter proposed by Bolić et al. (2005). This algorithm involves m groups of M particles, with interaction between groups occurring through a "local exchange" mechanism. We establish a central limit theorem in the regime where M is fixed and m → ∞. A formula we obtain for the asymptotic variance can be interpreted in terms of colliding Markov chains, enabling analytic and numerical evaluations of how the asymptotic variance behaves over time, with comparison to a benchmark algorithm consisting of m independent particle filters. We prove that, subject to regularity conditions, when m is fixed both algorithms converge time-uniformly at rate M^{-1/2}. Through use of our asymptotic variance formula we give counter-examples satisfying the same regularity conditions to show that when M is fixed neither algorithm, in general, converges time-uniformly at rate m^{-1/2}.
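    As a rough illustration of the setup (not the algorithm of Bolić et al.), the sketch below runs m groups of M bootstrap-filter particles on a toy linear-Gaussian model and, after each resampling step, lets every group hand a block of particles to its neighbour on a ring. The exchange size K, the ring topology, and the state-space model are all assumptions made for the sketch.

        import numpy as np

        rng = np.random.default_rng(2)
        m, M, T = 8, 32, 50              # groups, particles per group, time steps
        K = M // 4                       # particles handed to the ring neighbour

        # Toy linear-Gaussian model: x_t = 0.9 x_{t-1} + v_t,  y_t = x_t + w_t.
        x_true, ys = 0.0, []
        for _ in range(T):
            x_true = 0.9 * x_true + rng.standard_normal()
            ys.append(x_true + rng.standard_normal())

        particles = rng.standard_normal((m, M))
        for y in ys:
            # Propagate every particle through the dynamics (bootstrap proposal).
            particles = 0.9 * particles + rng.standard_normal((m, M))
            # Weight by the observation likelihood, normalised within each group.
            logw = -0.5 * (y - particles) ** 2
            w = np.exp(logw - logw.max(axis=1, keepdims=True))
            w /= w.sum(axis=1, keepdims=True)
            # Multinomial resampling, performed locally within each group.
            for g in range(m):
                particles[g] = rng.choice(particles[g], size=M, p=w[g])
            # "Local exchange": every group passes its first K particles to the
            # next group on a ring, so information spreads without global interaction.
            particles[:, :K] = np.roll(particles[:, :K], shift=1, axis=0)

        print("true state:", round(x_true, 3), " filter mean:", round(particles.mean(), 3))

    Dropping the exchange step reduces the sketch to m independent particle filters, the benchmark algorithm mentioned in the abstract.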

    Bayesian Model Based Approaches In The Analysis Of Chromatin Structure And Motif Discovery

    Efficient detection of transcription factor (TF) binding sites is an important and unsolved problem in computational genomics. Recently, owing to the poor predictive ability of motif-finding algorithms and the proliferation of high-throughput genomic technologies, there has been a drive to use secondary information, such as the positioning of nucleosomes, to improve predictions. Nucleosomes prevent transcription factor binding at occupied sites by blocking TF access to the DNA. We aimed to construct an accurate map of nucleosome-free regions (NFRs) based on data from high-throughput genomic tiling arrays in yeast. Direct use of hidden Markov models is not always applicable because of variable-sized gaps and missing data, so we extend the hidden Markov model procedure to a continuous-time version while efficiently incorporating DNA sequence features relevant to nucleosome formation. Simulation studies and an application to a yeast nucleosomal assay demonstrate the advantages of the new method. The established biological role of nucleosomes in relation to TF binding led us to formulate a joint model in the fourth chapter. The algorithm was implemented on the FAIRE data set, and comparisons were made with existing motif-search algorithms. The fifth chapter deals with HMM asymptotics. We obtain results on the consistency, asymptotic normality, and contiguity of a hidden Markov model, which support our inference on the convergence properties of the posterior and the consistency of the Bayesian posterior estimates. This leads to the conclusion that Bayesian inference for an HMM run on a sufficiently large dataset (typical in the case of genomic data) comes very close to the underlying true parameters, as in the case of i.i.d. models. The result is general enough to justify HMM inference in a wide variety of datasets.
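    For context, the sketch below implements the standard discrete-time HMM forward recursion that a continuous-time extension of the kind described would generalize. The two-state "nucleosomal vs. nucleosome-free" toy example and all parameters are illustrative assumptions.

        import numpy as np

        def forward_log_likelihood(log_pi, log_A, log_emis):
            """Discrete-time HMM forward recursion in log space.
            log_pi: (S,) initial log-probs; log_A: (S, S) transition log-probs;
            log_emis: (T, S) emission log-likelihoods for each observation."""
            alpha = log_pi + log_emis[0]
            for t in range(1, log_emis.shape[0]):
                c = alpha.max()                                  # manual logsumexp
                alpha = c + np.log(np.exp(alpha - c) @ np.exp(log_A)) + log_emis[t]
            c = alpha.max()
            return c + np.log(np.exp(alpha - c).sum())

        # Toy example: state 0 ~ nucleosomal signal, state 1 ~ nucleosome-free.
        rng = np.random.default_rng(3)
        obs = np.concatenate([rng.normal(1.0, 1.0, 30), rng.normal(-1.0, 1.0, 20)])
        log_pi = np.log([0.5, 0.5])
        log_A = np.log([[0.95, 0.05],
                        [0.05, 0.95]])
        means = np.array([1.0, -1.0])
        log_emis = -0.5 * (obs[:, None] - means) ** 2    # Gaussian, unit variance, up to a constant
        print(forward_log_likelihood(log_pi, log_A, log_emis))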

    Asymptotic Optimality of Conditioned Stochastic Gradient Descent

    In this paper, we investigate a general class of stochastic gradient descent (SGD) algorithms, called conditioned SGD, based on a preconditioning of the gradient direction. Under some mild assumptions, namely the L-smoothness of the non-convex objective function and a weak growth condition on the noise, we establish almost sure convergence and asymptotic normality for a broad class of conditioning matrices. In particular, when the conditioning matrix is an estimate of the inverse Hessian at the optimal point, the algorithm is proved to be asymptotically optimal. The benefits of this approach are validated on simulated and real datasets.
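    A minimal sketch of the preconditioned update x_{k+1} = x_k - gamma_k C_k g_k follows, assuming a quadratic objective so that the exact inverse Hessian can stand in for the estimated conditioning matrix C_k. The objective, step-size schedule, and all names are illustrative.

        import numpy as np

        rng = np.random.default_rng(4)

        # Illustrative strongly convex objective f(x) = 0.5 * x' H x with noisy gradients.
        H = np.array([[10.0, 0.0],
                      [0.0, 0.1]])       # deliberately ill-conditioned Hessian

        def noisy_grad(x):
            return H @ x + 0.1 * rng.standard_normal(2)

        def conditioned_sgd(x0, n_steps=20_000, gamma0=1.0):
            """Conditioned SGD: x_{k+1} = x_k - gamma_k * C * g_k, where C plays
            the role of an estimate of the inverse Hessian at the optimum."""
            x = np.asarray(x0, dtype=float)
            C = np.linalg.inv(H)                  # stand-in for the estimated C_k
            for k in range(n_steps):
                gamma = gamma0 / (k + 1)
                x = x - gamma * (C @ noisy_grad(x))
            return x

        print(conditioned_sgd([5.0, 5.0]))   # close to the minimizer at the origin

    With C replaced by the identity this reduces to plain SGD; the point of the conditioning is to correct for ill-conditioned curvature, here visible in the two Hessian eigenvalues.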