
    On the Coverage Bound Problem of Empirical Likelihood Methods For Time Series

    The upper bounds on the coverage probabilities of confidence regions based on the blockwise empirical likelihood [Kitamura (1997)] and nonstandard expansive empirical likelihood [Nordman et al. (2013)] methods for time series data are investigated by studying the probability of violating the convex hull constraint. The large sample bounds are derived from the pivotal limit of the blockwise empirical log-likelihood ratio obtained under the fixed-b asymptotics, which has recently been shown to provide a more accurate approximation to the finite sample distribution than the conventional chi-square approximation. Our theoretical and numerical findings suggest that both the finite sample and large sample upper bounds on coverage probabilities are strictly less than one, and that the blockwise empirical likelihood confidence region can exhibit serious undercoverage when (i) the dimension of the moment conditions is moderate or large; (ii) the positive time series dependence is strong; or (iii) the block size is large relative to the sample size. A similar finite sample coverage problem occurs for the nonstandard expansive empirical likelihood. To alleviate the coverage bound problem, we propose to penalize both empirical likelihood methods by relaxing the convex hull constraint. Numerical simulations and a data illustration demonstrate the effectiveness of the proposed remedies in delivering confidence sets with more accurate coverage.
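    The sketch below (not the authors' code; block length, solver, and the MA(1) example are illustrative choices) shows the mechanics behind the coverage bound: the blockwise empirical log-likelihood ratio for a univariate mean is computed from block means via the standard Lagrange dual, and any candidate mean outside the convex hull of the block means is assigned an infinite ratio, so such points can never enter the confidence region.

```python
# Illustrative sketch of blockwise empirical likelihood for a univariate mean,
# highlighting the convex hull constraint discussed in the abstract.
import numpy as np
from scipy.optimize import brentq

def block_means(x, b):
    """Means of fully overlapping blocks of length b."""
    n = len(x)
    return np.array([x[i:i + b].mean() for i in range(n - b + 1)])

def blockwise_el_ratio(x, mu, b):
    """-2 log blockwise empirical likelihood ratio at candidate mean mu.

    Returns np.inf when mu violates the convex hull constraint, i.e. mu lies
    outside [min(block means), max(block means)]; such points can never be
    covered, which is what caps the coverage probability.
    """
    t = block_means(x, b) - mu
    if t.min() >= 0 or t.max() <= 0:          # convex hull constraint violated
        return np.inf
    # Dual problem: find lambda with sum(t / (1 + lambda * t)) = 0,
    # subject to 1 + lambda * t_i > 0 for all i.
    lo, hi = -1.0 / t.max(), -1.0 / t.min()
    eps = 1e-8 * (hi - lo)
    score = lambda lam: np.sum(t / (1.0 + lam * t))
    lam = brentq(score, lo + eps, hi - eps)
    return 2.0 * np.sum(np.log1p(lam * t))

rng = np.random.default_rng(0)
e = rng.standard_normal(500)
x = np.array([e[i] + 0.5 * e[i - 1] for i in range(1, 500)])   # MA(1) series
print(blockwise_el_ratio(x, mu=0.0, b=20))
```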

    Fixed-smoothing asymptotics for time series

    In this paper, we derive higher order Edgeworth expansions for the finite sample distributions of the subsampling-based t-statistic and the Wald statistic in the Gaussian location model under the so-called fixed-smoothing paradigm. In particular, we show that the error of the asymptotic approximation is of the order of the reciprocal of the sample size, and we obtain explicit forms for the leading error terms in the expansions. The results are used to justify the second-order correctness of a new bootstrap method, the Gaussian dependent bootstrap, in the context of the Gaussian location model. Comment: Published at http://dx.doi.org/10.1214/13-AOS1113 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
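    A minimal sketch of the subsampling-based t-statistic in this setting, assuming non-overlapping blocks and a batch-means long-run variance estimator (specific choices made here for illustration): under the fixed-smoothing paradigm the number of blocks K is held fixed as the sample size grows, and the statistic is compared against a Student-t(K-1) critical value rather than a normal one.

```python
# Sketch: subsampling (batch-means) t-statistic for the mean of a time series,
# with a fixed-smoothing critical value.
import numpy as np
from scipy.stats import t as student_t

def subsampling_t_stat(x, K):
    """t-statistic for the mean, long-run variance estimated from K block means."""
    b = len(x) // K                           # block (batch) length
    n = K * b                                 # observations actually used
    means = x[:n].reshape(K, b).mean(axis=1)  # K non-overlapping block means
    xbar = means.mean()
    lrv = b * np.sum((means - xbar) ** 2) / (K - 1)   # long-run variance estimate
    return np.sqrt(n) * xbar / np.sqrt(lrv)

rng = np.random.default_rng(1)
x = rng.standard_normal(1024)                 # Gaussian location model, true mean 0
K = 8
print(subsampling_t_stat(x, K), student_t.ppf(0.975, df=K - 1))
```

    In this i.i.d. Gaussian example the statistic is exactly t-distributed with K-1 degrees of freedom, which is what the fixed-smoothing reference distribution captures and the conventional normal approximation misses.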

    Adaptive Testing for Alphas in High-dimensional Factor Pricing Models

    This paper proposes a new procedure to validate the multi-factor pricing theory by testing the presence of alpha in linear factor pricing models with a large number of assets. Because inefficient pricing by the market is likely to occur for only a small fraction of exceptional assets, we develop a testing procedure that is particularly powerful against sparse signals. Based on the high-dimensional Gaussian approximation theory, we propose a simulation-based approach to approximate the limiting null distribution of the test. Our numerical studies show that the new procedure delivers reasonable size and achieves substantial power improvements over existing tests under sparse alternatives, especially for weak signals.
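    A simplified illustration of this style of test, not the paper's exact adaptive procedure: in the model r_it = alpha_i + beta_i' f_t + eps_it, a max-type statistic over studentized alpha estimates is powerful against sparse alphas, and its null distribution can be simulated by a Gaussian multiplier bootstrap on the per-asset scores (the homoskedastic standard error below is a crude choice made only for the sketch).

```python
# Sketch: max-type test for H0: alpha_i = 0 for all assets, with a Gaussian
# multiplier bootstrap approximation of the null distribution.
import numpy as np

def max_alpha_test(R, F, n_boot=2000, seed=0):
    """R: T x N asset returns, F: T x K factor returns."""
    T, N = R.shape
    X = np.column_stack([np.ones(T), F])           # first column = alpha
    coef, *_ = np.linalg.lstsq(X, R, rcond=None)   # (K+1) x N OLS coefficients
    resid = R - X @ coef
    alpha_hat = coef[0]
    H = np.linalg.inv(X.T @ X)
    se = np.sqrt(H[0, 0] * resid.var(axis=0, ddof=X.shape[1]))  # simple homoskedastic SE
    stat = np.max(np.abs(alpha_hat / se))
    # Multiplier bootstrap: perturb the per-observation scores of alpha_hat
    # with i.i.d. N(0,1) weights to mimic the null distribution of the max.
    a = (X @ H)[:, 0]                              # influence of each observation on alpha_hat
    scores = a[:, None] * resid                    # T x N
    rng = np.random.default_rng(seed)
    boot = np.empty(n_boot)
    for m in range(n_boot):
        w = rng.standard_normal(T)
        boot[m] = np.max(np.abs(w @ scores) / se)
    pval = (1 + np.sum(boot >= stat)) / (1 + n_boot)
    return stat, pval

rng = np.random.default_rng(2)
T, N, K = 240, 400, 3
F = rng.standard_normal((T, K))
beta = rng.standard_normal((K, N))
alpha = np.zeros(N); alpha[:5] = 0.3               # a few sparse non-zero alphas
R = alpha + F @ beta + rng.standard_normal((T, N))
print(max_alpha_test(R, F))
```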

    Structure Adaptive Lasso

    The Lasso is of fundamental importance in high-dimensional statistics and has been routinely used to regress a response on a high-dimensional set of predictors. In many scientific applications, there exists external information that encodes the predictive power and sparsity structure of the predictors. In this article, we develop a new method, called the Structure Adaptive Lasso (SA-Lasso), to incorporate this potentially useful side information into a penalized regression. The basic idea is to translate the external information into different penalization strengths for the regression coefficients. We study the risk properties of the resulting estimator. In particular, we generalize the state evolution framework, recently introduced for the analysis of the approximate message-passing algorithm, to the SA-Lasso setting. We show that the finite sample risk of the SA-Lasso estimator is consistent with the theoretical risk predicted by the state evolution equation. Our theory suggests that the SA-Lasso with an informative group or covariate structure can significantly outperform the Lasso, the Adaptive Lasso, and the Sparse Group Lasso. This is further confirmed in our numerical studies. We also demonstrate the usefulness and superiority of our method in a real data application. Comment: 42 pages, 24 figures.
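    A minimal sketch of the core mechanism (per-coefficient penalty weights), assuming the external information has already been converted into a weight w_j > 0 for each predictor, with smaller weights meaning weaker penalization; the weighting rule itself is the paper's contribution and is not reproduced here. A weighted Lasso can be fit with a standard solver by rescaling columns and mapping the coefficients back.

```python
# Sketch: weighted Lasso, solving (1/2n)||y - Xb||^2 + lam * sum_j w_j |b_j|
# by column rescaling and an ordinary Lasso solver.
import numpy as np
from sklearn.linear_model import Lasso

def weighted_lasso(X, y, weights, lam):
    Xs = X / weights                       # scale column j by 1 / w_j
    fit = Lasso(alpha=lam, fit_intercept=False).fit(Xs, y)
    return fit.coef_ / weights             # map back to the original scale

rng = np.random.default_rng(3)
n, p = 200, 500
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:10] = 1.0
y = X @ beta + rng.standard_normal(n)
w = np.where(np.arange(p) < 50, 0.2, 1.0)  # side information: first 50 predictors likely active
print(np.nonzero(weighted_lasso(X, y, w, lam=0.1))[0][:15])
```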

    Joint Mirror Procedure: Controlling False Discovery Rate for Identifying Simultaneous Signals

    In many applications, identifying a single feature of interest requires testing the statistical significance of several hypotheses. Examples include mediation analysis, which simultaneously examines the existence of the exposure-mediator and mediator-outcome effects, and replicability analysis, which aims to identify simultaneous signals that exhibit statistical significance across multiple independent experiments. In this work, we develop a novel procedure, named joint mirror (JM), to detect such features while controlling the false discovery rate (FDR) in finite samples. The JM procedure iteratively shrinks the rejection region based on partially revealed information until a conservative false discovery proportion (FDP) estimate falls below the target FDR level. We propose an efficient algorithm to implement the method. Extensive simulations demonstrate that our procedure controls the modified FDR, a more stringent error measure than the conventional FDR, and provides power improvements in several settings. Our method is further illustrated through real-world applications in mediation and replicability analyses.
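    A generic sketch of the mirror principle that this family of procedures builds on, not the JM procedure itself: given per-feature statistics W_j that are symmetric about zero under the null, counts on the negative side give a conservative FDP estimate for rejections on the positive side, and the rejection threshold is shrunk until that estimate falls below the target level q. The toy simultaneous-signal statistic below (sign of the product times the smaller magnitude) is an illustrative construction, not the paper's.

```python
# Sketch: mirror-style threshold selection with a conservative FDP estimate.
import numpy as np

def mirror_threshold(W, q=0.1):
    """Smallest threshold t with estimated FDP <= q; reject {j : W_j >= t}."""
    for t in np.sort(np.abs(W[W != 0])):
        fdp_hat = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= q:
            return t
    return np.inf

# Toy example: a feature is a true signal only if it is non-null in both of
# two independent experiments (a simultaneous signal).
rng = np.random.default_rng(4)
m, k = 2000, 100
z1 = rng.standard_normal(m); z2 = rng.standard_normal(m)
z1[:k] += 3.0; z2[:k] += 3.0
# Symmetric about zero whenever at least one coordinate is null; large and
# positive only when both experiments show a strong effect of matching sign.
W = np.sign(z1 * z2) * np.minimum(np.abs(z1), np.abs(z2))
t = mirror_threshold(W, q=0.1)
print(t, np.sum(W >= t))
```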