
    Adjusted empirical likelihood with high-order precision

    Empirical likelihood is a popular nonparametric or semiparametric statistical method with many attractive statistical properties. Yet when the sample size is small, or the dimension of the accompanying estimating function is high, application of the empirical likelihood method can be hindered by the low precision of the chi-square approximation and by nonexistence of solutions to the estimating equations. In this paper, we show that the adjusted empirical likelihood is effective at addressing both problems. With a specific level of adjustment, the adjusted empirical likelihood achieves the high-order precision of the Bartlett correction, in addition to the advantage of a guaranteed solution to the estimating equations. Simulation results indicate that the confidence regions constructed by the adjusted empirical likelihood have coverage probabilities comparable to, or substantially more accurate than, those of the original empirical likelihood enhanced by the Bartlett correction.
    Comment: Published in the Annals of Statistics (http://dx.doi.org/10.1214/09-AOS750, http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
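    As a rough illustration of the adjustment idea, the Python sketch below computes an adjusted empirical likelihood ratio statistic for a population mean. It appends the pseudo-observation $g_{n+1} = -a_n \bar{g}$ in the spirit of Chen, Variyath and Abraham's adjustment, which guarantees that the estimating equation has a solution. The default $a_n = \max(1, \log(n)/2)$ is the commonly used generic choice, not the Bartlett-tuned level of adjustment derived in the paper, and the function names (el_lambda, adjusted_el_statistic) are invented for this sketch.

```python
import numpy as np
from scipy.stats import chi2


def el_lambda(g, tol=1e-10, max_iter=100):
    """Solve the scalar EL dual equation sum_i g_i / (1 + lam * g_i) = 0 by damped Newton.

    Assumes g is one-dimensional; the damping keeps every weight 1 + lam * g_i positive."""
    g = np.asarray(g, dtype=float)
    lam = 0.0
    for _ in range(max_iter):
        denom = 1.0 + lam * g
        score = np.sum(g / denom)
        hess = -np.sum(g ** 2 / denom ** 2)
        if hess == 0.0:                      # degenerate case: all g_i are zero
            return lam
        step = score / hess
        new_lam = lam - step
        while np.any(1.0 + new_lam * g <= 0):  # halve the step until feasible
            step *= 0.5
            new_lam = lam - step
        if abs(new_lam - lam) < tol:
            return new_lam
        lam = new_lam
    return lam


def adjusted_el_statistic(x, mu0, a_n=None):
    """Adjusted EL ratio statistic for H0: E[X] = mu0.

    Appends the pseudo-observation g_{n+1} = -a_n * mean(g), which guarantees the
    estimating equation has a solution.  a_n = max(1, log(n)/2) is the common generic
    default, NOT the Bartlett-tuned level of adjustment analyzed in the paper."""
    g = np.asarray(x, dtype=float) - mu0
    n = g.size
    if a_n is None:
        a_n = max(1.0, np.log(n) / 2.0)
    g_adj = np.append(g, -a_n * g.mean())
    lam = el_lambda(g_adj)
    return 2.0 * np.sum(np.log(1.0 + lam * g_adj))


# Usage: calibrate against the chi-square(1) limit for a one-dimensional mean
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=20)
stat = adjusted_el_statistic(x, mu0=1.0)
print(stat, chi2.sf(stat, df=1))
```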

    Hypothesis test for normal mixture models: The EM approach

    Normal mixture distributions are arguably the most important mixture models, and also the most technically challenging. Based on a set of random samples, the likelihood function of the normal mixture model is unbounded unless an artificial bound is placed on its component variance parameter. Moreover, the model is not strongly identifiable, so it is hard to differentiate between overdispersion caused by the presence of a mixture and that caused by a large variance, and it has infinite Fisher information with respect to mixing proportions. There has been extensive research on finite normal mixture models, but much of it addresses merely consistency of the point estimation or useful practical procedures, and many results require undesirable restrictions on the parameter space. We show that an EM-test for homogeneity is effective at overcoming many challenges in the context of finite normal mixtures. We find that the limiting distribution of the EM-test is a simple function of the $0.5\chi^2_0 + 0.5\chi^2_1$ and $\chi^2_1$ distributions when the mixing variances are equal but unknown, and of the $\chi^2_2$ distribution when the variances are unequal and unknown. Simulations show that the limiting distributions approximate the finite-sample distribution satisfactorily. Two genetic examples are used to illustrate the application of the EM-test.
    Comment: Published in the Annals of Statistics (http://dx.doi.org/10.1214/08-AOS651, http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
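    The abstract specifies the limiting law only through these chi-square building blocks, so the sketch below simply shows how the $0.5\chi^2_0 + 0.5\chi^2_1$ reference distribution can be evaluated in closed form and checked by simulation when calibrating a test statistic against it; the helper name pvalue_half_chibar and the Monte Carlo check are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.stats import chi2


def pvalue_half_chibar(t):
    """Right-tail probability of the 0.5*chi2_0 + 0.5*chi2_1 mixture:
    a point mass at zero and a chi-square(1), each with weight one half."""
    return 0.5 * chi2.sf(t, df=1) if t > 0 else 1.0


# Monte Carlo check: max(0, Z)^2 with Z standard normal has exactly this mixture law
rng = np.random.default_rng(1)
z = rng.standard_normal(100_000)
draws = np.where(z > 0.0, z ** 2, 0.0)

t = 2.71  # roughly the 95th percentile of the mixture, since 0.5 * P(chi2_1 > t) = 0.05
print(np.mean(draws > t))      # ~0.05 by simulation
print(pvalue_half_chibar(t))   # ~0.05 in closed form
```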

    A robust error estimator and a residual-free error indicator for reduced basis methods

    The Reduced Basis Method (RBM) is a rigorous model reduction approach for solving parametrized partial differential equations. It identifies a low-dimensional subspace for approximation of the parametric solution manifold that is embedded in a high-dimensional space. A reduced order model is subsequently constructed in this subspace. RBM relies on residual-based error indicators or a posteriori error bounds to guide construction of the reduced solution subspace, to serve as a stopping criterion, and to certify the resulting surrogate solutions. Unfortunately, it is well known that the standard algorithm for residual norm computation suffers from premature stagnation at the level of the square root of machine precision. In this paper, we develop two alternatives to the standard offline phase of reduced basis algorithms. First, we design a robust strategy for computing residual error indicators that allows RBM algorithms to enrich the solution subspace with accuracy beyond root machine precision. Second, we propose a new error indicator based on the Lebesgue function in interpolation theory. This error indicator does not require computation of residual norms; instead, it only requires the ability to compute the RBM solution. The residual-free indicator is rigorous in that it bounds the error committed by the RBM approximation, but only up to an uncomputable multiplicative constant. Because of this, the residual-free indicator is effective for choosing snapshots during the offline RBM phase, but cannot currently be used to certify the error that the approximation commits. However, it circumvents the need for a posteriori analysis of numerical methods, and therefore can be effective on problems where such a rigorous estimate is hard to derive.
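    To make the role of the residual-based indicator concrete, the following Python sketch runs a minimal greedy reduced-basis loop for an affinely parametrized linear system $A(\mu) = A_0 + \mu A_1$. It uses the plain residual norm as the error indicator, i.e. the standard quantity whose naive computation the paper argues stagnates near the square root of machine precision; the robust strategy and the Lebesgue-function indicator proposed in the paper are not reproduced here, and the problem setup and the function name rb_greedy are invented for illustration.

```python
import numpy as np


def rb_greedy(A0, A1, b, mus, n_max=5, tol=1e-8):
    """Greedy reduced-basis construction for A(mu) x = b with A(mu) = A0 + mu * A1.

    Uses the plain residual norm ||b - A(mu) V x_rb|| as the error indicator -- the
    standard indicator, not the robust or residual-free (Lebesgue-function)
    alternatives proposed in the paper."""
    N = A0.shape[0]
    V = np.zeros((N, 0))                       # orthonormal reduced basis
    chosen = []
    for _ in range(n_max):
        errs = []
        for mu in mus:                         # scan the training set of parameters
            A = A0 + mu * A1
            if V.shape[1] == 0:
                x_rb_full = np.zeros(N)
            else:
                Ar = V.T @ A @ V               # Galerkin projection of the operator
                br = V.T @ b
                x_rb_full = V @ np.linalg.solve(Ar, br)
            errs.append(np.linalg.norm(b - A @ x_rb_full))
        k = int(np.argmax(errs))
        if errs[k] < tol:
            break
        chosen.append(float(mus[k]))
        # enrich the basis with the full ("truth") solution at the worst parameter
        x_full = np.linalg.solve(A0 + mus[k] * A1, b)
        v = x_full - V @ (V.T @ x_full)        # Gram-Schmidt against the current basis
        if np.linalg.norm(v) < 1e-14:
            break
        V = np.column_stack([V, v / np.linalg.norm(v)])
    return V, chosen


# Usage on a small synthetic, well-conditioned problem
rng = np.random.default_rng(2)
N = 100
A0 = 3.0 * np.eye(N) + 0.1 * rng.standard_normal((N, N))
A1 = 0.1 * rng.standard_normal((N, N))
b = rng.standard_normal(N)
V, chosen = rb_greedy(A0, A1, b, mus=np.linspace(0.0, 1.0, 21))
print(V.shape, chosen)
```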