
    Testing the order of a model

    This paper deals with order identification for nested models in the i.i.d. framework. We study the asymptotic efficiency of two generalized likelihood ratio tests of the order. They are based on two estimators which are proved to be strongly consistent. A version of Stein's lemma yields an optimal underestimation error exponent. The lemma also implies that the overestimation error exponent is necessarily trivial. Our tests admit nontrivial underestimation error exponents. The optimal underestimation error exponent is achieved in some situations. The overestimation error can decay exponentially with respect to a positive power of the number of observations. These results are proved under mild assumptions by relating the underestimation (resp. overestimation) error to large (resp. moderate) deviations of the log-likelihood process. In particular, it is not necessary that the classical Cramér condition be satisfied; namely, the log-densities are not required to admit every exponential moment. Three benchmark examples with specific difficulties (location mixture of normal distributions, abrupt changes and various regressions) are detailed so as to illustrate the generality of our results.
    Comment: Published at http://dx.doi.org/10.1214/009053606000000344 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
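    The following is a minimal sketch, not the paper's construction: it selects the order of a nested polynomial regression by accepting the smallest order at which the generalized likelihood ratio gain falls below a penalty threshold. The simulated data, the log n penalty and the function names are all illustrative assumptions.

```python
# Illustrative order selection for nested models; data and penalty are
# assumptions, not the paper's benchmark examples.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(-1, 1, n)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0, 0.3, n)  # true order: 2

def max_loglik(order):
    """Maximized Gaussian log-likelihood of a degree-`order` polynomial fit."""
    X = np.vander(x, order + 1, increasing=True)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.mean((y - X @ beta) ** 2)  # Gaussian MLE of the variance
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

# Accept order k once the likelihood ratio gain from k -> k+1 drops below a
# slowly growing penalty (log n here, an assumed choice).
penalty = np.log(n)
for k in range(6):
    gain = 2 * (max_loglik(k + 1) - max_loglik(k))
    if gain < penalty:
        print(f"selected order: {k}")
        break
```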

    Moderate deviations for stabilizing functionals in geometric probability

    The purpose of the present paper is to establish explicit bounds on moderate deviation probabilities for a rather general class of geometric functionals enjoying the stabilization property, under Poisson input and the assumption of a certain control over the growth of the moments of the functional and its radius of stabilization. Our proof techniques rely on cumulant expansions and cluster measures and yield completely explicit bounds on deviation probabilities. In addition, we establish a new criterion for the limiting variance to be non-degenerate. Moreover, our main result provides a new central limit theorem, which, though stated under strong moment assumptions, does not require bounded support of the intensity of the Poisson input. We apply our results to three groups of examples: random packing models, geometric functionals based on Euclidean nearest neighbors and the sphere of influence graphs.
    Comment: 52 pages
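    As a toy illustration of the setting (not the paper's cumulant machinery), the sketch below Monte Carlo estimates deviation probabilities for a classic stabilizing functional: the sum of nearest-neighbor distances of a homogeneous Poisson process on the unit square. The intensity, replication count and thresholds are assumptions.

```python
# Monte Carlo tail estimates for a stabilizing functional under Poisson input;
# all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
lam, reps = 200.0, 2000  # Poisson intensity on [0,1]^2; Monte Carlo replications

def nn_distance_sum():
    """Sum of nearest-neighbor distances of a Poisson sample on the unit square."""
    n = rng.poisson(lam)
    if n < 2:
        return 0.0
    pts = rng.uniform(0.0, 1.0, (n, 2))
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude each point from its own neighbors
    return d.min(axis=1).sum()

vals = np.array([nn_distance_sum() for _ in range(reps)])
mu, sd = vals.mean(), vals.std()
for t in (2.0, 3.0):  # deviation thresholds on the standard-deviation scale
    print(f"P(|F - EF| > {t} sd) ~= {np.mean(np.abs(vals - mu) > t * sd):.4f}")
```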

    Functional Regression

    Functional data analysis (FDA) involves the analysis of data whose ideal units of observation are functions defined on some continuous domain, where the observed data consist of a sample of functions taken from some population, sampled on a discrete grid. Ramsay and Silverman's 1997 textbook sparked the development of this field, which has accelerated in the past 10 years to become one of the fastest-growing areas of statistics, fueled by the growing number of applications yielding this type of data. One unique characteristic of FDA is the need to combine information both across and within functions, which Ramsay and Silverman called replication and regularization, respectively. This article focuses on functional regression, the area of FDA that has received the most attention in applications and methodological development. It begins with an introduction to basis functions, the key building blocks for regularization in functional regression methods, followed by an overview of functional regression methods, split into three types: (1) functional predictor regression (scalar-on-function), (2) functional response regression (function-on-scalar) and (3) function-on-function regression. For each, the role of replication and regularization is discussed and the methodological development described in a roughly chronological manner, at times deviating from the historical timeline to group together similar methods. The primary focus is on modeling and methodology, highlighting the modeling structures that have been developed and the various regularization approaches employed. The article closes with a brief discussion of potential areas of future development in this field.
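    As a concrete illustration of basis-function regularization in scalar-on-function regression (type (1) above), the following sketch expands the coefficient function in a small Fourier basis and fits it by ridge-regularized least squares. The simulated curves, basis size and ridge level are all illustrative assumptions, not a specific published method.

```python
# Scalar-on-function regression via a Fourier basis expansion of beta(t);
# simulated data and tuning values are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 101                       # number of curves, grid points
t = np.linspace(0.0, 1.0, m)

# Functional predictors observed on a discrete grid, plus a scalar response
# y_i ~ integral of X_i(t) * beta(t) dt (Riemann sum over the grid).
X = np.array([np.sin(2 * np.pi * (t + rng.uniform())) + 0.1 * rng.normal(size=m)
              for _ in range(n)])
beta_true = np.sin(2 * np.pi * t)
y = X @ beta_true / m + rng.normal(0, 0.05, n)

# Regularization: expand beta(t) in a small Fourier basis, so the functional
# model reduces to ordinary regression on the basis scores.
K = 5
Phi = np.column_stack([np.ones(m)] +
                      [f(2 * np.pi * k * t) for k in range(1, K + 1)
                       for f in (np.sin, np.cos)])
Z = X @ Phi / m                       # score matrix, shape (n, 2K+1)
ridge = 1e-3
c = np.linalg.solve(Z.T @ Z + ridge * np.eye(Z.shape[1]), Z.T @ y)
beta_hat = Phi @ c                    # estimated coefficient function
print("max |beta_hat - beta_true| =", np.max(np.abs(beta_hat - beta_true)))
```

    The basis expansion is what makes the problem finite-dimensional: replication across curves identifies the basis scores, while the basis size and ridge level supply the regularization within curves.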

    The composite absolute penalties family for grouped and hierarchical variable selection

    Extracting useful information from high-dimensional data is an important focus of today's statistical research and practice. Penalized loss function minimization has been shown to be effective for this task both theoretically and empirically. With the virtues of both regularization and sparsity, the $L_1$-penalized squared error minimization method Lasso has been popular in regression models and beyond. In this paper, we combine different norms, including $L_1$, to form an intelligent penalty in order to add side information to the fitting of a regression or classification model and so obtain reasonable estimates. Specifically, we introduce the Composite Absolute Penalties (CAP) family, which allows given grouping and hierarchical relationships between the predictors to be expressed. CAP penalties are built by defining groups and combining the properties of norm penalties at the across-group and within-group levels. Grouped selection occurs for nonoverlapping groups. Hierarchical variable selection is reached by defining groups with particular overlapping patterns. We propose using the BLASSO and cross-validation to compute CAP estimates in general. For a subfamily of CAP estimates involving only the $L_1$ and $L_\infty$ norms, we introduce the iCAP algorithm to trace the entire regularization path for the grouped selection problem. Within this subfamily, unbiased estimates of the degrees of freedom (df) are derived so that the regularization parameter can be selected without cross-validation. CAP is shown to improve on the predictive performance of the LASSO in a series of simulated experiments, including cases with $p \gg n$ and possibly mis-specified groupings. When the complexity of a model is properly calculated, iCAP is seen to be parsimonious in the experiments.
    Comment: Published at http://dx.doi.org/10.1214/07-AOS584 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
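    A minimal sketch of the grouped-selection idea, not the iCAP path algorithm itself: a penalty with $L_1$ across groups and $L_\infty$ within groups, minimized by proximal gradient descent, drives inactive groups exactly to zero. The groups, penalty level and step size below are illustrative assumptions.

```python
# L1-across / L_inf-within group penalty fit by proximal gradient descent;
# groups, lambda and step size are assumptions, not the paper's settings.
import numpy as np

rng = np.random.default_rng(3)
n, p = 100, 9
groups = [np.arange(0, 3), np.arange(3, 6), np.arange(6, 9)]
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[0:3] = [1.0, -1.0, 0.5]          # only the first group is active
y = X @ beta_true + rng.normal(0, 0.1, n)

def proj_l1_ball(v, r):
    """Euclidean projection of v onto the L1 ball of radius r."""
    if np.abs(v).sum() <= r:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - r)[0][-1]
    tau = (css[k] - r) / (k + 1)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_linf(v, lam):
    """Prox of lam*||.||_inf via Moreau decomposition with the L1 ball."""
    return v - proj_l1_ball(v, lam)

lam = 2.0
step = 1.0 / np.linalg.norm(X, 2) ** 2     # 1 / Lipschitz constant of the loss
beta = np.zeros(p)
for _ in range(500):
    z = beta - step * X.T @ (X @ beta - y)
    for g in groups:                       # groupwise prox => grouped selection
        z[g] = prox_linf(z[g], step * lam)
    beta = z
print(np.round(beta, 3))                   # inactive groups land exactly at zero
```

    The within-group $L_\infty$ norm is what ties a group's coefficients together: the group enters or leaves the model as a unit, which is the grouped-selection behavior described above.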

    Nonconvex factor adjustments in equilibrium business cycle models: do nonlinearities matter?

    Using an equilibrium business cycle model, the authors search for aggregate nonlinearities arising from the introduction of nonconvex capital adjustment costs. They find that while such adjustment costs lead to nontrivial nonlinearities in aggregate investment demand, equilibrium investment is effectively unchanged. This finding, based on a model in which aggregate fluctuations arise through exogenous changes in total factor productivity, is robust to the introduction of shocks to the relative price of investment goods.
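    To illustrate the kind of micro-level nonconvexity the paper studies (an illustrative toy, not the authors' equilibrium model), the sketch below simulates firms following an inaction-band rule: a fixed adjustment cost means capital is adjusted toward a productivity-dependent target only when the gap is large enough. All parameter values are assumptions.

```python
# Lumpy firm-level investment under a fixed adjustment cost; an illustrative
# toy, not the authors' model. All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_firms, T = 1000, 200
band = 0.15                     # inaction band implied by the fixed cost
k = np.ones(n_firms)            # firm-level capital stocks
log_z = 0.0                     # aggregate log TFP, AR(1)
agg_inv, share_adjusting = np.zeros(T), np.zeros(T)

for s in range(T):
    log_z = 0.9 * log_z + 0.01 * rng.normal()               # TFP shock
    target = np.exp(log_z) * (1 + 0.05 * rng.normal(size=n_firms))
    gap = target - k
    adjust = np.abs(gap) > band  # pay the fixed cost only for large gaps
    inv = np.where(adjust, gap, 0.0)
    k = k + inv
    agg_inv[s], share_adjusting[s] = inv.sum(), adjust.mean()

print(f"mean share of firms adjusting per period: {share_adjusting.mean():.3f}")
print(f"std of aggregate investment: {agg_inv.std():.4f}")
```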