85 research outputs found

    Nonparametric estimation of mean-squared prediction error in nested-error regression models

    Nested-error regression models are widely used for analyzing clustered data. For example, they are often applied to two-stage sample surveys, and in biology and econometrics. Prediction is usually the main goal of such analyses, and mean-squared prediction error is the main way in which prediction performance is measured. In this paper we suggest a new approach to estimating mean-squared prediction error. We introduce a matched-moment, double-bootstrap algorithm, enabling the notorious underestimation of the naive mean-squared error estimator to be substantially reduced. Our approach does not require specific assumptions about the distributions of errors. Additionally, it is simple and easy to apply. This is achieved through using Monte Carlo simulation to implicitly develop formulae which, in a more conventional approach, would be derived laboriously by mathematical arguments. Supported in part by NSF Grant SES-03-18184. Published at http://dx.doi.org/10.1214/009053606000000579 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
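
    The abstract describes the matched-moment, double-bootstrap idea only at a high level. As a rough illustration of how a double-bootstrap correction of a naive mean-squared prediction error (MSPE) estimator can work, the Python sketch below uses a balanced one-way nested-error model with ANOVA-type variance-component estimates; the model, the estimators, and the bootstrap sizes B1 and B2 are assumptions made for illustration, and the generic additive bias correction shown here is not the paper's matched-moment algorithm.

```python
# Illustrative double-bootstrap correction of a naive MSPE estimator in a
# balanced one-way nested-error model
#     y_ij = mu + u_i + e_ij,  u_i ~ N(0, s2_u),  e_ij ~ N(0, s2_e),
# where the target is the area mean theta_i = mu + u_i, predicted by the EBLUP.
# This is a generic sketch, not the matched-moment algorithm of the paper.
import numpy as np

rng = np.random.default_rng(0)

def fit(y):
    """ANOVA (moment) estimates of (mu, s2_u, s2_e) for data y of shape (m, n)."""
    m, n = y.shape
    mu = y.mean()
    s2_e = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (m * (n - 1))
    msb = n * ((y.mean(axis=1) - mu) ** 2).sum() / (m - 1)  # between-area mean square
    s2_u = max((msb - s2_e) / n, 0.0)
    return mu, s2_u, s2_e

def eblup_and_naive_mspe(y):
    """EBLUP of the area means and the naive plug-in MSPE estimate."""
    m, n = y.shape
    mu, s2_u, s2_e = fit(y)
    gamma = s2_u / (s2_u + s2_e / n)
    theta_hat = mu + gamma * (y.mean(axis=1) - mu)
    naive = np.full(m, gamma * s2_e / n)  # ignores the error in estimating mu, gamma
    return theta_hat, naive, (mu, s2_u, s2_e)

def simulate(mu, s2_u, s2_e, m, n, rng):
    """Draw one data set and the true area means from the fitted model."""
    u = rng.normal(0.0, np.sqrt(s2_u), m)
    y = mu + u[:, None] + rng.normal(0.0, np.sqrt(s2_e), (m, n))
    return y, mu + u

def double_bootstrap_mspe(y, B1=100, B2=50, rng=rng):
    """First-level bootstrap estimates the MSPE; second level estimates and
    removes the bias of that first-level estimate."""
    m, n = y.shape
    mu, s2_u, s2_e = fit(y)
    mspe1 = np.zeros(m)
    mspe2 = np.zeros(m)
    for _ in range(B1):
        y1, theta1 = simulate(mu, s2_u, s2_e, m, n, rng)
        th1, _, (mu1, s2u1, s2e1) = eblup_and_naive_mspe(y1)
        mspe1 += (th1 - theta1) ** 2 / B1
        for _ in range(B2):
            y2, theta2 = simulate(mu1, s2u1, s2e1, m, n, rng)
            th2, _, _ = eblup_and_naive_mspe(y2)
            mspe2 += (th2 - theta2) ** 2 / (B1 * B2)
    # Additive bias correction, falling back to the first-level estimate when
    # the correction would be negative.
    return np.where(2 * mspe1 - mspe2 > 0, 2 * mspe1 - mspe2, mspe1)

# Example: 20 areas with 5 observations each.
y, _ = simulate(1.0, 0.5, 1.0, 20, 5, rng)
_, naive, _ = eblup_and_naive_mspe(y)
print("naive    :", naive[:3])
print("corrected:", double_bootstrap_mspe(y)[:3])
```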

    P. C. Mahalanobis in the Context of Current Econometrics Research


    A comprehensive study of spike and slab shrinkage priors for structurally sparse Bayesian neural networks

    Network complexity and computational efficiency have become increasingly significant aspects of deep learning. Sparse deep learning addresses these challenges by recovering a sparse representation of the underlying target function, reducing heavily over-parameterized deep neural networks. Specifically, deep neural architectures compressed via structured sparsity (e.g., node sparsity) provide low-latency inference, higher data throughput, and reduced energy consumption. In this paper, we explore two well-established shrinkage techniques, Lasso and Horseshoe, for model compression in Bayesian neural networks. To this end, we propose structurally sparse Bayesian neural networks which systematically prune excessive nodes with (i) Spike-and-Slab Group Lasso (SS-GL) and (ii) Spike-and-Slab Group Horseshoe (SS-GHS) priors, and we develop computationally tractable variational inference, including a continuous relaxation of Bernoulli variables. We establish the contraction rates of the variational posterior of our proposed models as a function of the network topology, layer-wise node cardinalities, and bounds on the network weights. We empirically demonstrate the competitive performance of our models compared to baseline models in prediction accuracy, model compression, and inference latency.
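
    As a rough illustration of node-level spike-and-slab gating with a continuous relaxation of the Bernoulli inclusion variables, the PyTorch sketch below gates each output node of a linear layer with a relaxed Bernoulli variable and exposes the corresponding KL term for a variational objective; the layer, the temperature, the prior inclusion probability, and the test-time thresholding rule are illustrative assumptions, not the SS-GL or SS-GHS variational families or the contraction-rate analysis of the paper.

```python
# Illustrative node-level spike-and-slab gating with a continuous (concrete /
# Gumbel-Softmax style) relaxation of Bernoulli inclusion variables. This is a
# generic sketch of the idea, not the SS-GL / SS-GHS priors from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpikeSlabLinear(nn.Module):
    """Linear layer whose output nodes are gated by relaxed Bernoulli variables."""

    def __init__(self, d_in, d_out, temperature=0.5, prior_inclusion=0.5):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.logit_pi = nn.Parameter(torch.zeros(d_out))  # variational inclusion logits
        self.temperature = temperature
        self.prior_inclusion = prior_inclusion

    def forward(self, x):
        if self.training:
            # Concrete relaxation: differentiable "soft" Bernoulli gate per node.
            u = torch.rand_like(self.logit_pi).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log1p(-u)
            z = torch.sigmoid((self.logit_pi + noise) / self.temperature)
        else:
            # At test time, hard-threshold the inclusion probabilities (prune nodes).
            z = (torch.sigmoid(self.logit_pi) > 0.5).float()
        return self.linear(x) * z  # excluded nodes carry no signal downstream

    def kl_inclusion(self):
        """KL between variational Bernoulli(q) and the prior Bernoulli(prior_inclusion)."""
        q = torch.sigmoid(self.logit_pi)
        p = torch.tensor(self.prior_inclusion)
        return (q * torch.log(q / p) + (1 - q) * torch.log((1 - q) / (1 - p))).sum()

# Usage: add the KL term to the data-fit loss; the KL pushes unneeded nodes
# toward exclusion, and thresholding at test time yields node-level sparsity.
layer = SpikeSlabLinear(16, 32)
x = torch.randn(8, 16)
out = layer(x)
loss = F.mse_loss(out, torch.zeros_like(out)) + 1e-3 * layer.kl_inclusion()
loss.backward()
```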