
    Estimation of a sparse group of sparse vectors

    We consider the problem of estimating a sparse group of sparse normal mean vectors. The proposed approach is based on penalized likelihood estimation with complexity penalties on the number of nonzero mean vectors and the numbers of their "significant" components, and can be performed by a computationally fast algorithm. The resulting estimators are developed within a Bayesian framework and can be viewed as MAP estimators. We establish their adaptive minimaxity over a wide range of sparse and dense settings. A short simulation study demonstrates the efficiency of the proposed approach, which successfully competes with the recently developed sparse group lasso estimator.
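    As a rough illustration of the kind of complexity-penalized thresholding described above, the sketch below keeps a component of a mean vector only if it exceeds a universal-type threshold, and keeps a vector only if its retained signal exceeds a per-vector cost. The thresholds and penalty constants are illustrative assumptions, not the penalties analyzed in the paper.

```python
import numpy as np

def group_sparse_threshold(Y, sigma=1.0):
    """Toy selection of a sparse group of sparse normal mean vectors.
    Y has one candidate mean vector per row; the thresholds below are
    generic universal-type choices, not the paper's penalties."""
    m, n = Y.shape
    lam = sigma * np.sqrt(2 * np.log(n))           # within-vector threshold (assumption)
    mu_hat = np.where(np.abs(Y) > lam, Y, 0.0)     # keep only "significant" components
    group_cost = 2 * sigma**2 * np.log(max(m, 2))  # per-vector complexity cost (assumption)
    keep = (mu_hat**2).sum(axis=1) > group_cost    # drop vectors whose retained signal is too weak
    mu_hat[~keep] = 0.0
    return mu_hat

rng = np.random.default_rng(0)
mu = np.zeros((50, 100))
mu[:5, :10] = 3.0                                  # 5 nonzero vectors, 10 significant components each
Y = mu + rng.standard_normal(mu.shape)
print(np.count_nonzero(group_sparse_threshold(Y).any(axis=1)), "vectors retained")
```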

    Model selection in regression under structural constraints

    The paper considers model selection in regression under additional structural constraints on admissible models, where the number of potential predictors may be even larger than the available sample size. We develop a Bayesian formalism as a natural tool for generating a wide class of model selection criteria based on penalized least squares estimation with various complexity penalties associated with a prior on the model size. The resulting criteria are adaptive to structural constraints. We establish an upper bound for the quadratic risk of the resulting MAP estimator and the corresponding lower bound for the minimax risk over a set of admissible models of a given size. We then specify the class of priors (and, therefore, the class of complexity penalties) for which, under a "nearly-orthogonal" design, the MAP estimator is asymptotically at least nearly-minimax (up to a log-factor) simultaneously over the entire range of sparse and dense setups. Moreover, when the numbers of admissible models are "small" (e.g., ordered variable selection) or, at the opposite extreme, for complete variable selection, the proposed estimator achieves the exact minimax rates.
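    A minimal sketch of penalized least squares model selection in this spirit: an exhaustive search over small subsets of predictors with a generic k*log(p/k)-type complexity penalty. Both the penalty constants and the omission of structural constraints are simplifying assumptions, not the criteria derived in the paper.

```python
import numpy as np
from itertools import combinations

def penalized_ls_selection(X, y, sigma=1.0, max_size=3):
    """Toy MAP-style model selection: minimize RSS + complexity penalty
    over all subsets of predictors up to max_size.  The penalty form is
    a generic k*log(p/k)-type term used only for illustration, and no
    structural constraints on admissible models are imposed here."""
    n, p = X.shape
    best_model, best_crit = (), float(np.sum(y**2))     # empty model as the baseline
    for k in range(1, max_size + 1):
        pen = 2 * sigma**2 * k * (np.log(p / k) + 1)    # illustrative complexity penalty
        for S in combinations(range(p), k):
            cols = list(S)
            beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss = float(np.sum((y - X[:, cols] @ beta) ** 2))
            if rss + pen < best_crit:
                best_model, best_crit = S, rss + pen
    return best_model

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 20))
y = X[:, [2, 7]] @ np.array([2.0, -1.5]) + rng.standard_normal(60)
print(penalized_ls_selection(X, y))                      # ideally recovers (2, 7)
```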

    Laplace deconvolution with noisy observations

    In the present paper we consider Laplace deconvolution for discrete noisy data observed on an interval whose length may increase with the sample size. Although this problem arises in a variety of applications, to the best of our knowledge it has received very little attention from the statistical community. Our objective is to fill this gap and provide a statistical treatment of the Laplace deconvolution problem with noisy discrete data. The main contribution of the paper is the explicit construction of an asymptotically rate-optimal (in the minimax sense) Laplace deconvolution estimator that is adaptive to the regularity of the unknown function. We show that the original Laplace deconvolution problem can be reduced to nonparametric estimation of a regression function and its derivatives on an interval of growing length T_n. Whereas the forms of the estimators remain standard, the choices of the parameters and the minimax convergence rates, which are expressed in terms of T_n^2/n in this case, are affected by the asymptotic growth of the length of the interval. We derive an adaptive kernel estimator of the function of interest and establish its asymptotic minimaxity over a range of Sobolev classes. We illustrate the theory with examples of explicit expressions for Laplace deconvolution estimators. A simulation study shows that, in addition to being asymptotically optimal as the number of observations tends to infinity, the proposed estimator performs well in finite-sample examples.
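    The kernel smoothing ingredient that the reduction relies on can be sketched as follows. This is a plain Nadaraya-Watson estimator with a hand-picked bandwidth, not the paper's adaptive construction, which also involves estimated derivatives of the regression function.

```python
import numpy as np

def kernel_smoother(t_obs, y_obs, t_grid, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel.
    Only the basic smoothing step is shown; the adaptive bandwidth choice
    and the derivative estimates used for Laplace deconvolution are omitted."""
    W = np.exp(-0.5 * ((t_grid[:, None] - t_obs[None, :]) / h) ** 2)
    return (W @ y_obs) / W.sum(axis=1)

rng = np.random.default_rng(2)
T_n = 10.0                                  # interval length (allowed to grow with n in the paper)
t = np.sort(rng.uniform(0, T_n, 400))
y = np.exp(-t) * np.sin(2 * t) + 0.05 * rng.standard_normal(t.size)
grid = np.linspace(0, T_n, 200)
fit = kernel_smoother(t, y, grid, h=0.3)    # bandwidth fixed by hand here (assumption)
```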

    Statistical learning by sparse deep neural networks

    We consider a deep neural network estimator based on empirical risk minimization with l_1-regularization. We derive a general bound for its excess risk in regression and classification (including multiclass), and prove that it is adaptively nearly-minimax (up to log-factors) simultaneously across a wide range of function classes.
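    A toy version of l_1-penalized empirical risk minimization for a (very shallow) network is sketched below; the architecture, penalty level, and plain subgradient descent are illustrative choices, not the estimator analyzed in the paper.

```python
import numpy as np

def train_l1_mlp(X, y, hidden=16, lam=1e-3, lr=0.05, epochs=500, seed=0):
    """Minimal one-hidden-layer ReLU network fit by (sub)gradient descent on
    squared loss + lam * ||weights||_1.  A toy stand-in for l_1-penalized
    empirical risk minimization; all tuning choices here are assumptions."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.standard_normal((d, hidden)) * 0.1; b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, 1)) * 0.1; b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.maximum(X @ W1 + b1, 0.0)           # ReLU hidden layer
        pred = H @ W2 + b2
        g = 2 * (pred - y[:, None]) / n            # gradient of mean squared error
        gW2 = H.T @ g + lam * np.sign(W2)          # l_1 subgradient on output weights
        gH = (g @ W2.T) * (H > 0)
        gW1 = X.T @ gH + lam * np.sign(W1)         # l_1 subgradient on hidden weights
        W2 -= lr * gW2; b2 -= lr * g.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gH.sum(axis=0)
    return W1, b1, W2, b2

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
params = train_l1_mlp(X, y)
```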

    Solution of linear ill-posed problems by model selection and aggregation

    We consider a general statistical linear inverse problem, where the solution is represented via a known (possibly overcomplete) dictionary that allows its sparse representation. We propose two different approaches. A model selection estimator selects a single model by minimizing the penalized empirical risk over all possible models. In contrast to direct problems, the penalty depends on the model itself rather than only on its size, as is the case for complexity penalties. A Q-aggregate estimator averages over the entire collection of estimators with properly chosen weights. Under mild conditions on the dictionary, we establish oracle inequalities, both with high probability and in expectation, for the two estimators. Moreover, for the latter estimator these inequalities are sharp. The proposed procedures are implemented numerically and their performance is assessed in a simulation study.
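    The aggregation idea can be illustrated with plain exponential weighting of candidate estimators by their empirical risks. Note that the paper's Q-aggregate estimator uses a different, quadratic-corrected criterion with model-dependent penalties, so the sketch below is only a simplified cousin of it.

```python
import numpy as np

def exp_weight_aggregate(estimates, y, temperature=1.0):
    """Average a collection of candidate estimates with weights that decay
    exponentially in their empirical risk.  Plain exponential weighting,
    shown only to illustrate the aggregation idea; it is not the paper's
    Q-aggregation procedure."""
    risks = np.array([np.mean((y - f) ** 2) for f in estimates])
    w = np.exp(-(risks - risks.min()) / temperature)
    w /= w.sum()
    return sum(wi * f for wi, f in zip(w, estimates))

# Toy use: aggregate three candidate reconstructions of a noisy signal
rng = np.random.default_rng(4)
truth = np.sin(np.linspace(0, 3, 100))
y = truth + 0.2 * rng.standard_normal(100)
candidates = [np.zeros(100), y, np.convolve(y, np.ones(5) / 5, mode="same")]
agg = exp_weight_aggregate(candidates, y)
```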