2,599 research outputs found

    IMPROVING THE UNIVERSITY'S PERFORMANCE IN PUBLIC POLICY EDUCATION

    Teaching/Communication/Extension/Profession

    Rectification of a whole-sky photograph as a tool for determining spatial positioning of cumulus clouds

    There are no author-identified significant results in this report

    Sharp Oracle Inequalities for Square Root Regularization

    We study a set of regularization methods for high-dimensional linear regression models. These penalized estimators take the square root of the residual sum of squared errors as loss function, and any weakly decomposable norm as penalty function. This fit measure is chosen because of its property that the resulting estimator does not depend on the unknown standard deviation of the noise. A generalized weakly decomposable norm penalty, on the other hand, is very useful for dealing with different underlying sparsity structures: we can choose a different sparsity-inducing norm depending on how we want to interpret the unknown parameter vector β. Structured sparsity norms, as defined in Micchelli et al. [18], are special cases of weakly decomposable norms; we therefore also cover the square root LASSO (Belloni et al. [3]), the group square root LASSO (Bunea et al. [10]) and a new method called the square root SLOPE (defined in a similar fashion to the SLOPE of Bogdan et al. [6]). For this collection of estimators our results provide sharp oracle inequalities together with the Karush-Kuhn-Tucker conditions. We discuss some examples of estimators. Based on a simulation we illustrate some advantages of the square root SLOPE.
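
    A minimal numerical sketch, not taken from the paper: the plain square-root LASSO member of this family, with illustrative data, an assumed scale-free choice of λ, and cvxpy as the solver. The point it shows is that the loss ‖y − Xβ‖₂/√n lets λ be chosen without knowing the noise standard deviation.

        # Hypothetical illustration of the square-root LASSO, one member of the
        # family of square-root-regularized estimators discussed above.
        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(0)
        n, p, s = 100, 200, 5                     # samples, dimension, true sparsity
        X = rng.standard_normal((n, p))
        beta_true = np.zeros(p)
        beta_true[:s] = 1.0
        y = X @ beta_true + 0.5 * rng.standard_normal(n)  # noise sd never used below

        beta = cp.Variable(p)
        lam = 1.1 * np.sqrt(2.0 * np.log(p) / n)  # scale-free tuning parameter
        # The loss is the square ROOT of the residual sum of squares, not the RSS.
        obj = cp.norm2(y - X @ beta) / np.sqrt(n) + lam * cp.norm1(beta)
        cp.Problem(cp.Minimize(obj)).solve()
        print("coefficients above 1e-3:", int(np.sum(np.abs(beta.value) > 1e-3)))

    Swapping cp.norm1 for a group or SLOPE-type norm would give the other estimators in the collection, with the same scale-free property.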

    χ²-confidence sets in high-dimensional regression

    We study a high-dimensional regression model. Our aim is to construct a confidence set for a given group of regression coefficients, treating all other regression coefficients as nuisance parameters. We apply a one-step procedure with the square-root Lasso as initial estimator and a multivariate square-root Lasso for constructing a surrogate Fisher information matrix. The multivariate square-root Lasso is based on nuclear norm loss with an ℓ₁-penalty. We show that this procedure leads to an asymptotically χ²-distributed pivot, with a remainder term depending only on the ℓ₁-error of the initial estimator. We show that under ℓ₁-sparsity conditions on the regression coefficients β⁰ the square-root Lasso yields a consistent estimator of the noise variance, and we establish sharp oracle inequalities which show that the remainder term is small under further sparsity conditions on β⁰ and compatibility conditions on the design.
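
    As a rough schematic in our own notation (the paper's exact normalization, and the construction of the surrogate inverse Fisher information via the multivariate square-root Lasso, are as defined there), the one-step estimator behind the pivot has the familiar de-biased form:

        % Schematic one-step (de-biased) estimator for the coefficient group J;
        % \hat\beta is the initial square-root Lasso fit and \hat\Theta_J the
        % surrogate inverse Fisher information for the group J.
        \[
          \hat{b}_J \;=\; \hat{\beta}_J \;+\; \hat{\Theta}_J^{\top}\, X^{\top}\,(Y - X\hat{\beta})\,/\,n
        \]
        % A suitably normalized quadratic form in \hat{b}_J - \beta^0_J is then
        % asymptotically \chi^2-distributed with |J| degrees of freedom, which
        % yields the confidence set for the group J.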

    A remark on nets of threshold elements

    The necessary condition for transition functions of Simple McCulloch-Pitts Nets, given in Corollary 3.1 (iii) of the article "Nets of Threshold Elements" by Kenneth Krohn and John Rhodes, is extended to a sufficient criterion by the addition of another necessary condition.
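
    As background, a toy definition of the basic object involved; this is our illustration, not the paper's: a McCulloch-Pitts threshold element fires exactly when the weighted sum of its binary inputs reaches its threshold.

        # Toy McCulloch-Pitts threshold element (illustrative only; the paper
        # concerns conditions on the transition functions of whole nets of
        # such elements, not on a single element).
        def threshold_element(inputs, weights, theta):
            """Return 1 if the weighted input sum reaches the threshold, else 0."""
            return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

        # A two-input AND gate realized as a threshold element:
        assert threshold_element([1, 1], [1, 1], theta=2) == 1
        assert threshold_element([1, 0], [1, 1], theta=2) == 0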

    A Proposal to Enhance Mentoring at SPEA IUPUI

    Short talk presentation slides.