    Powellsnakes II: a fast Bayesian approach to discrete object detection in multi-frequency astronomical data sets

    Powellsnakes (PwS) is a Bayesian algorithm for detecting compact objects embedded in a diffuse background, and was selected and successfully employed by the Planck consortium in the production of its first public deliverable: the Early Release Compact Source Catalogue (ERCSC). We present the critical foundations and main directions of further development of PwS, which extend it in terms of formal correctness and the optimal use of all available information in a consistent, unified framework, in which no distinction is made between point sources (unresolved objects) and SZ clusters, or between single- and multi-channel detection. We emphasise that a multi-frequency, multi-model detection algorithm is necessary to achieve optimality.
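
    To illustrate the single-channel building block behind this kind of Bayesian compact-source detection, the sketch below computes a matched-filter signal-to-noise map under a Gaussian-noise, known-PSF assumption. This is not the PwS implementation; the function name, the DFT noise-power convention, and the prior intuition noted in the comments are assumptions made for the example.

        import numpy as np

        def matched_filter_snr(data_map, psf, noise_power):
            """Single-channel matched-filter detection sketch.

            data_map    : 2D array, observed (background-subtracted) map
            psf         : 2D array, centred point-source template, same shape
            noise_power : 2D array, assumed per-mode noise variance <|N_k|^2>

            Returns an SNR map; peaks above a threshold flag candidate
            sources. Under a Gaussian noise model and a broad amplitude
            prior, the log-evidence for a source at a pixel grows with
            SNR^2, which is the intuition behind fast Bayesian detection.
            """
            d_f = np.fft.fft2(data_map)
            # Centre the PSF at pixel (0, 0) so the filtered map is not shifted.
            t_f = np.fft.fft2(np.fft.ifftshift(psf))
            # Matched filter: source template weighted by inverse noise power.
            filt = np.conj(t_f) / noise_power
            # Undo ifft2's 1/N factor so the statistic follows the DFT-sum convention.
            filtered = np.fft.ifft2(filt * d_f).real * data_map.size
            # Standard deviation of the filtered field for noise-only maps.
            sigma = np.sqrt(np.sum(np.abs(t_f) ** 2 / noise_power))
            return filtered / sigma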

    Sparse functional regression models: minimax rates and contamination

    In functional linear regression and functional generalized linear regression models, the effect of the predictor function is usually assumed to be spread across the index space. In this dissertation we consider the sparse functional linear model and the sparse functional generalized linear model (GLM), in which the predictor process affects the response only through its value at a single point in the index space, called the sensitive point. We are particularly interested in estimating the sensitive point. The minimax rate of convergence for estimating the parameters in sparse functional linear regression is derived. It is shown that the optimal rate for estimating the sensitive point depends on the roughness of the predictor function, which is quantified by a "generalized Hurst exponent". The least squares estimator (LSE) is shown to attain the optimal rate. A lower bound is also given on the minimax risk of estimating the parameters in the sparse functional GLM, which likewise depends on the generalized Hurst exponent of the predictor process. The order of the minimax lower bound matches the weak convergence rate of the maximum likelihood estimator (MLE), provided that the functional predictor behaves like a Brownian motion.
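
    As a concrete illustration of the least-squares estimator for the sensitive point, the sketch below fits the sparse functional linear model at each candidate index point on a grid and keeps the one minimising the residual sum of squares. The function name, the grid search, and the Brownian-motion simulation are illustrative assumptions, not the dissertation's code.

        import numpy as np

        def fit_sparse_flr(X, y, grid):
            """Least-squares fit of y_i = alpha + beta * X_i(tau) + eps_i,
            where the predictor influences the response only through its
            value at an unknown sensitive point tau.

            X    : (n, p) array, X[i, j] = i-th predictor curve at grid[j]
            y    : (n,) array of responses
            grid : (p,) array of index points

            Returns (tau_hat, alpha_hat, beta_hat) for the grid point with
            the smallest residual sum of squares.
            """
            n, p = X.shape
            best = (np.inf, None, None, None)
            for j in range(p):
                A = np.column_stack([np.ones(n), X[:, j]])
                coef, *_ = np.linalg.lstsq(A, y, rcond=None)
                resid = y - A @ coef
                rss = float(resid @ resid)
                if rss < best[0]:
                    best = (rss, grid[j], coef[0], coef[1])
            return best[1], best[2], best[3]

        # Example: Brownian-motion predictors (Hurst exponent 1/2), true tau = 0.3.
        rng = np.random.default_rng(0)
        n, p = 200, 101
        grid = np.linspace(0, 1, p)
        X = np.cumsum(rng.standard_normal((n, p)) / np.sqrt(p), axis=1)
        y = 1.0 + 2.0 * X[:, 30] + 0.1 * rng.standard_normal(n)
        print(fit_sparse_flr(X, y, grid))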