
    Smear fitting: a new deconvolution method for interferometric data

    A new technique is presented for producing images from interferometric data. The method, "smear fitting", makes the constraints necessary for interferometric imaging double as a model, with uncertainties, of the sky brightness distribution. It does this by modelling the sky with a set of functions and then convolving each component with its own elliptical Gaussian to account for the uncertainty in its shape and location that arises from noise. This yields much sharper resolution than CLEAN for significantly detected features, without sacrificing any sensitivity. Using appropriate functional forms for the components provides both a scientifically interesting model and imaging constraints that tend to be better than those used by traditional deconvolution methods, allowing smear fitting to avoid the most serious problems that limit the imaging quality of those methods. Comparisons of smear fitting to CLEAN and maximum entropy are given, using both real and simulated observations. It is also shown that the famous Rayleigh criterion (resolution = wavelength / baseline) is inappropriate for interferometers, as it does not consider the reliability of the measurements.
    Comment: 16 pages, 38 figures (some have been lossily compressed for astro-ph). Uses the hyperref LaTeX package. Accepted for publication by the Monthly Notices of the Royal Astronomical Society.
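    As a rough illustration of the central idea, the sketch below renders a list of fitted point components, smearing each one by its own elliptical uncertainty Gaussian rather than convolving everything with a single restoring beam as CLEAN does. This is not the paper's fitting code; the function names, component parameters, and grid settings are all invented for illustration.

```python
import numpy as np

def elliptical_gaussian(x, y, x0, y0, sigma_maj, sigma_min, pa):
    """Unit-total-flux elliptical Gaussian centred at (x0, y0).

    pa is the position angle of the major axis in radians.
    """
    dx, dy = x - x0, y - y0
    # Rotate into the frame aligned with the ellipse axes.
    u = dx * np.cos(pa) + dy * np.sin(pa)
    v = -dx * np.sin(pa) + dy * np.cos(pa)
    norm = 2.0 * np.pi * sigma_maj * sigma_min
    return np.exp(-0.5 * ((u / sigma_maj) ** 2 + (v / sigma_min) ** 2)) / norm

def smeared_image(components, grid_size=256, pixel_scale=1.0):
    """Render fitted point components, each smeared by its own
    uncertainty Gaussian instead of a common restoring beam."""
    axis = (np.arange(grid_size) - grid_size / 2) * pixel_scale
    x, y = np.meshgrid(axis, axis)
    image = np.zeros_like(x)
    for comp in components:
        image += comp["flux"] * elliptical_gaussian(
            x, y, comp["x"], comp["y"],
            comp["sigma_maj"], comp["sigma_min"], comp["pa"])
    return image

# Hypothetical example: a bright source stays nearly point-like,
# while a faint (less significant) source gets a broader smear.
components = [
    {"flux": 1.0, "x": 10.0, "y": -5.0, "sigma_maj": 0.8, "sigma_min": 0.5, "pa": 0.3},
    {"flux": 0.1, "x": -20.0, "y": 15.0, "sigma_maj": 4.0, "sigma_min": 3.0, "pa": 1.2},
]
image = smeared_image(components)
```

    The point of the sketch is only that the effective resolution of each feature is set by how well its parameters are constrained by the data, not by a single beam applied uniformly across the image.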

    Post-selection point and interval estimation of signal sizes in Gaussian samples

    We tackle the problem of estimating a vector of means from a single vector-valued observation y. Whereas previous work reduces the size of the estimates for the largest (absolute) sample elements via shrinkage (as in James-Stein) or via biases estimated using empirical Bayes methodology, we take a novel approach. We adapt recent developments by Lee et al. (2013) in post-selection inference for the Lasso to the orthogonal setting, where sample elements have different underlying signal sizes. This is exactly the setup encountered when estimating many means. It is shown that other selection procedures, such as selecting the K largest (absolute) sample elements and the Benjamini-Hochberg procedure, can be cast into their framework, allowing us to leverage their results. Point and interval estimates for signal sizes are proposed. These perform well in comparisons against competitors, both recent and well established. Furthermore, we prove an upper bound on the worst-case risk of our estimator when combined with the Benjamini-Hochberg procedure, and show that it is within a constant multiple of the minimax risk over a rich set of parameter spaces meant to evoke sparsity.
    Comment: 27 pages, 13 figures
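    The sketch below is a simplified illustration of the conditional-likelihood idea behind post-selection estimation, not the authors' estimator: it applies Benjamini-Hochberg selection to two-sided Gaussian p-values and then maximizes a truncated-normal likelihood for each selected element, conditioning only on the simplified event |y_i| > c. All function names, the conditioning event, and the simulated data are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def bh_select(y, alpha=0.1):
    """Benjamini-Hochberg selection on two-sided Gaussian p-values.

    Returns the indices of the selected elements and the implied
    selection threshold on |y|."""
    n = len(y)
    p = 2 * norm.sf(np.abs(y))
    order = np.argsort(p)
    passed = np.where(p[order] <= alpha * np.arange(1, n + 1) / n)[0]
    if len(passed) == 0:
        return np.array([], dtype=int), np.inf
    k = passed.max() + 1                        # number of rejections
    threshold = norm.isf(alpha * k / (2 * n))   # |y| must exceed this
    return order[:k], threshold

def conditional_mle(y_i, c, sigma=1.0):
    """MLE of mu for y_i ~ N(mu, sigma^2), conditioned on the
    (simplified) selection event |y_i| > c."""
    def neg_loglik(mu):
        log_density = norm.logpdf(y_i, loc=mu, scale=sigma)
        prob_select = (norm.sf(c, loc=mu, scale=sigma)
                       + norm.cdf(-c, loc=mu, scale=sigma))
        return -(log_density - np.log(prob_select))
    res = minimize_scalar(neg_loglik,
                          bounds=(-abs(y_i) - 10, abs(y_i) + 10),
                          method="bounded")
    return res.x

# Sparse simulated means: most are zero, a few equal 3.
rng = np.random.default_rng(0)
mu_true = np.r_[np.zeros(90), np.full(10, 3.0)]
y = mu_true + rng.standard_normal(100)
selected, c = bh_select(y, alpha=0.1)
estimates = {int(i): conditional_mle(y[i], c) for i in selected}
```

    Conditioning on selection pulls the estimate for a marginally selected element back toward zero relative to the naive estimate y_i, which is the kind of selection-bias correction the abstract alludes to.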

    Generalised Mixability, Constant Regret, and Bayesian Updating

    Mixability of a loss is known to characterise when constant regret bounds are achievable in games of prediction with expert advice through the use of Vovk's aggregating algorithm. We provide a new interpretation of mixability via convex analysis that highlights the role of the Kullback-Leibler divergence in its definition. This naturally generalises to what we call Φ-mixability, where the Bregman divergence D_Φ replaces the KL divergence. We prove that losses that are Φ-mixable also enjoy constant regret bounds via a generalised aggregating algorithm that is similar to mirror descent.
    Comment: 12 pages
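    For reference, the Bregman divergence mentioned in the abstract is the standard construction from convex analysis given below (this is the textbook definition, not notation taken from the paper itself). Taking Φ to be the negative Shannon entropy on the probability simplex recovers the KL divergence, which is the sense in which Φ-mixability generalises ordinary mixability.

```latex
D_\Phi(p, q) = \Phi(p) - \Phi(q) - \langle \nabla \Phi(q),\, p - q \rangle,
\qquad
\Phi(p) = \sum_i p_i \log p_i
\;\Longrightarrow\;
D_\Phi(p, q) = \sum_i p_i \log \frac{p_i}{q_i} = \mathrm{KL}(p \,\|\, q).
```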