
    Limitations for shapelet-based weak-lensing measurements

    We seek to understand the impact on shape estimators obtained from circular and elliptical shapelet models under two realistic conditions: (a) only a limited number of shapelet modes is available for the model, and (b) the intrinsic galactic shapes are not restricted to shapelet models. We create a set of simplistic simulations in which the galactic shapes follow a Sersic profile. By varying the Sersic index and the applied shear, we quantify the bias in shear estimates that arises from insufficient modeling. Additional complications due to PSF convolution, pixelation, and pixel noise are also discussed. Steep and highly elliptical galaxy shapes cannot be accurately modeled within the circular shapelet basis system and are biased towards shallower and less elongated shapes. This problem can be partially cured by allowing elliptical basis functions, but for steep profiles elliptical shapelet models still depend critically on accurate ellipticity priors. As a result, shear estimates are typically biased low. Independently of the particular form of the estimator, the bias depends on the true intrinsic galaxy morphology, but also on the size and shape of the PSF. As long as the issues discussed here remain unsolved, the shapelet method cannot provide weak-lensing measurements with the accuracy demanded by upcoming missions and surveys, unless one can provide an accurate and reliable calibration specific to the dataset under investigation.
    Comment: 8 pages, 5 figures, submitted to A&
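    The test described above, sheared Sersic profiles whose ellipticity is then measured, can be sketched as follows. This is an illustrative toy model, not the paper's pipeline: the Sersic b_n approximation, grid size, and the unweighted-moment ellipticity estimator are all assumptions for the sketch, and PSF convolution, pixelation, and noise are omitted.

    ```python
    import numpy as np

    def sersic_image(n_sersic, r_e, g1, size=64):
        """Render a sheared Sersic profile on a pixel grid (illustrative toy model)."""
        b_n = 2.0 * n_sersic - 1.0 / 3.0      # common approximation for Sersic b_n
        y, x = np.indices((size, size)) - size / 2.0
        xs = (1.0 - g1) * x                   # apply shear by distorting the grid
        ys = (1.0 + g1) * y
        r = np.hypot(xs, ys)
        return np.exp(-b_n * ((r / r_e) ** (1.0 / n_sersic) - 1.0))

    def e1_from_moments(img):
        """Ellipticity component e1 from unweighted second brightness moments."""
        half = img.shape[0] / 2.0
        y, x = np.indices(img.shape) - half
        w = img / img.sum()
        qxx, qyy = (w * x * x).sum(), (w * y * y).sum()
        return (qxx - qyy) / (qxx + qyy)
    ```

    Comparing `e1_from_moments` on sheared and unsheared images of varying Sersic index is the kind of controlled experiment that exposes model-mismatch bias.
    
    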

    Two-dimensional bulge-disk decomposition

    We propose a two-dimensional galaxy fitting algorithm to extract parameters of the bulge, disk, and a central point source from broad-band images of galaxies. We use a set of realistic galaxy parameters to construct a large number of model galaxy images, which we then use as input to our galaxy fitting program to test it. We find that our approach recovers all structural parameters to a fair degree of accuracy. We elucidate our procedures by extracting parameters for 3 real galaxies -- NGC 661, NGC 1381, and NGC 1427.
    Comment: 23 pages, LaTeX, AASTEX macros used, 7 Postscript figures, submitted to Ap
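    The test strategy above, building model images from known parameters and checking that a fitter recovers them, can be sketched minimally. The profile forms (de Vaucouleurs bulge, exponential disk, delta-function nucleus) match the standard decomposition, but the specific parameter values and the simple log-slope "fit" are assumptions for illustration; the paper's program does a full nonlinear fit.

    ```python
    import numpy as np

    def model_galaxy(size=128, I_b=1.0, r_e=8.0, I_d=0.5, r_s=20.0, I_p=0.0):
        """Bulge (de Vaucouleurs, Sersic n=4) + exponential disk + point source."""
        y, x = np.indices((size, size)) - size / 2.0
        r = np.hypot(x, y)
        bulge = I_b * np.exp(-7.669 * ((r / r_e) ** 0.25 - 1.0))
        disk = I_d * np.exp(-r / r_s)
        img = bulge + disk
        img[size // 2, size // 2] += I_p   # unresolved nucleus (before PSF blurring)
        return img

    def fit_disk_scale(img):
        """Recover the disk scale length from the slope of the log profile
        (valid only for a noiseless, disk-only image; a real decomposition
        would fit all components simultaneously by least squares)."""
        half = img.shape[0] / 2.0
        y, x = np.indices(img.shape) - half
        r = np.hypot(x, y)
        mask = (r > 1.0) & (r < half / 2.0)
        slope, _ = np.polyfit(r[mask], np.log(img[mask]), 1)
        return -1.0 / slope
    ```

    Feeding a disk-only model image into `fit_disk_scale` should return the input scale length, which is exactly the recovery check the abstract describes, scaled down to one parameter.
    
    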

    Shapes and Shears, Stars and Smears: Optimal Measurements for Weak Lensing

    We present the theoretical and analytical bases of optimal techniques to measure weak gravitational shear from images of galaxies. We first characterize the geometric space of shears and ellipticity, then use this geometric interpretation to analyze images. The steps of this analysis include: measurement of object shapes on images, combining measurements of a given galaxy on different images, estimating the underlying shear from an ensemble of galaxy shapes, and compensating for the systematic effects of image distortion, bias from PSF asymmetries, and "dilution" of the signal by the seeing. These methods minimize the ellipticity measurement noise, provide calculable shear uncertainty estimates, and allow removal of systematic contamination by PSF effects to arbitrary precision. Galaxy images and PSFs are decomposed into a family of orthogonal 2D Gaussian-based functions, making the PSF correction and shape measurement relatively straightforward and computationally efficient. We also discuss sources of noise-induced bias in weak lensing measurements and provide a solution for these and previously identified biases.
    Comment: Version accepted to AJ. Minor fixes, plus a simpler method of shape weighting. Version with full vector figures available via http://www.astro.lsa.umich.edu/users/garyb/PUBLICATIONS

    Nearly Optimal Private Convolution

    We study computing the convolution of a private input x with a public input h, while satisfying the guarantees of (ϵ, δ)-differential privacy. Convolution is a fundamental operation, intimately related to Fourier transforms. In our setting, the private input may represent a time series of sensitive events or a histogram of a database of confidential personal information. Convolution then captures important primitives including linear filtering, which is an essential tool in time series analysis, and aggregation queries on projections of the data. We give a nearly optimal algorithm for computing convolutions while satisfying (ϵ, δ)-differential privacy. Surprisingly, we follow the simple strategy of adding independent Laplacian noise to each Fourier coefficient and bounding the privacy loss using the composition theorem of Dwork, Rothblum, and Vadhan. We derive a closed-form expression for the optimal noise to add to each Fourier coefficient using convex programming duality. Our algorithm is very efficient -- it is essentially no more computationally expensive than a Fast Fourier Transform. To prove near optimality, we use the recent discrepancy lower bounds of Muthukrishnan and Nikolov and derive a spectral lower bound using a characterization of discrepancy in terms of determinants.
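    The strategy described above, noise in the Fourier domain followed by an ordinary FFT convolution, can be sketched as follows. The paper derives per-coefficient noise scales from convex-programming duality and calibrates them to (ϵ, δ); a single uniform `noise_scale` is an illustrative simplification assumed here, so this sketch does not by itself carry a privacy guarantee.

    ```python
    import numpy as np

    def private_convolution(x, h, noise_scale, seed=0):
        """Circular convolution of a private series x with a public filter h,
        adding independent Laplace noise to each Fourier coefficient of x.
        A uniform noise_scale stands in for the paper's optimal per-coefficient
        scales; calibration to (eps, delta)-DP is omitted in this sketch."""
        rng = np.random.default_rng(seed)
        n = len(x)
        X = np.fft.fft(x)
        if noise_scale > 0:
            X = X + (rng.laplace(0.0, noise_scale, n)
                     + 1j * rng.laplace(0.0, noise_scale, n))
        return np.real(np.fft.ifft(X * np.fft.fft(h, n)))
    ```

    Because only one FFT, one pointwise product, and one inverse FFT are needed, the cost matches the "no more expensive than an FFT" claim in the abstract.
    
    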