    On Point Spread Function modelling: towards optimal interpolation

    Point Spread Function (PSF) modeling is a central part of any astronomy data analysis that relies on measuring the shapes of objects. It is especially crucial for weak gravitational lensing, in order to beat down systematics and allow one to reach the full potential of weak lensing in measuring dark energy. A PSF modeling pipeline consists of two main steps: the first is to assess the PSF shape on stars, and the second is to interpolate it to any desired position (usually galaxy positions). We focus on the second step and compare different interpolation schemes, including polynomial interpolation, radial basis functions, Delaunay triangulation, and Kriging. For that purpose, we develop simulations of PSF fields, in which stars are built from a set of basis functions defined from a Principal Components Analysis of a real ground-based image. We find that Kriging gives the most reliable interpolation, significantly better than the traditionally used polynomial interpolation. We also note that although Kriging interpolation on individual images is enough to control systematics at the level necessary for current weak lensing surveys, more elaborate techniques will have to be developed to meet the requirements of future, more ambitious surveys.
    Comment: Accepted for publication in MNRAS
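    As a hedged illustration of the interpolation step discussed above, the sketch below interpolates one PSF ellipticity component measured at star positions to arbitrary positions using Gaussian-process regression, which is equivalent to simple Kriging. The star positions, ellipticity field, and kernel scale are invented for the example and are not the paper's simulations or pipeline.

```python
# Hypothetical illustration: interpolate a PSF ellipticity component, measured at
# star positions, to arbitrary galaxy positions with Gaussian-process regression
# (equivalent to simple Kriging).  All values below are made up for the example.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
stars_xy = rng.uniform(0, 1000, size=(200, 2))        # star positions (pixels)
e1_stars = (0.02 * np.sin(stars_xy[:, 0] / 300.0)     # smooth "PSF field"
            + 0.01 * rng.normal(size=200))            # + measurement noise

kernel = 1.0 * RBF(length_scale=200.0) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(stars_xy, e1_stars)

galaxies_xy = rng.uniform(0, 1000, size=(5, 2))       # positions to interpolate to
e1_interp, e1_std = gp.predict(galaxies_xy, return_std=True)
print(e1_interp, e1_std)
```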

    Perturbative reconstruction of a gravitational lens: when mass does not follow light

    The structure and potential of a complex gravitational lens are reconstructed using the perturbative method presented in Alard 2007, MNRAS, 382L, 58 and Alard 2008, MNRAS, 388, 375. This lens is composed of 6 galaxies belonging to a small group. The lens inversion is reduced to the problem of reconstructing non-degenerate quantities: the two fields of the perturbative theory of strong gravitational lenses. Since the circular-source solution is analytical in the perturbative theory, the general properties of the perturbative solution can be inferred directly from the data. As a consequence, the reconstruction of the perturbative fields is not affected by degeneracy, and finding the best solution is only a matter of numerical refinement. The local shape of the potential and density of the lens are inferred from the perturbative solution, revealing the existence of an independent dark component that does not follow light. The most likely explanation is that the particular shape of the dark halo is due to the merging of cold dark matter halos. This is a new result illustrating the structure of dark halos at the scale of galaxies.
    Comment: Final version (Astronomy and Astrophysics, in press)

    Data Reduction Pipeline for the CHARIS Integral-Field Spectrograph I: Detector Readout Calibration and Data Cube Extraction

    We present the data reduction pipeline for CHARIS, a high-contrast integral-field spectrograph for the Subaru Telescope. The pipeline constructs a ramp from the raw reads using the measured nonlinear pixel response, and reconstructs the data cube using one of three extraction algorithms: aperture photometry, optimal extraction, or χ² fitting. We measure and apply both a detector flatfield and a lenslet flatfield, and reconstruct the wavelength- and position-dependent lenslet point-spread function (PSF) from images taken with a tunable laser. We use these measured PSFs to implement a χ²-based extraction of the data cube, with typical residuals of ~5% due to imperfect models of the undersampled lenslet PSFs. The full two-dimensional residual of the χ² extraction allows us to model and remove correlated read noise, dramatically improving CHARIS' performance. The χ² extraction produces a data cube that has been deconvolved with the line-spread function, and never performs any interpolation of either the data or the individual lenslet spectra. The extracted data cube also includes uncertainties for each spatial and spectral measurement. CHARIS' software is parallelized, written in Python and Cython, and freely available on GitHub with a separate documentation page. Astrometric and spectrophotometric calibrations of the data cubes and PSF subtraction will be treated in a forthcoming paper.
    Comment: 18 pages, 15 figures, 3 tables, replaced with JATIS accepted version (emulateapj formatted here). Software at https://github.com/PrincetonUniversity/charis-dep and documentation at http://princetonuniversity.github.io/charis-dep
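    The χ²-based extraction described above can be pictured as a weighted linear least-squares fit per lenslet: the cutout around each microspectrum is modelled as a linear combination of measured monochromatic lenslet PSFs, one per wavelength channel. The sketch below is a minimal illustration under assumed array names and shapes, not the CHARIS pipeline's actual code.

```python
# Illustrative chi^2 (weighted linear least-squares) spectral extraction for a
# single lenslet.  Shapes and names are assumptions for the example.
import numpy as np

def extract_lenslet_spectrum(cutout, psf_templates, inverse_variance):
    """Fit per-channel fluxes by weighted linear least squares.

    cutout           : (ny, nx) detector counts around one microspectrum
    psf_templates    : (nlam, ny, nx) measured lenslet PSFs, one per channel
    inverse_variance : (ny, nx) inverse pixel variances (read + photon noise)
    """
    nlam = psf_templates.shape[0]
    A = psf_templates.reshape(nlam, -1).T      # (npix, nlam) design matrix
    b = cutout.ravel()
    w = inverse_variance.ravel()

    # Normal equations of the weighted chi^2:  (A^T W A) f = A^T W b
    AtWA = (A * w[:, None]).T @ A
    AtWb = (A * w[:, None]).T @ b
    fluxes = np.linalg.solve(AtWA, AtWb)

    # Per-channel uncertainties from the covariance (A^T W A)^-1,
    # and the 2-D residual that could be inspected for correlated read noise.
    sigma = np.sqrt(np.diag(np.linalg.inv(AtWA)))
    residual = cutout - (A @ fluxes).reshape(cutout.shape)
    return fluxes, sigma, residual
```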

    The alternating least squares technique for nonuniform intensity color correction

    Color correction involves mapping device RGBs to display counterparts or to corresponding XYZs. A popular methodology is to take an image of a color chart and then solve for the best 3 × 3 matrix that maps the RGBs to the corresponding known XYZs. However, this approach fails at times when the intensity of the light varies across the chart. This variation needs to be removed before estimating the correction matrix. This is typically achieved by acquiring an image of a uniform gray chart in the same location, and then dividing the color checker image by the gray-chart image. Of course, taking images of two charts doubles the complexity of color correction. In this article, we present an alternative color correction algorithm that simultaneously estimates the intensity variation and the 3 × 3 transformation matrix from a single image of a color chart. We show that the color correction problem, that is, finding the 3 × 3 correction matrix, can be solved using a simple alternating least-squares procedure. Experiments validate our approach. © 2014 Wiley Periodicals, Inc. Col Res Appl, 40, 232–242, 201
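    The following minimal sketch illustrates the alternating least-squares idea described above: alternately solving for a per-patch intensity factor and for the 3 × 3 correction matrix. The variable names, synthetic chart data, and fixed iteration count are assumptions for illustration, not the paper's implementation.

```python
# Minimal ALS sketch: estimate per-patch shading factors d_i and a 3x3 matrix M
# such that d_i * M @ rgb_i ~= xyz_i.  Data below are placeholders; in practice
# rgbs come from a single color-chart image and xyzs are the chart's known values.
import numpy as np

def als_color_correction(rgbs, xyzs, n_iter=50):
    """rgbs, xyzs : (n_patches, 3) arrays.  Returns (M, d)."""
    n = rgbs.shape[0]
    d = np.ones(n)                               # per-patch shading factors
    M = np.eye(3)
    for _ in range(n_iter):
        # Step 1: fix d, solve for M by ordinary least squares
        X = d[:, None] * rgbs                    # shading-corrected RGBs
        M = np.linalg.lstsq(X, xyzs, rcond=None)[0].T
        # Step 2: fix M, solve for each d_i in closed form
        pred = rgbs @ M.T                        # uncorrected predictions
        d = np.einsum('ij,ij->i', xyzs, pred) / np.einsum('ij,ij->i', pred, pred)
    return M, d                                  # note: (d, M) only defined up to a common scale

# Synthetic check: a known matrix plus a simulated intensity falloff across the chart
rng = np.random.default_rng(1)
true_M = np.array([[0.6, 0.3, 0.1], [0.2, 0.7, 0.1], [0.0, 0.1, 0.9]])
rgbs_true = rng.uniform(0.05, 1.0, size=(24, 3))
xyzs = rgbs_true @ true_M.T
k = rng.uniform(0.5, 1.5, size=24)               # intensity variation per patch
M_est, d_est = als_color_correction(k[:, None] * rgbs_true, xyzs)
```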

    Polynomial spline-approximation of Clarke's model

    We investigate polynomial spline approximation of stationary random processes on a uniform grid, applied to Clarke's model of time variations of path amplitudes in multipath fading channels with Doppler scattering. The integral mean square error (MSE) for optimal and interpolation splines is presented as a series of spectral moments. The optimal splines outperform the interpolation splines; however, as the sampling factor increases, the optimal and interpolation splines of even order tend to provide the same accuracy. To build such splines, the process to be approximated needs to be known for all time, which is impractical. Local splines, on the other hand, may be used where the process is known only over a finite interval. We first consider local splines with quasi-optimal spline coefficients. Then, we derive optimal spline coefficients and investigate the error for different sets of samples used for calculating the spline coefficients. In practice, approximation with a low processing delay is of interest; we therefore also investigate local spline extrapolation with zero processing delay. The results of our investigation show that local spline approximation is attractive for implementation, offering both low processing delay and small approximation error; the error can be very close to the minimum error provided by optimal splines. Thus, local splines can be effectively used for channel estimation in fast multipath fading channels.
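    As a toy illustration of spline approximation of a Clarke-model process (not the paper's analytical MSE treatment), the sketch below generates a standard sum-of-sinusoids realisation, samples it on a uniform grid, and measures the error of a cubic interpolation spline on a denser grid. All parameters are assumed for the example.

```python
# Toy example: cubic-spline interpolation of a Clarke/Jakes fading realisation
# sampled on a uniform grid.  Generator and parameters are textbook choices
# assumed here for illustration only.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(2)
f_d = 100.0                                  # maximum Doppler frequency (Hz)
t_fine = np.linspace(0.0, 0.1, 4001)         # dense "truth" grid (s)

# Sum-of-sinusoids realisation of the in-phase component of Clarke's model
n_paths = 64
alpha = rng.uniform(0, 2 * np.pi, n_paths)   # arrival angles
phi = rng.uniform(0, 2 * np.pi, n_paths)     # random phases
def clarke(t):
    return np.sqrt(2.0 / n_paths) * np.sum(
        np.cos(2 * np.pi * f_d * np.cos(alpha)[:, None] * t[None, :] + phi[:, None]),
        axis=0)

x_fine = clarke(t_fine)

# Sample at ~4 samples per Doppler period and interpolate with a cubic spline
t_samp = t_fine[::100]                       # sampling interval 2.5 ms
spline = CubicSpline(t_samp, clarke(t_samp))
mse = np.mean((spline(t_fine) - x_fine) ** 2) / np.mean(x_fine ** 2)
print(f"normalised interpolation MSE: {mse:.3e}")
```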

    A weak lensing analysis of the Abell 383 cluster

    In this paper we use deep CFHT and SUBARU uBVRIz archival images of the Abell 383 cluster (z=0.187) to estimate its mass by weak lensing. To this end, we first use simulated images to check the accuracy provided by our KSB pipeline. These simulations include both the STEP 1 and 2 simulations and more realistic simulations of the distortion of galaxy shapes by a cluster with a Navarro-Frenk-White (NFW) profile. From these simulations we estimate the effect of noise on the shear measurement and derive the correction terms. The R-band image is used to derive the mass by fitting the observed tangential shear profile with an NFW mass profile. Photometric redshifts are computed from the uBVRIz catalogs. Different methods for the foreground/background galaxy selection are implemented, namely selection by magnitude, color, and photometric redshifts, and the results are compared. In particular, we developed a semi-automatic algorithm to select the foreground galaxies in the color-color diagram, based on observed colors. Using color selection or photometric redshifts improves the correction for dilution by foreground galaxies: this leads to higher signals in the inner parts of the cluster. We obtain a cluster mass that is ~20% higher than previous estimates and is more consistent with the mass expected from X-ray data. The R-band luminosity function of the cluster is finally computed.
    Comment: 11 pages, 12 figures. Accepted for publication in Astronomy & Astrophysics
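    The sketch below illustrates the shear-profile fitting step in a heavily simplified form: a singular isothermal sphere, for which γ_t(θ) = θ_E/(2θ), stands in for the NFW profile used in the paper, and the binned profile values are placeholders rather than Abell 383 measurements.

```python
# Simplified shear-profile fit: a singular isothermal sphere (SIS) stands in for
# the NFW model, and the binned data are placeholders, not real measurements.
import numpy as np
from scipy.optimize import curve_fit

def sis_tangential_shear(theta_arcmin, theta_e_arcmin):
    """Tangential shear of a singular isothermal sphere: theta_E / (2 theta)."""
    return theta_e_arcmin / (2.0 * theta_arcmin)

# Placeholder binned profile: radius (arcmin), mean tangential shear, error
theta = np.array([1.0, 2.0, 3.0, 5.0, 8.0, 12.0])
gamma_t = np.array([0.120, 0.065, 0.041, 0.026, 0.016, 0.010])
sigma_g = np.array([0.020, 0.012, 0.009, 0.007, 0.006, 0.005])

popt, pcov = curve_fit(sis_tangential_shear, theta, gamma_t,
                       sigma=sigma_g, absolute_sigma=True, p0=[0.3])
theta_e, theta_e_err = popt[0], np.sqrt(pcov[0, 0])
print(f"theta_E = {theta_e:.2f} +/- {theta_e_err:.2f} arcmin")
```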