113 research outputs found

    An Empirical Model For Intrinsic Alignments: Insights From Cosmological Simulations

    We extend current models of the halo occupation distribution (HOD) to include a flexible, empirical framework for the forward modeling of the intrinsic alignment (IA) of galaxies. A primary goal of this work is to produce mock galaxy catalogs for the purpose of validating existing models and methods for the mitigation of IA in weak lensing measurements. This technique can also be used to produce new, simulation-based predictions for IA and galaxy clustering. Our model is probabilistically formulated, and rests upon the assumption that the orientations of galaxies exhibit a correlation with their host dark matter (sub)halo orientation or with their position within the halo. We examine the necessary components and phenomenology of such a model by considering the alignments between (sub)halos in a cosmological dark-matter-only simulation. We then validate this model for a realistic galaxy population in a set of simulations in the IllustrisTNG suite. We create an HOD mock with Illustris-like correlations using our method, constraining the associated IA model parameters, with the $\chi^2_{\rm dof}$ between our model's correlations and those of Illustris reaching values as low as 1.4 and 1.1 for the orientation--position and orientation--orientation correlation functions, respectively. By modeling the misalignment between galaxies and their host halo, we show that the 3-dimensional two-point position and orientation correlation functions of simulated (sub)halos and galaxies can be accurately reproduced from quasi-linear scales down to $0.1~h^{-1}{\rm Mpc}$. We also find evidence for environmental influence on IA within a halo. Our publicly available software provides a key component enabling efficient determination of Bayesian posteriors on IA model parameters using observational measurements of galaxy-orientation correlation functions in the highly nonlinear regime.
    Comment: 17 pages, 12 figures, 3 tables, for submission to The Open Journal of Astrophysics, code available at https://github.com/astropy/halotools
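
    To make the probabilistic alignment idea concrete, here is a minimal numpy sketch of drawing galaxy orientations partially aligned with their host halo's major axis. The single strength parameter mu and the toy angle distribution are illustrative stand-ins, not the Dimroth-Watson form or the halotools API used in the paper.

        import numpy as np

        def misaligned_orientation(halo_axis, mu, rng):
            """Draw a unit vector correlated with halo_axis.

            mu = 0 gives isotropic orientations; mu -> 1 gives perfect
            alignment. The angle distribution is a toy choice, not the
            Dimroth-Watson distribution used in the paper's framework.
            """
            v = rng.normal(size=3)                # random perpendicular direction
            v -= (v @ halo_axis) * halo_axis
            v /= np.linalg.norm(v)
            cos_theta = 1.0 - (1.0 - mu) * rng.uniform()  # misalignment angle
            sin_theta = np.sqrt(1.0 - cos_theta**2)
            return cos_theta * halo_axis + sin_theta * v

        rng = np.random.default_rng(1)
        axis = np.array([0.0, 0.0, 1.0])
        draws = np.array([misaligned_orientation(axis, 0.7, rng) for _ in range(10000)])
        print(np.mean(draws @ axis))  # mean alignment increases with mu

    Correlation functions such as orientation--position statistics can then be estimated from such mock orientations paired with (sub)halo positions.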

    PSFs of coadded images

    We provide a detailed exploration of the connection between the choice of coaddition scheme and the point-spread function (PSF) of the resulting coadded images. In particular, we investigate what properties of the coaddition algorithm lead to the final coadded image having a well-defined PSF. The key elements of this discussion are as follows:
    1. We provide an illustration of how linear coaddition schemes can produce a coadd that lacks a well-defined PSF even for relatively simple scenarios and choices of weight functions.
    2. We provide a more formal demonstration of the fact that a linear coadd only has a well-defined PSF in the case that either (a) each input image has the same PSF or (b) the coadd is produced with weights that are independent of the signal.
    3. We discuss some reasons that two plausible nonlinear coaddition algorithms (median and clipped-mean) fail to produce a consistent PSF profile for stars.
    4. We demonstrate that all nonlinear coaddition procedures fail to produce a well-defined PSF for extended objects.
    In the end, we conclude that, for any purpose where a well-defined PSF is desired, one should use a linear coaddition scheme with weights that do not correlate with the signal and are approximately uniform across typical objects of interest.
    Comment: 13 pages, 4 figures; pedagogical article for submission to the Open Journal of Astrophysics
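
    A minimal numpy illustration of the contrast drawn above: for a linear coadd with fixed, signal-independent weights, coadding epochs of an extended object is exactly equivalent to convolving the object with the coadd PSF, while a median coadd admits no such effective PSF. The PSF sizes, weights, and toy galaxy below are arbitrary demo choices.

        import numpy as np

        n = 64
        y, x = np.mgrid[-n//2:n//2, -n//2:n//2]

        def gaussian(sig):
            g = np.exp(-(x**2 + y**2) / (2 * sig**2))
            return g / g.sum()

        def convolve(a, b):
            # circular FFT convolution; adequate for these well-contained images
            return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(np.fft.ifftshift(b))))

        psfs = [gaussian(s) for s in (2.0, 3.0, 4.5)]  # three epochs, different seeing
        galaxy = gaussian(4.0)                          # toy extended object
        w = np.array([0.2, 0.5, 0.3])                   # fixed, signal-independent weights

        star_epochs = psfs                              # a star image is the PSF itself
        gal_epochs = [convolve(galaxy, p) for p in psfs]

        # Linear coadd: convolving with the coadd PSF reproduces the coadded galaxy.
        coadd_psf = sum(wi * p for wi, p in zip(w, psfs))
        lin_gal = sum(wi * g for wi, g in zip(w, gal_epochs))
        print(np.allclose(lin_gal, convolve(galaxy, coadd_psf)))  # True

        # Median coadd: the stacked star does NOT act as a PSF for the stacked galaxy.
        med_star = np.median(star_epochs, axis=0)
        med_star /= med_star.sum()
        med_gal = np.median(gal_epochs, axis=0)
        med_gal /= med_gal.sum()
        print(np.allclose(med_gal, convolve(galaxy, med_star)))   # False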

    Impact of Point Spread Function Higher Moments Error on Weak Gravitational Lensing II: A Comprehensive Study

    Weak gravitational lensing, or weak lensing, is one of the most powerful probes for dark matter and dark energy science, although it faces increasing challenges in controlling systematic uncertainties as the statistical errors become smaller. The Point Spread Function (PSF) needs to be precisely modeled to avoid systematic error on the weak lensing measurements. The weak lensing biases induced by errors in the PSF model second moments, i.e., its size and shape, are well studied. However, Zhang et al. (2021) showed that errors in the higher moments of the PSF may also be a significant source of systematics for upcoming weak lensing surveys. Therefore, the goal of this work is to comprehensively investigate the modeling quality of PSF moments from the 3rd to the 6th order, and estimate their impact on cosmological parameter inference. We propagate the PSFEx higher-moment modeling errors in the HSC survey dataset to the weak lensing shear-shear correlation functions and their cosmological analyses. We find that the overall multiplicative shear bias associated with errors in PSF higher moments can cause a $\sim 0.1\sigma$ shift on the cosmological parameters for LSST Y10. PSF higher-moment errors also cause additive biases in the weak lensing shear, which, if not accounted for in the cosmological parameter analysis, can induce cosmological parameter biases comparable to their $1\sigma$ uncertainties for LSST Y10. We compare the PSFEx model with PSFs In the Full FOV (Piff), and find similar performance in modeling the PSF higher moments. We conclude that PSF higher-moment errors of future PSF models should be reduced below those of current methods to avoid the need to explicitly model these effects in the weak lensing analysis.
    Comment: 24 pages, 17 figures, 3 tables; Submitted to MNRAS; Comments welcome
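
    As a concrete reference for what "higher moments" means here, the sketch below measures standardized image moments of order 3 through 6 from a pixelized PSF with plain numpy. It omits the Gaussian-weighted, adaptive-moment machinery used in the actual measurement, so treat it as a schematic definition rather than the paper's pipeline.

        import numpy as np

        def standardized_moments(img, max_order=6):
            """Standardized moments M_pq of an image for 3 <= p+q <= max_order.

            Bare-bones stand-in: no Gaussian weight, no adaptive iteration.
            """
            yy, xx = np.indices(img.shape, dtype=float)
            f = img / img.sum()
            xbar, ybar = (f * xx).sum(), (f * yy).sum()
            dx, dy = xx - xbar, yy - ybar
            sx = np.sqrt((f * dx**2).sum())
            sy = np.sqrt((f * dy**2).sum())
            u, v = dx / sx, dy / sy                   # standardized coordinates
            return {(p, q): (f * u**p * v**q).sum()
                    for p in range(max_order + 1)
                    for q in range(max_order + 1 - p) if p + q >= 3}

        # The per-moment modeling error is then the model-minus-truth difference:
        # biases = {k: m_model[k] - m_true[k] for k in m_true}
        # For a well-sampled round Gaussian PSF, M[(4, 0)] approaches 3.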

    Cosmological parameter constraints from galaxy–galaxy lensing and galaxy clustering with the SDSS DR7

    Recent studies have shown that the cross-correlation coefficient between galaxies and dark matter is very close to unity on scales outside a few virial radii of galaxy haloes, independent of the details of how galaxies populate dark matter haloes. This finding makes it possible to determine the dark matter clustering from measurements of galaxy–galaxy weak lensing and galaxy clustering. We present new cosmological parameter constraints based on large-scale measurements of spectroscopic galaxy samples from the Sloan Digital Sky Survey (SDSS) data release 7. We generalize the approach of Baldauf et al. to remove small-scale information (below 2 and 4 h^(−1) Mpc for lensing and clustering measurements, respectively), where the cross-correlation coefficient differs from unity. We derive constraints for three galaxy samples covering 7131 deg^2, containing 69 150, 62 150 and 35 088 galaxies with mean redshifts of 0.11, 0.28 and 0.40. We clearly detect scale-dependent galaxy bias for the more luminous galaxy samples, at a level consistent with theoretical expectations. When we vary both σ_8 and Ω_m (and marginalize over non-linear galaxy bias) in a flat Λ cold dark matter model, the best-constrained quantity is σ_8(Ω_m/0.25)^(0.57) = 0.80 ± 0.05 (1σ, stat. + sys.), where statistical and systematic errors (photometric redshift and shear calibration) have comparable contributions, and we have fixed n_s = 0.96 and h = 0.7. These strong constraints on the matter clustering suggest that this method is competitive with cosmic shear in current data, while having very complementary and in some ways less serious systematics. We therefore expect that this method will play a prominent role in future weak lensing surveys. When we combine these data with Wilkinson Microwave Anisotropy Probe 7-year (WMAP7) cosmic microwave background (CMB) data, constraints on σ_8, Ω_m, H_0, w_(de) and ∑m_ν become 30–80 per cent tighter than with CMB data alone, since our data break several parameter degeneracies.
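
    The reasoning in the opening sentences can be written compactly. Defining the galaxy-matter cross-correlation coefficient from the three two-point functions, r_gm ≈ 1 on large scales lets the unobservable matter clustering be inferred from the two measured statistics. A minimal LaTeX statement of this identity (the paper's actual estimators use projected statistics with the scale cuts quoted above) is:

        r_{\rm gm}(r) \equiv \frac{\xi_{\rm gm}(r)}{\sqrt{\xi_{\rm gg}(r)\,\xi_{\rm mm}(r)}}
        \quad\Longrightarrow\quad
        \xi_{\rm mm}(r) \simeq \frac{\xi_{\rm gm}^{2}(r)}{\xi_{\rm gg}(r)}
        \qquad \text{for } r_{\rm gm} \simeq 1 ,

    where ξ_gm is probed by galaxy–galaxy lensing and ξ_gg by galaxy clustering, so the galaxy bias cancels between the two measurements.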

    Weighing the Giants - I. Weak-lensing masses for 51 massive galaxy clusters: project overview, data analysis methods and cluster images

    This is the first in a series of papers in which we measure accurate weak-lensing masses for 51 of the most X-ray luminous galaxy clusters known at redshifts 0.15<z<0.7, in order to calibrate X-ray and other mass proxies for cosmological cluster experiments. The primary aim is to improve the absolute mass calibration of cluster observables, currently the dominant systematic uncertainty for cluster count experiments. Key elements of this work are the rigorous quantification of systematic uncertainties, high-quality data reduction and photometric calibration, and the "blind" nature of the analysis to avoid confirmation bias. Our target clusters are drawn from RASS X-ray catalogs, and provide a versatile calibration sample for many aspects of cluster cosmology. We have acquired wide-field, high-quality imaging using the Subaru and CFHT telescopes for all 51 clusters, in at least three bands per cluster. For a subset of 27 clusters, we have data in at least five bands, allowing accurate photo-z estimates of lensed galaxies. In this paper, we describe the cluster sample and observations, and detail the processing of the Suprime-Cam data to yield high-quality images suitable for robust weak-lensing shape measurements and precision photometry. For each cluster, we present wide-field color optical images and maps of the weak-lensing mass distribution, the optical light distribution, and the X-ray emission, providing insights into the large-scale structure in which the clusters are embedded. We measure the offsets between X-ray centroids and Brightest Cluster Galaxies (BCGs) in the clusters, finding these to be small in general, with a median of 20 kpc. For offsets <100 kpc, weak-lensing mass measurements centered on the BCGs agree well with values determined relative to the X-ray centroids; miscentering is therefore not a significant source of systematic uncertainty for our mass measurements. [abridged]
    Comment: 26 pages, 19 figures (Appendix C not included). Accepted after minor revision
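
    For scale, converting a measured angular BCG/X-ray-centroid offset into a proper projected distance is a one-liner with astropy; the cosmology and the 4-arcsec example below are illustrative choices, not values from the paper.

        import numpy as np
        from astropy.cosmology import FlatLambdaCDM

        cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # illustrative cosmology

        def offset_kpc(offset_arcsec, z):
            """Projected offset in proper kpc at cluster redshift z."""
            return (offset_arcsec / 60.0) * cosmo.kpc_proper_per_arcmin(z).value

        # toy example: a 4 arcsec offset for a cluster at z = 0.3
        print(offset_kpc(4.0, 0.3))  # ~18 kpc, comparable to the median quoted above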

    A Joint Roman Space Telescope and Rubin Observatory Synthetic Wide-Field Imaging Survey

    We present and validate 20 deg$^2$ of overlapping synthetic imaging surveys representing the full depth of the Nancy Grace Roman Space Telescope High-Latitude Imaging Survey (HLIS) and five years of observations of the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST). The two synthetic surveys are summarized, with reference to the existing 300 deg$^2$ of LSST simulated imaging produced as part of Dark Energy Science Collaboration (DESC) Data Challenge 2 (DC2). Both synthetic surveys observe the same simulated DESC DC2 universe. For the synthetic Roman survey, we simulate for the first time fully chromatic images along with the detailed physics of the Sensor Chip Assemblies derived from lab measurements using the flight detectors. The simulated imaging and resulting pixel-level measurements of photometric properties of objects span a wavelength range of $\sim$0.3 to 2.0 $\mu$m. We also describe updates to the Roman simulation pipeline, changes in how astrophysical objects are simulated relative to the original DC2 simulations, and the resulting simulated Roman data products. We use these simulations to explore the relative fraction of unrecognized blends in LSST images, finding that 20-30% of objects identified in LSST images with $i$-band magnitudes brighter than 25 can be identified as multiple objects in Roman images. These simulations provide a unique testing ground for the development and validation of joint pixel-level analysis techniques of ground- and space-based imaging data sets in the second half of the 2020s -- in particular the case of joint Roman--LSST analyses.
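
    A schematic version of the matching behind the quoted 20-30% blend fraction: count LSST detections that resolve into two or more Roman detections within a matching radius. The function name, inputs, and radius convention below are illustrative assumptions, not the paper's pipeline.

        import numpy as np
        from scipy.spatial import cKDTree

        def unrecognized_blend_fraction(lsst_xy, roman_xy, match_radius):
            """Fraction of matched LSST detections with >= 2 Roman counterparts.

            lsst_xy and roman_xy are (N, 2) positions in a common projected
            coordinate system; match_radius is in the same units (e.g. arcsec).
            """
            tree = cKDTree(roman_xy)
            counts = np.array([len(m) for m in
                               tree.query_ball_point(lsst_xy, match_radius)])
            matched = counts >= 1
            return (counts[matched] >= 2).mean()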

    CFHTLenS: a Gaussian likelihood is a sufficient approximation for a cosmological analysis of third-order cosmic shear statistics

    We study the correlations of the shear signal between triplets of sources in the Canada–France–Hawaii Telescope Lensing Survey (CFHTLenS) to probe cosmological parameters via the matter bispectrum. In contrast to previous studies, we adopt a non-Gaussian model of the data likelihood which is supported by our simulations of the survey. We find that for state-of-the-art surveys similar to CFHTLenS, a Gaussian likelihood analysis is a reasonable approximation, although small differences in the parameter constraints are already visible. For future surveys we expect that a Gaussian model will become inaccurate. Our algorithm for a refined non-Gaussian analysis and data compression is then of great utility, especially because it is not much more elaborate if simulated data are available. Applying this algorithm to the third-order correlations of shear alone in a blind analysis, we find good agreement with the standard cosmological model: Σ_8 = σ_8(Ω_m/0.27)^(0.64) = 0.79^(+0.08)_(−0.11) for a flat Λ cold dark matter cosmology with h = 0.7 ± 0.04 (68 per cent credible interval). Nevertheless, our models provide only moderately good fits, as indicated by χ^2/dof = 2.9, including a 20 per cent rms uncertainty in the predicted signal amplitude. The models cannot explain a signal drop on scales around 15 arcmin, which may be caused by systematics. It is unclear whether the discrepancy can be fully explained by residual point spread function systematics, of which we find evidence at least on scales of a few arcmin. Therefore we need a better understanding of higher-order correlations of cosmic shear and their systematics to confidently apply them as cosmological probes.
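
    To illustrate what moving beyond a Gaussian likelihood entails, the sketch below contrasts the standard Gaussian log-likelihood with the Sellentin-Heavens multivariate-t form that arises when the covariance is itself estimated from a finite set of simulations. This is one well-known non-Gaussian alternative, not the survey-specific likelihood model constructed in the paper; normalization constants are dropped.

        import numpy as np

        def gauss_loglike(d, mu, cov):
            """Gaussian log-likelihood (up to a constant) for data d, model mu."""
            r = d - mu
            return -0.5 * r @ np.linalg.solve(cov, r)

        def t_loglike(d, mu, cov_hat, n_sims):
            """Sellentin-Heavens multivariate-t log-likelihood (up to a constant)
            for a covariance cov_hat estimated from n_sims simulations."""
            r = d - mu
            chi2 = r @ np.linalg.solve(cov_hat, r)
            return -0.5 * n_sims * np.log1p(chi2 / (n_sims - 1))

    For large n_sims the t form tends to the Gaussian one; for modest simulation counts its heavier tails widen the inferred parameter contours.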

    The LSST Dark Energy Science Collaboration (DESC) Science Requirements Document

    The Large Synoptic Survey Telescope (LSST) Dark Energy Science Collaboration (DESC) will use five cosmological probes: galaxy clusters, large scale structure, supernovae, strong lensing, and weak lensing. This Science Requirements Document (SRD) quantifies the expected dark energy constraining power of these probes individually and together, with conservative assumptions about analysis methodology and follow-up observational resources based on our current understanding and the expected evolution within the field in the coming years. We then define requirements on analysis pipelines that will enable us to achieve our goal of carrying out a dark energy analysis consistent with the Dark Energy Task Force definition of a Stage IV dark energy experiment. This is achieved through a forecasting process that incorporates the flowdown to detailed requirements on multiple sources of systematic uncertainty. Future versions of this document will include evolution in our software capabilities and analysis plans along with updates to the LSST survey strategy.
    Comment: 32 pages + 60 pages of appendices. This is v1 of the DESC SRD, an internal collaboration document that is being made public and is not planned for submission to a journal. Data products for reproducing key plots are available at the LSST DESC Zenodo community, https://zenodo.org/communities/lsst-desc; see "Executive Summary and User Guide" for instructions on how to use and cite those products
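
    The forecasting that drives the requirement flowdown is, at its core, Fisher-matrix style. A minimal, generic sketch of that machinery (toy-level, not the DESC SRD forecasting pipeline; the DETF figure of merit is shown up to its conventional normalization factor) is:

        import numpy as np

        def fisher(dmu_dtheta, cov):
            """Fisher matrix for a Gaussian data vector: F = D^T C^-1 D,
            with D the (n_data, n_params) matrix of model derivatives."""
            return dmu_dtheta.T @ np.linalg.solve(cov, dmu_dtheta)

        def detf_fom(F, i=0, j=1):
            """DETF-style figure of merit: inverse root-determinant of the
            marginalized 2x2 covariance of parameters i and j (e.g. w0, wa)."""
            cov_par = np.linalg.inv(F)
            sub = cov_par[np.ix_([i, j], [i, j])]
            return 1.0 / np.sqrt(np.linalg.det(sub))

    Requirements on a given systematic can then be phrased as the largest bias or covariance inflation that keeps the forecast figure of merit above a target threshold.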