
    Large Synoptic Survey Telescope: Overview

    A large wide-field telescope and camera with optical throughput over 200 m^2 deg^2 -- a factor of 50 beyond what we currently have -- would enable the detection of faint moving or bursting optical objects: from Earth-threatening asteroids to energetic events at the edge of the optical universe. An optimized design for LSST is an 8.4 m telescope with a 3 degree field of view and an optical throughput of 260 m^2 deg^2. With its large throughput and dedicated all-sky monitoring mode, the LSST will reach 24th magnitude in a single 10 second exposure, opening unexplored regions of astronomical parameter space. The heart of the 2.3 Gpixel camera will be an array of imager modules with 10 micron pixels. Once each month LSST will survey up to 14,000 deg^2 of the sky with many ~10 second exposures. Over time LSST will survey 30,000 deg^2 deeply in multiple bandpasses, enabling innovative investigations ranging from galactic structure to cosmology. This represents a paradigm shift for optical astronomy: from "survey follow-up" to "survey direct science." The resulting real-time data products, together with the fifteen-petabyte time-tagged imaging database and photometric catalog, will provide a unique resource. A collaboration of ~80 engineers and scientists is gearing up to confront this exciting challenge.
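
    A quick plausibility check on the quoted throughput: étendue is simply collecting area times field-of-view solid angle. A minimal sketch in Python (the ~6.7 m effective aperture is our assumed value for the obscured 8.4 m primary, not a number from the abstract):

        import math

        def etendue(aperture_m, fov_diameter_deg):
            """Optical throughput A*Omega in m^2 deg^2: collecting area
            times field-of-view solid angle."""
            area_m2 = math.pi * (aperture_m / 2.0) ** 2
            fov_deg2 = math.pi * (fov_diameter_deg / 2.0) ** 2
            return area_m2 * fov_deg2

        # An assumed ~6.7 m effective clear aperture (accounting for the
        # central obscuration of the 8.4 m primary) and a 3 degree field:
        print(etendue(6.7, 3.0))   # ~250 m^2 deg^2, of order the quoted 260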

    Maximizing survey volume for large-area multi-epoch surveys with Voronoi tessellation

    The survey volume of a proper-motion-limited sample is typically much smaller than that of a magnitude-limited sample. This is because of the noisy astrometric measurements from detectors that are not dedicated to astrometric missions. In order to apply an empirical completeness correction, existing works limit the survey depth to the shallower parts of the sky, which hampers the full potential of a survey. The number of epochs of measurement is a discrete quantity that cannot be interpolated across the projected plane of observation, so the survey properties change in discrete steps across the sky. This work proposes a method to dissect the survey into small parts with Voronoi tessellation, using candidate objects as generating points, such that each part defines a ‘mini-survey’ with its own properties. Coupled with a maximum-volume density estimator, the new method is demonstrated to be unbiased and to recover ∼20 per cent more objects than the existing method in a mock catalogue of a white-dwarf-only solar neighbourhood with Pan-STARRS 1-like characteristics. Towards the end of this work, we demonstrate one way to increase the tessellation resolution with artificial generating points, which would be useful for the analysis of rare objects with small number counts.
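
    A minimal sketch of the tessellation step in Python, using SciPy's spherical Voronoi diagram with candidate positions as generating points (illustrative only; the paper's actual pipeline and conventions are not reproduced here):

        import numpy as np
        from scipy.spatial import SphericalVoronoi

        def mini_survey_areas(ra_deg, dec_deg):
            """Tessellate the sky with candidate objects as generating
            points; each Voronoi cell is one 'mini-survey'.
            Returns cell areas in deg^2."""
            ra, dec = np.radians(ra_deg), np.radians(dec_deg)
            xyz = np.column_stack((np.cos(dec) * np.cos(ra),
                                   np.cos(dec) * np.sin(ra),
                                   np.sin(dec)))
            sv = SphericalVoronoi(xyz, radius=1.0)
            return sv.calculate_areas() * (180.0 / np.pi) ** 2  # sr -> deg^2

        # Example with 500 uniform random sky positions: the cell areas
        # must add up to the full sky, ~41253 deg^2.
        rng = np.random.default_rng(1)
        ra = rng.uniform(0.0, 360.0, 500)
        dec = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, 500)))
        print(mini_survey_areas(ra, dec).sum())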

    Designing an Optimal LSST Deep Drilling Program for Cosmology with Type Ia Supernovae

    The Vera C. Rubin Observatory's Legacy Survey of Space and Time is forecast to collect a large sample of Type Ia supernovae (SNe Ia) that could be instrumental in unveiling the nature of Dark Energy. The feat, however, requires measuring the two components of the Hubble diagram - distance modulus and redshift - with a high degree of accuracy. Distance is estimated from SNe Ia parameters extracted from light curve fits, where the average quality of light curves is primarily driven by survey parameters such as the cadence and the number of visits per band. An optimal observing strategy is thus critical for measuring cosmological parameters with high accuracy. We present in this paper a three-stage analysis aiming at quantifying the impact of the Deep Drilling (DD) strategy parameters on three critical aspects of the survey: the redshift completeness (originating from the Malmquist cosmological bias), the number of well-measured SNe Ia, and the cosmological measurements. Analyzing the current LSST survey simulations, we demonstrate that the current DD survey plans are characterized by a low completeness (z ~ 0.55-0.65) and irregular, low cadences (a few days) that dramatically decrease the size of the well-measured SNe Ia sample. We then propose a modus operandi that provides the number of visits (per band) required to reach higher redshifts. The results of this approach are used to design a set of optimized DD surveys for SNe Ia cosmology. We show that the most accurate cosmological measurements are achieved with Deep Rolling surveys characterized by a high cadence (one day), a rolling strategy (each field observed for at least two seasons), and two sets of fields: ultra-deep (z ≳ 0.8) and deep (z ≳ 0.6) fields. We also demonstrate that a deterministic scheduler including a gap recovery mechanism is critical to achieve the high-quality DD survey required for SNe Ia cosmology.
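
    For context, the light-curve parameters mentioned above typically enter the distance modulus through the standard Tripp/SALT2 estimator; a sketch of the usual form (our addition, assumed rather than quoted from the paper):

        % Assumed standard form (Tripp relation with SALT2 parameters):
        % m_B^* = fitted peak B-band magnitude, x_1 = stretch, c = colour,
        % and (M_B, alpha, beta) are nuisance parameters fitted jointly
        % with cosmology.
        \mu = m_B^{*} - M_B + \alpha\,x_1 - \beta\,c,
        \qquad
        \mu(z) = 5\log_{10}\!\left[\frac{d_L(z)}{10\,\mathrm{pc}}\right]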

    The edges of galaxies in the Fornax Cluster: Fifty percent smaller and denser compared to the field

    Physically motivated measurements are crucial for understanding galaxy growth and the role of the environment in their evolution. In particular, the growth of galaxies as measured by their size or radial extent provides an empirical approach for addressing this issue. However, the established definitions of galaxy size used for nearly a century are ill-suited for these studies because of a previously ignored bias: the conventionally measured radii consistently miss the diffuse, outer extensions of stellar emission which harbour key signatures of galaxy growth, including star formation and gas accretion or removal. This issue is addressed by examining low surface brightness truncations, or galaxy "edges", as a physically motivated tracer of size based on star formation thresholds. Our total sample consists of ∼900 galaxies with stellar masses in the range 10^5 M_⊙ < M_⋆ < 10^11 M_⊙. This sample of nearby cluster, group satellite and nearly isolated field galaxies was compiled using multi-band imaging from the Fornax Deep Survey and the deep IAC Stripe 82 and Dark Energy Camera Legacy Surveys. Across the full mass range studied, we find that compared to the field, the edges of galaxies in the Fornax Cluster are located at 50% smaller radii and the average stellar surface density at the edges is two times higher. These results are consistent with the rapid removal of loosely bound neutral hydrogen (HI) in hot, crowded environments, which truncates galaxies outside-in earlier, preventing the formation of more extended sizes and lower density edges. In fact, we find that galaxies with lower HI fractions have edges with higher stellar surface density. Our results highlight the importance of deep imaging surveys to study the low surface brightness imprints of the large scale structure and environment on galaxy evolution.
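
    A minimal sketch of how such an edge radius can be read off a stellar surface-density profile; the 1 M_⊙ pc^-2 threshold is an assumed illustrative value for the star formation threshold, not the paper's calibration:

        import numpy as np

        def edge_radius(r_kpc, sigma_star, threshold=1.0):
            """Outermost radius [kpc] where the stellar surface-density
            profile Sigma_*(r) [M_sun/pc^2] drops below `threshold`.
            Assumes strictly positive profile values; interpolates the
            crossing in log Sigma."""
            r = np.asarray(r_kpc, dtype=float)
            s = np.asarray(sigma_star, dtype=float)
            above = s >= threshold
            if not above.any():
                return np.nan                  # everywhere below threshold
            i = np.where(above)[0][-1]         # last point above threshold
            if i == r.size - 1:
                return r[-1]                   # never drops below: no edge seen
            f = (np.log10(threshold) - np.log10(s[i])) / \
                (np.log10(s[i + 1]) - np.log10(s[i]))
            return r[i] + f * (r[i + 1] - r[i])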

    Fat cosmic ray tracks in charge-coupled devices

    Cosmic rays are energetic particles, largely secondaries of air showers produced in the upper atmosphere, which often leave bright spots and trails in images from telescope CCDs. We investigate so-called "fat" cosmic ray tracks seen in images from the Vera C. Rubin Observatory and the Subaru Telescope. These tracks are much wider and brighter than typical cosmic ray tracks, and are therefore more capable of obscuring data in science images. By understanding the origins of these tracks, we can better ensure that they do not interfere with on-sky data. We compare the properties of these tracks to simulated and theoretical models in order to identify both the particles causing them and the reason for their excess spread. We propose that these tracks originate from cosmic ray protons, which deposit much greater charge in the CCDs than typical cosmic rays due to their lower velocities. The generated charges then repel each other while drifting through the detector, resulting in a track much wider than typical.
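
    The proposed broadening mechanism admits a simple estimate: for a line charge of density λ drifting in silicon, the mutual radial repulsion E(r) = λ/(2πεr) integrates to r(t)^2 = r0^2 + μλt/(πε). A hedged sketch with assumed, order-of-magnitude inputs (not the paper's simulation):

        import math

        EPS_SI = 11.7 * 8.854e-12  # permittivity of silicon [F/m]
        MU_E = 0.14                # electron mobility [m^2/(V s)], assumed
        E_PAIR = 3.65              # eV per electron-hole pair in silicon

        def track_radius_um(dedx_eV_per_um, drift_ns, r0_um=1.0):
            """Track radius after space-charge expansion,
            r(t)^2 = r0^2 + mu*lambda*t/(pi*eps)."""
            pairs_per_m = dedx_eV_per_um / E_PAIR / 1e-6
            lam = pairs_per_m * 1.602e-19              # line charge [C/m]
            r2 = (r0_um * 1e-6) ** 2 \
                + MU_E * lam * (drift_ns * 1e-9) / (math.pi * EPS_SI)
            return math.sqrt(r2) * 1e6

        # A minimum-ionising (muon-like) track vs a slow proton depositing
        # ~10x the charge, for an assumed ~2 ns drift through the device:
        print(track_radius_um(390.0, 2.0))    # ~4 microns
        print(track_radius_um(3900.0, 2.0))   # ~12 microns: a "fat" track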

    Large-k Limit of Multi-Point Propagators in the RG Formalism

    Renormalized versions of cosmological perturbation theory have been very successful in recent years in describing the evolution of structure formation in the weakly non-linear regime. The concept of multi-point propagators has been introduced as a tool to quantify the relation between the initial matter distribution and the final one, and to push the validity of these approaches to smaller scales. We generalize the n-point propagators considered until now to include a new class of multi-point propagators that are relevant in the framework of the renormalization group formalism. The large-k results obtained for this general class of multi-point propagators match the results obtained earlier, for both Gaussian and non-Gaussian initial conditions. We discuss how the large-k results can be used to improve the accuracy of calculations of the power spectrum and bispectrum in the presence of initial non-Gaussianities.
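
    For orientation, the earlier large-k results that this general class is matched against take the well-known Gaussian-decay form of renormalized perturbation theory; a sketch of that standard form (our addition; prefactors and Fourier conventions vary between papers):

        % eta = ln D(a), with D the linear growth factor, and P_0 the
        % initial power spectrum; Gamma_tree is the tree-level vertex.
        \Gamma^{(n)}(\mathbf{k}_1,\dots,\mathbf{k}_n;\eta)
          \;\xrightarrow[k\to\infty]{}\;
          \Gamma^{(n)}_{\mathrm{tree}}\,
          \exp\!\left[-\tfrac{1}{2}k^{2}\sigma_v^{2}\left(e^{\eta}-1\right)^{2}\right],
        \qquad
        k \equiv \lvert\mathbf{k}_1+\dots+\mathbf{k}_n\rvert,
        \qquad
        \sigma_v^{2} \equiv \frac{1}{3}\int \mathrm{d}^{3}q\,\frac{P_0(q)}{q^{2}}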

    DESCQA: An Automated Validation Framework for Synthetic Sky Catalogs

    The use of high-quality simulated sky catalogs is essential for the success of cosmological surveys. The catalogs have diverse applications, such as investigating signatures of fundamental physics in cosmological observables, understanding the effect of systematic uncertainties on measured signals and testing mitigation strategies for reducing these uncertainties, aiding analysis pipeline development and testing, and optimizing survey strategy. The list of applications is growing with improvements in the quality of the catalogs and the details that they can provide. Given the importance of simulated catalogs, it is critical to provide rigorous validation protocols that enable both catalog providers and users to assess the quality of the catalogs in a straightforward and comprehensive way. For this purpose, we have developed the DESCQA framework for the Large Synoptic Survey Telescope Dark Energy Science Collaboration as well as for the broader community. The goal of DESCQA is to enable the inspection, validation, and comparison of an inhomogeneous set of synthetic catalogs via the provision of a common interface within an automated framework. In this paper, we present the design concept and first implementation of DESCQA. In order to establish and demonstrate its full functionality we use a set of interim catalogs and validation tests. We highlight several important aspects, both technical and scientific, that require thoughtful consideration when designing a validation framework, including validation metrics and how these metrics impose requirements on the synthetic sky catalogs.
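
    A hypothetical sketch of what such a common interface can look like: every test exposes one entry point that an automated runner applies to each catalog. All names below are assumed for illustration and are not the actual DESCQA API:

        import numpy as np

        class StellarMassRangeTest:
            """One test in a DESCQA-like suite; names are hypothetical."""
            def __init__(self, lo=1e5, hi=1e13, min_pass_fraction=0.99):
                self.lo, self.hi = lo, hi
                self.min_pass_fraction = min_pass_fraction

            def run_on_single_catalog(self, catalog):
                """catalog is assumed to expose get_quantities(list) -> dict."""
                mstar = catalog.get_quantities(['stellar_mass'])['stellar_mass']
                frac = np.mean((mstar > self.lo) & (mstar < self.hi))
                return frac >= self.min_pass_fraction, frac  # (passed, score)

        def run_suite(tests, catalogs):
            """Apply every test to every catalog and tabulate the outcomes."""
            return {(type(t).__name__, name): t.run_on_single_catalog(cat)
                    for t in tests for name, cat in catalogs.items()}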

    The High Redshift Integrated Sachs-Wolfe Effect

    In this paper we rely on the quasar (QSO) catalog of the Sloan Digital Sky Survey Data Release Six (SDSS DR6), of about one million photometrically selected QSOs, to compute the Integrated Sachs-Wolfe (ISW) effect at high redshift, aiming at constraining the behaviour of the expansion rate and thus of dark energy at those epochs. This unique sample significantly extends previous catalogs to higher redshifts while retaining high efficiency in the selection algorithm. We compute the auto-correlation function (ACF) of the QSO number density, from which we extract the bias and the stellar contamination. We then calculate the cross-correlation function (CCF) between QSO number density and Cosmic Microwave Background (CMB) temperature fluctuations in different subsamples: at high (z > 1.5) and low (z < 1.5) redshift, and for two different QSO selections, in a conservative and in a more speculative analysis. We find overall evidence for a cross-correlation different from zero at the 2.7σ level, while this evidence drops to 1.5σ at z > 1.5. We focus on the capability of the ISW to constrain the behaviour of the dark energy component at high redshift, both in ΛCDM and in Early Dark Energy cosmologies, where the dark energy is substantially unconstrained by observations. At present, the inclusion of the ISW data results in only a modest improvement over the constraints obtained from other cosmological datasets. We study the capabilities of a future high-redshift QSO survey and find that the ISW signal can improve the constraints on the most important cosmological parameters derived from Planck CMB data, including the high-redshift dark energy abundance, by a factor of ~1.5.
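
    A minimal sketch of the CCF estimation via the cross power spectrum, using healpy (illustrative; it omits the masking, binning and stellar-contamination corrections such an analysis requires):

        import numpy as np
        import healpy as hp

        def isw_ccf(qso_counts, cmb_temp, theta_deg):
            """Angular CCF w(theta) between QSO overdensity and CMB
            temperature, via the cross power spectrum C_l and a Legendre
            series: w = sum_l (2l+1)/(4 pi) C_l P_l(cos theta)."""
            delta = qso_counts / qso_counts.mean() - 1.0  # overdensity map
            cl = hp.anafast(delta, cmb_temp)              # cross C_l
            ell = np.arange(cl.size)
            coeff = (2.0 * ell + 1.0) / (4.0 * np.pi) * cl
            return np.polynomial.legendre.legval(
                np.cos(np.radians(theta_deg)), coeff)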

    Galaxy blending effects in deep imaging cosmic shear probes of cosmology

    Upcoming deep imaging surveys such as the Vera C. Rubin Observatory Legacy Survey of Space and Time will be confronted with challenges that come with increased depth. One of the leading systematic errors in deep surveys is the blending of objects due to the higher surface density in more crowded images; a considerable fraction of the galaxies which we hope to use for cosmology analyses will overlap each other on the observed sky. In order to investigate these challenges, we emulate blending in a mock catalogue consisting of galaxies at a depth equivalent to 1.3 years of the full 10-year Rubin Observatory survey, including effects due to weak lensing, ground-based seeing, and the uncertainties introduced by extracting catalogues from imaging data. The emulated catalogue indicates that approximately 12% of the observed galaxies are "unrecognized" blends that contain two or more objects but are detected as one. Using the positions and shears of half a billion distant galaxies, we compute shear-shear correlation functions after selecting tomographic samples in terms of both spectroscopic and photometric redshift bins. We examine the sensitivity of the cosmological parameter estimation to unrecognized blending employing both jackknife and analytical Gaussian covariance estimators. A ~0.02 decrease in the derived structure growth parameter S_8 = σ_8 (Ω_m/0.3)^0.5 is seen due to unrecognized blending in both tomographies, with a slight additional bias for the photo-z-based tomography. This bias is about twice the statistical error in measuring S_8.
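
    A minimal sketch of the shear-shear correlation measurement using TreeCorr, a common community tool (assumed here; the paper's exact tooling is not stated in the abstract). Inputs ra, dec [deg] and shear components g1, g2 are placeholders:

        import treecorr

        def shear_shear(ra, dec, g1, g2):
            """xi_+ and xi_- for one tomographic bin (auto-correlation)."""
            cat = treecorr.Catalog(ra=ra, dec=dec, g1=g1, g2=g2,
                                   ra_units='deg', dec_units='deg')
            gg = treecorr.GGCorrelation(min_sep=1.0, max_sep=200.0,
                                        nbins=20, sep_units='arcmin')
            gg.process(cat)
            return gg.meanr, gg.xip, gg.xim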