
    A review of elliptical and disc galaxy structure, and modern scaling laws

    A century ago, in 1911 and 1913, Plummer and then Reynolds introduced their models to describe the radial distribution of stars in `nebulae'. This article reviews the progress since then, providing both an historical perspective and a contemporary review of the stellar structure of bulges, discs and elliptical galaxies. The quantification of galaxy nuclei, such as central mass deficits and excess nuclear light, plus the structure of dark matter halos and cD galaxy envelopes, are discussed. Issues pertaining to spiral galaxies, including dust, bulge-to-disc ratios, bulgeless galaxies, bars and the identification of pseudobulges, are also reviewed. An array of modern scaling relations involving sizes, luminosities, surface brightnesses and stellar concentrations is presented, many of which are shown to be curved. These 'redshift zero' relations not only quantify the behavior and nature of galaxies in the Universe today, but are the modern benchmark for evolutionary studies of galaxies, whether based on observations, N-body simulations or semi-analytical modelling. For example, it is shown that some of the recently discovered compact elliptical galaxies at 1.5 < z < 2.5 may be the bulges of modern disc galaxies. Comment: Condensed version (due to Contract) of an invited review article to appear in "Planets, Stars and Stellar Systems" (www.springer.com/astronomy/book/978-90-481-8818-5). 500+ references incl. many somewhat forgotten, pioneer papers. Original submission to Springer: 07-June-201

    An update of the Worldwide Integrated Assessment (WIA) on systemic insecticides. Part 2: impacts on organisms and ecosystems

    New information on the lethal and sublethal effects of neonicotinoids and fipronil on organisms is presented in this review, complementing the previous WIA in 2015. The high toxicity of these systemic insecticides to invertebrates has been confirmed and expanded to include more species and compounds. Most of the recent research has focused on bees and on the sublethal and ecological impacts these insecticides have on pollinators. Toxic effects on other invertebrate taxa are also covered, including predatory and parasitoid natural enemies and aquatic arthropods, while little new information has been gathered on soil organisms. The impact on marine and coastal ecosystems is still largely uncharted. The chronic lethality of neonicotinoids to insects and crustaceans, and the strengthened evidence that these chemicals also impair the immune system and reproduction, highlight the dangers of this particular class of insecticides. Continued large-scale, mostly prophylactic, use of these persistent pesticides has the potential to greatly decrease populations of arthropods in both terrestrial and aquatic environments. Sublethal effects on fish, reptiles, frogs, birds and mammals are also reported, reflecting a better understanding of the mechanisms of toxicity of these insecticides in vertebrates and of their deleterious impacts on the growth, reproduction and neurobehaviour of most of the species tested. This review concludes with a summary of impacts on ecosystem services and functioning, particularly on pollination, soil biota and aquatic invertebrate communities, thus reinforcing the previous WIA conclusions (van der Sluijs et al. 2015)

    Euclid preparation: VI. Verifying the performance of cosmic shear experiments

    Our aim is to quantify the impact of systematic effects on the inference of cosmological parameters from cosmic shear. We present an end-to-end approach that introduces sources of bias in a modelled weak lensing survey on a galaxy-by-galaxy level. Residual biases are propagated through a pipeline from galaxy properties (one end) through to cosmic shear power spectra and cosmological parameter estimates (the other end), to quantify how imperfect knowledge of the pipeline changes the maximum likelihood values of dark energy parameters. We quantify the impact of an imperfect correction for charge transfer inefficiency (CTI) and modelling uncertainties of the point spread function (PSF) for Euclid, and find that the biases introduced can be corrected to acceptable levels

    Euclid: The reduced shear approximation and magnification bias for Stage IV cosmic shear experiments

    Context: Stage IV weak lensing experiments will offer more than an order of magnitude leap in precision. We must therefore ensure that our analyses remain accurate in this new era. Accordingly, previously ignored systematic effects must be addressed. / Aims: In this work, we evaluate the impact of the reduced shear approximation and magnification bias on information obtained from the angular power spectrum. To first order, the statistics of reduced shear, a combination of shear and convergence, are taken to be equal to those of shear. However, this approximation can induce a bias in the cosmological parameters that can no longer be neglected. A separate bias arises from the statistics of shear being altered by the preferential selection of galaxies and the dilution of their surface densities in high-magnification regions. / Methods: The corrections for these systematic effects take similar forms, allowing them to be treated together. We calculated the impact of neglecting these effects on the cosmological parameters that would be determined from Euclid, using cosmic shear tomography. To do so, we employed the Fisher matrix formalism, and included the impact of the super-sample covariance. We also demonstrate how the reduced shear correction can be calculated using a lognormal field forward modelling approach. / Results: These effects cause significant biases in Ωm, σ8, ns, ΩDE, w0, and wa of −0.53σ, 0.43σ, −0.34σ, 1.36σ, −0.68σ, and 1.21σ, respectively. We then show that these lensing biases interact with another systematic effect: the intrinsic alignment of galaxies. Accordingly, we have developed the formalism for an intrinsic alignment-enhanced lensing bias correction. Applying this to Euclid, we find that the additional terms introduced by this correction are sub-dominant
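    In the Fisher bias formalism used here, the shift in each parameter induced by a neglected systematic follows from the Fisher matrix F and a bias vector B as Δθ = F⁻¹B, and is then quoted in units of the marginalised 1σ error. A minimal numerical sketch, with an invented two-parameter Fisher matrix and bias vector (not actual Euclid values):

```python
import numpy as np

# Hypothetical two-parameter example; F and B are illustrative assumptions,
# not Euclid numbers.
F = np.array([[4.0e4, 1.0e3],
              [1.0e3, 2.0e2]])   # Fisher matrix
B = np.array([5.0, 0.8])         # bias vector from a neglected systematic

F_inv = np.linalg.inv(F)
delta_theta = F_inv @ B              # parameter shifts, Delta-theta = F^-1 B
sigma = np.sqrt(np.diag(F_inv))      # marginalised 1-sigma errors
bias_in_sigma = delta_theta / sigma  # biases quoted in units of sigma
```

    Quoting biases in units of σ, as the abstract does, makes them directly comparable across parameters with very different absolute scales.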

    Euclid: The importance of galaxy clustering and weak lensing cross-correlations within the photometric Euclid survey

    Context. The data from the Euclid mission will enable the measurement of the angular positions and weak lensing shapes of over a billion galaxies, with their photometric redshifts obtained together with ground-based observations. This large dataset, with well-controlled systematic effects, will allow for cosmological analyses using the angular clustering of galaxies (GCph) and cosmic shear (WL). For Euclid, these two cosmological probes will not be independent because they will probe the same volume of the Universe. The cross-correlation (XC) between these probes can tighten constraints, and it is therefore important to quantify its impact for Euclid. Aims. In this study, we therefore extend the recently published Euclid forecasts by carefully quantifying the impact of XC not only on the final parameter constraints for different cosmological models, but also on the nuisance parameters. In particular, we aim to decipher the amount of additional information that XC can provide for parameters encoding systematic effects, such as galaxy bias, intrinsic alignments (IAs), and knowledge of the redshift distributions. Methods. We follow the Fisher matrix formalism and make use of previously validated codes. We also investigate a different galaxy bias model, which was obtained from the Flagship simulation, and additional photometric-redshift uncertainties; we also elucidate the impact of including the XC terms on constraining the latter. Results. Starting with a baseline model, we show that the XC terms reduce the uncertainties on galaxy bias by ∼17% and the uncertainties on IA by a factor of about four. The XC terms also help in constraining the γ parameter for minimal modified gravity models. Concerning galaxy bias, we observe that the role of the XC terms on the final parameter constraints is qualitatively the same irrespective of the specific galaxy-bias model used.
For IA, we show that the XC terms can help in distinguishing between different models, and that if IA terms are neglected then this can lead to significant biases on the cosmological parameters. Finally, we show that the XC terms can lead to a better determination of the mean of the photometric galaxy distributions. Conclusions. We find that the XC between GCph and WL within the Euclid survey is necessary to extract the full information content from the data in future analyses. These terms help in better constraining the cosmological model, and also lead to a better understanding of the systematic effects that contaminate these probes. Furthermore, we find that XC significantly helps in constraining the mean of the photometric-redshift distributions, but, at the same time, it requires more precise knowledge of this mean with respect to single probes in order not to degrade the final “figure of merit”
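    The qualitative mechanism, extra information on a nuisance parameter tightening a degenerate cosmological parameter, can be illustrated with a toy Fisher calculation. The numbers below are invented, and treating the XC contribution as a simple additive Fisher block is a deliberate simplification of the full joint-data-vector analysis:

```python
import numpy as np

# Toy two-parameter Fisher: theta = (cosmological amplitude, galaxy bias b).
# All numbers are illustrative assumptions, not Euclid values.
F_single = np.array([[50.0, 40.0],
                     [40.0, 40.0]])   # single-probe Fisher (strongly degenerate)
F_xc = np.array([[5.0, 0.0],
                 [0.0, 30.0]])        # extra information carried by the XC terms

def marginalised_sigma(F, i=0):
    """Marginalised 1-sigma error on parameter i from a Fisher matrix."""
    return np.sqrt(np.linalg.inv(F)[i, i])

s_without = marginalised_sigma(F_single)
s_with = marginalised_sigma(F_single + F_xc)
# Better-constrained bias breaks the degeneracy and shrinks the
# marginalised error on the cosmological parameter.
```

    The design point is that most of the toy XC information goes into the nuisance parameter, yet the marginalised cosmological error still improves, which is the behaviour the abstract reports for galaxy bias and IA.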

    Euclid preparation: XXXI. The effect of the variations in photometric passbands on photometric-redshift accuracy

    The technique of photometric redshifts has become essential for the exploitation of multi-band extragalactic surveys. While the requirements on photometric redshifts for the study of galaxy evolution mostly pertain to the precision and to the fraction of outliers, the most stringent requirement in their use in cosmology is on the accuracy, with a level of bias at the sub-percent level for the Euclid cosmology mission. A separate, and challenging, calibration process is needed to control the bias at this level of accuracy. The bias in photometric redshifts has several distinct origins that may not always be easily overcome. We identify here one source of bias linked to the spatial or time variability of the passbands used to determine the photometric colours of galaxies. We first quantified the effect as observed on several well-known photometric cameras, and found in particular that, due to the properties of optical filters, the redshifts of off-axis sources are usually overestimated. We show using simple simulations that the detailed and complex changes in the shape can be mostly ignored and that it is sufficient to know the mean wavelength of the passbands of each photometric observation to correct almost exactly for this bias; the key point is that this mean wavelength is independent of the spectral energy distribution of the source. We use this property to propose a correction that can be computationally efficiently implemented in some photometric-redshift algorithms, in particular template-fitting. We verified that our algorithm, implemented in the new photometric-redshift code Phosphoros, can effectively reduce the bias in photometric redshifts on real data using the CFHTLS T007 survey, with an average measured bias Δz over the redshift range 0.4 ≤ z ≤ 0.7 decreasing by about 0.02, specifically from Δz ≈ 0.04 to Δz ≈ 0.02 around z = 0.5. Our algorithm is also able to produce corrected photometry for other applications
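    The key quantity in such a correction is the mean wavelength of each realised passband, λ_mean = ∫λT(λ)dλ / ∫T(λ)dλ, which shifts when the filter response varies across the focal plane but does not depend on the source SED. A sketch of the computation, using an invented Gaussian transmission curve in place of a real filter:

```python
import numpy as np

# Illustrative transmission curves (assumptions, not a real camera's filters):
# an on-axis passband and a slightly blueshifted off-axis version of it.
lam = np.linspace(4000.0, 6000.0, 2001)              # wavelength grid [Angstrom]
T_on = np.exp(-0.5 * ((lam - 4900.0) / 150.0) ** 2)  # on-axis transmission
T_off = np.exp(-0.5 * ((lam - 4870.0) / 150.0) ** 2) # off-axis transmission

def mean_wavelength(lam, T):
    """Transmission-weighted mean wavelength on a uniform grid."""
    return np.sum(lam * T) / np.sum(T)

shift = mean_wavelength(lam, T_on) - mean_wavelength(lam, T_off)
# This single number per observation is what a template-fitting code needs
# to correct the photometry, independently of the galaxy's SED.
```

    Because λ_mean is SED-independent, the correction can be precomputed per exposure and focal-plane position, which is what makes it cheap enough for template-fitting pipelines.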

    Euclid preparation: VII. Forecast validation for Euclid cosmological probes

    Aims: The Euclid space telescope will measure the shapes and redshifts of galaxies to reconstruct the expansion history of the Universe and the growth of cosmic structures. The estimation of the expected performance of the experiment, in terms of predicted constraints on cosmological parameters, has so far relied on various individual methodologies and numerical implementations, which were developed for different observational probes and for the combination thereof. In this paper we present validated forecasts, which combine both theoretical and observational ingredients for different cosmological probes. This work is presented to provide the community with reliable numerical codes and methods for Euclid cosmological forecasts. / Methods: We describe in detail the methods adopted for Fisher matrix forecasts, which were applied to galaxy clustering, weak lensing, and the combination thereof. We estimated the required accuracy for Euclid forecasts and outlined a methodology for their development. We then compared and improved different numerical implementations, reaching uncertainties on the errors of cosmological parameters that are less than the required precision in all cases. Furthermore, we provide details on the validated implementations, some of which are made publicly available, in different programming languages, together with a reference training-set of input and output matrices for a set of specific models. These can be used by the reader to validate their own implementations if required. / Results: We present new cosmological forecasts for Euclid. We find that results depend on the specific cosmological model and remaining freedom in each setting, for example flat or non-flat spatial cosmologies, or different cuts at non-linear scales. The numerical implementations are now reliable for these settings. We present the results for an optimistic and a pessimistic choice for these types of settings.
We demonstrate that the impact of cross-correlations is particularly relevant for models beyond a cosmological constant and may allow us to increase the dark energy figure of merit by at least a factor of three
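    The dark energy figure of merit referred to here is conventionally defined from the marginalised (w0, wa) covariance block S as FoM = 1/√det(S). A sketch with an invented three-parameter Fisher matrix (not Euclid numbers):

```python
import numpy as np

# Toy Fisher matrix for parameters (Omega_m, w0, wa); the values are
# illustrative assumptions only.
F = np.array([[400.0, 60.0, 20.0],
              [ 60.0, 25.0,  5.0],
              [ 20.0,  5.0,  4.0]])

cov = np.linalg.inv(F)   # marginalise by inverting the full Fisher matrix
S = cov[1:3, 1:3]        # (w0, wa) block after marginalising over Omega_m
fom = 1.0 / np.sqrt(np.linalg.det(S))
# A factor-of-three FoM gain corresponds to a three-times-smaller area of
# the marginalised (w0, wa) confidence ellipse.
```

    Note that marginalisation means inverting the full Fisher matrix first and then extracting the (w0, wa) block, not inverting the (w0, wa) sub-matrix of F, which would instead fix the other parameters.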

    Euclid: Validation of the MontePython forecasting tools

    Context. The Euclid mission of the European Space Agency will perform a survey of weak lensing cosmic shear and galaxy clustering in order to constrain cosmological models and fundamental physics. Aims. We expand and adjust the mock Euclid likelihoods of the MontePython software in order to match the exact recipes used in previous Euclid Fisher matrix forecasts for several probes: weak lensing cosmic shear, photometric galaxy clustering, the cross-correlation between the latter observables, and spectroscopic galaxy clustering. We also establish which precision settings are required when running the Einstein-Boltzmann solvers CLASS and CAMB in the context of Euclid. Methods. For the minimal cosmological model, extended to include dynamical dark energy, we perform Fisher matrix forecasts based directly on a numerical evaluation of second derivatives of the likelihood with respect to model parameters. We compare our results with those of previously validated Fisher codes using an independent method based on first derivatives of the Euclid observables. Results. We show that such MontePython forecasts agree very well with previous Fisher forecasts published by the Euclid Collaboration, and also with new forecasts produced by the CosmicFish code, now interfaced directly with the two Einstein-Boltzmann solvers CAMB and CLASS. Moreover, to establish the validity of the Gaussian approximation, we show that the Fisher matrix marginal error contours coincide with the credible regions obtained when running Monte Carlo Markov chains with MontePython while using the exact same mock likelihoods. Conclusions. The new Euclid forecast pipelines presented here are ready for use with additional cosmological parameters, in order to explore extended cosmological models
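    The method described, a Fisher matrix F_ij = −∂²lnL/∂θi∂θj evaluated numerically at the fiducial point, can be sketched with central finite differences on a Gaussian toy likelihood for which the exact answer is known (the Fisher matrix equals the inverse parameter covariance). The toy likelihood and step size below are assumptions for illustration:

```python
import numpy as np

# Gaussian toy log-likelihood in theta = (Omega_m, sigma_8)-like parameters;
# C is an invented parameter covariance, so the exact Fisher matrix is C^-1.
C = np.array([[0.04, 0.01],
              [0.01, 0.09]])
C_inv = np.linalg.inv(C)
theta_fid = np.array([0.3, 0.8])

def loglike(theta):
    d = theta - theta_fid
    return -0.5 * d @ C_inv @ d

def fisher_from_loglike(loglike, theta0, h=1e-4):
    """F_ij = -d^2 lnL / dtheta_i dtheta_j via central finite differences."""
    n = len(theta0)
    F = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            tpp = theta0.copy(); tpp[i] += h; tpp[j] += h
            tpm = theta0.copy(); tpm[i] += h; tpm[j] -= h
            tmp = theta0.copy(); tmp[i] -= h; tmp[j] += h
            tmm = theta0.copy(); tmm[i] -= h; tmm[j] -= h
            F[i, j] = -(loglike(tpp) - loglike(tpm)
                        - loglike(tmp) + loglike(tmm)) / (4.0 * h * h)
    return F

F = fisher_from_loglike(loglike, theta_fid)
# For this quadratic log-likelihood, F recovers C_inv.
```

    For a quadratic log-likelihood the central-difference stencil is exact up to rounding, which makes this a useful self-test before applying the same machinery to an expensive mock likelihood.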

    Euclid preparation XLII. A unified catalogue-level reanalysis of weak lensing by galaxy clusters in five imaging surveys

    Precise and accurate mass calibration is required to exploit galaxy clusters as astrophysical and cosmological probes in the Euclid era. Systematic errors in lensing signals by galaxy clusters can be empirically estimated by comparing different surveys with independent and uncorrelated systematics. To assess the robustness of the lensing results to systematic errors, we carried out end-to-end tests across different data sets. We performed a unified analysis at the catalogue level by leveraging the Euclid combined cluster and weak-lensing pipeline (COMB-CL). Notably, COMB-CL will measure weak lensing cluster masses for the Euclid Survey. Heterogeneous data sets from five recent, independent lensing surveys (CFHTLenS, DES SV1, HSC-SSP S16a, KiDS DR4, and RCSLenS), which exploited different shear and photometric redshift estimation algorithms, were analysed with a consistent pipeline under the same model assumptions. We performed a comparison of the amplitude of the reduced excess surface density and of the mass estimates using lenses from the Planck PSZ2 and SDSS redMaPPer cluster samples. Mass estimates agree with the results in the literature collected in the LC2 catalogues. Mass accuracy was further investigated considering the AMICO-detected clusters in the HSC-SSP XXL-North field. The consistency of the data sets was tested using our unified analysis framework. We found agreement between independent surveys at the level of systematic noise in Stage-III surveys or precursors. This indicates successful control over systematics. If this control continues into Stage IV, Euclid will be able to measure the weak lensing masses of around 13 000 (considering shot noise only) or 3000 (including shape and large-scale structure noise) massive clusters with a signal-to-noise ratio greater than three

    Euclid: Covariance of weak lensing pseudo-C_ell estimates. Calculation, comparison to simulations, and dependence on survey geometry

    An accurate covariance matrix is essential for obtaining reliable cosmological results when using a Gaussian likelihood. In this paper we study the covariance of pseudo-C_ell estimates of tomographic cosmic shear power spectra. Using two existing publicly available codes in combination, we calculate the full covariance matrix, including mode-coupling contributions arising from both partial sky coverage and non-linear structure growth. For three different sky masks, we compare the theoretical covariance matrix to that estimated from publicly available N-body weak lensing simulations, finding good agreement. We find that as a more extreme sky cut is applied, a corresponding increase in both Gaussian off-diagonal covariance and non-Gaussian super-sample covariance is observed in both theory and simulations, in accordance with expectations. Studying the different contributions to the covariance in detail, we find that the Gaussian covariance dominates along the main diagonal and the closest off-diagonals, but further away from the main diagonal the super-sample covariance is dominant. Forming mock constraints in parameters describing matter clustering and dark energy, we find that neglecting non-Gaussian contributions to the covariance can lead to underestimating the true size of confidence regions by up to 70 per cent. The dominant non-Gaussian covariance component is the super-sample covariance, but neglecting the smaller connected non-Gaussian covariance can still lead to the underestimation of uncertainties by 10--20 per cent. A real cosmological analysis will require marginalisation over many nuisance parameters, which will decrease the relative importance of all cosmological contributions to the covariance, so these values should be taken as upper limits on the importance of each component
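    The scaling of the Gaussian part of the covariance with sky coverage can be sketched with the standard f_sky approximation for the diagonal, Cov(C_l, C_l) = 2(C_l + N_l)² / ((2l+1) f_sky Δl); the mode-coupling and super-sample terms that the paper computes in full are omitted here, and the power-law shear spectrum and survey numbers are invented for illustration:

```python
import numpy as np

# Toy tomographic shear spectrum and shape-noise level (assumptions).
ell = np.arange(100, 2000)
C_ell = 1e-9 * (ell / 100.0) ** -1.5       # invented power-law shear spectrum
n_eff = 30.0 * (180.0 * 60.0 / np.pi) ** 2 # 30 gal/arcmin^2 in steradian^-1
sigma_e = 0.3                              # per-component ellipticity dispersion
N_ell = sigma_e ** 2 / n_eff               # white shape-noise spectrum

def gaussian_cov_diag(ell, C_ell, N_ell, f_sky, delta_ell=1):
    """Gaussian (disconnected) covariance diagonal in the f_sky approximation."""
    return 2.0 * (C_ell + N_ell) ** 2 / ((2 * ell + 1) * f_sky * delta_ell)

cov_full = gaussian_cov_diag(ell, C_ell, N_ell, f_sky=1.0)
cov_cut = gaussian_cov_diag(ell, C_ell, N_ell, f_sky=0.35)
# A more aggressive sky cut (smaller f_sky) inflates the Gaussian variance;
# the off-diagonal mode coupling and SSC the paper studies are not captured
# by this diagonal-only approximation.
```

    This diagonal-only scaling also shows why the non-Gaussian terms matter: f_sky rescaling cannot produce the off-diagonal structure that the paper finds dominates far from the diagonal.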