
    Estimation of aquifer lower layer hydraulic conductivity values through base flow hydrograph rising limb analysis

    The estimation of catchment-averaged aquifer hydraulic conductivity values is usually performed through a base flow recession analysis. Relationships between the first time derivative of the base flow and the base flow values themselves, derived for small and large values of time, are used for this purpose. However, the derivation of the short-time equations assumes an initially fully saturated aquifer, without recharge, subject to a sudden drawdown, a situation that occurs very rarely in reality. It is demonstrated that this approach leads to a non-negligible error in the parameter estimates. A new relationship is derived that is valid for the rising limb of a base flow hydrograph following a long rainless period. Application of this equation leads to accurate estimates of the saturated hydraulic conductivity of the aquifer's lower layer. Further, it is shown analytically that, if base flow is modeled using the linearized Boussinesq equation, the base flow depends on the effective aquifer depth and on the ratio of the saturated hydraulic conductivity to the drainable porosity, not on these three parameters separately. The results of the new short-time expression are consistent with this finding, as opposed to those of a traditional base flow recession analysis. When base flow is modeled using the nonlinear Boussinesq equation, the new expression can be used, without a second equation for large values of time, to estimate the aquifer lower layer hydraulic conductivity. Overall, the results in this paper suggest that the new methodology outperforms a traditional recession analysis for the estimation of catchment-averaged aquifer hydraulic conductivities.
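    The recession relations referred to above are commonly written in the Brutsaert-Nieber form -dQ/dt = a Q^b. As a minimal sketch (not the paper's method, and with a purely illustrative synthetic linear-reservoir series), the two parameters can be estimated from a receding hydrograph by a log-log least-squares fit:

```python
import numpy as np

def recession_fit(q):
    """Fit the Brutsaert-Nieber relation -dQ/dt = a * Q^b to a
    receding base flow series q (one value per day), via a
    log-log least-squares fit. Returns (a, b)."""
    q = np.asarray(q, dtype=float)
    dqdt = np.diff(q)               # discrete daily time derivative
    qm = 0.5 * (q[:-1] + q[1:])     # midpoint discharge for each step
    mask = dqdt < 0                 # keep only strictly receding steps
    slope, intercept = np.polyfit(np.log(qm[mask]), np.log(-dqdt[mask]), 1)
    return np.exp(intercept), slope  # a = e^intercept, b = slope

# Synthetic linear-reservoir recession (illustrative parameters only):
# Q(t) = 10 * exp(-0.1 t), so -dQ/dt is proportional to Q (b = 1).
t = np.arange(0.0, 30.0)
q = 10.0 * np.exp(-0.1 * t)
a, b = recession_fit(q)
```

    For the linear reservoir above the fit recovers b = 1 and a close to the decay constant 0.1; with field data the short-time and long-time branches would be fitted separately, as the abstract describes.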

    The Astro-WISE approach to quality control for astronomical data

    We present a novel approach to quality control during the processing of astronomical data. Quality control in the Astro-WISE Information System is integral to all aspects of data handling and provides transparent access to quality estimators for all stages of data reduction, from the raw image to the final catalog. The implementation of quality control mechanisms relies on the core features of the Astro-WISE Environment (AWE): an object-oriented framework, full data lineage, and both forward and backward chaining. Quality control information can be accessed via the command-line awe-prompt and the web-based Quality-WISE service. The quality control system is described and qualified using archive data from the 8-CCD Wide Field Imager (WFI) instrument (http://www.eso.org/lasilla/instruments/wfi/) on the 2.2-m MPG/ESO telescope at La Silla and (pre-)survey data from the 32-CCD OmegaCAM instrument (http://www.astro-wise.org/~omegacam/) on the VST telescope at Paranal. Comment: Accepted for publication in the topical issue of Experimental Astronomy on Astro-WISE information systems.

    Euclid preparation: XXIV. Calibration of the halo mass function in Λ(ν)CDM cosmologies

    Euclid's photometric galaxy cluster survey has the potential to be a very competitive cosmological probe. The main cosmological probe with observations of clusters is their number count, within which the halo mass function (HMF) is a key theoretical quantity. We present a new calibration of the analytic HMF, at the level of accuracy and precision required for the uncertainty in this quantity to be subdominant with respect to other sources of uncertainty in recovering cosmological parameters from Euclid cluster counts. Our model is calibrated against a suite of N-body simulations using a Bayesian approach that takes into account systematic errors arising from numerical effects in the simulations. First, we test the convergence of HMF predictions from different N-body codes, using initial conditions generated with different orders of Lagrangian perturbation theory, and adopting different simulation box sizes and mass resolutions. Then, we quantify the effect of using different halo finder algorithms, and how the resulting differences propagate to the cosmological constraints. In order to trace the violation of universality in the HMF, we also analyse simulations based on initial conditions characterised by scale-free power spectra with different spectral indices, assuming both Einstein-de Sitter and standard ΛCDM expansion histories. Based on these results, we construct a fitting function for the HMF that we demonstrate to be sub-percent accurate in reproducing results from nine different variants of the ΛCDM model, including massive neutrino cosmologies. The calibration systematic uncertainty is largely subdominant with respect to the expected precision of future mass-observation relations, with the only notable exception of the effect due to the halo finder, which could lead to biased cosmological inference.

    Euclid: Forecasts for k-cut 3×2 Point Statistics

    Modelling uncertainties at small scales, i.e. high k in the power spectrum P(k), due to baryonic feedback, nonlinear structure growth and the fact that galaxies are biased tracers pose a significant obstacle to fully leveraging the constraining power of the Euclid wide-field survey. k-cut cosmic shear has recently been proposed as a method to optimally remove sensitivity to these scales while preserving usable information. In this paper we generalise the k-cut cosmic shear formalism to 3×2 point statistics and estimate the loss of information for different k-cuts in a 3×2 point analysis of the Euclid data. Extending the Fisher matrix analysis of Blanchard et al. (2019), we assess the degradation in constraining power for different k-cuts. We work in the idealised case and assume the galaxy bias is linear and the covariance is Gaussian, while neglecting uncertainties due to photo-z errors and baryonic feedback. We find that taking a k-cut at 2.6 h Mpc⁻¹ yields a dark energy Figure of Merit (FoM) of 1018. This is comparable to taking a weak lensing cut at ℓ = 5000 and a galaxy clustering and galaxy-galaxy lensing cut at ℓ = 3000 in a traditional 3×2 point analysis. We also find that the fraction of the observed galaxies used in the photometric clustering part of the analysis is one of the main drivers of the FoM. Removing 50% (90%) of the clustering galaxies decreases the FoM by 19% (62%). Given that the FoM depends so heavily on the fraction of galaxies used in the clustering analysis, extensive efforts should be made to handle the real-world systematics present when extending the analysis beyond the luminous red galaxy (LRG) sample.
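    The dark energy Figure of Merit quoted above is conventionally the DETF FoM, the inverse square root of the determinant of the marginalised (w0, wa) covariance block. A minimal sketch of that bookkeeping, using a toy 3-parameter Fisher matrix whose numbers are illustrative only and not a Euclid forecast:

```python
import numpy as np

def dark_energy_fom(fisher, i_w0, i_wa):
    """DETF-style Figure of Merit: invert the full Fisher matrix to
    marginalise over all other parameters, slice out the (w0, wa)
    covariance block, and return 1 / sqrt(det of that block)."""
    cov = np.linalg.inv(np.asarray(fisher, dtype=float))
    block = cov[np.ix_([i_w0, i_wa], [i_w0, i_wa])]
    return 1.0 / np.sqrt(np.linalg.det(block))

# Toy 3-parameter Fisher matrix (w0, wa, one nuisance parameter);
# the numbers are illustrative and not a Euclid forecast.
F = [[ 400.0, -120.0,  10.0],
     [-120.0,   80.0,  -5.0],
     [  10.0,   -5.0,  50.0]]
fom = dark_energy_fom(F, 0, 1)
```

    Inverting the full matrix before slicing is what implements the marginalisation; slicing the Fisher matrix first and then inverting would instead fix the nuisance parameters and overstate the FoM.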

    Euclid preparation: XX. The Complete Calibration of the Color-Redshift Relation survey: LBT observations and data release

    The Complete Calibration of the Color-Redshift Relation survey (C3R2) is a spectroscopic programme designed to empirically calibrate the galaxy color-redshift relation to the Euclid depth (I_E = 24.5), a key ingredient for the success of Stage IV dark energy projects based on weak lensing cosmology. A spectroscopic calibration sample as representative as possible of the galaxies in the Euclid weak lensing sample is being collected, selecting galaxies from a self-organizing map (SOM) representation of the galaxy color space. Here, we present the results of a near-infrared H- and K-band spectroscopic campaign carried out using the LUCI instruments at the LBT. For a total of 251 galaxies, we present new highly reliable redshifts in the 1.

    Euclid preparation: XII. Optimizing the photometric sample of the Euclid survey for galaxy clustering and galaxy-galaxy lensing analyses

    Photometric redshifts (photo-zs) are one of the main ingredients in the analysis of cosmological probes. Their accuracy particularly affects the results of the analyses of galaxy clustering with photometrically selected galaxies (GCph) and weak lensing. In the next decade, space missions such as Euclid will collect precise and accurate photometric measurements for millions of galaxies. These data should be complemented with upcoming ground-based observations to derive precise and accurate photo-zs. In this article we explore how the tomographic redshift binning and the depth of ground-based observations will affect the cosmological constraints expected from the Euclid mission. We focus on GCph and extend the study to include galaxy-galaxy lensing (GGL). We add a layer of complexity to the analysis by simulating several realistic photo-z distributions based on the Euclid Consortium Flagship simulation and using a machine learning photo-z algorithm. We then use the Fisher matrix formalism together with these galaxy samples to study the cosmological constraining power as a function of redshift binning, survey depth, and photo-z accuracy. We find that bins with an equal width in redshift provide a higher figure of merit (FoM) than equipopulated bins, and that increasing the number of redshift bins from 10 to 13 improves the FoM by 35% and 15% for GCph and its combination with GGL, respectively. For GCph, an increase in the survey depth provides a higher FoM. However, when we include faint galaxies beyond the limit of the spectroscopic training data, the resulting FoM decreases because of the spurious photo-zs. When combining GCph and GGL, the number density of the sample, which is set by the survey depth, is the main factor driving the variations in the FoM. Adding galaxies at faint magnitudes and high redshift increases the FoM even when they are beyond the spectroscopic limit, since the number density increase compensates for the photo-z degradation in this case. We conclude that there is more information that can be extracted beyond the nominal ten tomographic redshift bins of Euclid, and that we should be cautious when adding faint galaxies into our sample, since they can degrade the cosmological constraints.
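    The two binning schemes compared above can be sketched generically: equal-width bins split the redshift range evenly, while equipopulated bins follow the sample quantiles. This is a toy illustration with an assumed gamma-shaped redshift distribution, not the Flagship n(z):

```python
import numpy as np

def tomographic_bins(z, n_bins, scheme="equal_width"):
    """Return n_bins + 1 bin edges for a redshift sample z: either
    equal width in redshift, or equipopulated (equal galaxy counts
    per bin, built from the sample quantiles)."""
    z = np.asarray(z, dtype=float)
    if scheme == "equal_width":
        return np.linspace(z.min(), z.max(), n_bins + 1)
    if scheme == "equipopulated":
        return np.quantile(z, np.linspace(0.0, 1.0, n_bins + 1))
    raise ValueError(f"unknown scheme: {scheme}")

# Toy redshift distribution (a gamma shape, NOT the Flagship n(z)):
rng = np.random.default_rng(0)
z = rng.gamma(shape=2.0, scale=0.4, size=100_000)
edges_width = tomographic_bins(z, 10, "equal_width")
edges_pop = tomographic_bins(z, 10, "equipopulated")
counts_pop, _ = np.histogram(z, edges_pop)  # roughly equal counts per bin
```

    Equal-width bins leave the high-redshift tail sparsely populated, which is the shot-noise trade-off behind the FoM comparison in the abstract.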

    Data Validation Beyond Big Data

    From KiDS to Euclid OU-Ext to Euclid data validation. For the OmegaCAM@VST data handling we have built and operated the distributed information system Astro-WISE. Astro-WISE was successfully used for the processing of KiDS data, and in particular its built-in extreme data lineage facilitated the quality control and re-processing of the data with improved calibrations and improved code. Many aspects of the Astro-WISE approach will be applied in the data-centric information system being built for the data processing of the Euclid satellite. However, the large amounts of data from Euclid, in combination with the required much higher accuracies and the danger of multiple hidden systematics and biases, force us to anticipate a new era beyond the big data hype: data validation. In popular terms: discriminating facts from fakes. I will discuss some new steps towards advanced data validation, such as built-in dynamical reference systems in the OU-Ext approach, the validation of and by machine learning, and applying extreme data lineage to trace the roots and dependencies of data products.

    Euclid: Calibrating photometric redshifts with spectroscopic cross-correlations

    Cosmological constraints from key probes of the Euclid imaging survey rely critically on the accurate determination of the true redshift distributions, n(z), of the tomographic redshift bins. We determine whether the mean redshift of each of the ten Euclid tomographic redshift bins can be calibrated to the Euclid target uncertainty of 0.002(1+z) via cross-correlation with spectroscopic samples akin to those from the Baryon Oscillation Spectroscopic Survey (BOSS), the Dark Energy Spectroscopic Instrument (DESI), and Euclid's NISP spectroscopic survey. We construct mock Euclid and spectroscopic galaxy samples from the Flagship simulation and measure small-scale clustering redshifts up to redshift z ≈ 1.8 with an algorithm that performs well on current galaxy survey data. The clustering measurements are then fitted to two n(z) models: one is the true n(z) with a free mean; the other is a Gaussian process modified to be restricted to non-negative values. We show that the mean redshift is measured in each tomographic bin to an accuracy of order 0.01 or better. By measuring the clustering redshifts on subsets of the full Flagship area, we construct scaling relations that allow us to extrapolate the method's performance to larger sky areas than are currently available in the mock. For the full expected Euclid, BOSS, and DESI overlap region of approximately 6000 deg², the uncertainties attainable by clustering redshifts exceed the Euclid requirement by at least a factor of three for both n(z) models considered, although systematic biases limit the accuracy. Clustering redshifts are an extremely effective method of redshift calibration for Euclid if the sources of systematic biases can be determined and removed, or calibrated out with sufficiently realistic simulations. We outline possible future work, in particular an extension to higher redshifts with quasar reference samples.
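    Clustering-redshift estimates of this kind are often built on a Ménard-style estimator, in which n(z) is taken proportional to the cross-correlation amplitude with spectroscopic reference slices divided by the square root of the reference auto-correlation, cancelling the reference-sample bias. A minimal sketch on synthetic amplitudes (all numbers illustrative; the unknown sample's bias and the dark-matter clustering amplitude are assumed constant, which is not guaranteed in practice):

```python
import numpy as np

def clustering_nz(w_ur, w_rr, dz):
    """Menard-style clustering-redshift estimator: n(z) is taken
    proportional to the unknown-vs-reference cross-correlation
    amplitude w_ur(z) divided by sqrt(w_rr(z)), which cancels the
    reference-sample bias; the result is normalised to unit area."""
    nz = np.asarray(w_ur) / np.sqrt(np.asarray(w_rr))
    return nz / (nz.sum() * dz)

# Synthetic demonstration (all numbers illustrative). The unknown
# sample's bias is constant and the dark-matter clustering amplitude
# is flat, so the estimator recovers n(z) exactly in this toy case.
z = np.arange(0.05, 1.85, 0.1)
dz = 0.1
n_true = np.exp(-0.5 * ((z - 0.9) / 0.3) ** 2)
n_true /= n_true.sum() * dz                  # normalised true n(z)
b_ref = 1.0 + 0.5 * z                        # reference bias, evolving in z
w_dm = 0.05 * np.ones_like(z)                # flat dark-matter amplitude
w_rr = b_ref**2 * w_dm                       # reference auto-correlation
w_ur = 1.2 * b_ref * n_true * w_dm           # cross-correlation (b_u = 1.2)
n_est = clustering_nz(w_ur, w_rr, dz)
```

    A redshift-dependent unknown-sample bias or dark-matter amplitude would bias this simple estimator, which is exactly the kind of systematic the abstract says must be removed or calibrated out with realistic simulations.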