466 research outputs found

    Diablo Canyon power plant site ecological study annual report; July 1, 1973 - June 30, 1974

    We completed surveys of 11 permanent subtidal stations, 17 random subtidal stations, 4 permanent intertidal stations, and 29 random intertidal stations during the period. In addition, we conducted studies on the sea otter, Enhydra lutris, herd located between Diablo Cove and Point Buchon, continued the annual count of the mature bed of the bull kelp, Nereocystis luetkeana, within Diablo Cove, and interviewed commercial abalone and sea urchin divers for catch-per-unit-of-effort data. During the year, sea otters moved south into the cove east of Lion Rock and then into Diablo Cove. The commercial abalone fishery showed signs of decline, while the commercial sea urchin fishery continued to expand. Several diving surveys were conducted inside Intake Cove to check on dredging progress; the cove appears to have become a haven for juvenile rockfish (Sebastes). The red abalone temperature tolerance studies were completed at the Department's marine culture laboratory at Granite Canyon. (107 pp.)

    Infrared spectroscopy of phytochrome and model pigments

    Fourier-transform infrared difference spectra between the red-absorbing and far-red-absorbing forms of oat phytochrome have been measured in H2O and 2H2O. The difference spectra are compared with infrared spectra of model compounds, i.e. the (5Z,10Z,15Z)- and (5Z,10Z,15E)-isomers of 2,3,7,8,12,13,17,18-octaethyl-bilindion (Et8-bilindion), 2,3-dihydro-2,3,7,8,12,13,17,18-octaethyl-bilindion (H2Et8-bilindion), and protonated H2Et8-bilindion in various solvents. The spectra of the model compounds show that only for the protonated forms can clear differences between the two isomers be detected. Since considerable differences are present between the spectra of Et8-bilindion and H2Et8-bilindion, it is concluded that only the latter compound can serve as a model system of phytochrome. The 2H2O effect on the difference spectrum of phytochrome supports the view that the chromophore in red-absorbing phytochrome is protonated and suggests, in addition, that it is also protonated in far-red-absorbing phytochrome. The spectra show that protonated carboxyl groups are influenced. The small amplitudes in the difference spectra exclude major changes of protein secondary structure.

    Euclid: Fast two-point correlation function covariance through linear construction

    We present a method for fast evaluation of the covariance matrix for a two-point galaxy correlation function (2PCF) measured with the Landy-Szalay estimator. The standard way of evaluating the covariance matrix consists of running the estimator on a large number of mock catalogs and evaluating their sample covariance. With large random catalog sizes (random-to-data objects' ratio M ≫ 1), the computational cost of the standard method is dominated by that of counting the data-random and random-random pairs, while the uncertainty of the estimate is dominated by that of the data-data pairs. We present a method called Linear Construction (LC), where the covariance is estimated for small random catalogs with a size of M = 1 and M = 2, and the covariance for arbitrary M is constructed as a linear combination of the two. We show that the LC covariance estimate is unbiased. We validated the method with PINOCCHIO simulations in the range r = 20-200 h⁻¹ Mpc. With M = 50 and with 2 h⁻¹ Mpc bins, the theoretical speedup of the method is a factor of 14. We discuss the impact on the precision matrix and parameter estimation, and present a formula for the covariance of the covariance.
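
    A minimal sketch of the linear-construction idea, assuming (for illustration only) that the covariance depends on the random-catalog size as C(M) = C_inf + C_1/M; under that assumed form, the estimates at M = 1 and M = 2 fix both terms, so C(M) follows as a linear combination of the two. The exact coefficients used by the LC method are those derived in the paper; the function below is a hypothetical illustration, not the authors' code.

        import numpy as np

        def lc_covariance(cov_m1, cov_m2, M):
            """Illustrative linear construction of a 2PCF covariance for a
            random-to-data ratio M from estimates made with M = 1 and M = 2.

            Assumes the simple scaling C(M) = C_inf + C_1 / M, which gives
            C(M) = (2/M - 1) * C(1) + (2 - 2/M) * C(2).
            """
            cov_m1 = np.asarray(cov_m1, dtype=float)
            cov_m2 = np.asarray(cov_m2, dtype=float)
            return (2.0 / M - 1.0) * cov_m1 + (2.0 - 2.0 / M) * cov_m2

        # Example: approximate the covariance at M = 50 from two sample
        # covariances estimated with small random catalogs.
        # cov_50 = lc_covariance(cov_m1, cov_m2, M=50)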

    Euclid preparation: XXX. Performance assessment of the NISP red grism through spectroscopic simulations for the wide and deep surveys

    This work focusses on the pilot run of a simulation campaign aimed at investigating the spectroscopic capabilities of the Euclid Near-Infrared Spectrometer and Photometer (NISP), in terms of continuum and emission line detection in the context of galaxy evolutionary studies. To this purpose, we constructed, emulated, and analysed the spectra of 4992 star-forming galaxies at 0.3 ≤ z ≤ 2.5 using the NISP pixel-level simulator. We built the spectral library starting from public multi-wavelength galaxy catalogues, with value-added information on spectral energy distribution (SED) fitting results, and stellar population templates from Bruzual & Charlot (2003, MNRAS, 344, 1000). Rest-frame optical and near-IR nebular emission lines were included using empirical and theoretical relations. Dust attenuation was treated using the Calzetti extinction law, accounting for the differential attenuation in line-emitting regions with respect to the stellar continuum. The NISP simulator was configured including instrumental and astrophysical sources of noise such as the dark current, read-out noise, zodiacal background, and out-of-field stray light. In this preliminary study, we avoided contamination due to the overlap of the slitless spectra. For this purpose, we located the galaxies on a grid and simulated only the first-order spectra. We inferred the 3.5σ NISP red grism spectroscopic detection limit of the continuum measured in the H band for star-forming galaxies with a median disk half-light radius of 0.4 arcsec at magnitude H = 19.5 ± 0.2 AB mag for the Euclid Wide Survey and at H = 20.8 ± 0.6 AB mag for the Euclid Deep Survey. We found a very good agreement with the red grism emission line detection limit requirement for the Wide and Deep surveys. We characterised the effect of the galaxy shape on the detection capability of the red grism and highlighted the degradation of the quality of the extracted spectra as the disk size increased. In particular, we found that the extracted emission line signal-to-noise ratio (S/N) drops by 45% when the disk size ranges from 0.25 arcsec to 1.0 arcsec. These trends lead to a correlation between the emission line S/N and the stellar mass of the galaxy, and we demonstrate the effect in a stacking analysis unveiling emission lines otherwise too faint to detect.
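
    As a side note (not taken from the paper), the AB-magnitude detection limits quoted above can be converted to flux densities with the standard AB zero-point definition, m_AB = -2.5 log10(f_nu) - 48.6, with f_nu in erg s^-1 cm^-2 Hz^-1; a minimal sketch:

        def ab_mag_to_fnu(m_ab):
            """Convert an AB magnitude to a flux density in erg/s/cm^2/Hz,
            using the standard AB zero point m_AB = -2.5*log10(f_nu) - 48.6."""
            return 10.0 ** (-0.4 * (m_ab + 48.6))

        # H-band continuum limits quoted in the abstract:
        for label, m in [("Wide, H = 19.5", 19.5), ("Deep, H = 20.8", 20.8)]:
            print(f"{label}: f_nu ~ {ab_mag_to_fnu(m):.2e} erg/s/cm^2/Hz")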

    Euclid: Covariance of weak lensing pseudo-Cℓ estimates. Calculation, comparison to simulations, and dependence on survey geometry

    An accurate covariance matrix is essential for obtaining reliable cosmological results when using a Gaussian likelihood. In this paper we study the covariance of pseudo-Cℓ estimates of tomographic cosmic shear power spectra. Using two existing publicly available codes in combination, we calculate the full covariance matrix, including mode-coupling contributions arising from both partial sky coverage and non-linear structure growth. For three different sky masks, we compare the theoretical covariance matrix to that estimated from publicly available N-body weak lensing simulations, finding good agreement. We find that as a more extreme sky cut is applied, a corresponding increase in both Gaussian off-diagonal covariance and non-Gaussian super-sample covariance is observed in both theory and simulations, in accordance with expectations. Studying the different contributions to the covariance in detail, we find that the Gaussian covariance dominates along the main diagonal and the closest off-diagonals, but further away from the main diagonal the super-sample covariance is dominant. Forming mock constraints in parameters describing matter clustering and dark energy, we find that neglecting non-Gaussian contributions to the covariance can lead to underestimating the true size of confidence regions by up to 70 per cent. The dominant non-Gaussian covariance component is the super-sample covariance, but neglecting the smaller connected non-Gaussian covariance can still lead to the underestimation of uncertainties by 10--20 per cent. A real cosmological analysis will require marginalisation over many nuisance parameters, which will decrease the relative importance of all cosmological contributions to the covariance, so these values should be taken as upper limits on the importance of each component.
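
    For orientation only (this is not the paper's calculation, which includes mode-coupling, super-sample, and connected non-Gaussian terms), the simplest Gaussian approximation to the covariance of band-averaged power spectra on a cut sky is the Knox-type expression Cov(C_l, C_l') ≈ delta_{ll'} 2 (C_l + N_l)^2 / ((2l + 1) Delta_l f_sky); a minimal sketch with all inputs assumed:

        import numpy as np

        def gaussian_band_covariance(ell, cl, nl, delta_ell, f_sky):
            """Diagonal Gaussian (Knox-type) covariance for band powers centred
            on multipoles `ell`, with signal `cl`, noise `nl`, band width
            `delta_ell`, and observed sky fraction `f_sky`. Off-diagonal
            mode-coupling, super-sample, and connected non-Gaussian terms
            (which the paper shows can dominate) are deliberately omitted."""
            ell = np.asarray(ell, dtype=float)
            var = 2.0 * (np.asarray(cl) + np.asarray(nl)) ** 2 / (
                (2.0 * ell + 1.0) * delta_ell * f_sky
            )
            return np.diag(var)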

    Euclid: Forecast constraints on consistency tests of the ΛCDM model

    Context. The standard cosmological model is based on the fundamental assumptions of a spatially homogeneous and isotropic universe on large scales. An observational detection of a violation of these assumptions at any redshift would immediately indicate the presence of new physics. Aims. We quantify the ability of the Euclid mission, together with contemporary surveys, to improve the current sensitivity of null tests of the canonical cosmological constant Λ and cold dark matter (ΛCDM) model in the redshift range 0 < z < 1.8. Methods. We considered both currently available data and simulated Euclid and external data products based on a ΛCDM fiducial model, an evolving dark energy model assuming the Chevallier-Polarski-Linder parameterization, or an inhomogeneous Lemaître-Tolman-Bondi model with a cosmological constant Λ, and carried out two separate but complementary analyses: a machine learning reconstruction of the null tests based on genetic algorithms, and a theory-agnostic parametric approach based on Taylor expansion and binning of the data, in order to avoid assumptions about any particular model. Results. We find that in combination with external probes, Euclid can improve current constraints on null tests of the ΛCDM model by approximately a factor of three when using the machine learning approach, and by a further factor of two in the case of the parametric approach. However, we also find that in certain cases the parametric approach may be biased against, or miss, some features of models far from ΛCDM. Conclusions. Our analysis highlights the importance of synergies between Euclid and other surveys. These synergies are crucial for providing tighter constraints over an extended redshift range for a plethora of different consistency tests of some of the main assumptions of the current cosmological paradigm.
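
    As one concrete example of a null test of the kind described above (the paper considers several diagnostics; the widely used Om(z) statistic is shown here for illustration rather than as the paper's exact choice): Om(z) = (H^2(z)/H_0^2 - 1) / ((1 + z)^3 - 1) is constant and equal to Omega_m in flat ΛCDM, so any reconstructed redshift dependence signals a deviation. A minimal sketch:

        import numpy as np

        def om_diagnostic(z, H, H0):
            """Om(z) = (H(z)^2/H0^2 - 1) / ((1 + z)^3 - 1).
            For flat LCDM this is constant and equal to Omega_m at every
            redshift, so any z-dependence is a null-test violation."""
            z = np.asarray(z, dtype=float)
            E2 = (np.asarray(H, dtype=float) / H0) ** 2
            return (E2 - 1.0) / ((1.0 + z) ** 3 - 1.0)

        # Example with a fiducial flat LCDM expansion history (Omega_m = 0.32 assumed):
        z = np.linspace(0.1, 1.8, 10)
        H = 67.0 * np.sqrt(0.32 * (1.0 + z) ** 3 + 0.68)
        print(om_diagnostic(z, H, H0=67.0))  # ~0.32 at every z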

    Euclid: Cosmological forecasts from the void size function

    The Euclid mission, with its spectroscopic galaxy survey covering a sky area over 15 000 deg² in the redshift range 0.9 < z < 1.8, will provide a sample of tens of thousands of cosmic voids. This paper explores for the first time the constraining power of the void size function on the properties of dark energy (DE) from a survey mock catalogue, the official Euclid Flagship simulation. We identify voids in the Flagship light-cone, which closely matches the features of the upcoming Euclid spectroscopic data set. We model the void size function considering a state-of-the-art methodology: we rely on the volume-conserving (Vdn) model, a modification of the popular Sheth & van de Weygaert model for void number counts, extended by means of a linear function of the large-scale galaxy bias. We find an excellent agreement between model predictions and measured mock void number counts. We compute updated forecasts for the Euclid mission on DE from the void size function and provide reliable void number estimates to serve as a basis for further forecasts of cosmological applications using voids. We analyse two different cosmological models for DE: the first described by a constant DE equation of state parameter, w, and the second by a dynamic equation of state with coefficients w_0 and w_a. We forecast 1σ errors on w lower than 10%, and we estimate an expected figure of merit (FoM) for the dynamical DE scenario of FoM_{w0,wa} = 17 when considering only the neutrino mass as an additional free parameter of the model. The analysis is based on conservative assumptions to ensure full robustness, and is a pathfinder for future enhancements of the technique. Our results showcase the impressive constraining power of the void size function from the Euclid spectroscopic sample, both as a stand-alone probe and to be combined with other Euclid cosmological probes...
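
    For context, a minimal sketch of the Sheth & van de Weygaert void multiplicity function underlying the Vdn model mentioned above, in the truncated-sum form commonly quoted in the void-counts literature; the threshold values below are typical choices assumed here for illustration, and the volume-conserving (Vdn) mapping and galaxy-bias extension used in the paper are not included:

        import numpy as np

        def f_ln_sigma(sigma, delta_v=-2.71, delta_c=1.686, n_terms=200):
            """Sheth & van de Weygaert multiplicity function,
            f(sigma) = 2 * sum_j exp(-(j*pi*x)^2 / 2) * j*pi*x^2 * sin(j*pi*D),
            with D = |delta_v| / (delta_c + |delta_v|) and x = D * sigma / |delta_v|.
            delta_v and delta_c are linear under-/overdensity thresholds
            (typical literature values, used here purely for illustration)."""
            sigma = np.atleast_1d(np.asarray(sigma, dtype=float))
            D = abs(delta_v) / (delta_c + abs(delta_v))
            x = D * sigma / abs(delta_v)
            j = np.arange(1, n_terms + 1)[:, None]  # summation index
            terms = (np.exp(-0.5 * (j * np.pi * x) ** 2)
                     * j * np.pi * x ** 2 * np.sin(j * np.pi * D))
            return 2.0 * terms.sum(axis=0)

        # Example: multiplicity at a few values of the rms linear fluctuation sigma(R_L)
        print(f_ln_sigma([0.4, 0.6, 0.8, 1.0]))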

    Euclid preparation: X. The Euclid photometric-redshift challenge

    Forthcoming large photometric surveys for cosmology require precise and accurate photometric redshift (photo-z) measurements for the success of their main science objectives. However, to date, no method has been able to produce photo-zs at the required accuracy using only the broad-band photometry that those surveys will provide. An assessment of the strengths and weaknesses of current methods is a crucial step in the eventual development of an approach to meet this challenge. We report on the performance of 13 photometric redshift codes, in terms of both single-value redshift estimates and redshift probability distributions (PDZs), on a common set of data, focusing particularly on the 0.2−2.6 redshift range that the Euclid mission will probe. We designed a challenge using emulated Euclid data drawn from three photometric surveys of the COSMOS field. The data were divided into two samples: a calibration sample, for which photometry and redshifts were provided to the participants, and a validation sample, containing only the photometry, to ensure a blinded test of the methods. Participants were invited to provide a single-value redshift estimate and a PDZ for each source in the validation sample, along with a rejection flag indicating the sources they consider unfit for use in cosmological analyses. The performance of each method was assessed through a set of informative metrics, using cross-matched spectroscopic and highly accurate photometric redshifts as the ground truth. We show that the rejection criteria set by participants are efficient in removing strong outliers, that is to say sources for which the photo-z deviates by more than 0.15(1 + z) from the spectroscopic redshift (spec-z). We also show that, while all methods are able to provide reliable single-value estimates, several machine-learning methods do not manage to produce useful PDZs. We find that no machine-learning method provides good results in the regions of galaxy color-space that are sparsely populated by spectroscopic redshifts, for example z > 1. However, they generally perform better than template-fitting methods at low redshift (z < 0.7), indicating that template-fitting methods do not use all of the information contained in the photometry. We introduce metrics that quantify both photo-z precision and completeness of the samples (post-rejection), since both contribute to the final figure of merit of the science goals of the survey (e.g., cosmic shear from Euclid). Template-fitting methods provide the best results in these metrics, but we show that a combination of template-fitting results and machine-learning results with rejection criteria can outperform any individual method. On this basis, we argue that further work in identifying how to best select between machine-learning and template-fitting approaches for each individual galaxy should be pursued as a priority.
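
    A minimal sketch of two metrics of the kind used to grade the codes: the strong-outlier fraction with the |Δz| > 0.15(1 + z) criterion quoted above, plus the normalised median absolute deviation (NMAD) scatter; the NMAD definition here is the standard one from the photo-z literature and is assumed rather than taken from the paper.

        import numpy as np

        def photoz_point_metrics(z_phot, z_ref):
            """Strong-outlier fraction and NMAD scatter of photo-z point estimates
            against reference (spectroscopic or high-accuracy photometric) redshifts."""
            z_phot = np.asarray(z_phot, dtype=float)
            z_ref = np.asarray(z_ref, dtype=float)
            dz = (z_phot - z_ref) / (1.0 + z_ref)
            outlier_fraction = np.mean(np.abs(dz) > 0.15)  # criterion quoted in the abstract
            sigma_nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))  # standard NMAD
            return outlier_fraction, sigma_nmad

        # Usage (hypothetical arrays):
        # frac, sigma = photoz_point_metrics(z_phot, z_spec)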