    Sample variance in photometric redshift calibration: cosmological biases and survey requirements

    We use N-body/photometric galaxy simulations to examine the impact of sample variance of spectroscopic redshift samples on the accuracy of photometric redshift (photo-z) determination and calibration of photo-z errors. We estimate the biases in the cosmological parameter constraints from weak lensing and derive requirements on the spectroscopic follow-up for three different photo-z algorithms chosen to broadly span the range of algorithms available. We find that sample variance is much more relevant for the photo-z error calibration than for photo-z training, implying that follow-up requirements are similar for different algorithms. We demonstrate that the spectroscopic sample can be used for training of photo-zs and error calibration without incurring additional bias in the cosmological parameters. We provide a guide for observing proposals for the spectroscopic follow-up to ensure that redshift calibration biases do not dominate the cosmological parameter error budget. For example, if we assume optimistically (pessimistically) that the weak lensing shear measurements from the Dark Energy Survey can obtain 1σ constraints on the dark energy equation of state w of 0.035 (0.055), the follow-up requirement is 150 (40) patches of sky with a telescope such as Magellan, assuming a 1/8 deg^2 effective field of view and 400 galaxies per patch. Assuming (optimistically) a VIMOS-VLT Deep Survey-like spectroscopic completeness with purely random failures, this could be accomplished with about 75 (20) nights of observation. For more realistic assumptions regarding spectroscopic completeness, or with the presence of other sources of systematics not considered here, further degradations to dark energy constraints are possible. We test several approaches for making the requirements less stringent. For example, if the redshift distribution of the overall sample can be estimated by some other technique, e.g. cross-correlation, then follow-up requirements could be reduced by an order of magnitude.
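
    The observing arithmetic quoted above is simple to reproduce. The sketch below is a hedged illustration, not code from the paper: the numbers come from the abstract, except the patches-per-night rate, which is inferred from the quoted 150 patches taking about 75 nights.

    ```python
    # Back-of-the-envelope follow-up arithmetic; all numbers from the abstract
    # except patches_per_night, inferred from 150 patches ~ 75 nights.

    def follow_up_plan(patches, galaxies_per_patch=400, patches_per_night=2.0):
        """Return (total spectra, observing nights) for a given patch count."""
        return patches * galaxies_per_patch, patches / patches_per_night

    for label, sigma_w, patches in [("optimistic", 0.035, 150),
                                    ("pessimistic", 0.055, 40)]:
        spectra, nights = follow_up_plan(patches)
        print(f"{label} (sigma_w = {sigma_w}): {patches} patches -> "
              f"{spectra} spectra, ~{nights:.0f} nights")
    ```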

    A High Throughput Workflow Environment for Cosmological Simulations

    The next generation of wide-area sky surveys offers the power to place extremely precise constraints on cosmological parameters and to test the source of cosmic acceleration. These observational programs will employ multiple techniques based on a variety of statistical signatures of galaxies and large-scale structure. These techniques have sources of systematic error that need to be understood at the percent level in order to fully leverage the power of next-generation catalogs. Simulations of large-scale structure provide the means to characterize these uncertainties. We are using XSEDE resources to produce multiple synthetic sky surveys of galaxies and large-scale structure in support of science analysis for the Dark Energy Survey. In order to scale up our production to the level of fifty 10^10-particle simulations, we are working to embed production control within the Apache Airavata workflow environment. We explain our methods and report how the workflow has reduced production time by 40% compared to manual management.
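
    As a rough sketch of the production pattern described (not the actual Apache Airavata integration; the stage names and executor are illustrative only), each simulation is a fixed chain of dependent stages, and independent simulations run concurrently:

    ```python
    # Hypothetical sketch: each synthetic survey is an ordered chain of stages
    # (initial conditions -> N-body run -> galaxy catalog); a workflow engine
    # dispatches independent simulations in parallel rather than by hand.

    from concurrent.futures import ThreadPoolExecutor

    STAGES = ["make_initial_conditions", "run_nbody", "build_galaxy_catalog"]

    def run_stage(sim_id, stage):
        # In production this would submit a job to an HPC resource and poll it.
        print(f"simulation {sim_id:02d}: {stage} done")

    def run_simulation(sim_id):
        for stage in STAGES:   # stages are strictly ordered per simulation
            run_stage(sim_id, stage)

    # Independent simulations run concurrently, which is where workflow-managed
    # production gains over manual management.
    with ThreadPoolExecutor(max_workers=4) as pool:
        list(pool.map(run_simulation, range(50)))
    ```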

    Cross-correlation Weak Lensing of SDSS Galaxy Clusters III: Mass-to-light Ratios

    We present measurements of the excess mass-to-light ratio measured around MaxBCG galaxy clusters observed in the SDSS. This red sequence cluster sample includes objects from small groups with masses ranging from ~5x10^{12} to ~10^{15} M_{sun}/h. Using cross-correlation weak lensing, we measure the excess mass density profile above the universal mean \Delta \rho(r) = \rho(r) - \bar{\rho} for clusters in bins of richness and optical luminosity. We also measure the excess luminosity density \Delta l(r) = l(r) - \bar{l} measured in the z=0.25 i-band. For both mass and light, we de-project the profiles to produce 3D mass and light profiles over scales from 25 kpc/h to 22 Mpc/h. From these profiles we calculate the cumulative excess mass M(r) and excess light L(r) as a function of separation from the BCG. On small scales, where \rho(r) >> \bar{\rho}, the integrated mass-to-light profile may be interpreted as the cluster mass-to-light ratio. We find that M/L_{200}, the mass-to-light ratio within r_{200}, scales with cluster mass as a power law with index 0.33+/-0.02. On large scales, where \rho(r) ~ \bar{\rho}, the M/L approaches an asymptotic value independent of cluster richness. For small groups, the mean M/L_{200} is much smaller than the asymptotic value, while for large clusters it is consistent with the asymptotic value. This asymptotic value should be proportional to the mean mass-to-light ratio of the universe <M/L>. We find <M/L>/b^2_{ml} = 362+/-54 h (statistical). There is additional uncertainty in the overall calibration at the ~10% level. The parameter b_{ml} is primarily a function of the bias of the L <~ L_* galaxies used as light tracers, and should be of order unity. Multiplying by the luminosity density in the same bandpass we find \Omega_m/b^2_{ml} = 0.22+/-0.03, independent of the Hubble parameter.
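
    The quoted power-law scaling is easy to illustrate. In the sketch below, only the index 0.33 comes from the abstract; the pivot mass and normalization are hypothetical, chosen to show the rise toward an asymptotic value for massive clusters:

    ```python
    # Illustrative sketch of M/L_200 scaling as M_200^alpha with alpha = 0.33;
    # the pivot and normalization below are hypothetical.

    alpha = 0.33        # power-law index from the abstract
    M_pivot = 1e14      # hypothetical pivot mass [M_sun/h]
    ML_pivot = 250.0    # hypothetical M/L_200 at the pivot [h M_sun/L_sun]

    def mass_to_light_200(M200):
        """Mass-to-light ratio within r_200 as a power law in cluster mass."""
        return ML_pivot * (M200 / M_pivot) ** alpha

    for M200 in [5e12, 1e14, 1e15]:
        print(f"M200 = {M200:.0e} M_sun/h -> M/L_200 ~ {mass_to_light_200(M200):.0f}")
    ```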

    Future Evolution of Structure in an Accelerating Universe

    Current cosmological data indicate that our universe contains a substantial component of dark vacuum energy that is driving the cosmos to accelerate. We examine the immediate and longer-term consequences of this dark energy (assumed here to have a constant density). Using analytic calculations and supporting numerical simulations, we present criteria for test bodies to remain bound to existing structures. We show that collapsed halos become spatially isolated and dynamically relax to a particular density profile with logarithmic slope steeper than -3 at radii beyond r_200. The asymptotic form of the space-time metric is then specified. We develop this scenario further by determining the effects of the accelerating expansion on the background radiation fields and individual particles. In an appendix, we generalize these results to include quintessence.
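
    A minimal worked example of the kind of boundedness criterion discussed: balancing the Newtonian pull GM/r^2 against the outward acceleration Omega_Lambda H_0^2 r from constant-density dark energy gives a critical radius r_crit = (GM / (Omega_Lambda H_0^2))^(1/3). This is the standard force-balance estimate, not necessarily the paper's exact criterion:

    ```python
    # Sketch: outermost radius at which a test body can remain bound to mass M
    # in a Lambda-dominated universe, from GM/r^2 = Omega_L * H0^2 * r.

    G = 4.301e-9      # gravitational constant [Mpc (km/s)^2 / M_sun]
    H0 = 70.0         # Hubble constant [km/s/Mpc] (assumed value)
    Omega_L = 0.7     # dark energy density parameter (constant density)

    def critical_radius(M):
        """Critical bound radius [Mpc] around a mass M [M_sun]."""
        return (G * M / (Omega_L * H0**2)) ** (1.0 / 3.0)

    # Example: a ~10^15 M_sun cluster-scale halo
    print(f"r_crit ~ {critical_radius(1e15):.1f} Mpc")   # ~11 Mpc
    ```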

    Cognitive reserve proxies do not differentially account for cognitive performance in patients with focal frontal and non-frontal lesions

    Objective: Cognitive reserve (CR) suggests that premorbid efficacy, aptitude, and flexibility of cognitive processing can aid the brain's ability to cope with change or damage. Our previous work has shown that age and literacy attainment predict the cognitive performance of frontal patients on frontal-executive tests. However, it remains unknown whether CR also predicts the cognitive performance of non-frontal patients. Method: We investigated the independent effect of a CR proxy, National Adult Reading Test (NART) IQ, as well as age and lesion group (frontal vs. non-frontal) on measures of executive function, intelligence, processing speed, and naming in 166 patients with focal, unilateral frontal lesions; 91 patients with focal, unilateral non-frontal lesions; and 136 healthy controls. Results: Fitting multiple linear regression models for each cognitive measure revealed that NART IQ predicted executive, intelligence, and naming performance. Age also significantly predicted performance on the executive and processing speed tests. Finally, belonging to the frontal group predicted executive and naming performance, while membership of the non-frontal group predicted intelligence. Conclusions: These findings suggest that age, lesion group, and literacy attainment play independent roles in predicting cognitive performance following stroke or brain tumour. However, the relationship between CR and focal brain damage does not differ in the context of frontal and non-frontal lesions
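
    A hypothetical sketch of the modelling approach described, one multiple linear regression per cognitive measure with NART IQ, age, and lesion group as predictors (the column names and simulated data below are illustrative only, not the study's data):

    ```python
    # Illustrative regression of one cognitive measure on NART IQ, age, and
    # lesion group, using simulated data with made-up effect sizes.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 300
    df = pd.DataFrame({
        "nart_iq": rng.normal(105, 10, n),
        "age": rng.normal(55, 12, n),
        "group": rng.choice(["control", "frontal", "non_frontal"], n),
    })
    # Toy outcome: executive score depends on IQ, age, and frontal lesion status.
    df["executive_score"] = (0.3 * df["nart_iq"] - 0.2 * df["age"]
                             - 5.0 * (df["group"] == "frontal")
                             + rng.normal(0, 5, n))

    model = smf.ols("executive_score ~ nart_iq + age + C(group)", data=df).fit()
    print(model.summary().tables[1])   # independent effect of each predictor
    ```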

    The Dynamical State and Mass-Concentration Relation of Galaxy Clusters

    We use the Millennium Simulation series to study how the dynamical state of dark matter halos affects the relation between mass and concentration. We find that a large fraction of massive systems are identified when they are substantially out of equilibrium and in a particular phase of their dynamical evolution: the more massive the halo, the more likely it is found at a transient stage of high concentration. This state reflects the recent assembly of massive halos and corresponds to the first pericentric passage of recently-accreted material when, before virialization, the kinetic and potential energies reach maximum and minimum values, respectively. This result explains the puzzling upturn in the mass-concentration relation reported in recent work for massive halos; indeed, the upturn disappears when only dynamically-relaxed systems are considered in the analysis. Our results warn against applying simple equilibrium models to describe the structure of rare, massive galaxy clusters and urge caution when extrapolating scaling laws calibrated on lower-mass systems, where such deviations from equilibrium are less common. The evolving dynamical state of galaxy clusters ought to be carefully taken into account if cluster studies are to provide precise cosmological constraints.
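
    One common way to implement the relaxed-halo selection mentioned above is a cut on the virial ratio 2T/|U|, which is elevated for halos caught near first pericentric passage; the threshold below is a conventional choice, not necessarily the paper's exact selection:

    ```python
    # Sketch of a common relaxation diagnostic: halos near first pericentre
    # have kinetic energy near its maximum, so they fail a virial-ratio cut.

    def is_relaxed(kinetic_T, potential_U, max_virial_ratio=1.35):
        """True if the halo's virial ratio 2T/|U| is below the threshold."""
        return 2.0 * kinetic_T / abs(potential_U) < max_virial_ratio

    # An equilibrium halo has 2T/|U| ~ 1; a transient high-concentration halo
    # caught at first pericentric passage typically exceeds the cut.
    print(is_relaxed(kinetic_T=0.55, potential_U=-1.0))   # True
    print(is_relaxed(kinetic_T=0.80, potential_U=-1.0))   # False
    ```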

    The asymptotic structure of space-time

    Astronomical observations strongly suggest that our universe is now accelerating and contains a substantial admixture of dark vacuum energy. Using numerical simulations to study this newly consolidated cosmological model (with a constant density of dark energy), we show that astronomical structures freeze out in the near future and that the density profiles of dark matter halos approach the same general form. Every dark matter halo grows asymptotically isolated and thereby becomes the center of its own island universe. Each of these isolated regions of space-time approaches a universal geometry and we calculate the corresponding form of the space-time metric.
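
    For a single isolated mass embedded in constant-density vacuum energy, the stationary spherically symmetric geometry is Schwarzschild-de Sitter, which is the natural candidate form for the asymptotic "island universe" metric described above (a sketch of the limiting form, not the paper's full derivation):

    ```latex
    % Schwarzschild--de Sitter line element for an isolated mass M in a
    % universe with cosmological constant \Lambda:
    \[
      ds^2 = -f(r)\,dt^2 + \frac{dr^2}{f(r)} + r^2 d\Omega^2,
      \qquad
      f(r) = 1 - \frac{2GM}{c^2 r} - \frac{\Lambda r^2}{3}.
    \]
    ```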

    Higher-order factors of personality: Do they exist?

    Scales that measure the Big Five personality factors are often substantially intercorrelated. These correlations are sometimes interpreted as implying the existence of two higher-order factors of personality. The authors show that correlations between measures of broad personality factors do not necessarily imply the existence of higher-order factors and might instead be due to variables that represent same-signed blends of orthogonal factors. Therefore, the hypotheses of higher-order factors and blended variables can only be tested with data on lower-level personality variables that define the personality factors. The authors compared the higher-order factor model and the blended variable model in three participant samples using the Big Five Aspect Scales, and found better fit for the latter model. In other analyses using the HEXACO Personality Inventory, they identified mutually uncorrelated markers of six personality factors. The authors conclude that correlations between personality factor scales can be explained without postulating any higher-order dimensions of personality.
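
    The blended-variable argument can be demonstrated numerically. In the sketch below (the loadings and noise levels are illustrative), two scales built as same-signed blends of orthogonal factors correlate substantially even though no higher-order factor exists:

    ```python
    # Two scales as same-signed blends of orthogonal latent factors: their
    # intercorrelation arises without any higher-order factor.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    f1, f2 = rng.standard_normal((2, n))   # orthogonal latent factors

    scale_a = 1.0 * f1 + 0.4 * f2 + 0.3 * rng.standard_normal(n)
    scale_b = 0.4 * f1 + 1.0 * f2 + 0.3 * rng.standard_normal(n)

    r = np.corrcoef(scale_a, scale_b)[0, 1]
    print(f"scale intercorrelation: r = {r:.2f}")   # clearly positive, ~0.6
    ```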