Systematic Errors in Future Weak Lensing Surveys: Requirements and Prospects for Self-Calibration
We study the impact of systematic errors on planned weak lensing surveys and
compute the requirements on their contributions so that they are not a dominant
source of the cosmological parameter error budget. The generic types of error
we consider are multiplicative and additive errors in measurements of shear, as
well as photometric redshift errors. In general, more powerful surveys have
stronger systematic requirements. For example, for a SNAP-type survey the
multiplicative error in shear needs to be smaller than 1%(fsky/0.025)^{-1/2} of
the mean shear in any given redshift bin, while the centroids of photometric
redshift bins need to be known to better than 0.003(fsky/0.025)^{-1/2}. With
about a factor of two degradation in cosmological parameter errors, future
surveys can enter a self-calibration regime, where the mean systematic biases
are self-consistently determined from the survey and only higher-order moments
of the systematics contribute. Interestingly, once the power spectrum
measurements are combined with the bispectrum, the self-calibration regime in
the variation of the equation of state of dark energy w_a is attained with only
a 20-30% error degradation.
Comment: 20 pages, 9 figures, to be submitted to MNRAS. Comments are welcome.
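As a minimal sketch of the quoted scaling with sky fraction (the 1% and 0.003 reference values are those stated above for fsky = 0.025; the wider-survey fsky below is an arbitrary choice for illustration):

```python
# Sketch of the f_sky scaling of the systematic-error requirements quoted
# above. The 1% shear-calibration and 0.003 photo-z-centroid reference
# values are taken from the abstract (SNAP-type survey, f_sky = 0.025);
# the target f_sky = 0.10 is a made-up example.

def requirement(ref_value, fsky, fsky_ref=0.025):
    """Scale a systematic requirement to another sky fraction.

    Larger surveys have smaller statistical errors, so the tolerable
    systematic shrinks as (fsky / fsky_ref) ** -0.5.
    """
    return ref_value * (fsky / fsky_ref) ** -0.5

# Multiplicative shear-calibration requirement (fraction of mean shear)
m_req = requirement(0.01, fsky=0.10)
# Photo-z bin-centroid requirement (in redshift)
z_req = requirement(0.003, fsky=0.10)
```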
Parameterization of Dark-Energy Properties: a Principal-Component Approach
Considerable work has been devoted to the question of how to best
parameterize the properties of dark energy, in particular its equation of state
w. We argue that, in the absence of a compelling model for dark energy, the
parameterizations of functions about which we have no prior knowledge, such as
w(z), should be determined by the data rather than by our ingrained beliefs or
familiar series expansions. We find the complete basis of orthonormal
eigenfunctions in which the principal components (weights of w(z)) that are
determined most accurately are separated from those determined most poorly.
Furthermore, we show that keeping a few of the best-measured modes can be an
effective way of obtaining information about w(z).
Comment: Unfeasibility of a truly model-independent reconstruction of w at z>1 illustrated. f(z) left out, and w(z) discussed in more detail. Matches the PRL version.
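A minimal sketch of the principal-component construction, assuming a toy Fisher matrix for w(z) in redshift bins (the matrix below is invented for illustration, not the paper's forecast):

```python
import numpy as np

# Toy illustration of the principal-component approach: diagonalize a
# Fisher matrix for w(z) binned in redshift. The matrix is made up
# (a smooth kernel best constrained at low redshift); a real forecast
# would compute it from survey observables.
nbin = 20
z = np.linspace(0.0, 2.0, nbin)
F = np.exp(-((z[:, None] - z[None, :]) ** 2)) * np.exp(-z[:, None] - z[None, :])
F += 1e-6 * np.eye(nbin)               # tiny ridge keeps F positive definite

# Orthonormal eigenbasis: F = E diag(lam) E^T; columns of E are the PCs.
lam, E = np.linalg.eigh(F)
order = np.argsort(lam)[::-1]          # best-measured modes first
lam, E = lam[order], E[:, order]

# The error on the amplitude of the i-th principal component is
# lam_i ** -0.5, so early modes are well measured and late modes poorly.
sigma = lam ** -0.5
```

Keeping only the first few columns of `E` corresponds to the "few best-measured modes" strategy described in the abstract.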
Model-independent determination of the cosmic expansion rate. I. Application to type-Ia supernovae
Aims: In view of the substantial uncertainties regarding the possible
dynamics of the dark energy, we aim at constraining the expansion rate of the
universe without reference to a specific Friedmann model and its parameters.
Methods: We show that cosmological observables integrating over the cosmic
expansion rate can be converted into a Volterra integral equation which is
known to have a unique solution in terms of a Neumann series. Expanding
observables such as the luminosity distances to type-Ia supernovae into a
series of orthonormal functions, the integral equation can be solved and the
cosmic expansion rate recovered within the limits allowed by the accuracy of
the data. Results: We demonstrate the performance of the method applying it to
synthetic data sets of increasing complexity, and to the first-year SNLS data.
In particular, we show that the method is capable of reproducing a hypothetical
expansion function containing a sudden transition.
Comment: 9 pages, 8 figures; accepted by A&A; subsection 3.6 added, new references and minor changes.
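The Volterra/Neumann-series approach can be sketched numerically; the kernel and data below are a toy choice with a known exact solution, not the supernova application:

```python
import numpy as np

# Sketch of the Neumann-series idea: solve a Volterra integral equation
# of the second kind, f(x) = g(x) + \int_0^x K(x,t) f(t) dt, by
# fixed-point iteration on a grid (each sweep adds the next term of the
# Neumann series). Toy choice K = 1, g = 1, whose exact solution is
# f(x) = exp(x).
x = np.linspace(0.0, 1.0, 201)
h = x[1] - x[0]
g = np.ones_like(x)

f = g.copy()
for _ in range(40):
    # cumulative trapezoidal quadrature of f from 0 to each grid point
    integral = np.concatenate(([0.0], np.cumsum(0.5 * h * (f[1:] + f[:-1]))))
    f = g + integral
```

The unique-solution property of the Volterra form is what the iteration relies on: each sweep is a contraction on finite intervals, so `f` converges regardless of the starting guess.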
How Future Space-Based Weak Lensing Surveys Might Obtain Photometric Redshifts Independently
We study how the addition of on-board optical photometric bands to future
space-based weak lensing instruments could affect the photometric redshift
estimation of galaxies, and hence improve estimations of the dark energy
parameters through weak lensing. Basing our study on the current proposed
Euclid configuration and using a mock catalog of galaxy observations, various
on-board options are tested and compared with the use of ground-based
observations from the Large Synoptic Survey Telescope (LSST) and Pan-STARRS.
Comparisons are made through the use of the dark energy Figure of Merit, which
provides a quantifiable measure of the change in the quality of the scientific
results that can be obtained in each scenario. Effects of systematic offsets
between LSST and Euclid photometric calibration are also studied. We find that
adding two (U and G) or even one (U) on-board optical band-passes to the
space-based infrared instrument greatly improves its photometric redshift
performance, bringing it close to the level that would be achieved by combining
observations from both space-based and ground-based surveys while freeing the
space mission from reliance on external datasets.
Comment: Accepted for publication in PASP. A high-quality version of Fig 1 can be found at http://www.ap.smu.ca/~sawicki/DEphoto
Effect of Photometric Redshift Uncertainties on Weak Lensing Tomography
We perform a systematic analysis of the effects of photometric redshift
uncertainties on weak lensing tomography. We describe the photo-z distribution
with a bias and Gaussian scatter that are allowed to vary arbitrarily between
intervals of dz = 0.1 in redshift. While the mere presence of bias and scatter
does not substantially degrade dark energy information, uncertainties in both
parameters do. For a fiducial next-generation survey each would need to be
known to better than about 0.003-0.01 in redshift for each interval in order to
lead to less than a factor of 1.5 increase in the dark energy parameter errors.
The more stringent requirement corresponds to a larger dark energy parameter
space, when redshift variation in the equation of state of dark energy is
allowed. Of order 10^4-10^5 galaxies with spectroscopic redshifts fairly sampled
from the source galaxy distribution will be needed to achieve this level of
calibration. If the sample is composed of multiple galaxy types, a fair sample
would be required for each. These requirements increase in stringency for more
ambitious surveys; we quantify such scalings with a convenient fitting formula.
No single summary statistic of a photometrically binned selection of galaxies, such as its mean or median redshift, suffices, indicating that dark energy parameter determinations are sensitive to the shape of the photo-z distribution and to the nature of its outliers.
Comment: 10 pages, 12 figures, accepted by ApJ
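The photo-z model described above (per-interval bias and Gaussian scatter) can be sketched as follows; the bias and scatter values are placeholders, not the paper's requirements:

```python
import numpy as np

# Sketch of the photo-z model described above: within each interval of
# width dz = 0.1 the photometric redshift scatters about the true
# redshift with a bias b_i and Gaussian width sigma_i that may vary
# freely between intervals. The numerical values are made up.
rng = np.random.default_rng(0)
dz = 0.1
edges = np.arange(0.0, 3.0 + dz, dz)
bias = 0.003 * np.ones(len(edges) - 1)     # hypothetical per-bin biases
scatter = 0.01 * np.ones(len(edges) - 1)   # hypothetical per-bin scatters

def photo_z(z_true):
    """Draw photometric redshifts given true redshifts."""
    i = np.clip(np.digitize(z_true, edges) - 1, 0, len(bias) - 1)
    return z_true + bias[i] + scatter[i] * rng.normal(size=np.size(z_true))

z_true = rng.uniform(0.0, 3.0, 100_000)
z_phot = photo_z(z_true)
```

In this parameterization the calibration problem is exactly the one the abstract quantifies: how well each `bias[i]` and `scatter[i]` must be known before dark energy errors degrade.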
No evidence for the cold spot in the NVSS radio survey
We revisit recent claims that there is a ‘cold spot’ in both number counts and brightness of radio sources in the NRAO (National Radio Astronomy Observatory) VLA (Very Large Array) Sky Survey (NVSS), with location coincident with the cold spot previously detected by the Wilkinson Microwave Anisotropy Probe (WMAP). Such matching cold spots would be difficult, if not impossible, to explain in the standard Λ cold dark matter (ΛCDM) cosmological model. Contrary to the claim, we find no significant evidence for the radio cold spot after including systematic effects in NVSS and carefully accounting for the effect of a posteriori choices when assessing statistical significance.
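The role of a posteriori choices can be illustrated with a toy look-elsewhere calculation (all numbers invented, not the NVSS analysis): the coldest of many patches always looks unusual if judged as though its location had been chosen in advance.

```python
import math
import numpy as np

# Toy look-elsewhere illustration: draw N independent Gaussian "patches",
# find the coldest, and compare the naive p-value (treating that patch as
# pre-selected) with one that accounts for having searched all N patches.
rng = np.random.default_rng(1)
n_patches = 1000

sky = rng.normal(size=n_patches)
coldest = float(sky.min())              # chosen *after* looking at the data

# Naive p-value: P(one pre-selected standard-normal patch < coldest)
p_naive = 0.5 * math.erfc(-coldest / math.sqrt(2.0))

# Corrected p-value: P(the minimum over N patches < coldest), using the
# independent-patch approximation 1 - (1 - p)^N
p_corrected = 1.0 - (1.0 - p_naive) ** n_patches
```

The corrected p-value is always larger, which is the sense in which a posteriori selection inflates apparent significance.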
An optimal basis system for cosmology: data analysis and new parameterisation
We define an optimal basis system into which cosmological observables can be
decomposed. The basis system can be optimised for a specific cosmological model
or for an ensemble of models, even if based on drastically different physical
assumptions. The projection coefficients derived from this basis system, the
so-called features, provide a common parameterisation for studying and
comparing different cosmological models independently of their physical
construction. They can be used to directly compare different cosmologies and
study their degeneracies in terms of a simple metric separation. This is a very
convenient approach, since only very few realisations have to be computed, in
contrast to Markov-Chain Monte Carlo methods. Finally, the proposed basis
system can be applied to reconstruct the Hubble expansion rate from supernova
luminosity distance data with the advantage of being sensitive to possible
unexpected features in the data set. We test the method both on mock catalogues
and on the SuperNova Legacy Survey data set.
Comment: 7 pages, 5 figures, 1 table, replaced to match version accepted by A&A
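The projection-coefficient ("feature") idea can be sketched with an off-the-shelf orthonormal basis; Legendre polynomials stand in here for the paper's optimised basis, and the two "models" are invented curves:

```python
import numpy as np

# Sketch of the feature idea: decompose observables (two made-up curves
# playing the role of predictions from different cosmologies) into one
# orthonormal basis and compare models by the Euclidean distance between
# their coefficient vectors. Legendre polynomials are a placeholder for
# the optimised basis constructed in the paper.
x = np.linspace(-1.0, 1.0, 501)
w_quad = np.gradient(x)                       # quadrature weights (uniform grid)

def features(f, nmodes=6):
    """Project f(x) onto the first few grid-normalized Legendre polynomials."""
    coeffs = []
    for n in range(nmodes):
        P = np.polynomial.legendre.Legendre.basis(n)(x)
        P = P / np.sqrt(np.sum(w_quad * P * P))   # normalize on the grid
        coeffs.append(np.sum(w_quad * f * P))
    return np.array(coeffs)

model_a = features(np.exp(0.30 * x))
model_b = features(np.exp(0.35 * x))
separation = float(np.linalg.norm(model_a - model_b))
```

Only the handful of coefficients per model needs to be computed and stored, which is the contrast with Markov-Chain Monte Carlo sampling drawn in the abstract.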
Supernovae, Lensed CMB and Dark Energy
Supernova distance and primary CMB anisotropy measurements provide powerful
probes of the dark energy evolution in a flat universe but degrade
substantially once curvature is marginalized. We show that lensed CMB
polarization power spectrum measurements, accessible to next generation ground
based surveys such as SPTpol or QUIET, can remove the curvature degeneracy at a
level sufficient for the SNAP and Planck surveys and allow a measurement of
sigma(w_p)=0.03, sigma(w_a)=0.3 jointly with sigma(Omega_K)=0.0035. This
expectation assumes that the sum of neutrino masses is independently known to
better than 0.1 eV. This assumption is valid if the lightest neutrino is
assumed to have negligible mass in a normal neutrino mass hierarchy and is
potentially testable with upcoming direct laboratory measurements.
Comment: 4 pages, 4 figures, submitted to ApJ
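The degeneracy-breaking argument can be sketched with toy Fisher matrices (values invented): two probes that are individually degenerate in (w, Omega_K), but along different directions, combine to tighten both parameters.

```python
import numpy as np

# Toy Fisher-matrix illustration of degeneracy breaking. Each probe
# alone leaves a strong correlation between w and Omega_K, but the
# degeneracy directions differ, so the sum of the matrices is much
# better conditioned. The entries are made up for illustration.
F_sn  = np.array([[100.0,  95.0],
                  [ 95.0, 100.0]])   # SN-like: w and curvature degenerate
F_cmb = np.array([[100.0, -95.0],
                  [-95.0, 100.0]])   # lensed-CMB-like: opposite direction

def marginalized_sigma(F, i):
    """1-sigma error on parameter i after marginalizing over the rest."""
    return float(np.sqrt(np.linalg.inv(F)[i, i]))

sigma_w_alone = marginalized_sigma(F_sn, 0)
sigma_w_joint = marginalized_sigma(F_sn + F_cmb, 0)
```

For independent data sets Fisher matrices add, which is why the joint constraint can be far tighter than either probe alone.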
Diagnosing space telescope misalignment and jitter using stellar images
Accurate knowledge of the telescope's point spread function (PSF) is
essential for the weak gravitational lensing measurements that hold great
promise for cosmological constraints. For space telescopes, the PSF may vary
with time due to thermal drifts in the telescope structure, and/or due to
jitter in the spacecraft pointing (ground-based telescopes have additional
sources of variation). We describe and simulate a procedure for using the
images of the stars in each exposure to determine the misalignment and jitter
parameters, and reconstruct the PSF at any point in that exposure's field of
view. The simulation uses the design of the SNAP (http://snap.lbl.gov)
telescope. Stellar-image data in a typical exposure determine the secondary-mirror positions with high precision. The PSF ellipticity components and size, the quantities of interest for weak lensing, are determined in each exposure to accuracies sufficient to meet weak-lensing requirements. We show that, for the case of a space telescope, the PSF estimation errors scale inversely with the square root of the total number of photons collected from all the usable stars in the exposure.
Comment: 20 pages, 6 figs, submitted to PASP
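The stated photon-count scaling can be checked with a toy simulation, in which a Poisson-limited centroid measurement stands in for the full PSF fit (all parameters below are invented):

```python
import numpy as np

# Toy check of the 1/sqrt(N_photons) scaling of PSF-model errors quoted
# above: estimate the centroid of a 1-D Gaussian "star" from photon
# position draws, and measure the scatter of that estimate across many
# mock exposures for two photon budgets.
rng = np.random.default_rng(2)

def centroid_error(n_photons, n_trials=400, sigma_psf=1.0):
    """Scatter of the photon-averaged centroid over many mock exposures."""
    estimates = [rng.normal(0.0, sigma_psf, size=n_photons).mean()
                 for _ in range(n_trials)]
    return float(np.std(estimates))

err_lo = centroid_error(100)     # few photons
err_hi = centroid_error(10_000)  # 100x more photons -> ~10x smaller error
```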