
    Cosmological constraints on unparticle dark matter

    In unparticle dark matter (unmatter) models the equation of state of the unmatter is given by p = \rho/(2d_U+1), where d_U is the scaling factor. Unmatter with such an equation of state would have a significant impact on the expansion history of the universe. Using type Ia supernovae (SNIa), baryon acoustic oscillation (BAO) measurements and the shift parameter of the cosmic microwave background (CMB) to place constraints on such unmatter models, we find that if only the SNIa data are used the constraints are weak. However, with the BAO and CMB shift parameter data added, strong constraints can be obtained. For the \Lambda UDM model, in which unmatter is the sole dark matter, we find that d_U > 60 at 95% C.L. For comparison, in most unparticle physics models it is assumed that d_U < 2. For the \Lambda CUDM model, in which unmatter co-exists with cold dark matter, we find that the unmatter can at most make up a few percent of the total cosmic density if d_U < 10, so it cannot be the major component of dark matter. Comment: Replaced with revised version. BAO data is added to make a tighter constraint. Version accepted for publication in Euro.Phys.J.
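
    As a quick numerical illustration of why the constraint pushes d_U to large values, the sketch below (not from the paper; the flat-universe composition, the density parameters and the Python form are assumptions) evaluates the unmatter equation of state w = 1/(2d_U+1) and the corresponding dimensionless expansion rate: for d_U of order unity the component is radiation-like, while for d_U >> 1 it dilutes almost like cold dark matter.

```python
import numpy as np

def unmatter_w(d_U):
    """Unmatter equation of state quoted in the abstract: w = p/rho = 1/(2*d_U + 1)."""
    return 1.0 / (2.0 * d_U + 1.0)

def E_of_z(z, d_U, Omega_b=0.05, Omega_u=0.25, Omega_L=0.70):
    """Illustrative H(z)/H0 for a flat LambdaUDM-like mix of baryons, unmatter and a
    cosmological constant; a constant-w fluid dilutes as (1+z)^(3(1+w))."""
    w = unmatter_w(d_U)
    return np.sqrt(Omega_b * (1 + z) ** 3
                   + Omega_u * (1 + z) ** (3 * (1 + w))
                   + Omega_L)

for d_U in (1.0, 2.0, 60.0):
    # d_U = 1 gives w = 1/3 (radiation-like); d_U = 60 gives w ~ 0.008 (nearly pressureless)
    print(f"d_U = {d_U:5.1f}  w = {unmatter_w(d_U):.3f}  E(z=1) = {E_of_z(1.0, d_U):.3f}")
```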

    Can sacrificial feeding areas protect aquatic plants from herbivore grazing? Using behavioural ecology to inform wildlife management

    Effective wildlife management is needed for conservation, economic and human well-being objectives. However, traditional population control methods are frequently ineffective, unpopular with stakeholders, may affect non-target species, and can be both expensive and impractical to implement. New methods which address these issues and offer effective wildlife management are required. We used an individual-based model to predict the efficacy of a sacrificial feeding area in preventing grazing damage by mute swans (Cygnus olor) to adjacent river vegetation of high conservation and economic value. The accuracy of model predictions was assessed by a comparison with observed field data, whilst prediction robustness was evaluated using a sensitivity analysis. We used repeated simulations to evaluate how the efficacy of the sacrificial feeding area was regulated by (i) food quantity, (ii) food quality, and (iii) the functional response of the forager. Our model gave accurate predictions of aquatic plant biomass, carrying capacity, swan mortality, swan foraging effort, and river use. Our model predicted that increased sacrificial feeding area food quantity and quality would prevent the depletion of aquatic plant biomass by swans. When the functional response for vegetation in the sacrificial feeding area was increased, the food quantity and quality in the sacrificial feeding area required to protect adjacent aquatic plants were reduced. Our study demonstrates how the insights of behavioural ecology can be used to inform wildlife management. The principles that underpin our model predictions are likely to be valid across a range of different resource-consumer interactions, emphasising the generality of our approach to the evaluation of strategies for resolving wildlife management problems.
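
    For readers who want to see the kind of logic such a model encodes, here is a deliberately minimal two-patch sketch (nothing like the paper's individual-based model; the Holling type II functional response, the daily time step and all parameter values are assumptions) showing how a more rewarding sacrificial feeding area can divert grazing pressure away from river vegetation.

```python
def holling_type_ii(biomass, attack_rate, handling_time):
    """Holling type II functional response: per-swan intake rate saturates with food density."""
    return attack_rate * biomass / (1.0 + attack_rate * handling_time * biomass)

def simulate(days=30, river=500.0, feed_area=800.0, n_swans=50,
             river_attack=0.02, feed_attack=0.04, handling=0.05):
    """Toy daily time-step model: all swans feed in whichever patch currently offers
    the higher intake rate, depleting it; a larger or more attractive sacrificial
    feeding area (feed_area, feed_attack) keeps them off the river vegetation."""
    for _ in range(days):
        r_rate = holling_type_ii(river, river_attack, handling)
        f_rate = holling_type_ii(feed_area, feed_attack, handling)
        if f_rate >= r_rate:
            feed_area = max(feed_area - n_swans * f_rate, 0.0)
        else:
            river = max(river - n_swans * r_rate, 0.0)
    return river, feed_area

print(simulate())                      # small feeding area: river vegetation depleted
print(simulate(feed_area=40000.0))     # large feeding area: river vegetation untouched
```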

    Type Ia supernova parameter estimation: a comparison of two approaches using current datasets

    By using the Sloan Digital Sky Survey (SDSS) first year type Ia supernova (SN Ia) compilation, we compare two different approaches (traditional \chi^2 and complete likelihood) to determine parameter constraints when the magnitude dispersion is to be estimated as well. We consider cosmological constant + Cold Dark Matter (\Lambda CDM) and spatially flat, constant w Dark Energy + Cold Dark Matter (FwCDM) cosmological models and show that, for current data, there is a small difference in the best-fit values and \sim 30% difference in confidence contour areas when the MLCS2k2 light-curve fitter is adopted. For the SALT2 light-curve fitter the differences are less significant (\lesssim 13% difference in areas). In both cases the likelihood approach gives more restrictive constraints. We argue for the importance of using the complete likelihood instead of the \chi^2 approach when dealing with parameters in the expression for the variance. Comment: 16 pages, 5 figures. More complete analysis by including peculiar velocities and correlations among SALT2 parameters. Use of 2D contours instead of 1D intervals for comparison. There can now be a significant difference between the approaches, around 30% in contour area for MLCS2k2 and up to 13% for SALT2. Generic streamlining of text and suppression of section on model selection.
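
    The practical difference between the two approaches is easiest to see in code. The sketch below (a generic Gaussian example, not the paper's pipeline; the function names and inputs are assumptions) contrasts the traditional \chi^2 with the complete likelihood: only the latter retains the log-variance term, which is what allows the intrinsic magnitude dispersion to be estimated jointly with the cosmological parameters.

```python
import numpy as np

def chi2(mu_obs, mu_model, sigma_obs, sigma_int):
    """'Traditional' chi^2: the normalization term is dropped, so it is not by itself
    informative about sigma_int (it always prefers inflating the variance)."""
    var = sigma_obs**2 + sigma_int**2
    return np.sum((mu_obs - mu_model)**2 / var)

def neg_log_likelihood(mu_obs, mu_model, sigma_obs, sigma_int):
    """Complete Gaussian likelihood: the log(variance) term penalizes large sigma_int,
    so the dispersion can be fit jointly with the cosmological parameters."""
    var = sigma_obs**2 + sigma_int**2
    return 0.5 * np.sum((mu_obs - mu_model)**2 / var + np.log(2 * np.pi * var))
```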

    The Observed Growth of Massive Galaxy Clusters I: Statistical Methods and Cosmological Constraints

    (Abridged) This is the first of a series of papers in which we derive simultaneous constraints on cosmological parameters and X-ray scaling relations using observations of the growth of massive, X-ray flux-selected galaxy clusters. Our data set consists of 238 clusters drawn from the ROSAT All-Sky Survey, and incorporates extensive follow-up observations using the Chandra X-ray Observatory. Here we describe and implement a new statistical framework required to self-consistently produce simultaneous constraints on cosmology and scaling relations from such data, and present results on models of dark energy. In spatially flat models with a constant dark energy equation of state, w, the cluster data yield Omega_m = 0.23 +- 0.04, sigma_8 = 0.82 +- 0.05, and w = -1.01 +- 0.20, marginalizing over conservative allowances for systematic uncertainties. These constraints agree well and are competitive with independent data in the form of cosmic microwave background (CMB) anisotropies, type Ia supernovae (SNIa), cluster gas mass fractions (fgas), baryon acoustic oscillations (BAO), galaxy redshift surveys, and cosmic shear. The combination of our data with current CMB, SNIa, fgas, and BAO data yields Omega_m = 0.27 +- 0.02, sigma_8 = 0.79 +- 0.03, and w = -0.96 +- 0.06 for flat, constant w models. For evolving w models, marginalizing over transition redshifts in the range 0.05-1, we constrain the equation of state at late and early times to be respectively w_0 = -0.88 +- 0.21 and w_et = -1.05 +0.20/-0.36. The combined data provide constraints equivalent to a DETF FoM of 15.5. Our results highlight the power of X-ray studies to constrain cosmology. However, the new statistical framework we apply to this task is equally applicable to cluster studies at other wavelengths. Comment: 16 pages, 7 figures. v4: final version (typographic corrections). Results can be downloaded at https://www.stanford.edu/group/xoc/papers/xlf2009.htm
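
    The quoted DETF figure of merit summarizes a dark-energy constraint in a single number. Below is a minimal sketch of one common convention for it, the inverse square root of the determinant of the marginalized 2x2 covariance of (w0, wa), i.e. inversely proportional to the area of the error ellipse; the convention and the example covariance values are assumptions for illustration, not numbers from the paper.

```python
import numpy as np

def detf_fom(cov_w0_wa):
    """One common DETF figure-of-merit convention: 1 / sqrt(det Cov(w0, wa)),
    so tighter (smaller-area) constraints give a larger FoM."""
    return 1.0 / np.sqrt(np.linalg.det(cov_w0_wa))

# Illustrative (made-up) marginalized covariance for (w0, wa):
cov = np.array([[0.21**2, -0.05],
                [-0.05,   0.36**2]])
print(detf_fom(cov))
```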

    Designing Future Dark Energy Space Missions: II. Photometric Redshift of Space Weak Lensing Optimized Survey

    Accurate weak-lensing analysis requires not only accurate measurement of galaxy shapes but also precise and unbiased measurement of galaxy redshifts. The photometric redshift technique appears as the only possibility to determine the redshift of the background galaxies used in the weak-lensing analysis. Using the photometric redshift quality, simple shape measurement requirements, and a proper sky model, we explore what could be an optimal weak-lensing dark energy mission based on FoM calculation. We find that photometric redshifts reach their best accuracy for the bulk of the faint galaxy population when filters have a resolution R ~ 3.2. We show that an optimal mission would survey the sky through 8 filters using 2 cameras (visible and near infrared). Assuming a 5-year mission duration, a mirror size of 1.5 m, a 0.5 deg^2 FOV with a visible pixel scale of 0.15", we find that a homogeneous survey reaching I_AB = 25.6 (10 sigma) with a sky coverage of ~11,000 deg^2 maximizes the weak-lensing FoM. The effective number density of galaxies then used for WL is ~45 gal/arcmin^2, at least a factor of two better than for ground-based surveys. This work demonstrates that a full account of the observational strategy is required to properly optimize the instrument parameters and maximize the FoM of a future weak-lensing space dark energy mission. Comment: 25 pages, 39 figures, accepted in A&A
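
    To make the filter-resolution statement concrete: if R is read as the ratio of a filter's central wavelength to its width, then R ~ 3.2 fixes how broad each band must be at a given wavelength. The sketch below is back-of-the-envelope only; this reading of R and the example central wavelengths are assumptions, and the paper's actual optimization also accounts for throughput, the sky model and photo-z performance.

```python
def filter_width(center_nm, resolution=3.2):
    """Bandwidth implied by a fixed filter resolution R = lambda_center / delta_lambda."""
    return center_nm / resolution

# Illustrative visible + near-infrared central wavelengths (nm):
for center in (460, 620, 840, 1130, 1520):
    print(f"{center} nm -> width ~ {filter_width(center):.0f} nm")
```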

    First-year Sloan Digital Sky Survey-II (SDSS-II) supernova results: consistency and constraints with other intermediate-redshift datasets

    We present an analysis of the luminosity distances of Type Ia Supernovae from the Sloan Digital Sky Survey-II (SDSS-II) Supernova Survey in conjunction with other intermediate-redshift (z<0.4) cosmological measurements, including redshift-space distortions from the Two-degree Field Galaxy Redshift Survey (2dFGRS), the Integrated Sachs-Wolfe (ISW) effect seen by the SDSS, and the latest Baryon Acoustic Oscillation (BAO) distance scale from both the SDSS and 2dFGRS. We have analysed the SDSS-II SN data alone using a variety of "model-independent" methods and find evidence for an accelerating universe at the >97% confidence level from this single dataset. We find good agreement between the supernova and BAO distance measurements, both consistent with a Lambda-dominated CDM cosmology, as demonstrated through an analysis of the distance duality relationship between the luminosity (d_L) and angular diameter (d_A) distance measures. We then use these data to estimate w within this restricted redshift range (z<0.4). Our most stringent result comes from the combination of all our intermediate-redshift data (SDSS-II SNe, BAO, ISW and redshift-space distortions), giving w = -0.81 +0.16/-0.18 (stat) +/- 0.15 (sys) and Omega_M = 0.22 +0.09/-0.08, assuming a flat universe. This value of w, and the associated errors, change only slightly if curvature is allowed to vary, consistent with constraints from the Cosmic Microwave Background. We also consider more limited combinations of the geometrical (SN, BAO) and dynamical (ISW, redshift-space distortions) probes. Comment: 13 pages, 7 figures, accepted for publication in MNRAS
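
    The distance-duality test mentioned here compares the two distance measures through the Etherington relation d_L = (1+z)^2 d_A. A minimal sketch of that bookkeeping follows; the helper function and the mock distance values are assumptions for illustration, not the paper's data.

```python
import numpy as np

def duality_ratio(z, d_L, d_A):
    """Distance-duality (Etherington) ratio eta(z) = d_L / ((1+z)^2 d_A);
    eta = 1 if photon number is conserved and photons travel on null geodesics."""
    return d_L / ((1.0 + z)**2 * d_A)

# Toy check with mock distances that satisfy duality exactly:
z = np.array([0.1, 0.2, 0.3])
d_A = np.array([370.0, 680.0, 940.0])        # Mpc, illustrative values
d_L = (1.0 + z)**2 * d_A
print(duality_ratio(z, d_L, d_A))            # -> [1. 1. 1.]
```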

    Prospects in Constraining the Dark Energy Potential

    We generalize to non-flat geometries the formalism of Simon et al. (2005) to reconstruct the dark energy potential. This formalism makes use of quantities similar to the horizon-flow parameters in inflation, can in principle be made non-parametric, and is general enough to be applied beyond simple, single-scalar-field quintessence. Since presently available and forthcoming data do not allow a non-parametric and exact reconstruction of the potential, we consider a general parametric description in terms of Chebyshev polynomials. We then consider present and future measurements of H(z), Baryon Acoustic Oscillation surveys and type Ia supernova surveys, and investigate their constraints on the dark energy potential. We find that relaxing the flatness assumption increases the errors on the reconstructed dark energy evolution but does not open up significant degeneracies, provided that a modest prior on geometry is imposed. Direct measurements of H(z), such as those provided by BAO surveys, are crucially important to constrain the evolution of the dark energy potential and the dark energy equation of state, especially for non-trivial deviations from the standard LambdaCDM model. Comment: 22 pages, 7 figures. 2 references corrected.
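
    As a small illustration of such a parametric description, the sketch below evaluates a quantity expanded in Chebyshev polynomials over a finite redshift range mapped onto the canonical interval [-1, 1]; the coefficient values, the redshift range and the use of numpy's Chebyshev utilities are assumptions, not the paper's setup.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_expansion(z, coeffs, z_max=2.0):
    """Evaluate a dark-energy quantity (e.g. the potential in units of the critical
    density) parametrized by Chebyshev polynomials T_n on z in [0, z_max]."""
    x = 2.0 * np.asarray(z) / z_max - 1.0     # map [0, z_max] -> [-1, 1]
    return C.chebval(x, coeffs)

# Illustrative: a nearly constant quantity with small tilt and curvature terms.
coeffs = [0.7, 0.05, -0.02]                   # [T0, T1, T2] coefficients (made-up values)
z = np.linspace(0.0, 2.0, 5)
print(cheb_expansion(z, coeffs))
```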

    Objective surface evaluation of fiber reinforced polymer composites

    The mechanical properties of advanced composites are essential for their structural performance, but the surface finish on exterior composite panels is of critical importance for customer satisfaction. This paper describes the application of wavelet texture analysis (WTA) to the task of automatically classifying the surface finish properties of two fiber reinforced polymer (FRP) composite construction types (clear resin and gel-coat) into three quality grades. Samples were imaged and wavelet multi-scale decomposition was used to create a visual texture representation of the sample, capturing image features at different scales and orientations. Principal components analysis was used to reduce the dimensionality of the texture feature vector, permitting successful classification of the samples using only the first principal component. This work extends and further validates the feasibility of this approach as the basis for automated non-contact classification of composite surface finish using image analysis.
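
    A minimal sketch of the kind of pipeline the abstract describes, wavelet multi-scale decomposition followed by PCA, using PyWavelets and scikit-learn; the wavelet family, number of levels, energy features and random stand-in images are assumptions rather than the paper's actual settings.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wavelet_texture_features(image, wavelet="db2", levels=3):
    """Multi-scale 2D wavelet decomposition of a grayscale image; the energy of each
    detail sub-band (horizontal, vertical, diagonal, per level) serves as a texture
    feature, capturing structure at different scales and orientations."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    features = []
    for detail_level in coeffs[1:]:               # skip the approximation band
        for band in detail_level:                 # (cH, cV, cD)
            features.append(np.mean(band**2))     # sub-band energy
    return np.array(features)

# Illustrative pipeline: build a feature matrix from a stack of sample images and
# reduce it to the first principal component, as a stand-in grading score.
samples = np.random.rand(12, 128, 128)            # stand-in for surface images
X = np.array([wavelet_texture_features(img) for img in samples])
pc1 = PCA(n_components=1).fit_transform(X)
print(pc1.ravel())
```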