
    A Model for Multi-property Galaxy Cluster Statistics

    The massive dark matter halos that host groups and clusters of galaxies have observable properties that appear to be log-normally distributed about power-law mean scaling relations in halo mass. Coupling this assumption with either quadratic or cubic approximations to the mass function in log space, we derive closed-form expressions for the space density of halos as a function of multiple observables, as well as forms for the low-order moments of properties of observable-selected samples. Using a Tinker mass function in a $\Lambda$CDM cosmology, we show that the cubic analytic model reproduces results obtained from direct, numerical convolution at the 10 percent level or better over nearly the full range of observables covered by current observations and for redshifts extending to z = 1.5. The model provides an efficient framework for estimating effects arising from selection and covariance among observable properties in survey samples. Comment: 9 pages, 4 figures, uses on-line mass function calculator http://hmf.icrar.org/. Submitted to MNRAS
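    The convolution described in the abstract can be illustrated numerically. Below is a minimal sketch, assuming a toy quadratic approximation to the log-space mass function and placeholder scaling-relation parameters (none of the coefficients come from the paper); the paper itself derives closed-form expressions rather than integrating numerically.

```python
import numpy as np

# Toy quadratic approximation to the log-space mass function:
#   ln n(mu) = a0 + a1*mu + a2*mu^2, with mu = ln(M / M_pivot).
# Coefficients are illustrative placeholders, not fitted values.
a0, a1, a2 = -5.0, -1.0, -0.3

def ln_mass_function(mu):
    return a0 + a1 * mu + a2 * mu**2

# Power-law mean scaling relation with log-normal scatter:
#   <ln S | mu> = pi_s + alpha_s * mu, scatter sigma_s (all placeholders).
pi_s, alpha_s, sigma_s = 0.0, 1.5, 0.3

def dn_dlnS(lnS, mu_grid):
    """Space density per unit ln S: convolve n(mu) with P(ln S | mu)."""
    kernel = np.exp(-0.5 * ((lnS - (pi_s + alpha_s * mu_grid)) / sigma_s) ** 2)
    kernel /= np.sqrt(2.0 * np.pi) * sigma_s
    return np.trapz(np.exp(ln_mass_function(mu_grid)) * kernel, mu_grid)

mu_grid = np.linspace(-3.0, 3.0, 2001)
for lnS in (-1.0, 0.0, 1.0):
    print(lnS, dn_dlnS(lnS, mu_grid))
```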

    Cosmology from supernova magnification maps

    High-z Type Ia supernovae are expected to be gravitationally lensed by the foreground distribution of large-scale structure. The resulting magnification of supernovae is statistically measurable, and the angular correlation of the magnification pattern directly probes the integrated mass density along the line of sight. Measurements of cosmic magnification of supernovae therefore complement galaxy shear measurements in providing a direct measure of the clustering of the dark matter. As the number of supernovae is typically much smaller than the number of sheared galaxies, the two-point correlation function of lensed Type Ia supernovae suffers from significantly increased shot noise. Nevertheless, we find that the magnification map of a large sample of supernovae, such as that expected from next-generation dedicated searches, will be easily measurable and will provide an important cosmological tool. For example, a search over 20 sq. deg. over five years, leading to a sample of ~ 10,000 supernovae, would measure the angular power spectrum of cosmic magnification with a cumulative signal-to-noise ratio of ~20. This detection can be further improved once the supernova distance measurements are cross-correlated with measurements of the foreground galaxy distribution. The magnification maps made using supernovae can be used for important cross-checks with traditional lensing shear statistics obtained in the same fields, as well as to help control systematics. We discuss two applications of supernova magnification maps: the breaking of the mass-sheet degeneracy when estimating masses of shear-detected clusters, and constraining the second-order corrections to weak lensing observables. Comment: 4 pages, 2 figures, ApJL submitted; "Signal" discussed here is the extra covariance in astro-ph/050958
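    The quoted cumulative signal-to-noise can be motivated with a standard Fisher-style mode count. The sketch below assumes a toy power-law magnification power spectrum and an illustrative per-supernova scatter; only the survey area and supernova count are taken from the abstract.

```python
import numpy as np

# Survey assumptions (illustrative, loosely following the abstract's numbers):
area_deg2 = 20.0            # survey area in square degrees
n_sne = 10_000              # total number of SNe
sigma_int = 0.1             # intrinsic magnification scatter per SN (placeholder)

f_sky = area_deg2 / 41_253.0                          # fraction of sky
n_bar = n_sne / (area_deg2 * (np.pi / 180.0) ** 2)    # SNe per steradian
noise = sigma_int**2 / n_bar                          # white shot-noise power

# Toy magnification power spectrum; a real forecast would use a computed C_l.
ell = np.arange(100, 10_000)
c_ell = 1e-9 * (ell / 1000.0) ** (-1.2)

# Gaussian mode-counting estimate of the cumulative detection significance.
sn2_per_ell = (2 * ell + 1) * f_sky / 2.0 * (c_ell / (c_ell + noise)) ** 2
print("cumulative S/N ~", np.sqrt(sn2_per_ell.sum()))
```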

    Problems with Pencils: Lensing Covariance of Supernova Distance Measurements

    While luminosity distances from Type Ia supernovae (SNe) provide a powerful probe of cosmological parameters, the accuracy with which these distances can be measured is limited by cosmic magnification due to gravitational lensing by the intervening large-scale structure. Spatial clustering of foreground mass fluctuations leads to correlated errors in distance estimates from SNe. By including the full covariance matrix of supernova distance measurements, we show that a future survey covering more than a few square degrees on the sky, and assuming a total of ~2000 SNe, will be largely unaffected by covariance noise. "Pencil beam" surveys with small fields of view, however, will be prone to the lensing covariance, leading to potentially significant degradations in cosmological parameter estimates. For a survey with 30 arcmin mean separation between SNe, lensing covariance leads to a ~45% increase in the expected errors in dark energy parameters compared to fully neglecting lensing, and a ~20% increase compared to including just the lensing variance. Given that the lensing covariance is cosmology dependent and cannot be mapped out sufficiently accurately with direct weak lensing observations, surveys with small mean SN separation must incorporate the effects of lensing covariance, including its dependence on the cosmological parameters. Comment: 4 pages, 2 figures, PRL submitted; "Noise" discussed here is the "signal" in astro-ph/050957
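    The effect of correlated lensing errors on a parameter forecast can be illustrated with a toy Fisher calculation. The sketch below compares a diagonal covariance against one with a constant off-diagonal lensing correlation; all numbers (scatters, correlation strength, derivative) are placeholders, not the paper's values, and real lensing correlations depend on SN angular separation.

```python
import numpy as np

# Illustrative setup: N supernovae and a single parameter p that shifts every
# distance modulus by the same derivative dmu/dp = 1 (placeholder).
n_sn = 200
deriv = np.ones(n_sn)

sigma_int = 0.15          # intrinsic distance-modulus scatter (mag)
sigma_lens = 0.05         # lensing-induced scatter (mag), placeholder
rho = 0.3                 # toy correlation of lensing noise between SNe

# Diagonal covariance: lensing treated as independent extra variance.
cov_diag = np.eye(n_sn) * (sigma_int**2 + sigma_lens**2)

# Full covariance: add off-diagonal lensing correlations (constant here for
# simplicity; in reality they fall off with SN separation and track cosmology).
cov_full = cov_diag + rho * sigma_lens**2 * (1 - np.eye(n_sn))

for name, cov in [("diagonal", cov_diag), ("full", cov_full)]:
    fisher = deriv @ np.linalg.solve(cov, deriv)   # 1-parameter Fisher information
    print(name, "parameter error:", 1.0 / np.sqrt(fisher))
```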

    Gravitational Lensing as a Probe of Quintessence

    A large number of cosmological studies now suggest that roughly two-thirds of the critical energy density of the Universe exists in a component with negative pressure. If the equation of state of such an energy component varies with time, it should in principle be possible to identify such a variation using cosmological probes over a wide range in redshift. Proper detection of any time variation, however, requires cosmological probes beyond the currently studied redshift range of $\sim 0.1$ to 1. We extend our analysis to gravitational lensing statistics at high redshift and suggest that a reliable sample of lensed sources, out to a redshift of $\sim 5$, can be used to constrain the variation of the equation of state, provided that both the redshift distribution of lensed sources and the selection function involved in the lensed-source discovery process are known. An exciting opportunity to catalog an adequate sample of lensed sources (quasars) to probe quintessence is now available with the ongoing Sloan Digital Sky Survey. Writing $w(z) \approx w_0 + z\,(dw/dz)_0$, we study the expected accuracy to which the equation of state today, $w_0$, and its rate of change, $(dw/dz)_0$, can simultaneously be constrained. Such a determination can rule out some missing-energy candidates, such as classes of quintessence models or a cosmological constant. Comment: Accepted for publication in ApJ Letters (4 pages, including 4 figures)
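    For the linear parametrization $w(z) = w_0 + z\,(dw/dz)_0$, the dark energy density evolves as $\rho_{DE}(z) \propto (1+z)^{3(1+w_0-(dw/dz)_0)}\exp[3\,(dw/dz)_0\,z]$, which fixes the expansion history entering any lensing-statistics forecast. A minimal sketch of the corresponding distance calculation, using illustrative fiducial values rather than the paper's, follows.

```python
import numpy as np
from scipy.integrate import quad

# Flat background with a linearly-varying equation of state
#   w(z) = w0 + z * (dw/dz)_0.
# Fiducial values below are illustrative placeholders.
Om, w0, dwdz = 0.3, -1.0, 0.2
H0 = 70.0                      # km/s/Mpc
c = 299_792.458                # km/s

def rho_de_ratio(z):
    # Closed form of exp(3 * int_0^z (1 + w(z'))/(1 + z') dz') for linear w(z).
    return (1 + z) ** (3 * (1 + w0 - dwdz)) * np.exp(3 * dwdz * z)

def E(z):
    return np.sqrt(Om * (1 + z) ** 3 + (1 - Om) * rho_de_ratio(z))

def comoving_distance(z):
    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return c / H0 * integral   # Mpc

for z in (0.5, 1.0, 5.0):
    print(z, comoving_distance(z))
```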

    Fast approximation of centrality and distances in hyperbolic graphs

    We show that the eccentricities (and thus the centrality indices) of all vertices of a $\delta$-hyperbolic graph $G=(V,E)$ can be computed in linear time with an additive one-sided error of at most $c\delta$, i.e., after a linear time preprocessing, for every vertex $v$ of $G$ one can compute in $O(1)$ time an estimate $\hat{e}(v)$ of its eccentricity $ecc_G(v)$ such that $ecc_G(v) \leq \hat{e}(v) \leq ecc_G(v) + c\delta$ for a small constant $c$. We prove that every $\delta$-hyperbolic graph $G$ has a shortest path tree, constructible in linear time, such that for every vertex $v$ of $G$, $ecc_G(v) \leq ecc_T(v) \leq ecc_G(v) + c\delta$. These results are based on an interesting monotonicity property of the eccentricity function of hyperbolic graphs: the closer a vertex is to the center of $G$, the smaller its eccentricity is. We also show that the distance matrix of $G$ with an additive one-sided error of at most $c'\delta$ can be computed in $O(|V|^2 \log^2 |V|)$ time, where $c' < c$ is a small constant. Recent empirical studies show that many real-world graphs (including Internet application networks, web networks, collaboration networks, social networks, biological networks, and others) have small hyperbolicity. So, we analyze the performance of our algorithms for approximating centrality and the distance matrix on a number of real-world networks. Our experimental results show that the obtained estimates are even better than the theoretical bounds. Comment: arXiv admin note: text overlap with arXiv:1506.01799 by other authors
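    The paper's linear-time algorithm is not reproduced here, but the flavour of BFS-based eccentricity estimation can be conveyed with the classical double-sweep heuristic, which yields per-vertex lower bounds that are known to be good on graphs of small hyperbolicity. A minimal sketch, assuming an unweighted adjacency-list graph, is given below; it is an illustration of the general idea, not the algorithm from the paper.

```python
from collections import deque

def bfs_distances(adj, source):
    """Single-source shortest-path distances in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def approx_eccentricities(adj, start):
    """Double-sweep heuristic: three BFS runs, then a per-vertex lower bound."""
    d_start = bfs_distances(adj, start)
    a = max(d_start, key=d_start.get)          # farthest vertex from start
    d_a = bfs_distances(adj, a)
    b = max(d_a, key=d_a.get)                  # farthest vertex from a
    d_b = bfs_distances(adj, b)
    return {v: max(d_a[v], d_b[v]) for v in adj}

# Tiny example graph (adjacency lists).
adj = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}
print(approx_eccentricities(adj, 0))
```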

    Localised projective measurement of a relativistic quantum field in non-inertial frames

    We propose a scheme to study the effect of motion on measurements of a quantum field carried out by a finite-size detector. We introduce a model of projective detection of a localised field mode in an arbitrary reference frame. We apply it to extract vacuum entanglement with a pair of counter-accelerating detectors and to estimate the Unruh temperature of a single accelerated detector. The introduced method allows us to directly relate the observed effects to the instantaneous proper acceleration of the detector. Comment: 5 pages, 2 figures. v2: significant increase in the detail level regarding the motivation of the detector model
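    For context on the last application, the Unruh temperature associated with a uniformly accelerated detector is the textbook expression $T_U = \hbar a / (2\pi c k_B)$. The snippet below simply evaluates this formula for sample accelerations; it is not part of the paper's measurement scheme.

```python
from math import pi

hbar = 1.054_571_817e-34   # J s
c = 2.997_924_58e8         # m / s
k_B = 1.380_649e-23        # J / K

def unruh_temperature(a):
    """Unruh temperature (K) for proper acceleration a (m/s^2)."""
    return hbar * a / (2 * pi * c * k_B)

# Even an enormous acceleration yields a tiny temperature, which is why
# schemes that isolate or enhance the effect are of interest.
print(unruh_temperature(9.81))     # ~4e-20 K
print(unruh_temperature(1e20))     # ~0.4 K
```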

    Application of a D Number based LBWA Model and an Interval MABAC Model in Selection of an Automatic Cannon for Integration into Combat Vehicles

    A decision-making procedure for the selection of a weapon system involves different, often contradictory criteria and requires reaching decisions under conditions of uncertainty. This paper proposes a novel multi-criteria methodology based on D numbers which enables efficient analysis of the information used for decision making. The proposed methodology has been developed in order to enable the selection of an efficient weapon system under conditions when a large number of hierarchically structured evaluation criteria have to be processed. A novel D-number-based Level Based Weight Assessment – Multi-Attributive Border Approximation area Comparison (D LBWA-MABAC) model is used for the selection of an automatic cannon for integration into combat vehicles. Criteria weights are determined based on the improved LBWA-D model. The traditional MABAC method has been further developed by the integration of interval numbers. A hybrid D LBWA-MABAC framework is used for the evaluation of an automatic cannon for integration into combat vehicles. Nine weapon systems used worldwide are ranked in this paper. This multi-criteria approach allows decision makers to assess options objectively and reach a rational decision regarding the selection of an optimal weapon system. Validation of the proposed methodology is performed through a sensitivity analysis which studies how changes in the weights of the best criterion and the elasticity coefficient affect the ranking results.
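    The D-number LBWA weighting and the interval-valued MABAC extension are specific to the paper, but the backbone of the classical (crisp) MABAC ranking is well documented and can be sketched briefly. Below is a minimal illustration with a made-up 3x3 decision matrix and placeholder weights standing in for the LBWA output; it shows only the normalization, border-approximation-area, and distance-summation steps.

```python
import numpy as np

# Minimal sketch of classical (crisp) MABAC; the paper uses a D-number LBWA
# weighting and an interval-valued MABAC extension instead.
# Decision matrix: rows = alternatives, columns = criteria (all values made up).
X = np.array([
    [30.0, 7.5, 1200.0],
    [25.0, 8.0, 1400.0],
    [35.0, 6.5, 1100.0],
])
weights = np.array([0.5, 0.3, 0.2])      # placeholder weights (LBWA output in the paper)
benefit = np.array([True, False, True])  # True = larger is better, False = cost criterion

# 1. Normalize each criterion to [0, 1] (benefit vs. cost direction).
lo, hi = X.min(axis=0), X.max(axis=0)
N = np.where(benefit, (X - lo) / (hi - lo), (X - hi) / (lo - hi))

# 2. Weighted matrix V and 3. border approximation area G (geometric mean per criterion).
V = weights * (N + 1.0)
G = V.prod(axis=0) ** (1.0 / X.shape[0])

# 4. Distances from the border area and 5. final scores (higher = better).
Q = V - G
scores = Q.sum(axis=1)
print("ranking (best first):", np.argsort(-scores))
```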