
    Cross-correlation cosmography with HI intensity mapping

    The cross-correlation of a foreground density field with two different background convergence fields can be used to measure cosmographic distance ratios and constrain dark energy parameters. We investigate the possibility of performing such measurements using a combination of optical galaxy surveys and HI intensity mapping surveys, with emphasis on the performance of the planned Square Kilometre Array (SKA). Using HI intensity mapping to probe the foreground density tracer field and/or the background source fields has the advantage of excellent redshift resolution and a longer lever arm achieved by using the lensing signal from high redshift background sources. Our results show that, for our best SKA-optical configuration of surveys, a constant equation of state for dark energy can be constrained to $\simeq 8\%$ for a sky coverage $f_{\rm sky}=0.5$ and assuming a $\sigma(\Omega_{\rm DE})=0.03$ prior for the dark energy density parameter. We also show that using the CMB as the second source plane is not competitive, even when considering a COrE-like satellite.
    Comment: 10 pages, 8 figures, 1 table; version accepted for publication in Physical Review
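The distance ratio referred to above can be written schematically as the ratio of lensing efficiencies for the two source planes behind the same foreground slice. The expression below uses our own notation as an assumption about the paper's convention, with $D_A(a,b)$ the angular diameter distance from redshift $a$ to $b$.

```latex
% Schematic cosmographic distance ratio for a foreground lens slice at z_l
% and two background source planes at z_1 and z_2 (notation ours):
\[
  R(z_l; z_1, z_2)
  = \frac{D_A(z_l, z_1)\,/\,D_A(0, z_1)}{D_A(z_l, z_2)\,/\,D_A(0, z_2)} .
\]
% The ratio of the two density-convergence cross-correlations measures R,
% which depends on the dark energy equation of state only through geometry.
```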

    Who uses bottled gas? Evidence from households in developing countries

    Household surveys in Guatemala, India, Indonesia, Kenya, Pakistan, and Sri Lanka were analyzed using a two-stage Heckman model to examine the factors influencing the decision to use liquefied petroleum gas (stage 1) and, among users, the quantity consumed per person (stage 2). In the first stage, liquefied petroleum gas selection in all six countries increased with household expenditure and the highest level of education attained by female and male household members. Electricity connection increased, and engagement in agriculture and increasing household size decreased, liquefied petroleum gas selection in five countries; urban residence increased selection in four countries; and rising firewood and kerosene prices increased selection in three countries each. In the second stage, the quantity of liquefied petroleum gas consumed increased with rising household expenditure and decreasing price of liquefied petroleum gas in every country. Urban residence increased and engagement in agriculture decreased liquefied petroleum gas consumption. Surveys in Albania, Brazil, Mexico, and Peru, which did not report quantities, were also examined by calculating quantities using national average prices. Although fuel prices faced by individual households could not be tested, the findings largely supported those from the first six countries. Once the education levels of men and women were separately accounted for, the gender of the head of household was not statistically significant in most cases across the ten countries. Where it was significant (five equations), the sign of the coefficient was positive for men, possibly suggesting that female-headed households are burdened with unmeasured economic disadvantages, making less cash available for purchasing liquefied petroleum gas.
    Topics: Energy Production and Transportation; Markets and Market Access; Energy Conservation & Efficiency; Renewable Energy; Energy and Environment
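As a concrete illustration of the two-stage estimator described above, the sketch below fits a probit selection equation and then an outcome regression augmented with the inverse Mills ratio, on synthetic data. It is not the authors' code; the variable names, data-generating process, and coefficients are invented for illustration only.

```python
# Illustrative two-stage Heckman sketch on synthetic data (not the paper's code).
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000

# Placeholder household covariates standing in for expenditure, location, etc.
log_expenditure = rng.normal(8.0, 1.0, n)
urban = rng.integers(0, 2, n)

# Correlated errors so that selection genuinely matters.
e_sel, e_out = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.4], [0.4, 0.25]], n).T

# Stage-1 latent index: does the household use LPG at all?
uses_lpg = (-6.0 + 0.7 * log_expenditure + 0.5 * urban + e_sel > 0).astype(int)

# Stage-2 outcome (e.g. kg of LPG per person), observed only for users.
quantity = 0.3 * log_expenditure + 0.2 * urban + e_out

# Stage 1: probit for the selection decision, then the inverse Mills ratio.
X_sel = sm.add_constant(np.column_stack([log_expenditure, urban]))
probit = sm.Probit(uses_lpg, X_sel).fit(disp=False)
xb = X_sel @ probit.params
inverse_mills = norm.pdf(xb) / norm.cdf(xb)

# Stage 2: OLS on users only, with the inverse Mills ratio as a regressor.
users = uses_lpg == 1
X_out = sm.add_constant(np.column_stack(
    [log_expenditure[users], urban[users], inverse_mills[users]]
))
print(sm.OLS(quantity[users], X_out).fit().summary())
```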

    Testing Emergent Gravity on Galaxy Cluster Scales

    Verlinde's theory of Emergent Gravity (EG) describes gravity as an emergent phenomenon rather than a fundamental force. Applying this reasoning in de Sitter space leads to gravity behaving differently on galaxy and galaxy cluster scales; this excess gravity might offer an alternative to dark matter. Here we test these ideas using the data from the Coma cluster and from 58 stacked galaxy clusters. The X-ray surface brightness measurements of the clusters at $0.1 < z < 1.2$ along with the weak lensing data are used to test the theory. We find that the simultaneous EG fits of the X-ray and weak lensing datasets are significantly worse than those provided by General Relativity (with cold dark matter). For the Coma cluster, the predictions from Emergent Gravity and General Relativity agree in the range of 250 - 700 kpc, while at around 1 Mpc scales, EG total mass predictions are larger by a factor of 2. For the cluster stack the predictions are only in good agreement at around the 1 - 2 Mpc scales, while for $r \gtrsim 10$ Mpc EG is in strong tension with the data. According to the Bayesian information criterion analysis, GR is preferred in all tested datasets; however, we also discuss possible modifications of EG that greatly relax the tension with the data.
    Comment: 19 pages, 5 figures, 5 tables, accepted for publication in JCA
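For reference, the model comparison quoted above uses the standard Bayesian information criterion; with $k$ free parameters, $N$ data points, and maximised likelihood $\hat{\mathcal{L}}$, it reads:

```latex
\[
  \mathrm{BIC} = k \ln N - 2 \ln \hat{\mathcal{L}},
  \qquad
  \Delta\mathrm{BIC} = \mathrm{BIC}_{\rm EG} - \mathrm{BIC}_{\rm GR}.
\]
% A positive Delta BIC favours GR; differences larger than about 10 are
% conventionally read as strong evidence against the higher-BIC model.
```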

    Measuring the possibilities of interfuel substitution

    What are the costs of making consumption or production activities use less-polluting fuels? The author reviews how the fuel mix used by different industries has changed over time and examines two techniques for estimating the responsiveness of fuel demand to fuel prices: econometric models and the engineering approach. With econometric models, the elasticity of substitution between energy and other inputs determines the costs of making activities less energy-intensive, while the elasticity of substitution between sources of energy (interfuel substitutability) determines the marginal costs of replacing one energy source with another. The engineering approach uses more detailed technical information and can draw a more complete picture, but is less able to inform about activities involving a vast number of different economic agents. Among the author's main conclusions: There are surprisingly large variations in energy and fuel use over time and between countries. Industrial output increased 62 percent in OECD countries between 1971 and 1988, for example, while energy use stayed unchanged. Also, shares of energy sources for industry and electricity vary greatly with local availability, indicating that these sectors have some flexibility in choice of energy source. A judgment on whether this variability means that an economy can respond cheaply when energy prices are changed selectively depends on how one reads the more detailed studies in the econometric and engineering literature. Lack of data is the biggest problem in estimating fuel and energy substitutability in non-OECD countries. Engineering studies of fuel switching in industry are rarely available. They exist, however, for the power industry and could be used to estimate the costs of alternative fuel mixes for particular greenfield sites. The technique could not be used for assessment of economy-wide policies. Econometric studies are useful inasmuch as they take a sector- or economy-wide perspective. Econometric techniques are challenging, but often represent the state of the art in providing reliable estimates of elasticities of substitution, particularly when data are scarce and the level of aggregation is high. The issue of whether econometrically estimated structural parameters can be transferred across borders has not been thoroughly investigated.
    Topics: Oil Refining & Gas Industry; Transport and Environment; Energy and Poverty Alleviation; Energy and Environment; Airports and Air Services
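As a reminder of the central quantity above, the two-input elasticity of substitution between fuels $i$ and $j$ is conventionally defined from relative quantities and relative prices. This is the textbook definition, not a formula quoted from the paper.

```latex
\[
  \sigma_{ij} = \frac{\partial \ln (x_i / x_j)}{\partial \ln (p_j / p_i)},
\]
% x_i, x_j: quantities of the two fuels; p_i, p_j: their prices.
% A large sigma_ij means a small relative price change shifts the fuel mix
% substantially, i.e. interfuel substitution is cheap.
```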

    Atomic displacements accompanying deformation twinning: shears and shuffles

    Deformation twins grow by the motion of disconnections along their interfaces, thereby coupling shear with migration. Atomic-scale simulations of this mechanism have advanced to the point where the trajectory of each atom can be followed as it transits from a site in the shrinking grain, through the interface, and onwards to a site in the growing twin. Historically, such trajectories have been factorised into shear and shuffle components according to some defined convention. In the present article, we introduce a method of factorisation consistent with disconnection motion. This procedure is illustrated for the case of {10-12} twinning in hcp materials, and shown to agree with simulated atomic trajectories for Zr.
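The shear/shuffle factorisation discussed above can be written generically as below; the notation is ours, intended only to fix ideas rather than to reproduce the authors' disconnection-based convention.

```latex
\[
  \mathbf{u}_i = \underbrace{\mathbf{S}\,\mathbf{x}_i}_{\text{shear}}
               + \underbrace{\mathbf{s}_i}_{\text{shuffle}},
\]
% u_i: total displacement of atom i as the twin boundary sweeps past it,
% S: the simple-shear tensor of the twinning mode ({10-12} in hcp here),
% x_i: the atom's initial position, s_i: the residual shuffle.
```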

    Testing chameleon gravity with the Coma cluster

    We propose a novel method to test the gravitational interactions in the outskirts of galaxy clusters. When gravity is modified, this is typically accompanied by the introduction of an additional scalar degree of freedom, which mediates an attractive fifth force. The presence of an extra gravitational coupling, however, is tightly constrained by local measurements. In chameleon modifications of gravity, local tests can be evaded by employing a screening mechanism that suppresses the fifth force in dense environments. While the chameleon field may be screened in the interior of the cluster, its outer region can still be affected by the extra force, introducing a deviation between the hydrostatic and lensing mass of the cluster. Thus, the chameleon modification can be tested by combining the gas and lensing measurements of the cluster. We demonstrate the operability of our method with the Coma cluster, for which both a lensing measurement and gas observations from the X-ray surface brightness, the X-ray temperature, and the Sunyaev-Zel'dovich effect are available. Using the joint observational data set, we perform a Markov chain Monte Carlo analysis of the parameter space describing the different profiles in both the Newtonian and chameleon scenarios. We report competitive constraints on the chameleon field amplitude and its coupling strength to matter. In the case of f(R) gravity, corresponding to a specific choice of the coupling, we find an upper bound on the background field amplitude of $|f_{R0}| < 6\times10^{-5}$, which is currently the tightest constraint on cosmological scales.
    Comment: 27 pages, 8 figures, version accepted for publication in JCA
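Schematically, the hydrostatic-versus-lensing test described above rests on a modified hydrostatic-equilibrium condition of the form below, where $\beta$ is the chameleon coupling and $\phi$ the scalar field. The notation is ours and the expression is a sketch of the idea, not a formula copied from the paper.

```latex
\[
  \frac{1}{\rho_{\rm gas}}\frac{dP_{\rm gas}}{dr}
  = -\frac{G\,M(<r)}{r^{2}}
    - \frac{\beta}{M_{\rm Pl}}\frac{d\phi}{dr}.
\]
% In the screened (Newtonian) limit the last term vanishes and the gas-derived
% mass agrees with the lensing mass; an unscreened outer region biases the
% hydrostatic estimate relative to lensing.
```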

    Fold-Back: Using emerging technologies to move from quality assurance to quality enhancement

    Emerging technologies offer an opportunity for the development, at the institutional level, of quality processes with greater capacity to enhance learning in higher education than is available through current quality processes. These systems offer the potential to extend the use of learning analytics in institutional-level quality processes, in addition to the widespread focus on business analytics, and to deliver well-constructed mixes of information from different data sources. Borrowed from music amplification, the term "fold-back" is proposed as a way to describe such a mix. This paper begins the design-research project of designing effective fold-back systems by expanding the theoretical assumptions about learning embedded in higher education quality processes. A number of theories building on Vygotsky's cultural-historical approach are discussed to imagine quality in higher education in terms of what students actually do and how they engage, in addition to what the institution does. The discussion is summarised in a fold-back matrix capturing the sorts of evaluation questions the systems might address. The paper concludes by providing two initial design sketches for re-purposing emerging technologies with the capacity to support expanded quality processes in education. These sketches are based on the Experience Application Programming Interface (xAPI) and Dedoose technologies.
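To illustrate the kind of data an xAPI-based fold-back sketch would draw on, the snippet below builds a minimal xAPI statement (actor, verb, object) of the sort such a system might aggregate. The learner, activity ID, and Learning Record Store endpoint are placeholders, not details taken from the paper.

```python
# A minimal, hypothetical xAPI statement of the kind a fold-back system might
# collect; names and URLs below are placeholders, not from the paper.
import json

statement = {
    "actor": {"mbox": "mailto:student@example.edu", "name": "Example Student"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.edu/activities/unit-3-quiz",
        "definition": {"name": {"en-US": "Unit 3 quiz"}},
    },
}
print(json.dumps(statement, indent=2))

# Sending it to a Learning Record Store would look roughly like this
# (endpoint and credentials are placeholders):
#
#   import requests
#   requests.post(
#       "https://lrs.example.edu/xapi/statements",
#       json=statement,
#       headers={"X-Experience-API-Version": "1.0.3"},
#       auth=("lrs_user", "lrs_password"),
#   )
```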

    The effects of velocities and lensing on moments of the Hubble diagram

    We consider the dispersion on the supernova distance-redshift relation due to peculiar velocities and gravitational lensing, and the sensitivity of these effects to the amplitude of the matter power spectrum. We use the MeMo lensing likelihood developed by Quartin, Marra & Amendola (2014), which accounts for the characteristic non-Gaussian distribution caused by lensing magnification with measurements of the first four central moments of the distribution of magnitudes. We build on the MeMo likelihood by including the effects of peculiar velocities directly into the model for the moments. In order to measure the moments from sparse numbers of supernovae, we take a new approach using Kernel Density Estimation to estimate the underlying probability density function of the magnitude residuals. We also describe a bootstrap re-sampling approach to estimate the data covariance matrix. We then apply the method to the Joint Light-curve Analysis (JLA) supernova catalogue. When we impose only that the intrinsic dispersion in magnitudes is independent of redshift, we find $\sigma_8=0.44^{+0.63}_{-0.44}$ at the one standard deviation level, although we note that in tests on simulations, this model tends to overestimate the magnitude of the intrinsic dispersion, and underestimate $\sigma_8$. We note that the degeneracy between intrinsic dispersion and the effects of $\sigma_8$ is more pronounced when lensing and velocity effects are considered simultaneously, due to a cancellation of redshift dependence when both effects are included. Keeping the model of the intrinsic dispersion fixed as a Gaussian distribution of width 0.14 mag, we find $\sigma_8 = 1.07^{+0.50}_{-0.76}$.
    Comment: 16 pages, updated to match version accepted in MNRA
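To make the moment-measurement idea concrete, here is a rough sketch (not the authors' pipeline) that smooths a set of magnitude residuals with a Gaussian kernel density estimate, reads off the mean and the second to fourth central moments, and bootstraps the sample to estimate their covariance. The residuals are synthetic and the grid settings are arbitrary choices.

```python
# Rough sketch of KDE-based moment estimation with bootstrap covariance
# (synthetic residuals; not the authors' code).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
residuals = rng.normal(0.0, 0.14, 300)   # synthetic Hubble residuals, 0.14 mag scatter

def kde_moments(sample, grid_size=2048):
    """Mean and 2nd-4th central moments of a KDE-smoothed sample."""
    kde = gaussian_kde(sample)
    x = np.linspace(sample.min() - 1.0, sample.max() + 1.0, grid_size)
    dx = x[1] - x[0]
    pdf = kde(x)
    pdf /= pdf.sum() * dx                          # renormalise on the grid
    mean = (x * pdf).sum() * dx
    central = [((x - mean) ** k * pdf).sum() * dx for k in (2, 3, 4)]
    return np.array([mean] + central)

# Bootstrap re-sampling to estimate the covariance of the moment vector.
boot = np.array([
    kde_moments(rng.choice(residuals, size=residuals.size, replace=True))
    for _ in range(200)
])
print("moments:", kde_moments(residuals))
print("covariance:\n", np.cov(boot, rowvar=False))
```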