
    Targeting Conservation Investments in Heterogeneous Landscapes: A distance function approach and application to watershed management

    To achieve a given level of an environmental amenity at least cost, decision-makers must integrate information about spatially variable biophysical and economic conditions. Although the biophysical attributes that contribute to supplying an environmental amenity are often known, the way in which these attributes interact to produce the amenity is often unknown. Given the difficulty in converting multiple attributes into a unidimensional physical measure of an environmental amenity (e.g., habitat quality), analyses in the academic literature tend to use a single biophysical attribute as a proxy for the environmental amenity (e.g., species richness). A narrow focus on a single attribute, however, fails to consider the full range of biophysical attributes that are critical to the supply of an environmental amenity. Drawing on the production efficiency literature, we introduce an alternative conservation targeting approach that relies on distance functions to cost-efficiently allocate conservation funds across a spatially heterogeneous landscape. An approach based on distance functions has the advantage of not requiring a parametric specification of the amenity function (or cost function), but rather only requiring that the decision-maker identify important biophysical and economic attributes. We apply the distance-function approach empirically to an increasingly common, but little studied, conservation initiative: conservation contracting for water quality objectives. The contract portfolios derived from the distance-function application have many desirable properties, including intuitive appeal, robust performance across plausible parametric amenity measures, and the generation of ranking measures that can be easily used by field practitioners in complex decision-making environments that cannot be completely modeled. Working Paper # 2002-01
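
    As a concrete (if simplified) illustration of the targeting idea, the sketch below scores hypothetical land parcels with an output-oriented, data-envelopment-style distance function solved as a linear program: each parcel's biophysical attributes are expanded as far as the frontier spanned by all parcels allows, at no greater cost. The parcel data, the attribute choices and the constant-returns formulation are assumptions for the demo, not the paper's application.

        # A minimal sketch, assuming an output-oriented DEA formulation with
        # constant returns to scale; all parcel data below are hypothetical.
        import numpy as np
        from scipy.optimize import linprog

        def output_expansion(outputs, inputs, o):
            """Largest factor phi by which parcel o's outputs can be scaled
            while staying within the frontier spanned by all parcels at no
            greater cost; the Shephard output distance function is 1/phi."""
            n, r = outputs.shape
            m = inputs.shape[1]
            c = np.zeros(1 + n); c[0] = -1.0            # maximize phi
            # outputs: phi * y_o <= sum_j lambda_j * y_j   (r rows)
            A_out = np.hstack([outputs[o][:, None], -outputs.T])
            # inputs:  sum_j lambda_j * x_j <= x_o         (m rows)
            A_in = np.hstack([np.zeros((m, 1)), inputs.T])
            res = linprog(c, A_ub=np.vstack([A_out, A_in]),
                          b_ub=np.concatenate([np.zeros(r), inputs[o]]),
                          bounds=[(0, None)] * (1 + n))
            return res.x[0]

        # hypothetical parcels: two biophysical attributes (more = better), one cost
        y = np.array([[3.0, 2.0], [1.0, 4.0], [2.0, 2.5], [0.5, 1.0]])
        x = np.array([[10.0], [8.0], [6.0], [9.0]])
        for o in range(len(y)):
            print(f"parcel {o}: distance score = {1.0 / output_expansion(y, x, o):.3f}")

    Parcels with scores near 1 sit on the empirical cost-amenity frontier and would be ranked first in a contract portfolio.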

    Unified dark energy models : a phenomenological approach

    A phenomenological approach is proposed to the problem of the accelerated expansion of the universe and the nature of dark energy. A general class of models is introduced whose energy density depends on the redshift z in such a way that a smooth transition among the three main phases of the universe's evolution (radiation era, matter domination, asymptotic de Sitter state) is naturally achieved. We use the estimated age of the universe, the Hubble diagram of Type Ia Supernovae and the angular size-redshift relation for compact and ultracompact radio structures to test whether the model is in agreement with astrophysical observations and to constrain its main parameters. Although phenomenologically motivated, the model may be straightforwardly interpreted as a two-fluid scenario in which the quintessence is generated by a suitably chosen scalar field potential. On the other hand, the same model may also be read in the context of unified dark energy models or in the framework of modified Friedmann equation theories. Comment: 12 pages, 10 figures, accepted for publication in Physical Review
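
    To make the testing step concrete, here is a toy version in Python: a smooth energy density built from radiation, matter and constant terms (deliberately the standard mixture, not the paper's parametrization), with the age of the universe computed as one observational check. Parameter values are assumptions.

        # Toy model only: standard radiation + matter + Lambda mixture, used to
        # show how the age of the universe constrains a smooth rho(z).
        import numpy as np
        from scipy.integrate import quad

        H0 = 70.0                        # Hubble constant in km/s/Mpc, assumed
        Om_r, Om_m = 8.5e-5, 0.3         # radiation and matter fractions, assumed
        Om_L = 1.0 - Om_r - Om_m         # flat universe

        def E(z):
            """Dimensionless Hubble rate H(z)/H0 for the toy model."""
            return np.sqrt(Om_r*(1 + z)**4 + Om_m*(1 + z)**3 + Om_L)

        # age of the universe: t0 = integral_0^inf dz / [(1+z) E(z)] / H0
        H0_per_Gyr = H0 / 3.0857e19 * 3.156e16   # convert km/s/Mpc to 1/Gyr
        t0, _ = quad(lambda z: 1.0 / ((1 + z) * E(z)), 0, np.inf)
        print(f"t0 = {t0 / H0_per_Gyr:.2f} Gyr")  # ~13.5 Gyr for these values

    Comparing such an integral against the estimated age of the universe, and the corresponding luminosity and angular-diameter distances against the SN Ia and radio-source data, is what constrains the model parameters.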

    Circular orbits of corotating binary black holes: comparison between analytical and numerical results

    We compare recent numerical results, obtained within a ``helical Killing vector'' (HKV) approach, on circular orbits of corotating binary black holes to the analytical predictions made by the effective one body (EOB) method (which has recently been extended to the case of spinning bodies). On the scale of the differences between the results obtained by different numerical methods, we find good agreement between numerical data and analytical predictions for several invariant functions describing the dynamical properties of circular orbits. This agreement is robust against the post-Newtonian accuracy used for the analytical estimates, as well as under choices of resummation method for the EOB ``effective potential'', and gets better as one uses a higher post-Newtonian accuracy. These findings open the way to a significant ``merging'' of analytical and numerical methods, i.e. to matching an EOB-based analytical description of the (early and late) inspiral, up to the beginning of the plunge, to a numerical description of the plunge and merger. We also illustrate the ``flexibility'' of the EOB approach, i.e. the possibility of determining some ``best fit'' values for the analytical parameters by comparison with numerical data. Comment: Minor revisions, accepted for publication in Phys. Rev. D, 19 pages, 6 figures
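
    For flavour, the sketch below uses the non-spinning 2PN EOB radial potential A(u) = 1 - 2u + 2 nu u^3 to locate stable circular orbits; it is a pedagogical toy, not the spinning EOB model actually compared against the HKV data, and the chosen angular momenta and root bracket are assumptions.

        # Toy EOB illustration: stable circular orbits of the 2PN radial
        # potential for equal masses; not the paper's spinning EOB model.
        import numpy as np
        from scipy.optimize import brentq

        nu = 0.25                        # symmetric mass ratio, equal masses

        def A(u):                        # 2PN EOB radial potential, u = GM/(r c^2)
            return 1 - 2*u + 2*nu*u**3

        def W(u, j):                     # squared effective energy on circular orbits
            return A(u) * (1 + (j*u)**2)

        def dW_du(u, j):
            return (-2 + 6*nu*u**2)*(1 + (j*u)**2) + 2*A(u)*j**2*u

        # a stable circular orbit is an interior minimum of W; the bracket is
        # chosen so dW/du changes sign for j safely above the last stable orbit
        for j in (3.5, 3.7, 4.0):
            u0 = brentq(dW_du, 1e-3, 1/6, args=(j,))
            print(f"j = {j}: circular orbit at u = {u0:.4f}, "
                  f"effective energy = {np.sqrt(W(u0, j)):.5f}")

    Invariant curves such as energy versus orbital frequency, built this way order by order, are the kind of functions compared against the numerical HKV sequences.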

    The Hamiltonian formulation of General Relativity: myths and reality

    A conventional wisdom often perpetuated in the literature states that: (i) a 3+1 decomposition of space-time into space and time is synonymous with the canonical treatment and this decomposition is essential for any Hamiltonian formulation of General Relativity (GR); (ii) the canonical treatment unavoidably breaks the symmetry between space and time in GR and the resulting algebra of constraints is not the algebra of four-dimensional diffeomorphisms; (iii) according to some authors this algebra allows one to derive only spatial diffeomorphisms or, according to others, a specific field-dependent and non-covariant four-dimensional diffeomorphism; (iv) the analyses of Dirac [Proc. Roy. Soc. A 246 (1958) 333] and of ADM [Arnowitt, Deser and Misner, in "Gravitation: An Introduction to Current Research" (1962) 227] of the canonical structure of GR are equivalent. We provide some general reasons why these statements should be questioned. Points (i-iii) have been shown to be incorrect in [Kiriushcheva et al., Phys. Lett. A 372 (2008) 5101] and now we thoroughly re-examine all steps of the Dirac Hamiltonian formulation of GR. We show that points (i-iii) above cannot be attributed to the Dirac Hamiltonian formulation of GR. We also demonstrate that the ADM and Dirac formulations are related by a transformation of phase-space variables from the metric $g_{\mu\nu}$ to the lapse and shift functions and the three-metric $g_{km}$, which is not canonical. This proves that point (iv) is incorrect. Points (i-iii) are mere consequences of using a non-canonical change of variables and are not an intrinsic property of either the Hamilton-Dirac approach to constrained systems or Einstein's theory itself. Comment: References are added and updated, Introduction is extended, Subsection 3.5 is added, 83 pages; corresponds to the published version
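
    For reference, the change of variables at issue is the standard ADM one (written here assuming -+++ signature):

        g_{00} = -N^2 + g_{km} N^k N^m , \qquad
        g_{0k} = g_{km} N^m \equiv N_k ,
        \qquad\text{equivalently}\qquad
        N = \frac{1}{\sqrt{-g^{00}}} , \qquad
        N^k = -\frac{g^{0k}}{g^{00}} .

    The paper's claim is that trading the $g_{0\mu}$ components for $(N, N^k)$ in phase space fails the canonicity test, so conclusions drawn in the ADM variables need not transfer to the Dirac formulation.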

    Nucleation of a sodium droplet on C60

    We investigate theoretically the progressive coating of C$_{60}$ by several sodium atoms. Density functional calculations using a nonlocal functional are performed for NaC$_{60}$ and Na$_2$C$_{60}$ in various configurations. These data are used to construct an empirical atomistic model in order to treat larger sizes in a statistical and dynamical context. Fluctuating charges are incorporated to account for charge transfer between sodium and carbon atoms. By performing systematic global optimization in the size range $1 \le n \le 30$, we find that Na$_n$C$_{60}$ is homogeneously coated at small sizes, and that a growing droplet is formed above $n \approx 8$. The separate effects of single ionization and thermalization are also considered, as well as the changes due to a strong external electric field. The present results are discussed in the light of various experimental data. Comment: 17 pages, 10 figures
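
    A minimal sketch of the fluctuating-charge ingredient, assuming the usual electronegativity-equalization form (minimize E = chi.q + q.J.q/2 subject to total-charge conservation, which reduces to a linear solve); the two-site parameters are placeholders, not the paper's Na/C parametrization.

        # Fluctuating-charge (electronegativity equalization) sketch; the
        # chi and J values below are illustrative, not fitted Na/C data.
        import numpy as np

        def solve_charges(chi, J, Q_total=0.0):
            """Minimize E = chi.q + 0.5 q.J.q subject to sum(q) = Q_total.
            Returns the charges and the chemical potential (Lagrange multiplier)."""
            n = len(chi)
            A = np.zeros((n + 1, n + 1))
            A[:n, :n] = J                 # hardness / Coulomb interaction matrix
            A[:n, n] = -1.0               # chemical-potential column
            A[n, :n] = 1.0                # charge-conservation row
            b = np.concatenate([-chi, [Q_total]])
            x = np.linalg.solve(A, b)
            return x[:n], x[n]

        # toy diatomic: the more electronegative site 0 pulls charge from site 1
        chi = np.array([5.0, 3.0])        # electronegativities (eV), assumed
        J = np.array([[10.0, 4.0],        # hardness/Coulomb matrix (eV), assumed
                      [4.0, 10.0]])
        q, mu = solve_charges(chi, J)
        print(q, mu)   # q[0] < 0 < q[1]: electrons flow to the electronegative site

    In a Na$_n$C$_{60}$ simulation the same solve is repeated at every step of the dynamics or optimization, so the charges respond to the instantaneous geometry.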

    Cosmological parameters from SDSS and WMAP

    We measure cosmological parameters using the three-dimensional power spectrum P(k) from over 200,000 galaxies in the Sloan Digital Sky Survey (SDSS) in combination with WMAP and other data. Our results are consistent with a ``vanilla'' flat adiabatic Lambda-CDM model without tilt (n=1), running tilt, tensor modes or massive neutrinos. Adding SDSS information more than halves the WMAP-only error bars on some parameters, tightening 1 sigma constraints on the Hubble parameter from $h \approx 0.74^{+0.18}_{-0.07}$ to $h \approx 0.70^{+0.04}_{-0.03}$, on the matter density from $\Omega_m \approx 0.25 \pm 0.10$ to $\Omega_m \approx 0.30 \pm 0.04$ (1 sigma) and on neutrino masses from <11 eV to <0.6 eV (95%). SDSS helps even more when dropping prior assumptions about curvature, neutrinos, tensor modes and the equation of state. Our results are in substantial agreement with the joint analysis of WMAP and the 2dF Galaxy Redshift Survey, which is an impressive consistency check with independent redshift survey data and analysis techniques. In this paper, we place particular emphasis on clarifying the physical origin of the constraints, i.e., what we do and do not know when using different data sets and prior assumptions. For instance, dropping the assumption that space is perfectly flat, the WMAP-only constraint on the measured age of the Universe tightens from $t_0 \approx 16.3^{+2.3}_{-1.8}$ Gyr to $t_0 \approx 14.1^{+1.0}_{-0.9}$ Gyr by adding SDSS and SN Ia data. Including tensors, running tilt, neutrino mass and equation of state in the list of free parameters, many constraints are still quite weak, but future cosmological measurements from SDSS and other sources should allow these to be substantially tightened. Comment: Minor revisions to match accepted PRD version. SDSS data and ppt figures available at http://www.hep.upenn.edu/~max/sdsspars.htm
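
    The error-bar tightening works, to first approximation, like multiplying independent Gaussian likelihoods, where inverse variances add. A toy illustration with symmetrized error bars (the first pair echoes the WMAP-only h constraint above; the second constraint's numbers are invented for the demo, not taken from SDSS):

        # Inverse-variance combination of two Gaussian constraints; the
        # second (mu2, sig2) pair is hypothetical, purely for illustration.
        def combine(mu1, sig1, mu2, sig2):
            """Product of two Gaussian likelihoods -> combined mean and sigma."""
            w1, w2 = 1.0 / sig1**2, 1.0 / sig2**2
            return (w1*mu1 + w2*mu2) / (w1 + w2), (w1 + w2) ** -0.5

        mu, sig = combine(0.74, 0.12, 0.69, 0.05)
        print(f"combined: h = {mu:.3f} +/- {sig:.3f}")   # ~0.697 +/- 0.046

    The real analysis involves correlated, non-Gaussian, multi-parameter likelihoods, but this additivity of information is the mechanism behind the halved error bars.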

    Assessing Big-Bang Nucleosynthesis

    Systematic uncertainties in the light-element abundances and their evolution make a rigorous statistical assessment difficult. However, using Bayesian methods we show that the following statement is robust: the predicted and measured abundances are consistent with 95% credibility only if the baryon-to-photon ratio is between $2\times 10^{-10}$ and $6.5\times 10^{-10}$ and the number of light neutrino species is less than 3.9. Our analysis suggests that the $^4$He abundance may have been systematically underestimated. Comment: 7 pages, LaTeX(2.09), 6 postscript figures (attached). A postscript version with figures can be found at ftp://astro.uchicago.edu/pub/astro/copi/assessing_BBN . (See the README file for details.)
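
    A minimal sketch of the kind of Bayesian statement quoted above: a grid posterior for the baryon-to-photon ratio from a single mock deuterium measurement. The power-law abundance scaling, the mock datum and the flat prior are illustrative assumptions, not the paper's full multi-element analysis.

        # Grid-posterior sketch; the D/H scaling and mock datum are assumed.
        import numpy as np

        eta = np.linspace(1e-10, 10e-10, 2000)       # flat prior grid on eta
        dh_pred = 2.6e-5 * (eta / 6e-10) ** -1.6     # rough BBN-like D/H scaling
        dh_obs, dh_err = 3.0e-5, 0.4e-5              # mock measurement
        logL = -0.5 * ((dh_pred - dh_obs) / dh_err) ** 2
        post = np.exp(logL - logL.max())
        post /= post.sum()

        # central 95% credible interval from the cumulative posterior
        cdf = np.cumsum(post)
        lo = eta[np.searchsorted(cdf, 0.025)]
        hi = eta[np.searchsorted(cdf, 0.975)]
        print(f"eta in [{lo:.2e}, {hi:.2e}] at 95% credibility")

    Folding several elements, systematic error models and the neutrino-species dependence into the likelihood is what turns this toy into the quoted constraints.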

    Dark Matter and Fundamental Physics with the Cherenkov Telescope Array

    The Cherenkov Telescope Array (CTA) is a project for a next-generation observatory for very high energy (GeV-TeV) ground-based gamma-ray astronomy, currently in its design phase, and foreseen to be operative a few years from now. Several tens of telescopes of 2-3 different sizes, distributed over a large area, will allow for a sensitivity about a factor 10 better than that of current instruments such as H.E.S.S., MAGIC and VERITAS, an energy coverage from a few tens of GeV to several tens of TeV, and a field of view of up to 10 deg. In the following study, we investigate the prospects for CTA to study several science questions that influence our current knowledge of fundamental physics. Based on conservative assumptions for the performance of the different CTA telescope configurations, we employ a Monte Carlo based approach to evaluate the prospects for detection. First, we discuss CTA prospects for cold dark matter searches, following different observational strategies: in dwarf satellite galaxies of the Milky Way, in the region close to the Galactic Centre, and in clusters of galaxies. The possible search for spatial signatures, facilitated by the larger field of view of CTA, is also discussed. Next, we consider searches for axion-like particles which, besides being possible candidates for dark matter, may also explain the unexpectedly low absorption by extragalactic background light of gamma rays from very distant blazars. Simulated light curves of flaring sources are also used to determine the sensitivity to violations of Lorentz Invariance through detection of the possible delay between the arrival times of photons at different energies. Finally, we mention searches for other exotic physics with CTA. Comment: 31 pages, accepted for publication in Astroparticle Physics
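
    A schematic of the Monte Carlo detection logic (not the consortium's actual pipeline): draw Poisson on/off counts for an assumed signal and background and compute the standard Li & Ma (1983, Eq. 17) significance. All rates below are assumptions.

        # Monte Carlo sketch of on/off detection significance; the signal,
        # background and exposure ratio below are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)

        def li_ma_sigma(n_on, n_off, alpha):
            """Li & Ma (1983, Eq. 17) significance for on/off counting,
            with alpha the ratio of on-source to off-source exposure."""
            term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
            term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
            return np.sqrt(2.0 * (term_on + term_off))

        signal, bkg_off, alpha = 25.0, 100.0, 0.2   # expected counts, assumed
        n_trials = 10_000
        n_on = rng.poisson(signal + alpha * bkg_off, n_trials)
        n_off = rng.poisson(bkg_off, n_trials)
        sigma = li_ma_sigma(n_on, n_off, alpha)
        print(f"fraction of trials above 5 sigma: {(sigma > 5).mean():.2f}")

    Scanning such trials over dark matter mass and annihilation cross-section, with the instrument response folded in, yields sensitivity forecasts of the kind discussed in the paper.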

    Dark Energy and Gravity

    I review the problem of dark energy, focusing on the cosmological constant as the candidate, and discuss its implications for the nature of gravity. Part 1 briefly overviews the currently popular `concordance cosmology' and summarises the evidence for dark energy. It also provides the observational and theoretical arguments in favour of the cosmological constant as the candidate and emphasises why no other approach really solves the conceptual problems usually attributed to the cosmological constant. Part 2 describes some of the approaches to understand the nature of the cosmological constant and attempts to extract the key ingredients which must be present in any viable solution. I argue that (i) the cosmological constant problem cannot be satisfactorily solved until the gravitational action is made invariant under the shift of the matter Lagrangian by a constant and (ii) this cannot happen if the metric is the dynamical variable. Hence the cosmological constant problem essentially has to do with our (mis)understanding of the nature of gravity. Part 3 discusses an alternative perspective on gravity in which the action is explicitly invariant under the above transformation. Extremizing this action leads to an equation determining the background geometry which gives Einstein's theory at the lowest order with Lanczos-Lovelock type corrections. (Condensed abstract.) Comment: Invited review for a special Gen. Rel. Grav. issue on Dark Energy, edited by G.F.R. Ellis, R. Maartens and H. Nicolai; revtex; 22 pages; 2 figures
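
    The shift invariance invoked in (i) can be stated in one line (up to sign conventions for the stress tensor):

        L_{\rm m} \to L_{\rm m} + c
        \quad\Longrightarrow\quad
        T_{\mu\nu} \to T_{\mu\nu} + c\, g_{\mu\nu}
        \quad\Longrightarrow\quad
        \Lambda_{\rm eff} \to \Lambda_{\rm eff} - 8\pi G\, c ,

    so any constant added to the matter sector re-enters Einstein's equations exactly as a cosmological constant; an action blind to such shifts is what Part 3 constructs.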