
    Polling bias and undecided voter allocations: US Presidential elections, 2004 - 2016

    Accounting for undecided and uncertain voters is a challenging issue in predicting election results from public opinion polls. Undecided voters typify the uncertainty of swing voters in polls, but they are often ignored or allocated to each candidate in a simple, deterministic manner. Historically this may have been adequate, because the undecided share was small enough to assume that it did not affect the relative proportions of the decided voters. However, in the presence of large numbers of undecided voters, these static rules may in fact bias the election predictions of poll authors and meta-poll analysts. In this paper, we examine the effect of undecided voters in the 2016 US presidential election in comparison with the previous three presidential elections. We show that there was a relatively high number of undecided voters over the campaign and on election day, and that the allocation of undecided voters in this election was not consistent with two-party proportional (or even) allocations. We find evidence that static allocation regimes are inadequate for election prediction models and that probabilistic allocations may be superior. We also estimate the bias attributable to polling agencies, often referred to as "house effects". Comment: 32 pages, 9 figures, 6 tables
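
    The contrast drawn here is between static rules (for example, splitting the undecided share evenly) and probabilistic allocation. Below is a minimal sketch of the probabilistic idea in Python; the Beta split, the function name and the poll numbers are illustrative assumptions, not the authors' model:

    import numpy as np

    rng = np.random.default_rng(0)

    def predicted_margin(dem, rep, undecided, alpha=4.0, beta=4.0, n_sims=10_000):
        """Simulate the Dem-minus-Rep margin when the undecided share is split
        probabilistically rather than by a fixed (e.g. 50/50) rule."""
        dem_fraction = rng.beta(alpha, beta, size=n_sims)   # random split on each draw
        dem_total = dem + undecided * dem_fraction
        rep_total = rep + undecided * (1.0 - dem_fraction)
        return dem_total - rep_total                        # array of simulated margins

    # Toy poll: 44% vs 42% with 14% undecided.
    margins = predicted_margin(0.44, 0.42, 0.14)
    print(f"mean margin {margins.mean():+.3f}, 90% interval "
          f"[{np.quantile(margins, 0.05):+.3f}, {np.quantile(margins, 0.95):+.3f}]")

    A static even split would return the single number +0.02 in this toy example; the probabilistic allocation instead returns a distribution of margins whose spread reflects how much the undecided share could move the result.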

    Cosmology with the lights off: Standard sirens in the Einstein Telescope era

    We explore the prospects for constraining cosmology using gravitational-wave (GW) observations of neutron-star binaries by the proposed Einstein Telescope (ET), exploiting the narrowness of the neutron-star mass function. Double neutron-star (DNS) binaries are expected to be one of the first sources detected after "first light" of Advanced LIGO, and are expected to be detected at a rate of a few tens per year in the advanced era. However, the proposed ET could catalog tens of thousands per year. Combining the measured source redshift distributions with GW-network distance determinations will permit not only precision measurement of the background cosmological parameters but will also provide insight into the astrophysical properties of these DNS systems. Of particular interest will be to probe the distribution of delay times between DNS-binary creation and subsequent merger, as well as the evolution of the star-formation rate density within ET's detection horizon. Keeping H_0, \Omega_{m,0} and \Omega_{\Lambda,0} fixed and investigating the precision with which the dark-energy equation-of-state parameters could be recovered, we found that with 10^5 detected DNS binaries we could constrain these parameters to an accuracy similar to forecasted constraints from future CMB+BAO+SNIa measurements. Furthermore, modeling the merger delay-time distribution as a power law, and the star-formation rate (SFR) density as a parametrized version of the Porciani and Madau SF2 model, we find that the associated astrophysical parameters are constrained to within ~10%. All parameter precisions scaled as 1/sqrt(N), where N is the number of cataloged detections. We also investigated how precisions varied with the intrinsic underlying properties of the Universe and with the distance reach of the network (which may be affected by the low-frequency cutoff of the detector). Comment: 24 pages, 11 figures, 6 tables. Minor changes to reflect published version. References updated and corrected
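
    The quoted 1/sqrt(N) scaling means that, if $\sigma_0$ is the precision obtained from a reference catalogue of $N_0$ detections, then a catalogue of $N$ detections gives approximately

    $\sigma(N) \approx \sigma_0 \sqrt{N_0 / N}$,

    so, for example, growing the catalogue from $10^4$ to $10^5$ DNS binaries tightens each constraint by a factor of $\sqrt{10} \approx 3.2$ (the symbols $\sigma_0$ and $N_0$ are introduced here purely for illustration).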

    Radiative neutron capture on a proton at BBN energies

    The total cross section for radiative neutron capture on a proton, np → dγ, is evaluated at big bang nucleosynthesis (BBN) energies. The electromagnetic transition amplitudes are calculated up to next-to-leading order within the framework of pionless effective field theory with dibaryon fields. We also calculate the dγ → np cross section and the photon analyzing power for the dγ⃗ → np process from the amplitudes. The values of the low-energy constants that appear in the amplitudes are estimated by a Markov Chain Monte Carlo analysis using the relevant low-energy experimental data. Our result agrees well with those of other theoretical calculations, except for the np → dγ cross section at some energies estimated by an R-matrix analysis. We also study the uncertainties in our estimation of the np → dγ cross section at the relevant BBN energies and find that the estimated cross section is reliable to within ~1% error. Comment: 21 pages and 12 eps figures; 6 eps figures and 2 references added, and accepted for publication in Phys. Rev.
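
    The low-energy constants are fitted by a Markov Chain Monte Carlo analysis against experimental data. The following is a generic random-walk Metropolis skeleton of such a fit, not the authors' pionless-EFT likelihood; the toy linear model, synthetic data and step size are assumptions made only to keep the sketch self-contained:

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy "experimental" data: y = model(x, c_true) + noise.  The real analysis fits
    # EFT low-energy constants to np -> d gamma data; this is only the MCMC skeleton.
    def model(x, c):
        return c[0] + c[1] * x

    x_data = np.linspace(0.0, 1.0, 20)
    c_true = np.array([0.5, 2.0])
    sigma = 0.05
    y_data = model(x_data, c_true) + rng.normal(0.0, sigma, x_data.size)

    def log_posterior(c):
        # Flat prior; Gaussian likelihood from the measurement errors.
        resid = (y_data - model(x_data, c)) / sigma
        return -0.5 * np.sum(resid**2)

    def metropolis(n_steps=20_000, step=0.02):
        c = np.array([0.0, 0.0])
        logp = log_posterior(c)
        chain = np.empty((n_steps, c.size))
        for i in range(n_steps):
            proposal = c + rng.normal(0.0, step, c.size)   # random-walk proposal
            logp_new = log_posterior(proposal)
            if np.log(rng.random()) < logp_new - logp:     # Metropolis accept/reject
                c, logp = proposal, logp_new
            chain[i] = c
        return chain

    chain = metropolis()
    print("posterior means:", chain[5000:].mean(axis=0))   # discard burn-in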

    Western Region Renewable Energy Markets: Implications for the Bureau of Land Management

    The purpose of this analysis is to provide the U.S. Department of the Interior (DOI) and the Bureau of Land Management (BLM) with an overview of renewable energy (RE) generation markets, transmission planning efforts, and the ongoing role of BLM RE projects in the electricity markets of the 11 states (Arizona, California, Colorado, Idaho, Montana, Nevada, New Mexico, Oregon, Utah, Washington, and Wyoming) that comprise the Western Electricity Coordinating Council (WECC) Region. The analysis focuses on the status of, and projections for, likely development of non-hydroelectric renewable electricity from solar (including photovoltaic [PV] and concentrating solar power [CSP]), wind, biomass, and geothermal resources in these states. Absent new policy drivers, and without the extension of the DOE loan guarantee program and Treasury's Section 1603 program, state renewable portfolio standard (RPS) requirements are likely to remain a primary driver for new RE deployment in the western United States. Assuming no additional policy incentives are implemented, projected RE demand for the WECC states by 2020 is 134,000 GWh. Installed capacity to meet that demand will need to be within the range of 28,000-46,000 MW.
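
    A back-of-the-envelope check of how the capacity range follows from the demand figure, assuming the MW values refer to nameplate capacity and the GWh value to annual generation: spreading 134,000 GWh over the 8,760 hours in a year gives an average delivered power of

    134,000,000 MWh / 8,760 h ≈ 15,300 MW,

    so the quoted 28,000-46,000 MW range corresponds to fleet-average capacity factors of roughly 15,300/46,000 ≈ 33% up to 15,300/28,000 ≈ 55%, a plausible span for a mix of wind, solar, geothermal and biomass generation.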

    Harold Jeffreys's Theory of Probability Revisited

    Published exactly seventy years ago, Jeffreys's Theory of Probability (1939) has had a unique impact on the Bayesian community and is now considered to be one of the main classics in Bayesian Statistics, as well as the initiator of the objective Bayes school. In particular, its advances on the derivation of noninformative priors and on the scaling of Bayes factors have had a lasting impact on the field. However, the book reflects the characteristics of its time, especially in terms of mathematical rigor. In this paper we point out the fundamental aspects of this reference work, especially its thorough coverage of testing problems and its construction of both estimation and testing noninformative priors based on functional divergences. Our major aim here is to help modern readers navigate this difficult text and concentrate on the passages that are still relevant today. Comment: This paper is commented in [arXiv:1001.2967], [arXiv:1001.2968], [arXiv:1001.2970], [arXiv:1001.2975], [arXiv:1001.2985], [arXiv:1001.3073]; rejoinder in [arXiv:0909.1008]. Published at http://dx.doi.org/10.1214/09-STS284 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)
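
    A standard worked instance of the noninformative priors discussed here (a textbook example, not a passage quoted from the book): for a Binomial$(n,\theta)$ observation the Fisher information is $I(\theta) = n/[\theta(1-\theta)]$, so the Jeffreys prior is

    $\pi(\theta) \propto \sqrt{I(\theta)} \propto \theta^{-1/2}(1-\theta)^{-1/2}$,

    i.e. a Beta(1/2, 1/2) distribution, a choice that is invariant under smooth reparametrizations of $\theta$.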

    A Bayesian approach to the follow-up of candidate gravitational wave signals

    Ground-based gravitational-wave laser interferometers (LIGO, GEO-600, Virgo and TAMA 300) have now reached high sensitivity and duty cycle. We present a Bayesian, evidence-based approach to the search for gravitational waves, aimed in particular at the follow-up of candidate events generated by the analysis pipeline. We introduce and demonstrate an efficient method to compute the evidence and the odds ratio between different models, and we illustrate this approach using the specific case of the gravitational-wave signal generated during the inspiral phase of binary systems, modelled at the leading quadrupole Newtonian order, in synthetic noise. We show that the method is effective in detecting signals at the detection threshold and that it is robust against (some types of) instrumental artefacts. The computational efficiency of this method makes it scalable to the analysis of all the triggers generated by the analysis pipelines that search for coalescing binaries in surveys with ground-based interferometers, and to a whole variety of signal waveforms characterised by a larger number of parameters. Comment: 9 pages
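
    In symbols, the quantities being computed are the standard Bayesian evidence and odds ratio (general definitions, not details specific to this pipeline): for data $d$ and competing hypotheses $H_i$, $H_j$ with parameters $\vec{\theta}$,

    $Z_i = p(d|H_i) = \int p(d|\vec{\theta},H_i)\, p(\vec{\theta}|H_i)\, d\vec{\theta}$  and  $O_{ij} = \frac{p(H_i|d)}{p(H_j|d)} = \frac{p(H_i)}{p(H_j)} \frac{Z_i}{Z_j}$,

    where $Z_i/Z_j$ is the Bayes factor; the follow-up then assesses each candidate trigger through the odds of the signal model against the noise-only model.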

    A TV-Gaussian prior for infinite-dimensional Bayesian inverse problems and its numerical implementations

    Many scientific and engineering problems require performing Bayesian inference in function spaces, in which the unknowns are infinite dimensional. In such problems, choosing an appropriate prior distribution is an important task. In particular, we consider problems where the function to be inferred is subject to sharp jumps, which render the commonly used Gaussian measures unsuitable. On the other hand, the so-called total variation (TV) prior can only be defined in a finite-dimensional setting and does not lead to a well-defined posterior measure in function spaces. In this work we present a TV-Gaussian (TG) prior to address such problems, where the TV term is used to detect sharp jumps of the function and the Gaussian distribution is used as a reference measure, so that the construction results in a well-defined posterior measure in the function space. We also present an efficient Markov Chain Monte Carlo (MCMC) algorithm to draw samples from the posterior distribution under the TG prior. With numerical examples we demonstrate the performance of the TG prior and the efficiency of the proposed MCMC algorithm.
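
    Written out (with notation assumed here for illustration rather than quoted from the paper), the construction pairs a Gaussian reference measure $\mu_0$ on the function space with a TV penalty through a Radon-Nikodym derivative of the form

    $\frac{d\mu_{TG}}{d\mu_0}(u) \propto \exp\big(-\lambda\,\|u\|_{TV}\big)$,

    so the TV term captures sharp jumps in the unknown $u$ while the Gaussian reference measure ensures that the posterior, $\frac{d\mu^y}{d\mu_{TG}}(u) \propto \exp(-\Phi(u;y))$ with $\Phi$ the data-misfit functional, remains well defined in the infinite-dimensional setting.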

    Bayesian coherent analysis of in-spiral gravitational wave signals with a detector network

    The present operation of the ground-based network of gravitational-wave laser interferometers in "enhanced" configuration brings the search for gravitational waves into a regime where detection is highly plausible. Developing techniques that allow us to discriminate a signal of astrophysical origin from instrumental artefacts in the interferometer data, and to extract the full range of information, is among the primary goals of the current work. Here we report the details of a Bayesian approach to the problem of inference for gravitational-wave observations using a network of instruments, for the computation of the Bayes factor between two hypotheses and the evaluation of the marginalised posterior density functions of the unknown model parameters. The numerical algorithm to tackle the notoriously difficult problem of evaluating large multi-dimensional integrals is based on a technique known as Nested Sampling, which provides an attractive alternative to more traditional Markov-chain Monte Carlo (MCMC) methods. We discuss the details of the implementation of this algorithm and its performance against a Gaussian model of the background noise, considering the specific case of the signal produced by the in-spiral of binary systems of black holes and/or neutron stars, although the method is completely general and can be applied to other classes of sources. We also demonstrate the utility of this approach by introducing a new coherence test to distinguish between the presence of a coherent signal of astrophysical origin in the data of multiple instruments and the presence of incoherent accidental artefacts, and we study the effects on the estimation of the source parameters as a function of the number of instruments in the network. Comment: 22 pages
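
    A minimal sketch of the Nested Sampling idea in Python, on a toy one-dimensional problem and with naive rejection sampling for the likelihood-constrained prior draws; the production analysis uses a multi-detector waveform likelihood and far more sophisticated constrained sampling:

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy problem: uniform prior on [-10, 10], Gaussian likelihood centred at 3 with width 1.
    def log_likelihood(x):
        return -0.5 * (x - 3.0) ** 2 - 0.5 * np.log(2.0 * np.pi)

    def sample_prior(size=None):
        return rng.uniform(-10.0, 10.0, size)

    def nested_sampling(n_live=100, max_iter=5000, tol=1e-2):
        live = sample_prior(n_live)
        live_logl = log_likelihood(live)
        log_z = -np.inf          # running log-evidence
        log_x_prev = 0.0         # log prior volume, starts at log(1)
        for i in range(1, max_iter + 1):
            # Stop once even the best live point could no longer change log_z appreciably.
            if live_logl.max() + log_x_prev < log_z + np.log(tol):
                break
            worst = np.argmin(live_logl)            # lowest-likelihood live point
            log_x = -i / n_live                     # expected shrinkage of the prior volume
            log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))   # weight of this shell
            log_z = np.logaddexp(log_z, live_logl[worst] + log_w)
            log_x_prev = log_x
            # Replace the worst point with a prior draw above the current likelihood bound
            # (plain rejection sampling: fine in one dimension, hopeless in many).
            while True:
                candidate = sample_prior()
                if log_likelihood(candidate) > live_logl[worst]:
                    break
            live[worst] = candidate
            live_logl[worst] = log_likelihood(candidate)
        # Add the contribution of the points still alive at termination.
        logsum_live = live_logl.max() + np.log(np.sum(np.exp(live_logl - live_logl.max())))
        log_z = np.logaddexp(log_z, log_x_prev - np.log(n_live) + logsum_live)
        return log_z

    print("log evidence ~", nested_sampling())   # analytic value for this toy: log(1/20) ~ -3.0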

    Exploring universality in nuclear clusters with Halo EFT

    I present results and highlight aspects of the application of halo EFT to loosely bound systems composed of nucleons and alpha particles, with emphasis on Coulomb interactions. Comment: 3 pages, 2 figures; talk given at the 21st European Conference on Few-Body Problems in Physics, Salamanca, Aug. 29th - Sep. 3rd, 201

    Determination of the Cosmic Distance Scale from Sunyaev-Zel'dovich Effect and Chandra X-ray Measurements of High Redshift Galaxy Clusters

    We determine the distance to 38 clusters of galaxies in the redshift range 0.14 < z < 0.89 using X-ray data from Chandra and Sunyaev-Zel'dovich Effect (SZE) data from the Owens Valley Radio Observatory and the Berkeley-Illinois-Maryland Association interferometric arrays. The cluster plasma and dark matter distributions are analyzed using a hydrostatic equilibrium model that accounts for radial variations in density, temperature and abundance, and the statistical and systematic errors of this method are quantified. The analysis is performed via a Markov chain Monte Carlo technique that provides simultaneous estimation of all model parameters. We measure a Hubble constant of 76.9 +3.9/-3.4 (statistical) +10.0/-8.0 (systematic) km/s/Mpc at 68% confidence for an Omega_M = 0.3, Omega_Lambda = 0.7 cosmology. We also analyze the data using an isothermal beta model that does not invoke the hydrostatic equilibrium assumption, and find H_0 = 73.7 +4.6/-3.8 +9.5/-7.6 km/s/Mpc; to avoid effects from cool cores in clusters, we repeated this analysis excluding the central 100 kpc from the X-ray data, and find H_0 = 77.6 +4.8/-4.3 +10.1/-8.2 km/s/Mpc. The consistency between the models illustrates the relative insensitivity of SZE/X-ray determinations of H_0 to the details of the cluster model. Our determination of the Hubble parameter in the distant universe agrees with the recent measurement from the Hubble Space Telescope Key Project that probes the nearby universe. Comment: ApJ submitted (revised version)
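
    Schematically, the SZE/X-ray distance determination works because the two observables depend differently on the electron density (a standard argument, with cluster shape factors and relativistic corrections suppressed):

    $\Delta T_{SZE} \propto \int n_e T_e \, dl$,   $S_X \propto \int n_e^2 \Lambda_{eH}(T_e) \, dl$,

    so eliminating $n_e$ gives the line-of-sight extent of the cluster, $L \propto \Delta T_{SZE}^2 \Lambda_{eH} / (S_X T_e^2)$; comparing $L$ with the observed angular size $\theta$ yields the angular-diameter distance $D_A \simeq L/\theta$, and with the redshift known this fixes $H_0 \propto 1/D_A$ for a given cosmology.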