
    Multivariate Hierarchical Frameworks for Modelling Delayed Reporting in Count Data

    In many fields and applications, count data can be subject to delayed reporting. This is where the total count, such as the number of disease cases contracted in a given week, may not be immediately available, instead arriving in parts over time. For short-term decision making, the statistical challenge lies in predicting the total count based on any observed partial counts, along with a robust quantification of uncertainty. In this article we discuss previous approaches to modelling delayed reporting and present a multivariate hierarchical framework where the count generating process and delay mechanism are modelled simultaneously. Unlike other approaches, the framework can also be easily adapted to allow for the presence of under-reporting in the final observed count. To compare our approach with existing frameworks, one of which we extend to potentially improve predictive performance, we present a case study of reported dengue fever cases in Rio de Janeiro. Based on both within-sample and out-of-sample posterior predictive model checking and arguments of interpretability, adaptability, and computational efficiency, we discuss the advantages and disadvantages of each modelling framework. Comment: Biometrics (2019)
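
    A minimal sketch of the kind of generative structure described above, assuming a Poisson-Gamma (negative binomial) weekly total split across delay periods by a multinomial; the delay probabilities, prior parameters, and conjugate predictive step are illustrative assumptions, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative delay structure: true weekly total N ~ Poisson(lambda),
# lambda ~ Gamma(phi, rate=phi/mu) (so N is negative binomial), and each
# case is reported at delay d with probability p[d].
p = np.array([0.5, 0.3, 0.15, 0.05])  # assumed delay distribution
mu, phi = 100.0, 10.0                 # prior mean / dispersion of N

# Simulate one week and its partial counts by delay.
lam = rng.gamma(phi, mu / phi)
total = rng.poisson(lam)
parts = rng.multinomial(total, p)

# Suppose only delays 0 and 1 have been observed so far.
z = parts[:2].sum()        # cases reported to date
pi_obs = p[:2].sum()       # reporting probability mass covered so far

# Conjugate update: lambda | z ~ Gamma(phi + z, phi/mu + pi_obs), so the
# still-unreported remainder is negative binomial. Draw predictive totals.
post_shape = phi + z
post_rate = phi / mu + pi_obs
succ_p = post_rate / (post_rate + (1.0 - pi_obs))
draws = z + rng.negative_binomial(post_shape, succ_p, size=5000)
print(f"true total {total}; predicted {draws.mean():.1f} "
      f"(95% PI {np.percentile(draws, 2.5):.0f}-"
      f"{np.percentile(draws, 97.5):.0f})")
```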

    Polling bias and undecided voter allocations: US Presidential elections, 2004 - 2016

    Accounting for undecided and uncertain voters is a challenging issue for predicting election results from public opinion polls. Undecided voters typify the uncertainty of swing voters in polls but are often ignored or allocated to each candidate in a simple, deterministic manner. Historically this may have been adequate, because the number of undecided voters was small enough to assume that they did not affect the relative proportions of the decided voters. However, in the presence of high numbers of undecided voters, these static rules may in fact bias election predictions from election poll authors and meta-poll analysts. In this paper, we examine the effect of undecided voters in the 2016 US presidential election and compare it to the previous three presidential elections. We show there was a relatively high number of undecided voters over the campaign and on election day, and that the allocation of undecided voters in this election was not consistent with two-party proportional (or even) allocations. We find evidence that static allocation regimes are inadequate for election prediction models and that probabilistic allocations may be superior. We also estimate the bias attributable to polling agencies, often referred to as "house effects". Comment: 32 pages, 9 figures, 6 tables
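
    A toy contrast between a static (even) allocation and a probabilistic allocation of the undecided share; the poll numbers and the Dirichlet split are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative poll: two-party support plus a large undecided share.
dem, rep, undecided = 0.44, 0.42, 0.14

# Static (even) allocation: split the undecideds 50/50, deterministically.
static_dem = dem + undecided / 2
static_rep = rep + undecided / 2
print(f"static margin: {static_dem - static_rep:+.3f}")

# Probabilistic allocation: treat the undecided split itself as uncertain,
# here Dirichlet-distributed around an even split (an assumed prior).
splits = rng.dirichlet([5, 5], size=10000)   # columns: (to dem, to rep)
margins = (dem + undecided * splits[:, 0]) - (rep + undecided * splits[:, 1])
print(f"probabilistic margin: {margins.mean():+.3f} "
      f"(P(dem ahead) = {(margins > 0).mean():.2f})")
```

    The point of the probabilistic version is that a large undecided pool widens the margin's distribution, which a single deterministic split cannot express.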

    An Experimental Study of Storable Votes

    The storable votes mechanism is a method of voting for committees that meet periodically to consider a series of binary decisions. Each member is allocated a fixed budget of votes to be cast as desired over the multiple decisions. Voters are induced to spend more votes on those decisions that matter to them most, shifting the ex ante probability of winning away from decisions they value less and towards decisions they value more, typically generating welfare gains over standard majority voting with non-storable votes. The equilibrium strategies have a very intuitive feature (the number of votes cast must be monotonic in the voter's intensity of preferences) but are otherwise difficult to calculate, raising questions of practical implementation. In our experiments, realized efficiency levels were remarkably close to theoretical equilibrium predictions, while subjects adopted monotonic but off-equilibrium strategies. We are led to conclude that concerns about the complexity of the game may have limited practical relevance.
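
    A small simulation sketch of the mechanism, assuming a simple monotonic rule (votes proportional to intensity) rather than the equilibrium strategy; committee size, horizon, and valuations are made up:

```python
import numpy as np

rng = np.random.default_rng(2)
n_voters, n_decisions, n_sims = 3, 6, 2000

def outcomes(signed_votes):
    """Decision outcomes (+1/-1) from net signed vote tallies,
    with ties broken at random."""
    tally = signed_votes.sum(axis=0).astype(float)
    ties = tally == 0
    tally[ties] = rng.choice([-1.0, 1.0], size=int(ties.sum()))
    return np.sign(tally)

w_major = w_stor = 0.0
for _ in range(n_sims):
    # v[i, t] in [-1, 1]: sign = preferred side, |v| = intensity.
    v = rng.uniform(-1, 1, size=(n_voters, n_decisions))

    # Standard majority voting: one non-storable vote per decision.
    w_major += (v * outcomes(np.sign(v))).sum()

    # Storable votes with a monotonic heuristic: each voter spreads a
    # budget of n_decisions votes in proportion to intensity |v|
    # (an illustrative strategy, not the game's equilibrium).
    alloc = np.floor(n_decisions * np.abs(v)
                     / np.abs(v).sum(axis=1, keepdims=True))
    w_stor += (v * outcomes(alloc * np.sign(v))).sum()

print(f"avg welfare per round, majority voting: {w_major / n_sims:.3f}")
print(f"avg welfare per round, storable votes:  {w_stor / n_sims:.3f}")
```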

    Constraining Isocurvature Initial Conditions with WMAP 3-year data

    We present constraints on the presence of isocurvature modes from the temperature and polarization CMB spectrum data from the WMAP satellite alone, and in combination with other datasets including the SDSS galaxy survey and SNLS supernovae. We find that the inclusion of polarization data allows the WMAP data, alone as well as in combination with complementary observations, to place improved limits on the contribution of CDM and neutrino density isocurvature components individually. With general correlations, the upper limits on these sub-dominant isocurvature components are reduced to ~60% of the first-year WMAP results, with specific limits depending on the type of fluctuations. If multiple isocurvature components are allowed, however, we find that the data still allow a majority of the initial power to come from isocurvature modes. As well as providing general constraints, we also consider their interpretation in light of specific theoretical models like the curvaton and double inflation. Comment: 8 pages, 7 figures. Revised Sec 4 and Figs 3-4 post-publication to correct an error for models with varying isocurvature spectral index

    Cosmology with the lights off: Standard sirens in the Einstein Telescope era

    We explore the prospects for constraining cosmology using gravitational-wave (GW) observations of neutron-star binaries by the proposed Einstein Telescope (ET), exploiting the narrowness of the neutron-star mass function. Double neutron-star (DNS) binaries are expected to be one of the first sources detected after "first light" of Advanced LIGO, at a rate of a few tens per year in the advanced era. However, the proposed ET could catalog tens of thousands per year. Combining the measured source redshift distributions with GW-network distance determinations will permit not only the precision measurement of background cosmological parameters, but also insight into the astrophysical properties of these DNS systems. Of particular interest will be probing the distribution of delay times between DNS-binary creation and subsequent merger, as well as the evolution of the star-formation rate density within ET's detection horizon. Keeping H_0, \Omega_{m,0}, and \Omega_{\Lambda,0} fixed and investigating the precision with which the dark-energy equation-of-state parameters could be recovered, we found that with 10^5 detected DNS binaries we could constrain these parameters to an accuracy similar to forecasted constraints from future CMB+BAO+SNIa measurements. Furthermore, modeling the merger delay-time distribution as a power law, and the star-formation rate (SFR) density as a parametrized version of the Porciani and Madau SF2 model, we find that the associated astrophysical parameters are constrained to within ~10%. All parameter precisions scaled as 1/sqrt(N), where N is the number of cataloged detections. We also investigated how precisions varied with the intrinsic underlying properties of the Universe and with the distance reach of the network (which may be affected by the low-frequency cutoff of the detector). Comment: 24 pages, 11 figures, 6 tables. Minor changes to reflect published version. References updated and corrected
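
    The quoted 1/sqrt(N) scaling can be seen in a deliberately crude toy: N low-redshift sirens with known redshifts and noisy luminosity distances, fit to a pure Hubble law. Everything here (error level, redshift range, the Hubble-law model itself) is an assumption, far simpler than the paper's dark-energy analysis:

```python
import numpy as np

rng = np.random.default_rng(3)
H0_true = 70.0        # km/s/Mpc, illustrative
sigma_frac = 0.10     # assumed fractional distance error per event
c = 299792.458        # speed of light, km/s

def h0_scatter(n_events, n_trials=500):
    """Std. dev. of the H0 estimate from n_events toy sirens with
    known redshifts and noisy luminosity distances (d = c z / H0)."""
    z = rng.uniform(0.01, 0.10, size=(n_trials, n_events))
    d_true = c * z / H0_true
    d_obs = d_true * (1.0 + sigma_frac * rng.standard_normal(d_true.shape))
    h0_est = np.mean(c * z / d_obs, axis=1)  # average per-event estimates
    return h0_est.std()

# Precision tightens roughly as 1/sqrt(N), as the abstract states.
for n in (10, 100, 1000, 10000):
    print(f"N = {n:6d}: sigma_H0 ~ {h0_scatter(n):.3f} km/s/Mpc")
```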

    Efficient computation of the first passage time distribution of the generalized master equation by steady-state relaxation

    The generalized master equation or the equivalent continuous-time random walk equations can be used to compute the macroscopic first passage time distribution (FPTD) of a complex stochastic system from short-term microscopic simulation data. The computation of the mean first passage time and additional low-order FPTD moments can be simplified by directly relating the FPTD moment generating function to the moments of the local FPTD matrix. This relationship can be physically interpreted in terms of steady-state relaxation, an extension of steady-state flow. Moreover, it is amenable to a statistical error analysis that can be used to significantly increase computational efficiency. The efficiency improvement can be extended to the FPTD itself by modelling it using a Gamma distribution or a rational-function approximation to its Laplace transform.
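
    For orientation, the ordinary Markov-chain special case of this moment calculation reduces to linear solves against the generator matrix; the sketch below shows that textbook route (the toy generator is invented, and this is not the paper's generalized-master-equation estimator):

```python
import numpy as np

# Toy 4-state continuous-time Markov chain; state 3 is the target.
# Generator Q: off-diagonal transition rates, rows summing to zero,
# with the target state made absorbing.
Q = np.array([[-1.0,  0.7,  0.3,  0.0],
              [ 0.5, -1.2,  0.4,  0.3],
              [ 0.2,  0.3, -0.9,  0.4],
              [ 0.0,  0.0,  0.0,  0.0]])

# Mean first passage times t to the target solve Q_TT @ t = -1 over the
# transient states T; this is the standard linear-algebra route that
# steady-state relaxation generalizes to the GME setting.
T = [0, 1, 2]
t = np.linalg.solve(Q[np.ix_(T, T)], -np.ones(len(T)))
print("mean first passage times from states 0-2:", t)

# Higher FPTD moments follow recursively: Q_TT @ m_k = -k * m_{k-1},
# with m_0 = 1.
m2 = np.linalg.solve(Q[np.ix_(T, T)], -2.0 * t)
print("second moments:", m2)
```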

    Novel spectral kurtosis technology for adaptive vibration condition monitoring of multi-stage gearboxes

    In this paper, the novel wavelet spectral kurtosis (WSK) technique is applied for the early diagnosis of gear tooth faults. Two variants of the wavelet spectral kurtosis technique, called variable-resolution WSK and constant-resolution WSK, are considered for the diagnosis of pitting gear faults. The gear residual signal, obtained by filtering out the gear mesh frequencies, is used as the input to the SK algorithm. The advantages of the wavelet-based SK techniques over classical Fourier transform (FT)-based SK are confirmed by estimating the toothwise Fisher's criterion of diagnostic features. The final diagnosis decision is made by a three-stage decision-making technique based on the weighted majority rule. The probability of correct diagnosis is estimated for each SK technique for comparison. An experimental study is presented in detail to test the performance of the wavelet spectral kurtosis techniques and the decision-making technique.
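
    For reference, a sketch of the classical FT-based SK that the paper compares against (not the wavelet variant it develops), applied to a synthetic noise-plus-bursts signal standing in for a gear fault; all signal parameters are made up:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy vibration signal: white noise plus repetitive 1 kHz bursts, an
# illustrative stand-in for a localized gear fault.
fs, n = 20000, 2 ** 18
t = np.arange(n) / fs
x = rng.standard_normal(n)
burst = (np.exp(-np.arange(200) / 30.0)
         * np.sin(2 * np.pi * 1000 * np.arange(200) / fs))
for ti in np.arange(0.05, t[-1] - 0.02, 0.05):   # 20 Hz fault rate
    i0 = int(ti * fs)
    x[i0:i0 + burst.size] += 5.0 * burst

# Short-time FFT frames (classical FT-based spectral kurtosis).
nperseg = 256
frames = x[: n - n % nperseg].reshape(-1, nperseg) * np.hanning(nperseg)
X = np.fft.rfft(frames, axis=1)

# SK(f) = <|X|^4> / <|X|^2>^2 - 2: roughly 0 for stationary Gaussian
# noise, large at frequencies where transient bursts concentrate.
P2 = np.mean(np.abs(X) ** 2, axis=0)
P4 = np.mean(np.abs(X) ** 4, axis=0)
sk = P4 / P2 ** 2 - 2.0
freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
print(f"peak SK = {sk.max():.2f} near {freqs[sk.argmax()]:.0f} Hz")
```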

    Global atmospheric circulation statistics: Four year averages

    Four-year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982, supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using the thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature; zonal, meridional, and vertical winds; and the amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. Also given are the northward fluxes of sensible heat and eastward momentum by the standing and transient eddies, together with their wavenumber decompositions, and the corresponding Eliassen-Palm flux propagation vectors and divergences. Large interhemispheric differences and year-to-year variations are found to originate in changes in the planetary wave activity.
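
    The geostrophic step can be sketched directly: winds follow from horizontal gradients of geopotential height via u_g = -(g/f) dZ/dy and v_g = (g/f) dZ/dx. The grid and height field below are synthetic placeholders, not the NOAA/NMC data:

```python
import numpy as np

# Geostrophic wind from a geopotential height field Z(lat, lon) on a
# sphere; the height field here is a made-up zonal gradient plus a
# wave-1 perturbation, for illustration only.
g, omega, R = 9.81, 7.292e-5, 6.371e6
lat = np.deg2rad(np.arange(20.0, 70.0, 2.5))   # stay away from equator
lon = np.deg2rad(np.arange(0.0, 360.0, 2.5))
LAT, LON = np.meshgrid(lat, lon, indexing="ij")
Z = 5600.0 - 400.0 * np.sin(LAT) + 50.0 * np.cos(LON) * np.sin(2 * LAT)

f = 2 * omega * np.sin(LAT)                    # Coriolis parameter
dZ_dlat = np.gradient(Z, lat, axis=0)          # dZ/d(phi)
dZ_dlon = np.gradient(Z, lon, axis=1)          # dZ/d(lambda)

u_g = -(g / f) * dZ_dlat / R                   # zonal wind (m/s)
v_g = (g / f) * dZ_dlon / (R * np.cos(LAT))    # meridional wind (m/s)

i45 = np.argmin(np.abs(lat - np.deg2rad(45.0)))
print(f"zonal-mean u_g at 45N: {u_g[i45, :].mean():.1f} m/s")
```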

    Radiative neutron capture on a proton at BBN energies

    The total cross section for radiative neutron capture on a proton, np \to d\gamma, is evaluated at big bang nucleosynthesis (BBN) energies. The electromagnetic transition amplitudes are calculated up to next-to-leading order within the framework of pionless effective field theory with dibaryon fields. We also calculate the d\gamma \to np cross section and the photon analyzing power for the d\vec{\gamma} \to np process from the amplitudes. The values of the low-energy constants that appear in the amplitudes are estimated by a Markov chain Monte Carlo analysis using the relevant low-energy experimental data. Our result agrees well with those of other theoretical calculations, except for the np \to d\gamma cross section at some energies estimated by an R-matrix analysis. We also study the uncertainties in our estimation of the np \to d\gamma cross section at relevant BBN energies and find that the estimated cross section is reliable to within ~1% error. Comment: 21 pages and 12 eps figures; 6 eps figures and 2 references added; accepted for publication in Phys. Rev. C
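
    A generic Metropolis sketch of the kind of MCMC estimation mentioned above: fitting a single stand-in low-energy constant that scales a toy cross-section model against synthetic data (model, data, and prior are all placeholder assumptions, not the paper's amplitudes):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy model: sigma(E) = theta / sqrt(E), with synthetic "measurements".
E = np.linspace(0.01, 1.0, 20)            # energies, illustrative units
theta_true = 0.33
data = theta_true / np.sqrt(E) * (1 + 0.03 * rng.standard_normal(E.size))
sigma_err = 0.03 * data

def log_post(theta):
    """Gaussian likelihood with a flat prior on theta > 0."""
    if theta <= 0:
        return -np.inf
    resid = (data - theta / np.sqrt(E)) / sigma_err
    return -0.5 * np.sum(resid ** 2)

# Random-walk Metropolis sampler.
chain, theta = [], 0.5
lp = log_post(theta)
for _ in range(20000):
    prop = theta + 0.01 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)

burned = np.array(chain[5000:])            # discard burn-in
print(f"theta = {burned.mean():.4f} +/- {burned.std():.4f}")
```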

    On statistical uncertainty in nested sampling

    Nested sampling has emerged as a valuable tool for Bayesian analysis, in particular for determining the Bayesian evidence. The method is based on a specific type of random sampling of the likelihood function and prior volume of the parameter space. I study the statistical uncertainty in the evidence computed with nested sampling. I examine the uncertainty estimator from Skilling (2004, 2006) and introduce a new estimator based on a detailed analysis of the statistical properties of nested sampling. Both perform well in test cases and make it possible to obtain the statistical uncertainty in the evidence with no additional computational cost. Comment: 9 pages, 4 figures; accepted to MNRAS
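
    A toy nested sampling run showing where Skilling's sqrt(H/N) error bar on ln Z comes from; the likelihood, live-point count, and analytic constrained-prior draw are illustrative (the paper's new estimator is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(6)

# Uniform prior on [0, 1], Gaussian-bump likelihood, N live points.
N, mu, s = 200, 0.3, 0.01
def loglike(x):
    return -0.5 * ((x - mu) / s) ** 2

live = rng.random(N)
live_logL = loglike(live)
logZ, logX, H = -np.inf, 0.0, 0.0
for _ in range(3000):
    worst = int(np.argmin(live_logL))
    logL_w = live_logL[worst]
    logX_new = logX - 1.0 / N              # mean prior-volume shrinkage
    log_w = logL_w + np.log(np.exp(logX) - np.exp(logX_new))
    logZ_new = np.logaddexp(logZ, log_w)
    # Incremental update of the information H (Skilling's recurrence).
    if np.isfinite(logZ):
        H = (np.exp(log_w - logZ_new) * logL_w
             + np.exp(logZ - logZ_new) * (H + logZ)) - logZ_new
    else:
        H = logL_w - logZ_new
    logZ, logX = logZ_new, logX_new
    # Replace the worst point with a prior draw above the threshold;
    # analytic here because the constrained region is an interval
    # (real samplers use ellipsoids, slice sampling, etc.).
    half = s * np.sqrt(-2.0 * logL_w)
    x = rng.uniform(max(0.0, mu - half), min(1.0, mu + half))
    live[worst], live_logL[worst] = x, loglike(x)

# Analytic answer for comparison: Z = s * sqrt(2 pi), ln Z ~ -3.686
# (the final live-point contribution is omitted for brevity).
print(f"ln Z = {logZ:.3f} +/- {np.sqrt(H / N):.3f}")
```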