
    Statistical unfolding of elementary particle spectra: Empirical Bayes estimation and bias-corrected uncertainty quantification

    We consider the high energy physics unfolding problem where the goal is to estimate the spectrum of elementary particles given observations distorted by the limited resolution of a particle detector. This important statistical inverse problem arising in data analysis at the Large Hadron Collider at CERN consists in estimating the intensity function of an indirectly observed Poisson point process. Unfolding typically proceeds in two steps: one first produces a regularized point estimate of the unknown intensity and then uses the variability of this estimator to form frequentist confidence intervals that quantify the uncertainty of the solution. In this paper, we propose forming the point estimate using empirical Bayes estimation which enables a data-driven choice of the regularization strength through marginal maximum likelihood estimation. Observing that neither Bayesian credible intervals nor standard bootstrap confidence intervals succeed in achieving good frequentist coverage in this problem due to the inherent bias of the regularized point estimate, we introduce an iteratively bias-corrected bootstrap technique for constructing improved confidence intervals. We show using simulations that this enables us to achieve nearly nominal frequentist coverage with only a modest increase in interval length. The proposed methodology is applied to unfolding the Z boson invariant mass spectrum as measured in the CMS experiment at the Large Hadron Collider.
    Comment: Published at http://dx.doi.org/10.1214/15-AOAS857 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org). arXiv admin note: substantial text overlap with arXiv:1401.827
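The iteratively bias-corrected bootstrap idea can be sketched on a toy deconvolution problem. Everything below is illustrative (the bin count, the Gaussian response matrix, and a Tikhonov-regularized estimator standing in for the paper's empirical-Bayes point estimate): the bias of the regularized estimator is measured by parametric bootstrap from the current corrected estimate, subtracted, and the loop repeated before percentile intervals are read off.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all numbers hypothetical): a true intensity on 20 bins,
# smeared by a Gaussian response matrix K; observed counts y ~ Poisson(K @ lam).
n = 20
x = np.linspace(0.0, 1.0, n)
lam_true = 50.0 * np.exp(-0.5 * ((x - 0.5) / 0.15) ** 2) + 5.0
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.1) ** 2)
K /= K.sum(axis=1, keepdims=True)
y = rng.poisson(K @ lam_true)

def regularized_estimate(y, K, delta=1.0):
    """Tikhonov-regularized point estimate -- a simple stand-in for the
    empirical-Bayes estimator of the paper."""
    m = K.shape[1]
    return np.linalg.solve(K.T @ K + delta * np.eye(m), K.T @ y)

def bias_corrected_bootstrap(y, K, n_boot=200, n_iter=3, delta=1.0):
    """Iteratively estimate the estimator's bias by parametric bootstrap
    from the current corrected estimate and subtract it."""
    est = regularized_estimate(y, K, delta)
    corrected = est.copy()
    for _ in range(n_iter):
        mu = np.clip(K @ corrected, 0.0, None)
        boots = np.array([regularized_estimate(rng.poisson(mu), K, delta)
                          for _ in range(n_boot)])
        # bias of the estimator at the current corrected intensity
        corrected = est - (boots.mean(axis=0) - corrected)
    return est, corrected, boots

est, corrected, boots = bias_corrected_bootstrap(y, K)
# Percentile intervals from the last bootstrap round
lo, hi = np.percentile(boots, [2.5, 97.5], axis=0)
```

The real method studies coverage of these intervals by simulation; this sketch only shows the correct-and-resample loop.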

    Reducing bias and quantifying uncertainty in watershed flux estimates: the R package loadflex

    Many ecological insights into the function of rivers and watersheds emerge from quantifying the flux of solutes or suspended materials in rivers. Numerous methods for flux estimation have been described, and each has its strengths and weaknesses. Currently, the largest practical challenges in flux estimation are to select among these methods and to implement or apply whichever method is chosen. To ease this process of method selection and application, we have written an R software package called loadflex that implements several of the most popular methods for flux estimation, including regressions, interpolations, and the special case of interpolation known as the period-weighted approach. Our package also implements a lesser-known and empirically promising approach called the “composite method,” to which we have added an algorithm for estimating prediction uncertainty. Here we describe the structure and key features of loadflex, with a special emphasis on the rationale and details of our composite method implementation. We then demonstrate the use of loadflex by fitting four different models to nitrate data from the Lamprey River in southeastern New Hampshire, where two large floods in 2006–2007 are hypothesized to have driven a long-term shift in nitrate concentrations and fluxes from the watershed. The models each give believable estimates, and yet they yield different answers for whether and how the floods altered nitrate loads. In general, the best modeling approach for each new dataset will depend on the specific site and solute of interest, and researchers need to make an informed choice among the many possible models. Our package addresses this need by making it simple to apply and compare multiple load estimation models, ultimately allowing researchers to estimate riverine concentrations and fluxes with greater ease and accuracy
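As a rough illustration of one of the methods loadflex implements, the period-weighted approach assigns each grab sample's instantaneous flux (concentration times discharge) to the span of time it represents, bounded by midpoints between adjacent samples. The data and unit choices below are invented, and this is a minimal sketch, not the package's implementation:

```python
import numpy as np

# Hypothetical record: sample times (days), nitrate concentration (mg/L),
# and discharge at the sample times (L/s).
t = np.array([0.0, 10.0, 25.0, 40.0, 60.0])
conc = np.array([0.8, 1.1, 0.9, 1.4, 1.0])
q = np.array([120.0, 300.0, 150.0, 500.0, 200.0])

def period_weighted_flux(t, conc, q):
    """Period-weighted load estimate: each sample's instantaneous flux
    is weighted by the number of days it represents (midpoints between
    adjacent sample times bound each sample's period)."""
    inst_flux = conc * q * 86400.0          # mg/day per sample
    mids = (t[:-1] + t[1:]) / 2.0
    edges = np.concatenate([[t[0]], mids, [t[-1]]])
    weights = np.diff(edges)                # days represented by each sample
    return float(np.sum(inst_flux * weights))  # total mg over the record

total_mg = period_weighted_flux(t, conc, q)
```

Comparing this against a regression-based or composite-method estimate on the same record is exactly the kind of model comparison the package is meant to make easy.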

    Bridge Simulation and Metric Estimation on Landmark Manifolds

    We present an inference algorithm and connected Monte Carlo based estimation procedures for metric estimation from landmark configurations distributed according to the transition distribution of a Riemannian Brownian motion arising from the Large Deformation Diffeomorphic Metric Mapping (LDDMM) metric. The distribution possesses properties similar to the regular Euclidean normal distribution but its transition density is governed by a high-dimensional PDE with no closed-form solution in the nonlinear case. We show how the density can be numerically approximated by Monte Carlo sampling of conditioned Brownian bridges, and we use this to estimate parameters of the LDDMM kernel and thus the metric structure by maximum likelihood
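The core idea, approximating an intractable transition density by Monte Carlo simulation of conditioned bridges, can be illustrated in one dimension, where plain Brownian motion has a closed-form density to check against. The guiding drift (b − x)/(T − t) below is the standard Brownian-bridge construction; all tuning numbers are arbitrary, and the landmark/LDDMM machinery is not represented.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_bridge(a, b, T=1.0, sigma=1.0, n_steps=100):
    """Euler simulation of a Brownian bridge from a to b on [0, T],
    using the conditioning drift (b - x) / (T - t)."""
    dt = T / n_steps
    path = [float(a)]
    t = 0.0
    for _ in range(n_steps - 1):
        drift = (b - path[-1]) / (T - t)
        path.append(path[-1] + drift * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
        t += dt
    path.append(float(b))  # bridge is pinned at the endpoint
    return np.array(path)

def mc_density(a, b, T=1.0, sigma=1.0, n=200_000, h=0.05):
    """Crude Monte Carlo estimate of the transition density p_T(a, b):
    fraction of simulated endpoints within half-width h of b. For
    nonlinear models the paper instead averages correction factors over
    conditioned bridges; this checkable linear case stands in for that."""
    ends = a + sigma * np.sqrt(T) * rng.standard_normal(n)
    return np.mean(np.abs(ends - b) < h) / (2.0 * h)

est = mc_density(0.0, 0.5)
exact = np.exp(-0.5 * 0.5 ** 2) / np.sqrt(2.0 * np.pi)  # N(0,1) density at 0.5
```

In the nonlinear landmark case no such closed form exists, which is why the bridge-sampling estimate is plugged into a maximum-likelihood loop over the kernel parameters.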

    Score, Pseudo-Score and Residual Diagnostics for Spatial Point Process Models

    We develop new tools for formal inference and informal model validation in the analysis of spatial point pattern data. The score test is generalized to a "pseudo-score" test derived from Besag's pseudo-likelihood, and to a class of diagnostics based on point process residuals. The results lend theoretical support to the established practice of using functional summary statistics, such as Ripley's K-function, when testing for complete spatial randomness; and they provide new tools such as the compensator of the K-function for testing other fitted models. The results also support localization methods such as the scan statistic and smoothed residual plots. Software for computing the diagnostics is provided.
    Comment: Published at http://dx.doi.org/10.1214/11-STS367 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)
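For concreteness, a naive (edge-correction-free) estimate of Ripley's K-function, together with the residual K(r) − πr² that is near zero under complete spatial randomness, might look as follows. This is a sketch of the summary statistic the paper builds on, not of the paper's compensator machinery; the pattern size and radii are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def ripley_k(points, r, area):
    """Naive estimate of Ripley's K at radii r for a point pattern in a
    window of the given area (no edge correction, so it is biased low
    for radii comparable to the window size)."""
    n = len(points)
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)          # exclude self-pairs
    intensity = n / area
    # ordered pairs closer than each radius
    counts = (d[:, :, None] < r[None, None, :]).sum(axis=(0, 1))
    return counts / (n * intensity)

# A CSR pattern on the unit square: K(r) should track pi * r^2,
# so the residual below is a basic goodness-of-fit diagnostic.
pts = rng.uniform(0.0, 1.0, size=(400, 2))
r = np.array([0.02, 0.05, 0.10])
k_hat = ripley_k(pts, r, area=1.0)
resid = k_hat - np.pi * r ** 2
```

The paper's compensators generalize the πr² benchmark to arbitrary fitted models, so the analogous residual can be inspected for models other than CSR.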

    A Phenomenological Analysis of Gluon Mass Effects in Inclusive Radiative Decays of the J/ψ and Υ

    The shapes of the inclusive photon spectra in the processes $J/\psi \to \gamma X$ and $\Upsilon \to \gamma X$ have been analysed using all available experimental data. Relativistic, higher order QCD and gluon mass corrections were taken into account in the fitted functions. Only on including the gluon mass corrections were consistent and acceptable fits obtained. Values of $0.721^{+0.016}_{-0.068}$ GeV and $1.18^{+0.09}_{-0.29}$ GeV were found for the effective gluon masses (corresponding to Born level diagrams) for the $J/\psi$ and $\Upsilon$ respectively. The width ratios $\Gamma(V \to \mathrm{hadrons})/\Gamma(V \to \gamma + \mathrm{hadrons})$, $V = J/\psi, \Upsilon$, were used to determine $\alpha_s(1.5\,\mathrm{GeV})$ and $\alpha_s(4.9\,\mathrm{GeV})$. Values consistent with the current world average $\alpha_s$ were obtained only when gluon mass correction factors, calculated using the fitted values of the effective gluon mass, were applied. A gluon mass $\simeq 1$ GeV, as suggested by these results, is consistent with previous analytical theoretical calculations and independent phenomenological estimates, as well as with a recent, more accurate, lattice calculation of the gluon propagator in the infra-red region.
    Comment: 50 pages, 11 figures, 15 tables
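The fitting strategy described, adjusting an effective-mass parameter until the predicted spectrum shape matches data, can be caricatured with a grid-scan chi-square fit. The toy spectrum model below is entirely invented and bears no relation to the actual QCD-corrected shapes used in the analysis; it only shows the shape-fitting mechanics.

```python
import numpy as np

rng = np.random.default_rng(3)

def toy_spectrum(z, m, e_max=1.5):
    """Invented photon-spectrum shape: z is the photon energy fraction,
    and the end-point suppression grows with an 'effective mass' m."""
    cut = np.clip(1.0 - (m / e_max) ** 2 * z, 0.0, None)
    return z * cut

# Pseudo-data generated from the toy model at m_true, with small noise
z = np.linspace(0.1, 0.95, 30)
m_true = 0.7
data = toy_spectrum(z, m_true) + rng.normal(0.0, 0.01, z.size)

# Grid-scan chi-square fit for the effective mass
grid = np.linspace(0.1, 1.4, 261)
chi2 = np.array([((data - toy_spectrum(z, m)) ** 2).sum() for m in grid])
m_fit = grid[int(np.argmin(chi2))]
```

The real analysis fits several correction terms simultaneously and propagates asymmetric errors, which a one-parameter scan like this does not capture.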

    The Monte Carlo Program KoralW version 1.51 and The Concurrent Monte Carlo KoralW&YFSWW3 with All Background Graphs and First Order Corrections to W-Pair Production

    The version 1.51 of the Monte Carlo (MC) program KoralW for all $e^+e^- \to f_1\bar f_2 f_3\bar f_4$ processes is presented. The most important change since the previous version 1.42 is the facility for writing MC events to a mass storage device and re-processing them later. In the re-processing one may modify parameters of the Standard Model in order to fit them to experimental data. Another important new feature is the possibility of including complete ${\cal O}(\alpha)$ corrections to double-resonant W-pair component processes in addition to all background (non-WW) graphs. The inclusion is done with the help of the YFSWW3 MC event generator for fully exclusive differential distributions (event-per-event). Technically, it is done in such a way that YFSWW3 runs concurrently with KoralW as a separate slave process, reading momenta of the MC event generated by KoralW and returning the correction weight to KoralW. KoralW introduces the ${\cal O}(\alpha)$ correction using this weight and finishes processing the event (rejection due to total MC weight, hadronization, etc.). The communication between KoralW and YFSWW3 is done with the help of the FIFO facility of the UNIX/Linux operating system. This does not require any modifications of the FORTRAN source codes. The resulting concurrent MC event generator KoralW&YFSWW3 looks, from the user's point of view, like a regular single MC event generator with all the standard features.
    Comment: 8 figures, 5 tables, submitted to Comput. Phys. Commu
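The FIFO-based master/slave pattern is easy to demonstrate outside FORTRAN. In the sketch below (Python, POSIX-only, with an invented line-based protocol), a "slave" thread stands in for YFSWW3: it reads event momenta from one named pipe and writes a correction weight back on another, while the main loop plays the KoralW side. Neither side needs to know it is talking to a pipe rather than a file, which is why no source changes are required.

```python
import os
import tempfile
import threading

# Two named pipes: one carries events to the slave, one carries weights back.
tmp = tempfile.mkdtemp()
evt_fifo = os.path.join(tmp, "events.fifo")
wgt_fifo = os.path.join(tmp, "weights.fifo")
os.mkfifo(evt_fifo)
os.mkfifo(wgt_fifo)

def slave():
    """Stand-in for YFSWW3: read momenta, return a toy correction weight.
    The weight formula is invented, purely for demonstration."""
    with open(evt_fifo) as evts, open(wgt_fifo, "w") as wgts:
        for line in evts:
            momenta = [float(v) for v in line.split()]
            weight = 1.0 + 0.01 * sum(momenta)
            wgts.write(f"{weight}\n")
            wgts.flush()

t = threading.Thread(target=slave, daemon=True)
t.start()

# Master side: write each event's momenta, block on the returned weight,
# then finish processing the event (rejection, hadronization, ...).
weights = []
with open(evt_fifo, "w") as evts, open(wgt_fifo) as wgts:
    for event in ([1.0, 2.0], [0.5, 0.5]):
        evts.write(" ".join(map(str, event)) + "\n")
        evts.flush()
        weights.append(float(wgts.readline()))
t.join(timeout=5.0)
```

The open order matters: each side opens the event pipe first and the weight pipe second, so the two blocking `open` calls rendezvous instead of deadlocking.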