8,596 research outputs found

    Lognormal Distributions and Geometric Averages of Positive Definite Matrices

    This article gives a formal definition of a lognormal family of probability distributions on the set of symmetric positive definite (PD) matrices, seen as a matrix-variate extension of the univariate lognormal family of distributions. Two forms of this distribution are obtained as the large sample limiting distribution via the central limit theorem of two types of geometric averages of i.i.d. PD matrices: the log-Euclidean average and the canonical geometric average. These averages correspond to two different geometries imposed on the set of PD matrices. The limiting distributions of these averages are used to provide large-sample confidence regions for the corresponding population means. The methods are illustrated on a voxelwise analysis of diffusion tensor imaging data, permitting a comparison between the various average types from the point of view of their sampling variability.
    Comment: 28 pages, 8 figures
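
    As a concrete sketch of the log-Euclidean average named in the abstract, the snippet below computes the matrix exponential of the arithmetic mean of matrix logarithms, assuming NumPy; the helper names (spd_log, spd_exp, log_euclidean_mean) and the random toy data are illustrative, not taken from the article. The canonical (affine-invariant) geometric average has no closed form for more than two matrices and is usually obtained by a fixed-point iteration, which is omitted here.

```python
import numpy as np

def spd_log(A):
    """Matrix logarithm of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def spd_exp(S):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def log_euclidean_mean(mats):
    """Log-Euclidean average: exp of the arithmetic mean of the matrix logs."""
    return spd_exp(np.mean([spd_log(A) for A in mats], axis=0))

# Toy usage: average a sample of random SPD matrices.
rng = np.random.default_rng(0)
def random_spd(d=3):
    B = rng.normal(size=(d, d))
    return B @ B.T + d * np.eye(d)   # guarantees positive definiteness

sample = [random_spd() for _ in range(50)]
print(log_euclidean_mean(sample))
```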

    Jet energy calibration at the LHC

    Jets are among the most prominent physics signatures of high-energy proton-proton (p-p) collisions at the Large Hadron Collider (LHC). They are key physics objects for precision measurements and searches for new phenomena. This review provides an overview of the reconstruction and calibration of jets at the LHC during its first Run. ATLAS and CMS developed different approaches to jet reconstruction but use similar methods for the energy calibration. ATLAS reconstructs jets from calorimeter signals and uses charged-particle tracks to refine the energy measurement and suppress the effects of multiple p-p interactions (pileup). CMS, instead, combines calorimeter and tracking information to build jets from particle-flow objects. Jets are calibrated using Monte Carlo (MC) simulations, and a residual in situ calibration derived from collision data is applied to correct for the differences in jet response between data and MC. Large samples of dijet, Z+jets, and photon+jet events at the LHC allowed the calibration of jets with high precision, leading to very small systematic uncertainties. Both ATLAS and CMS achieved a jet energy calibration uncertainty of about 1% in the central detector region for jets with transverse momentum pT > 100 GeV. At low jet pT, the calibration uncertainty is less than 4%, with dominant contributions from pileup, differences in energy scale between quark and gluon jets, and jet flavor composition.
    Comment: Article submitted to the International Journal of Modern Physics A (IJMPA) as part of the special issue on the "Jet Measurements at the LHC", editor G. Dissertor
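
    The two-step calibration chain described above (an MC-derived response correction followed by a residual in situ data/MC factor) can be illustrated with a toy sketch, assuming NumPy. The functional forms, numerical values, and names (mc_response, residual_in_situ, calibrate) are invented for illustration and are not the ATLAS or CMS parameterizations.

```python
import numpy as np

def mc_response(pt_reco, eta):
    """Hypothetical average reconstructed/true jet response taken from simulation."""
    return 0.92 + 0.03 * np.log(pt_reco / 30.0) - 0.02 * np.abs(eta) / 2.5

def residual_in_situ(pt_reco, eta):
    """Hypothetical residual data/MC response ratio, e.g. from Z+jet or photon+jet balance."""
    return 1.0 - 0.005 * np.abs(eta) / 2.5

def calibrate(pt_reco, eta):
    """Calibrated pT: undo the simulated response, then apply the residual data/MC correction."""
    return pt_reco / mc_response(pt_reco, eta) / residual_in_situ(pt_reco, eta)

# Toy usage on a few reconstructed jets.
pt_reco = np.array([40.0, 120.0, 400.0])
eta = np.array([0.3, 1.2, 2.0])
print(calibrate(pt_reco, eta))
```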

    Multiple Testing of Local Maxima for Detection of Peaks in Random Fields

    A topological multiple testing scheme is presented for detecting peaks in images under stationary ergodic Gaussian noise, where tests are performed at local maxima of the smoothed observed signals. The procedure generalizes the one-dimensional scheme of Schwartzman et al. (2011) to Euclidean domains of arbitrary dimension. Two methods are developed according to two different ways of computing p-values: (i) using the exact distribution of the height of local maxima (Cheng and Schwartzman, 2014), available explicitly when the noise field is isotropic; (ii) using an approximation to the overshoot distribution of local maxima above a pre-threshold (Cheng and Schwartzman, 2014), applicable when the exact distribution is unknown, such as when the stationary noise field is non-isotropic. The algorithms, combined with the Benjamini-Hochberg procedure for thresholding p-values, provide asymptotic strong control of the False Discovery Rate (FDR) and power consistency, with specific rates, as the search space and signal strength get large. The optimal smoothing bandwidth and optimal pre-threshold are obtained to achieve maximum power. Simulations show that FDR levels are maintained in non-asymptotic conditions. The methods are illustrated in a nanoscopy image analysis problem of detecting fluorescent molecules against the image background.
    Comment: 30 pages, 5 figures. arXiv admin note: text overlap with arXiv:1203.306
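
    A minimal sketch of the testing pipeline described above, assuming NumPy/SciPy: smooth the image, locate local maxima above a pre-threshold, assign p-values to their heights, and threshold with Benjamini-Hochberg. The p-value used here is a simple conditional Gaussian tail, a stand-in for the exact or approximate height distributions of Cheng and Schwartzman (2014); the function names, parameters, and toy data are illustrative.

```python
import numpy as np
from scipy import ndimage
from scipy.stats import norm

def detect_peaks(image, fwhm=3.0, pre_threshold=2.0, fdr_level=0.05):
    """Sketch: smooth, take local maxima above a pre-threshold, assign p-values,
    then threshold the p-values with the Benjamini-Hochberg procedure."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    smooth = ndimage.gaussian_filter(image, sigma)
    smooth = (smooth - smooth.mean()) / smooth.std()   # crude standardization of the noise

    # Local maxima: pixels equal to the maximum of their 3x3 neighborhood
    # and exceeding the pre-threshold.
    is_max = (smooth == ndimage.maximum_filter(smooth, size=3)) & (smooth > pre_threshold)
    heights = smooth[is_max]
    coords = np.argwhere(is_max)

    # Stand-in p-values: conditional Gaussian tail P(Z > h | Z > v); the paper
    # uses the (approximate) height distribution of local maxima instead.
    pvals = norm.sf(heights) / norm.sf(pre_threshold)

    # Benjamini-Hochberg step-up procedure on the peak p-values.
    m = len(pvals)
    order = np.argsort(pvals)
    passed = pvals[order] <= fdr_level * np.arange(1, m + 1) / max(m, 1)
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    keep = np.zeros(m, dtype=bool)
    keep[order[:k]] = True
    return coords[keep], pvals[keep]

# Toy usage: unit-variance Gaussian noise with one injected bump.
rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))
img[30:34, 30:34] += 4.0
peaks, peak_pvals = detect_peaks(img)
print(peaks, peak_pvals)
```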

    Standardization of multivariate Gaussian mixture models and background adjustment of PET images in brain oncology

    In brain oncology, it is routine to evaluate the progress or remission of the disease based on the differences between a pre-treatment and a post-treatment Positron Emission Tomography (PET) scan. Background adjustment is necessary to reduce confounding by tissue-dependent changes not related to the disease. When modeling the voxel intensities for the two scans as a bivariate Gaussian mixture, background adjustment translates into standardizing the mixture at each voxel, while tumor lesions present themselves as outliers to be detected. In this paper, we address the question of how to standardize the mixture to a standard multivariate normal distribution, so that the outliers (i.e., tumor lesions) can be detected using a statistical test. We show theoretically and numerically that the tail distribution of the standardized scores is favorably close to standard normal in a wide range of scenarios while being conservative at the tails, validating voxelwise hypothesis testing based on standardized scores. To address standardization in spatially heterogeneous image data, we propose a spatial and robust multivariate expectation-maximization (EM) algorithm, where prior class membership probabilities are provided by transformation of spatial probability template maps and the estimation of the class means and covariances is robust to outliers. Simulations in both univariate and bivariate cases suggest that standardized scores with soft assignment have tail probabilities that are either very close to or more conservative than standard normal. The proposed methods are applied to a real data set from a PET phantom experiment, yet they are generic and can be used in other contexts.
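
    A simplified stand-in for the standardization step described above, assuming scikit-learn and SciPy: an ordinary bivariate Gaussian mixture is fitted (without the spatial template priors or the robust estimation of the proposed EM algorithm), soft-assignment standardized scores are formed, and outliers are flagged with a tail test. The function names and toy data are illustrative, and the exact transformation used in the paper may differ.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import chi2

def standardized_scores(X, n_components=2, seed=0):
    """Soft-assignment standardized scores under a plain bivariate Gaussian mixture."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=seed).fit(X)
    tau = gmm.predict_proba(X)                         # posterior class probabilities
    Z = np.zeros_like(X)
    for k in range(n_components):
        L = np.linalg.cholesky(gmm.covariances_[k])
        zk = np.linalg.solve(L, (X - gmm.means_[k]).T).T   # whiten under component k
        Z += tau[:, [k]] * zk                          # soft assignment
    return Z

# Toy usage: a two-component background mixture plus a few injected "lesion" voxels.
rng = np.random.default_rng(2)
bg = np.vstack([rng.normal([0, 0], 1.0, size=(5000, 2)),
                rng.normal([3, 3], 0.5, size=(5000, 2))])
lesions = rng.normal([9, 9], 0.3, size=(20, 2))
X = np.vstack([bg, lesions])

Z = standardized_scores(X)
pvals = chi2.sf(np.sum(Z**2, axis=1), df=2)            # tail test on squared scores
print("flagged voxels:", np.sum(pvals < 1e-4))
```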

    Inflation Target Zones as a Commitment Mechanism

    In a simple New Keynesian model of monetary policy under discretion, constraining the Central Bank to keep inflation within a pre-specified Inflation Target Zone can eliminate the inflation bias and, at least for certain parameter ranges, significantly reduce the stabilization bias. It is also possible to investigate the optimal Inflation Target Zone for different economies; the optimal zone appears to depend on the structural parameters in a non-linear and often non-monotonic way.
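
    A toy numerical illustration, not the paper's model: in a static discretionary-policy setup with a Phillips curve pi = pi_e + kappa*x and central bank loss pi^2 + lambda*(x - k)^2 with k > 0, the rational-expectations equilibrium carries the textbook inflation bias lambda*k/kappa, and a credible zone that caps inflation truncates the best response and shrinks or removes that bias. All parameter values and function names below are illustrative.

```python
import numpy as np

kappa, lam, k = 0.3, 0.5, 1.0        # illustrative parameters

def best_response(pi_e, zone=None):
    """Discretionary inflation choice given expectations pi_e, optionally clipped to a zone."""
    pi = ((lam / kappa**2) * pi_e + (lam / kappa) * k) / (1.0 + lam / kappa**2)
    if zone is not None:
        pi = np.clip(pi, -zone, zone)
    return pi

def equilibrium(zone=None, iters=200):
    """Fixed point where private expectations equal the (possibly constrained) policy choice."""
    pi_e = 0.0
    for _ in range(iters):
        pi_e = best_response(pi_e, zone)
    return pi_e

print("no zone      :", equilibrium())            # approx lam * k / kappa (inflation bias)
print("zone +/- 0.5 :", equilibrium(zone=0.5))    # bias capped at the zone edge
print("zone +/- 0.0 :", equilibrium(zone=0.0))    # bias eliminated
```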