
    Evaluation of second-level inference in fMRI analysis

    We investigate the impact of decisions in the second-level (i.e., over subjects) inferential process in functional magnetic resonance imaging on (1) the balance between false positives and false negatives and (2) the data-analytical stability, both proxies for the reproducibility of results. Second-level analysis based on a mass univariate approach typically consists of three phases. First, one fits a general linear model to a test image that pools information from different subjects. We evaluate models that take first-level (within-subject) variability into account and models that do not. Second, one proceeds via inference based on parametric assumptions or via permutation-based inference. Third, we evaluate three commonly used procedures to address the multiple testing problem: familywise error rate (FWER) correction, false discovery rate (FDR) correction, and a two-step procedure with a minimal cluster size. Based on a simulation study and real data, we find that the two-step procedure with a minimal cluster size yields the most stable results, followed by FWER correction. FDR correction yields the most variable results, for both permutation-based and parametric inference. Modeling the subject-specific variability yields a better balance between false positives and false negatives when using parametric inference.
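
    As an illustration of the third phase described above, the sketch below shows the standard Benjamini-Hochberg step-up procedure applied to a map of voxel-wise p-values. It is a minimal sketch of generic FDR correction, not the authors' pipeline; the simulated p-value map and the function name are assumptions made for the example.

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg FDR procedure: return a boolean mask of
    voxels declared active at false discovery rate q."""
    p = np.asarray(p_values).ravel()
    m = p.size
    order = np.argsort(p)                       # sort p-values ascending
    thresholds = q * (np.arange(1, m + 1) / m)  # BH step-up thresholds
    below = p[order] <= thresholds
    mask = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()          # largest index passing its threshold
        mask[order[: k + 1]] = True             # reject all hypotheses up to k
    return mask.reshape(np.shape(p_values))

# Example: 10,000 simulated voxel-wise p-values, mostly null
rng = np.random.default_rng(0)
p_map = np.concatenate([rng.uniform(size=9000),
                        rng.beta(1, 50, size=1000)])  # 1,000 "active" voxels
print(benjamini_hochberg(p_map, q=0.05).sum(), "voxels survive FDR correction")
```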

    Data-analytical stability in second-level fMRI inference

    We investigate the impact of decisions in the second-level (i.e., over subjects) inferential process in functional magnetic resonance imaging (fMRI) on (1) the balance between false positives and false negatives and (2) the data-analytical stability (Qiu et al., 2006; Roels et al., 2015), both proxies for the reproducibility of results. Second-level analysis based on a mass univariate approach typically consists of three phases. First, one fits a general linear model to a test image that pools information from different subjects (Beckmann et al., 2003). We evaluate models that take first-level (within-subject) variability into account and models that do not. Second, one proceeds via permutation-based inference or via inference based on parametric assumptions (Holmes et al., 1996). Third, we evaluate three commonly used procedures to address the multiple testing problem: family-wise error rate correction, false discovery rate correction, and a two-step procedure with a minimal cluster size (Lieberman and Cunningham, 2009; Bennett et al., 2009). Based on a simulation study and on real data, we find that the two-step procedure with a minimal cluster size yields the most stable results, followed by family-wise error rate correction. The false discovery rate correction yields the most variable results, for both permutation-based and parametric inference. Modeling the subject-specific variability yields a better balance between false positives and false negatives when using parametric inference.
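
    The second phase mentioned above, permutation-based inference at the group level, is commonly implemented by sign flipping the subject-level contrast maps and using the null distribution of the maximum statistic for family-wise error control. The sketch below shows that generic idea; it is not the authors' implementation, and the array shapes, the function name, and the simulated data are assumptions for the example.

```python
import numpy as np

def sign_flip_max_t(contrast_maps, n_perm=1000, seed=0):
    """One-sample permutation test over subjects via sign flipping.

    contrast_maps : array (n_subjects, n_voxels) of first-level contrasts.
    Returns the observed t-map and FWER-corrected p-values based on the
    null distribution of the maximum t statistic across voxels.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(contrast_maps, dtype=float)
    n = x.shape[0]

    def t_map(data):
        se = data.std(axis=0, ddof=1) / np.sqrt(n)
        return data.mean(axis=0) / se

    t_obs = t_map(x)
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n, 1))  # flip each subject's map
        max_null[i] = t_map(signs * x).max()          # record the maximum statistic
    # FWER-corrected p-value: how often the permutation maximum beats each voxel
    p_fwer = (1 + (max_null[:, None] >= t_obs[None, :]).sum(axis=0)) / (n_perm + 1)
    return t_obs, p_fwer

# Example: 20 subjects, 5,000 voxels, weak activation in the first 200 voxels
rng = np.random.default_rng(1)
maps = rng.normal(size=(20, 5000))
maps[:, :200] += 0.8
t_obs, p = sign_flip_max_t(maps, n_perm=500)
print((p < 0.05).sum(), "voxels significant after FWER correction")
```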

    Mixture of Bilateral-Projection Two-dimensional Probabilistic Principal Component Analysis

    The probabilistic principal component analysis (PPCA) model is built upon a global linear mapping, which is insufficient to model complex data variation. This paper proposes a mixture of bilateral-projection two-dimensional probabilistic principal component analysis models (mixB2DPPCA) for 2D data. With multiple components in the mixture, the model can be viewed as a soft clustering algorithm and is able to model data with complex structure. A Bayesian inference scheme based on the variational EM (expectation-maximization) approach is proposed for learning the model parameters. Experiments on several publicly available databases show that mixB2DPPCA performs substantially better than existing PCA-based algorithms, yielding lower reconstruction errors and higher recognition rates.
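
    For context, the sketch below shows the plain global-linear PPCA that mixB2DPPCA generalizes, using the closed-form maximum-likelihood solution of Tipping and Bishop (1999). It is not the paper's bilateral-projection mixture model or its variational EM scheme; the function name and the toy data are assumptions for the example.

```python
import numpy as np

def ppca_ml(X, q):
    """Closed-form maximum-likelihood PPCA (Tipping & Bishop, 1999).

    X : (n_samples, d) data matrix; q : number of latent dimensions.
    Returns the loading matrix W, the noise variance sigma2 and the mean.
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / X.shape[0]               # sample covariance
    evals, evecs = np.linalg.eigh(cov)
    idx = np.argsort(evals)[::-1]              # sort eigenvalues descending
    evals, evecs = evals[idx], evecs[:, idx]
    sigma2 = evals[q:].mean()                  # ML noise variance: mean of discarded eigenvalues
    W = evecs[:, :q] @ np.diag(np.sqrt(np.maximum(evals[:q] - sigma2, 0.0)))
    return W, sigma2, mu

# Example: recover a 2-D latent subspace from noisy 10-D data
rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 2))
A = rng.normal(size=(2, 10))
X = Z @ A + 0.1 * rng.normal(size=(500, 10))
W, sigma2, mu = ppca_ml(X, q=2)
print("estimated noise variance:", round(sigma2, 4))
```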

    Cluster sample inference using sensitivity analysis: the case with few groups

    This paper re-examines inference for cluster samples. Sensitivity analysis is proposed as a new method for performing inference when the number of groups is small. Based on estimations using disaggregated data, the sensitivity of the standard errors with respect to the variance of the cluster effects can be examined in order to distinguish a causal effect from random shocks. The method even handles just-identified models; one important example of a just-identified model is the two-group, two-time-period difference-in-differences setting. The method allows for different types of correlation over time and between groups in the cluster effects.
    Keywords: cluster-correlation; difference-in-difference; sensitivity analysis
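
    To make the idea of examining standard-error sensitivity to the cluster-effect variance concrete, the sketch below computes a two-group, two-period difference-in-differences estimate and shows how the implied standard error grows as the assumed standard deviation of group-by-period cluster shocks increases. This is an illustrative toy exercise under stated assumptions, not the paper's procedure; the grid of cluster-effect standard deviations and the simulated data are made up for the example.

```python
import numpy as np

def did_sensitivity(y, group, post, sigma_c_grid):
    """Two-group, two-period difference-in-differences with a simple
    sensitivity check on the assumed cluster-effect standard deviation."""
    cells = {}
    for g in (0, 1):
        for t in (0, 1):
            m = (group == g) & (post == t)
            cells[(g, t)] = (y[m].mean(), y[m].var(ddof=1) / m.sum())
    # DiD point estimate from the four cell means
    did = (cells[(1, 1)][0] - cells[(1, 0)][0]) - (cells[(0, 1)][0] - cells[(0, 0)][0])
    idio_var = sum(v for _, v in cells.values())   # sampling variance of the cell means
    for sigma_c in sigma_c_grid:
        # one independent cluster shock per group-by-period cell enters the
        # DiD with weight +/-1, adding 4 * sigma_c^2 to its variance
        se = np.sqrt(idio_var + 4 * sigma_c**2)
        print(f"sigma_c={sigma_c:.2f}  DiD={did:.3f}  se={se:.3f}  |t|={abs(did)/se:.2f}")

# Example with simulated individual-level data (true effect = 1)
rng = np.random.default_rng(0)
n = 4000
group = rng.integers(0, 2, n)
post = rng.integers(0, 2, n)
y = 1.0 * group * post + rng.normal(size=n)
did_sensitivity(y, group, post, sigma_c_grid=[0.0, 0.1, 0.25, 0.5])
```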

    The mass and anisotropy profiles of galaxy clusters from the projected phase space density: testing the method on simulated data

    We present a new method of constraining the mass and velocity anisotropy profiles of galaxy clusters from kinematic data. The method is based on a model of the phase-space density which allows the anisotropy to vary with radius between two asymptotic values. The characteristic scale of the transition between these asymptotes is fixed and tuned to a typical anisotropy profile resulting from cosmological simulations. The model is parametrized by two values of the anisotropy, at the centre of the cluster and at infinity, and two parameters of the NFW density profile, the scale radius and the scale mass. In order to test the performance of the method in reconstructing the true cluster parameters, we analyze mock kinematic data for 20 relaxed galaxy clusters generated from a cosmological simulation of the standard LCDM model. We use Bayesian methods of inference, and the analysis is carried out following the Markov chain Monte Carlo approach. The parameters of the mass profile are reproduced quite well, but we note that the mass is typically underestimated by 15 percent, probably due to the presence of small velocity substructures. The constraints on the anisotropy profile for a single cluster are in general barely conclusive. Although the central asymptotic value is determined accurately, the outer one is subject to significant systematic errors caused by substructures at large clustercentric distances. The anisotropy profile is much better constrained if one performs a joint analysis of at least a few clusters. In this case it is possible to reproduce the radial variation of the anisotropy over two decades in radius inside the virial sphere.
    Comment: 11 pages, 10 figures, accepted for publication in MNRAS
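
    The sketch below illustrates the kind of parametrization described in the abstract: an NFW cumulative mass profile with a scale radius and scale mass, an anisotropy profile interpolating between a central and an asymptotic value, and a plain random-walk Metropolis sampler over the four parameters. The transition radius r_beta, the normalization M(r_s) = M_s, the function names, and the stand-in Gaussian posterior are all assumptions made for the example; a real analysis would evaluate the projected phase-space density of the cluster members instead.

```python
import numpy as np

def nfw_mass(r, r_s, M_s):
    """Cumulative NFW mass profile, normalized so that M(r_s) = M_s."""
    g = lambda x: np.log(1.0 + x) - x / (1.0 + x)
    return M_s * g(r / r_s) / g(1.0)

def beta_profile(r, beta0, beta_inf, r_beta):
    """Anisotropy varying with radius between two asymptotes:
    beta -> beta0 for r << r_beta and beta -> beta_inf for r >> r_beta."""
    x = r / r_beta
    return (beta0 + beta_inf * x) / (1.0 + x)

def metropolis(log_post, theta0, steps, step_size, seed=0):
    """Random-walk Metropolis over (log r_s, log M_s, beta0, beta_inf)."""
    rng = np.random.default_rng(seed)
    chain = [np.asarray(theta0, dtype=float)]
    lp = log_post(chain[-1])
    for _ in range(steps):
        prop = chain[-1] + step_size * rng.normal(size=len(theta0))
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept with prob min(1, ratio)
            chain.append(prop)
            lp = lp_prop
        else:
            chain.append(chain[-1])
    return np.array(chain)

# Stand-in posterior: Gaussian constraints on the four parameters,
# purely to exercise the sampler.
truth = np.array([np.log(0.3), np.log(1e14), 0.0, 0.5])
log_post = lambda th: -0.5 * np.sum(((th - truth) / 0.2) ** 2)
chain = metropolis(log_post, truth + 0.5, steps=5000, step_size=0.05)
print("posterior means:", chain[1000:].mean(axis=0))
```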
