9,990 research outputs found

    Evaluation of second-level inference in fMRI analysis

    We investigate the impact of decisions in the second-level (i.e., over subjects) inferential process in functional magnetic resonance imaging on (1) the balance between false positives and false negatives and (2) data-analytical stability, both proxies for the reproducibility of results. Second-level analysis based on a mass univariate approach typically consists of three phases. First, one fits a general linear model to a test image that pools information from different subjects. We evaluate models that take first-level (within-subject) variability into account and models that do not. Second, one draws inference based on parametric assumptions or via permutation-based inference. Third, we evaluate three commonly used procedures to address the multiple testing problem: familywise error rate (FWER) correction, false discovery rate (FDR) correction, and a two-step procedure with a minimal cluster size. Based on a simulation study and real data, we find that the two-step procedure with a minimal cluster size yields the most stable results, followed by the FWER correction. The FDR correction yields the most variable results, for both permutation-based and parametric inference. Modeling the subject-specific variability yields a better balance between false positives and false negatives when using parametric inference.
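The FDR correction discussed in this abstract is commonly implemented as the Benjamini-Hochberg step-up procedure. A minimal sketch, assuming a simple list of voxel-wise p-values (the function name and example values are illustrative, not from the paper):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg FDR step-up procedure.

    Returns a boolean rejection mask over the input p-values,
    controlling the expected proportion of false discoveries at q.
    """
    m = len(pvals)
    # Sort p-values ascending, remembering original positions.
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k/m) * q.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank / m * q:
            k_max = rank
    # Reject every hypothesis whose sorted rank is <= k_max.
    reject = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= k_max:
            reject[idx] = True
    return reject
```

Because the threshold adapts to the observed p-value distribution, the set of rejections (and hence the resulting statistical map) varies more across datasets than FWER or cluster-extent thresholding, which is consistent with the stability ranking the abstract reports.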

    Increasing power for voxel-wise genome-wide association studies : the random field theory, least square kernel machines and fast permutation procedures

    Imaging traits are thought to have more direct links to genetic variation than diagnostic measures based on cognitive or clinical assessments, and provide a powerful substrate to examine the influence of genetics on human brains. Although imaging genetics has attracted growing attention and interest, most brain-wide genome-wide association studies focus on voxel-wise single-locus approaches, without taking advantage of the spatial information in images or combining the effects of multiple genetic variants. In this paper we present a fast implementation of voxel- and cluster-wise inferences based on the random field theory to fully use the spatial information in images. The approach is combined with a multi-locus model based on least square kernel machines to associate the joint effect of several single nucleotide polymorphisms (SNPs) with imaging traits. A fast permutation procedure is also proposed which significantly reduces the number of permutations needed relative to the standard empirical method and provides accurate small p-value estimates based on parametric tail approximation. We explored the relation between 448,294 single nucleotide polymorphisms and 18,043 genes in 31,662 voxels of the entire brain across 740 elderly subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Structural MRI scans were analyzed using tensor-based morphometry (TBM) to compute 3D maps of regional brain volume differences compared to an average template image based on healthy elderly subjects. We find the method to be more sensitive than voxel-wise single-locus approaches. A number of genes were identified as having significant associations with volumetric changes. The most strongly associated gene was GRIN2B, which encodes the N-methyl-d-aspartate (NMDA) glutamate receptor NR2B subunit and affects both the parietal and temporal lobes in human brains. Its role in Alzheimer's disease has been widely acknowledged and studied, suggesting the validity of the approach. The various advantages over existing approaches indicate a great potential offered by this novel framework to detect genetic influences on human brains.
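The baseline the paper improves on is the standard empirical permutation p-value. A minimal sketch of that baseline, assuming per-subject data; the absolute-covariance statistic below is a simple placeholder, not the paper's least-square kernel machine statistic, and the function name is mine:

```python
import random

def permutation_pvalue(genotype, trait, n_perm=999, seed=0):
    """Empirical permutation p-value for a genotype-trait association.

    Shuffling the trait across subjects breaks any genotype-trait link,
    giving draws from the null distribution of the test statistic.
    """
    rng = random.Random(seed)
    n = len(trait)

    def stat(g, t):
        # Placeholder statistic: absolute sample covariance.
        mg = sum(g) / n
        mt = sum(t) / n
        return abs(sum((gi - mg) * (ti - mt) for gi, ti in zip(g, t)) / n)

    observed = stat(genotype, trait)
    exceed = 0
    t_perm = list(trait)
    for _ in range(n_perm):
        rng.shuffle(t_perm)  # break the genotype-trait pairing
        if stat(genotype, t_perm) >= observed:
            exceed += 1
    # Add-one correction keeps the estimate away from exactly zero.
    return (1 + exceed) / (1 + n_perm)
```

Note the limitation the paper targets: with B permutations the smallest attainable p-value is 1/(B+1), so genome-wide significance thresholds would require enormous B; fitting a parametric approximation to the permutation tail sidesteps that cost.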

    Evolution of statistical analysis in empirical software engineering research: Current state and steps forward

    Software engineering research is evolving and papers are increasingly based on empirical data from a multitude of sources, using statistical tests to determine if and to what degree empirical evidence supports their hypotheses. To investigate the practices and trends of statistical analysis in empirical software engineering (ESE), this paper presents a review of a large pool of papers from top-ranked software engineering journals. First, we manually reviewed 161 papers; in the second phase of our method, we conducted a more extensive semi-automatic classification of 5,196 papers spanning the years 2001--2015. Results from both review steps were used to: i) identify and analyze the predominant practices in ESE (e.g., using t-test or ANOVA), as well as relevant trends in the usage of specific statistical methods (e.g., nonparametric tests and effect size measures), and ii) develop a conceptual model for a statistical analysis workflow with suggestions on how to apply different statistical methods as well as guidelines to avoid pitfalls. Lastly, we confirm existing claims that current ESE practices lack a standard for reporting the practical significance of results. We illustrate how practical significance can be discussed in terms of both the statistical analysis and the practitioner's context.
    Comment: journal submission, 34 pages, 8 figures
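The effect size measures the abstract mentions are the usual vehicle for reporting practical (as opposed to statistical) significance. A minimal sketch of one common choice, Cohen's d with a pooled standard deviation (the function name and example data are illustrative):

```python
from math import sqrt

def cohens_d(a, b):
    """Cohen's d: the standardized difference between two group means,
    using the pooled (unbiased) standard deviation. Unlike a p-value,
    it conveys the magnitude of an effect, not just its detectability.
    """
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Unbiased sample variances.
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd
```

Reporting d alongside the t-test or ANOVA result lets a practitioner judge whether a statistically significant difference is large enough to matter in their context, which is the gap the paper highlights.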

    A one-way ANOVA test for functional data with graphical interpretation

    A new functional ANOVA test with a graphical interpretation of the result is presented. The test is an extension of the global envelope test introduced by Myllymaki et al. (2017, Global envelope tests for spatial processes, J. R. Statist. Soc. B 79, 381--404, doi: 10.1111/rssb.12172). The graphical interpretation is realized by a global envelope drawn jointly for all samples of functions. If a mean function computed from the empirical data falls outside the given envelope, the null hypothesis is rejected at the predetermined significance level α. An advantage of the proposed one-way functional ANOVA is that it identifies the domains of the functions that are responsible for the potential rejection. We introduce two versions of this test: the first gives a graphical interpretation of the test results in the original space of the functions, and the second immediately offers a post-hoc test by identifying the significant pairwise differences between groups. The proposed tests rely on discretization of the functions, so they are also applicable to the multidimensional ANOVA problem. In the empirical part of the article, we demonstrate the method by analyzing fiscal decentralization in European countries. The aim of the empirical analysis is to capture differences in the government expenditure decentralization ratio among different groups of European countries. The underlying idea, based on the existing literature, is straightforward: countries with a longer European integration history are expected to decentralize more of their government expenditure. We use the government expenditure centralization ratios of 29 European Union and EFTA countries in the period from 1995 to 2016, sorted into three groups according to the presumed level of European economic and political integration.
    Comment: arXiv admin note: text overlap with arXiv:1506.0164
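The paper's global envelope is rank-based; a much cruder but structurally similar sketch takes the pointwise minimum and maximum over curves simulated under the null (e.g., by permuting group labels) and reports where the observed curve exits the band. Everything here is a simplified illustration of the graphical idea, not the paper's procedure:

```python
def global_envelope(observed, permuted):
    """Crude global envelope: pointwise min/max over null-simulated curves.

    Returns (lower, upper, outside), where `outside` lists the
    discretization points at which the observed curve leaves the band,
    i.e. the domains responsible for a potential rejection.
    """
    k = len(observed)
    lower = [min(curve[i] for curve in permuted) for i in range(k)]
    upper = [max(curve[i] for curve in permuted) for i in range(k)]
    outside = [i for i, v in enumerate(observed)
               if v < lower[i] or v > upper[i]]
    return lower, upper, outside
```

The graphical payoff is exactly what the abstract describes: instead of a single p-value, the analyst sees *which* parts of the domain (here, which years of the decentralization series) drive the rejection.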