53 research outputs found

    Optimality in multiple comparison procedures

    When many (m) null hypotheses are tested with a single dataset, the control of the number of false rejections is often the principal consideration. Two popular controlling rates are the probability of making at least one false discovery (FWER) and the expected fraction of false discoveries among all rejections (FDR). Scaled multiple comparison error rates form a new family that bridges the gap between these two extremes. For example, the Scaled Expected Value (SEV) limits the number of false positives FP relative to an arbitrary increasing function s of the number of rejections R, that is, E(FP/s(R)). We discuss the problem of how to choose in practice which procedure to use, with elements of an optimality theory, by considering the number of false rejections FP separately from the number of correct rejections TP. Using this framework, we show how to choose an element of the new family mentioned above. Comment: arXiv admin note: text overlap with arXiv:1112.451
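The scaled error rate E(FP/s(R)) can be estimated by simulation. The following is a minimal Monte Carlo sketch under an assumed two-group p-value model (uniform nulls, Beta-distributed alternatives); the function names, the Bonferroni rejection rule, and the simulation parameters are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_scaled_error(reject_fn, s, n_sim=2000, m=100, m0=80):
    """Monte Carlo estimate of E(FP / s(R)) for a rejection rule.

    reject_fn maps a vector of m p-values to a boolean rejection mask;
    the first m0 hypotheses are true nulls (uniform p-values), the rest
    false nulls with stochastically small p-values.
    """
    total = 0.0
    for _ in range(n_sim):
        p = np.concatenate([rng.uniform(size=m0),
                            rng.beta(0.1, 1.0, size=m - m0)])
        rej = reject_fn(p)
        fp, r = int(rej[:m0].sum()), int(rej.sum())
        total += fp / s(r) if r > 0 else 0.0  # convention: 0 when R = 0
    return total / n_sim

# Bonferroni rejection rule; s(r) = r recovers the FDR,
# s(r) = 1 the expected number of false positives E(FP).
bonferroni = lambda p: p <= 0.05 / len(p)
fdr_hat = empirical_scaled_error(bonferroni, lambda r: r)
efp_hat = empirical_scaled_error(bonferroni, lambda r: 1)
```

Since FP/R ≤ FP whenever at least one hypothesis is rejected, the FDR estimate is the smaller of the two, illustrating how the choice of s interpolates between lenient and strict notions of error.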

    Structural Correlates of Personality Dimensions in Healthy Aging and MCI

    The revised NEO Personality Inventory (NEOPI-R), popularly known as the five-factor model, defines five personality factors: Neuroticism, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness. The structural correlates of these personality factors are still a matter of debate. In this work, we examine the impact of subtle cognitive deficits on structural substrates of personality in the elderly using the DTI-derived white matter (WM) integrity measure, fractional anisotropy (FA). We employed canonical correlation analysis (CCA) to study the relationship between personality factors of the NEOPI-R and FA measures in two population groups: healthy controls and MCI. Agreeableness was the only personality factor to be associated with FA patterns in both groups. Openness was significantly related to FA data in the MCI group, and the inverse was true for Conscientiousness. Furthermore, we generated saliency maps using a bootstrapping strategy, which revealed a larger number of positive correlations in healthy aging in contrast to the MCI status. The MCI group was found to be associated with a predominance of negative correlations, indicating that higher Agreeableness and Openness scores were mostly related to lower FA values in interhemispheric and cortico-spinal tracts and a limited number of higher FA values in cortico-cortical and cortico-subcortical connections. Altogether these findings support the idea that WM microstructure may represent a valid correlate of personality dimensions and also indicate that the presence of early cognitive deficits leads to substantial changes in the associations between WM integrity and personality factors.
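Canonical correlation analysis of this kind can be sketched compactly: the canonical correlations between two centered data matrices are the singular values of the product of their orthonormalized column spaces. The data below are a purely synthetic stand-in (one shared latent component linking hypothetical NEO factor scores to hypothetical tract-wise FA values); nothing about the sample sizes or loadings reflects the study itself.

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between the columns of X and Y,
    computed as singular values of Qx^T Qy after column-centering."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)

# Synthetic stand-in: 5 personality factors and 10 FA measures
# driven by a single shared latent variable plus noise.
rng = np.random.default_rng(1)
n = 200
latent = rng.normal(size=(n, 1))
neo = latent @ rng.normal(size=(1, 5)) + rng.normal(size=(n, 5))
fa = latent @ rng.normal(size=(1, 10)) + rng.normal(size=(n, 10))
rho = canonical_correlations(neo, fa)  # descending, all in [0, 1]
```

With a genuine shared component, the first canonical correlation is large while the remaining ones reflect noise, which is the pattern CCA is used to detect.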

    Structural Brain Connectivity in School-Age Preterm Infants Provides Evidence for Impaired Networks Relevant for Higher Order Cognitive Skills and Social Cognition

    Extreme prematurity and pregnancy conditions leading to intrauterine growth restriction (IUGR) affect thousands of newborns every year and increase their risk for poor higher order cognitive and social skills at school age. However, little is known about the brain structural basis of these disabilities. To compare the structural integrity of neural circuits between prematurely born controls and children born extremely preterm (EP) or with IUGR at school age, long-ranging and short-ranging connections were noninvasively mapped across cortical hemispheres by connection matrices derived from diffusion tensor tractography. Brain connectivity was modeled along fiber bundles connecting 83 brain regions by a weighted characterization of structural connectivity (SC). EP and IUGR subjects, when compared with controls, had decreased fractional anisotropy-weighted SC (FAw-SC) of cortico-basal ganglia-thalamo-cortical loop connections, while cortico-cortical association connections showed both decreased and increased FAw-SC. FAw-SC strength of these connections was associated with poorer socio-cognitive performance in both EP and IUGR children.
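An FA-weighted connection matrix of the kind described can be assembled from tractography output. The sketch below assumes one plausible weighting (mean FA over the streamlines connecting each region pair); the function name, input format, and weighting scheme are illustrative assumptions, not necessarily the study's exact definition.

```python
import numpy as np

def faw_sc(endpoint_pairs, streamline_fa, n_regions=83):
    """FA-weighted structural connectivity (FAw-SC): for each pair of
    regions, the mean FA over streamlines connecting them (0 if none).
    Assumes endpoint pairs (i, j) with i != j."""
    fa_sum = np.zeros((n_regions, n_regions))
    count = np.zeros((n_regions, n_regions))
    for (i, j), fa in zip(endpoint_pairs, streamline_fa):
        a, b = min(i, j), max(i, j)   # store each pair once, upper triangle
        fa_sum[a, b] += fa
        count[a, b] += 1
    sc = np.divide(fa_sum, count, out=np.zeros_like(fa_sum),
                   where=count > 0)
    return sc + sc.T  # symmetric connection matrix

# Toy input: three streamlines over a 4-region parcellation.
pairs = [(0, 1), (1, 0), (2, 3)]
fa_vals = [0.4, 0.6, 0.5]
sc = faw_sc(pairs, fa_vals, n_regions=4)
```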

    Adaptive Strategy for the Statistical Analysis of Connectomes

    We study an adaptive statistical approach to analyze brain networks represented by brain connection matrices of interregional connectivity (connectomes). Our approach is at a middle level between a global analysis and single-connection analysis, by considering subnetworks of the global brain network. These subnetworks represent either the inter-connectivity between two anatomical brain regions or the intra-connectivity within the same anatomical brain region. An appropriate summary statistic that characterizes a meaningful feature of the subnetwork is evaluated. Based on this summary statistic, a statistical test is performed to derive the corresponding p-value. The reformulation of the problem in this way reduces the number of statistical tests in an orderly fashion based on our understanding of the problem. Considering the global testing problem, the p-values are corrected to control the rate of false discoveries. Finally, the procedure is followed by a local investigation within the significant subnetworks. We contrast this strategy with the one based on the individual measures in terms of power. We show that this strategy has great potential, in particular in cases where the subnetworks are well defined and the summary statistics are properly chosen. As an application example, we compare structural brain connection matrices of two groups of subjects with 22q11.2 deletion syndrome, distinguished by their IQ scores.
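The pipeline described (summary statistic per subnetwork, one test per subnetwork, then a false-discovery correction) can be sketched as follows. Everything here is a hedged illustration: the mean-connectivity statistic, the two-sample t-test, the block names, and the synthetic data are our assumptions, not the paper's specific choices.

```python
import numpy as np
from scipy import stats

def subnetwork_pvalues(conn_a, conn_b, blocks):
    """One p-value per subnetwork instead of one per connection.

    conn_a, conn_b: (subjects, regions, regions) connectomes per group.
    blocks: name -> (row_indices, col_indices) defining a subnetwork.
    Summary statistic: mean connectivity over the subnetwork's cells.
    """
    pvals = {}
    for name, (rows, cols) in blocks.items():
        sa = conn_a[:, rows][:, :, cols].mean(axis=(1, 2))
        sb = conn_b[:, rows][:, :, cols].mean(axis=(1, 2))
        pvals[name] = stats.ttest_ind(sa, sb).pvalue
    return pvals

def bh_reject(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up over the subnetwork p-values."""
    items = sorted(pvals.items(), key=lambda kv: kv[1])
    m = len(items)
    k = max((i + 1 for i, (_, p) in enumerate(items)
             if p <= (i + 1) * alpha / m), default=0)
    return {name for name, _ in items[:k]}

# Hypothetical group comparison: one subnetwork carries a real effect.
rng = np.random.default_rng(2)
a = rng.normal(size=(20, 10, 10))
b = rng.normal(size=(20, 10, 10))
b[:, :3, :3] += 1.0
blocks = {"frontal": ([0, 1, 2], [0, 1, 2]),
          "parietal": ([3, 4, 5], [3, 4, 5])}
p = subnetwork_pvalues(a, b, blocks)
significant = bh_reject(p)
```

Averaging within a subnetwork pools the per-connection effects, so far fewer tests are run and each carries more signal, which is where the power gain over connection-wise testing comes from.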

    Multiple Comparison Procedures for Large Correlated Data with Application to Brain Connectivity Analysis

    Recent developments in medical imaging and image analysis allow the determination of interregional brain connectivity through diffusion Magnetic Resonance Imaging (MRI) tractography. The brain connectivity is represented by a connection matrix where each cell represents a certain measure of connectivity between two regions of interest of the brain. An important question in this challenging field of neuroscience is the ability to learn from connection matrices and to perform rigorous statistical analysis to derive new medical results. In particular, we focus on the level of brain regions of interest. Brain connectivity analysis involves the problem of multiplicity correction, the so-called multiple testing or multiple comparisons problem, which is the principal subject of this thesis. The work described in this thesis can be divided into three essential parts. The first is the general problem of multiple comparisons. The objective of this part is to develop new multiple comparison procedures that have optimal behavior. More precisely, we propose a comprehensive family of error rates together with a corresponding family of multiple comparison procedures. The new family generalizes almost all existing error rates. The second is the problem of multiple comparisons for positively dependent data. By supposing that brain regions of interest that are in the same vicinity or that belong to the same functional subnetwork are positively correlated, we develop new multiple comparison strategies that exploit this additional information in order to increase sensitivity for detecting real effects. The third is the application of multiple comparisons to brain connectivity matrices, building on the two previous parts. In addition, we adapt the statistical methods to brain connectivity analysis by estimating the structure of positive dependence. This third part can be seen as a validation of the statistical methods developed in the first and second parts. The thesis represents a complete framework of an adaptive strategy for the statistical analysis of both functional and structural connection matrices, ready to be used in single-subject analysis or group comparison. The framework is general enough to be used not only in neuroscience but also in many other research domains. In particular, in the brain connectivity context, it will permit an efficient investigation of a large range of pathologies where changes in brain connectivity are a key ingredient of medical interpretation, such as Alzheimer's disease or schizophrenia.

    OPTIMAL MULTIPLE TESTS

    When many (m) null hypotheses are tested with a single dataset, the control of the number of false rejections is often the principal consideration. Two popular controlling rates are the probability of making at least one false discovery (Bonferroni) and the expected fraction of false discoveries among all rejections (Benjamini-Hochberg). Methods controlling these rates based on the ordered p-values p(1), ..., p(m) are well known. In this talk, we present a new family of multiple testing procedures that bridges the gap between these two extremes. We also discuss the problem of how to choose in practice which procedure to use. This choice depends on the number of tests m, the likely size of the alternative effects, and the fraction of true nulls m0 among the m null hypotheses. In the literature this is mostly dealt with by optimizing the number of rejections subject to a bound on a particular controlling rate. This is analogous to the Neyman-Pearson approach of bounding the probability of a false rejection and then, given this constraint, maximizing the power. But since there is no agreement on the choice of control in multiple testing, the analogy is not convincing. This approach does not allow one to compare across a spectrum of controls. Clearly controlling the false discovery rate, for example, can potentially lead to many rejections and is in this sense powerful, but how should this be compared to a method that controls the probability of making at least one erroneous rejection? One can make progress on this question by considering the number of false rejections F separately from the number of correct rejections T. Using this framework we will show how to choose an element in the new family mentioned above.
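The two extremes on the ordered p-values can be put side by side in a few lines. This is a sketch of the standard Bonferroni and Benjamini-Hochberg rules on an arbitrary example p-value vector (the numbers are ours, chosen only so the two rules disagree).

```python
import numpy as np

def bonferroni_reject(p, alpha=0.05):
    """Reject H_i when p_i <= alpha / m (controls the FWER)."""
    return p <= alpha / len(p)

def bh_step_up(p, alpha=0.05):
    """Benjamini-Hochberg: reject the k smallest p-values, where k is
    the largest rank with p_(k) <= k * alpha / m (controls the FDR)."""
    m = len(p)
    order = np.argsort(p)
    ranks = np.arange(1, m + 1)
    below = p[order] <= ranks * alpha / m
    k = ranks[below].max() if below.any() else 0
    rej = np.zeros(m, dtype=bool)
    rej[order[:k]] = True
    return rej

p = np.array([0.001, 0.008, 0.012, 0.041, 0.27, 0.6])
bonf = bonferroni_reject(p)  # 2 rejections
bh = bh_step_up(p)           # 3 rejections
```

On this example the BH boundary grows with the rank, so it accepts the third-smallest p-value that the constant Bonferroni boundary refuses; every Bonferroni rejection is also a BH rejection.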

    Some Thoughts on Robustness and Multiple Testing

    When testing many null hypotheses, deciding which of them to reject is a subtle game. The prevailing approach consists in deciding on a level and type of control against false rejections (errors of type I) and, subject to this constraint, maximizing the number of rejections. The two extreme types of control are the FWER (family-wise error rate, probability of at least one false rejection) and FDR (false discovery rate, expected rate of false rejections among all rejections). In this talk, I will discuss two topics: first, how to construct alternatives to these two procedures together with elements of an optimality theory, and second, some considerations about robustness. The FWER and the FDR detect alternatives by comparing the ordered p-values to a boundary: a constant equal to α/m for FWER, and a boundary growing linearly in the rank r, equal to rα/m, for FDR. We will show that if the boundary is equal to s(r)α/m, a certain control is implied. By choosing Huber's clipped linear function s(r) = min(k, r), a family of multiple tests bridging FWER and FDR is created. How to choose k will also be discussed. The theory of control is based on the fact that the p-values for the true hypotheses are uniformly distributed. If the test is approximate and the nominal p-value is not equal to the real one, this no longer holds. In this case, the control is also merely nominal. We will discuss some examples to illustrate this point and to measure the likely effects on the conclusions of a multiple testing procedure.
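The clipped boundary can be sketched as a single step-up procedure parameterized by k. This assumes the standard step-up form with boundary α·min(k, r)/m; the function name and the example p-values are illustrative choices of ours.

```python
import numpy as np

def clipped_stepup(p, alpha=0.05, k=1):
    """Step-up test with boundary alpha * s(r) / m for s(r) = min(k, r):
    k = 1 gives the constant FWER (Bonferroni) boundary alpha/m,
    k = m gives the linear FDR (Benjamini-Hochberg) boundary r*alpha/m."""
    m = len(p)
    order = np.argsort(p)
    r = np.arange(1, m + 1)
    below = p[order] <= alpha * np.minimum(k, r) / m
    n_rej = r[below].max() if below.any() else 0
    rej = np.zeros(m, dtype=bool)
    rej[order[:n_rej]] = True
    return rej

p = np.array([0.001, 0.008, 0.012, 0.041, 0.27, 0.6])
n_rej = [int(clipped_stepup(p, k=k).sum()) for k in range(1, 7)]
```

Sweeping k from 1 to m raises the boundary monotonically, so the rejection count can only grow, moving smoothly from FWER-style to FDR-style behavior on the same data.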