
    Statistical Analysis of fMRI Time-Series: A Critical Review of the GLM Approach.

    Functional magnetic resonance imaging (fMRI) is one of the most widely used tools to study the neural underpinnings of human cognition. Standard analysis of fMRI data relies on a general linear model (GLM) approach to separate stimulus-induced signals from noise. Crucially, this approach relies on a number of assumptions about the data which, for inferences to be valid, must be met. The current paper reviews the GLM approach to the analysis of fMRI time-series, focusing in particular on the degree to which such data abide by the assumptions of the GLM framework, and on the methods that have been developed to correct for violations of those assumptions. Rather than biasing estimates of effect size, the major consequence of non-conformity to the assumptions is to introduce bias into estimates of the variance, thus affecting test statistics, power, and false-positive rates. Furthermore, this bias can have pervasive effects on both individual-subject and group-level statistics, potentially yielding qualitatively different results across replications, especially after the thresholding procedures commonly used for inference-making.
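The variance bias described above is easy to see in a minimal GLM fit. The sketch below is illustrative only; the regressor construction, double-gamma HRF parameters, noise model, and all names are assumptions, not taken from the paper. It regresses a simulated voxel time-series on an HRF-convolved task regressor and computes the ordinary least-squares t-statistic, whose variance term assumes i.i.d. noise, precisely the assumption that fMRI serial autocorrelation typically violates.

```python
import numpy as np
from math import gamma as G

def hrf(t, p1=6.0, p2=16.0, ratio=0.35):
    """Double-gamma canonical HRF (parameter values are common defaults)."""
    g = lambda tt, a: tt ** (a - 1) * np.exp(-tt) / G(a)
    return g(t, p1) - ratio * g(t, p2)

TR, n = 2.0, 200
onsets = np.zeros(n)
onsets[::10] = 1.0                                # toy stimulus onsets every 20 s
h = hrf(np.arange(0.0, 30.0, TR))                 # sampled HRF kernel
x = np.convolve(onsets, h)[:n]
x /= x.max()                                      # unit-peak task regressor

rng = np.random.default_rng(0)
beta_true = 2.0
y = beta_true * x + rng.normal(0.0, 0.5, n)       # simulated voxel time-series

X = np.column_stack([x, np.ones(n)])              # design matrix: task + intercept
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
# OLS variance estimate: valid only under i.i.d. noise; with autocorrelated
# fMRI noise this term is biased, distorting t, power, and false positives
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
t_stat = beta[0] / se
```

Note that the point estimate `beta[0]` remains close to the true effect; it is `se`, and hence `t_stat`, that the violated noise assumption distorts, which is the paper's central point.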

    The permafrost carbon inventory on the Tibetan Plateau : a new evaluation using deep sediment cores

    Acknowledgements: We are grateful to Dr. Jens Strauss and the two other anonymous reviewers for their insightful comments on an earlier version of this manuscript, and we thank members of the IBCAS Sampling Campaign Teams for their assistance in the field investigation. This work was supported by the National Basic Research Program of China on Global Change (2014CB954001 and 2015CB954201), the National Natural Science Foundation of China (31322011 and 41371213), and the Thousand Young Talents Program. Peer reviewed. Postprint.

    New aspects of statistical methods for missing data problems, with applications in bioinformatics and genetics

    As missing data problems become more commonplace in biological research and other areas, a method with relaxed assumptions, yet flexible enough to accommodate a wide range of situations, is highly desired. We propose a nonparametric imputation method for data with missing values. Inference on the parameter defined by general estimating equations is performed using an empirical likelihood method. It is shown that the nonparametric imputation method together with empirical likelihood can reduce bias and improve the efficiency of the estimate relative to inference using only the complete cases of the dataset. The confidence regions obtained by empirical likelihood demonstrate good coverage properties. Since our method is valid under very weak assumptions while also possessing the flexibility inherent to estimating equations and empirical likelihood, it can be applied to a wide range of problems. An example is given using mouse eye weight and gene expression data.

    Missing data methods are also highly valuable from an experimental design point of view. We propose a selective transcriptional profiling approach to improve the efficiency and affordability of genetical genomics research. The high cost of microarrays tends to limit the adoption of the standard genetical genomics approach. Our method is derived in a missing data framework, in which only a subset of individuals is subjected to microarray experiments. It is shown that this approach can significantly reduce experimental cost while still achieving satisfactory power. To address the need for a nonparametric method, we developed empirical likelihood-based inference for multi-sample comparison problems using data with surrogate variables. By applying this result to selective transcriptional profiling, we show that the idea of using relatively inexpensive trait data on extra individuals to improve the power of tests for association between a QTL and gene transcriptional abundance also applies to the empirical likelihood-based method.
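As a rough illustration of why imputation can outperform complete-case analysis, the sketch below simulates outcomes missing at random given an observed covariate. It is a toy sketch, not the authors' method: the kernel-smoothing imputer, bandwidth, simulation settings, and function names are my assumptions. The complete-case mean is biased under this missingness mechanism, while nonparametric imputation of the missing outcomes largely removes that bias.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.standard_normal(n)                  # always-observed covariate
y = 2.0 * x + rng.standard_normal(n)        # outcome; true mean is 0
p_miss = 1.0 / (1.0 + np.exp(-x))           # missing at random: depends on x only
miss = rng.random(n) < p_miss

# complete-case estimate: biased, because observed cases over-represent low x
cc_mean = y[~miss].mean()

def nw_impute(x0, x_obs, y_obs, h=0.3):
    """Nadaraya-Watson kernel estimate of E[y | x = x0] (name/bandwidth assumed)."""
    w = np.exp(-0.5 * ((x0 - x_obs) / h) ** 2)
    return float(w @ y_obs / w.sum())

y_filled = y.copy()
for i in np.where(miss)[0]:
    y_filled[i] = nw_impute(x[i], x[~miss], y[~miss])
imp_mean = y_filled.mean()                  # imputation-based estimate, near 0
```

The mean is the simplest case of an estimating-equation parameter; the same logic extends to general estimating equations with empirical-likelihood inference as in the abstract.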

    Multi-parametric Imaging Using Hybrid PET/MR to Investigate the Epileptogenic Brain

    Neuroimaging analysis has led to fundamental discoveries about the healthy and pathological human brain. Different imaging modalities allow garnering complementary information about brain metabolism, structure and function. To ensure that the integration of imaging data from these modalities is robust and reliable, it is fundamental to attain deep knowledge of each modality individually. Epilepsy, a neurological condition characterised by recurrent spontaneous seizures, represents a field in which applications of neuroimaging and multi-parametric imaging are particularly promising to guide diagnosis and treatment. In this PhD thesis, I focused on different imaging modalities and investigated advanced denoising and analysis strategies to improve their application to epilepsy. The first project focused on fluorodeoxyglucose (FDG) positron emission tomography (PET), a well-established imaging modality assessing brain metabolism, and aimed to develop a novel, semi-quantitative pipeline to analyse data in children with epilepsy, thus aiding presurgical planning. As pipelines for FDG-PET analysis in children are currently lacking, I developed age-appropriate templates to provide statistical parametric maps identifying epileptogenic areas on patient scans. The second and third projects focused on two magnetic resonance imaging (MRI) modalities: resting-state functional MRI (rs-fMRI) and arterial spin labelling (ASL), respectively. The aim was to i) probe the efficacy of different fMRI denoising pipelines, and ii) formally compare different ASL data acquisition strategies. In the former case, I compared different pre-processing methods and assessed their impact on fMRI signal quality and related functional connectivity analyses. In the latter case, I compared two ASL sequences to investigate their ability to quantify cerebral blood flow and interregional brain connectivity. 
    The final project addressed the combination of rs-fMRI and ASL, and leveraged graph-theoretical analysis tools to i) compare metrics estimated via these two imaging modalities in healthy subjects and ii) assess topological changes captured by these modalities in a sample of temporal lobe epilepsy patients.

    Expression QTLs Mapping and Analysis: A Bayesian Perspective.

    The aim of expression Quantitative Trait Locus (eQTL) mapping is the identification of DNA sequence variants that explain variation in gene expression. Given the recent yield of trait-associated genetic variants identified by large-scale genome-wide association analyses (GWAS), eQTL mapping has become a useful tool to understand the functional context in which these variants operate and eventually to narrow down functional gene targets for disease. Despite its extensive application to complex (polygenic) traits and diseases, the majority of eQTL studies still rely on univariate data-modeling strategies, i.e., testing for association of all transcript-marker pairs. However, these "one-at-a-time" strategies (1) are unable to control the number of false positives when an intricate linkage disequilibrium structure is present and (2) are often underpowered to detect the full spectrum of trans-acting regulatory effects. Here we present our viewpoint on the most recent advances in eQTL mapping approaches, with a focus on Bayesian methodology. We review the advantages of the Bayesian approach over frequentist methods and provide an empirical example of polygenic eQTL mapping to illustrate the different properties of frequentist and Bayesian methods. Finally, we discuss how multivariate eQTL mapping approaches have distinctive features with respect to detection of polygenic effects, accuracy, and interpretability of the results.
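The "one-at-a-time" univariate strategy critiqued above can be sketched as follows. This is the frequentist baseline, not the Bayesian approach the abstract advocates, and the toy data, effect size, and names are assumptions: every transcript-marker pair is tested separately, and a Bonferroni correction over all pairs guards the family-wise error rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, n_snps, n_genes = 200, 50, 30

geno = rng.integers(0, 3, size=(n, n_snps)).astype(float)  # additive 0/1/2 coding
expr = rng.standard_normal((n, n_genes))                   # expression matrix
expr[:, 0] += 0.5 * geno[:, 0]                             # plant one true eQTL

# "one-at-a-time": an independent test for every transcript-marker pair
pvals = np.empty((n_genes, n_snps))
for g in range(n_genes):
    for s in range(n_snps):
        r, p = stats.pearsonr(geno[:, s], expr[:, g])
        pvals[g, s] = p

alpha = 0.05 / pvals.size                    # Bonferroni over all 1500 pairs
hits = [tuple(ix) for ix in np.argwhere(pvals < alpha)]
```

Because each pair is tested in isolation, correlated markers (linkage disequilibrium) and shared trans effects are invisible to this procedure, which is exactly why multivariate and Bayesian alternatives are attractive.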

    Mesolimbic confidence signals guide perceptual learning in the absence of external feedback

    It is well established that learning can occur without external feedback, yet normative reinforcement learning theories have difficulty explaining such instances of learning. Here, we propose that human observers are capable of generating their own feedback signals by monitoring internal decision variables. We investigated this hypothesis in a visual perceptual learning task using fMRI and confidence reports as a measure of this monitoring process. Employing a novel computational model in which learning is guided by confidence-based reinforcement signals, we found that mesolimbic brain areas encoded both the anticipation and the prediction error of confidence, in remarkable similarity to previous findings for external reward-based feedback. We demonstrate that the model accounts for choice and confidence reports, and show that the mesolimbic confidence prediction error modulation derived through the model predicts individual learning success. These results provide a mechanistic neurobiological explanation for learning without external feedback by augmenting reinforcement models with confidence-based feedback.
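A minimal version of such a confidence-based reinforcement signal can be sketched as a standard delta-rule update in which trial-wise confidence plays the role of reward. This is a toy sketch under assumed parameter values, not the authors' full computational model.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 0.1          # learning rate (assumed value)
v = 0.0              # anticipated confidence (analogue of expected reward)
true_conf = 0.8      # asymptotic confidence the observer reaches on this task
pes = []

for trial in range(200):
    # noisy trial-wise confidence report, clipped to [0, 1]
    conf = float(np.clip(true_conf + rng.normal(0.0, 0.1), 0.0, 1.0))
    pe = conf - v    # confidence prediction error (analogue of a reward PE)
    v += alpha * pe  # delta-rule update of the anticipated confidence
    pes.append(pe)

# early trials carry large positive PEs; once v converges, PEs hover near zero
```

The abstract's central claim maps onto the two quantities here: `v` corresponds to confidence anticipation and `pe` to the confidence prediction error that mesolimbic activity was found to encode.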

    A view-based decision mechanism for rewards in the primate amygdala

    Primates make decisions visually by shifting their view from one object to the next, comparing values between objects, and choosing the best reward, even before acting. Here, we show that when monkeys make value-guided choices, amygdala neurons encode their decisions in an abstract, purely internal representation defined by the monkey’s current view but not by specific object or reward properties. Across amygdala subdivisions, recorded activity patterns evolved gradually from an object-specific value code to a transient, object-independent code in which currently viewed and last-viewed objects competed to reflect the emerging view-based choice. Using neural-network modeling, we identified a sequence of computations by which amygdala neurons implemented view-based decision making and eventually recovered the chosen object’s identity when the monkeys acted on their choice. These findings reveal a neural mechanism in the amygdala that derives object choices from abstract, view-based computations, suggesting an efficient solution for decision problems with many objects.