
    Semiparametric Estimation of Task-Based Dynamic Functional Connectivity on the Population Level

    Dynamic functional connectivity (dFC) estimates time-dependent associations between pairs of brain region time series as typically acquired during functional MRI. dFC changes are most commonly quantified by pairwise correlation coefficients between the time series within a sliding window. Here, we applied a recently developed bootstrap-based technique (Kudela et al., 2017) to robustly estimate subject-level dFC and its confidence intervals in a task-based fMRI study (24 subjects who tasted their most frequently consumed beer and Gatorade as an appetitive control). We then combined information across subjects and scans utilizing semiparametric mixed models to obtain a group-level dFC estimate for each pair of brain regions, each flavor, and the difference between flavors. The estimated group-level dFC accounts for the complex correlation structure of the fMRI data, multiple repeated observations per subject, the experimental design, and subject-specific variability. The approach also provides condition-specific dFC and confidence intervals for the whole brain at the group level. As a summary dFC metric, we used the proportion of time during which the estimated associations were significantly positive or negative. For both flavors, our fully data-driven approach yielded regional associations that reflected known, biologically meaningful brain organization as shown in prior work, and that closely resembled resting-state networks (RSNs). Specifically, beer flavor-potentiated associations were detected between several reward-related regions, including the right ventral striatum (VST), lateral orbitofrontal cortex, and ventral anterior insular cortex (vAIC). The enhancement of the right VST-vAIC association by a taste of beer independently validated the main activation-based finding (Oberlin et al., 2016). Most notably, our novel dFC methodology uncovered numerous associations undetected by traditional static FC analysis. The data-driven dFC methodology presented here can be used for a wide range of task-based fMRI designs to estimate dFC at multiple levels (group, individual, and task-specific), utilizing a combination of well-established statistical methods.
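    The core estimator described above can be illustrated in a few lines. The following is a minimal sketch, not the authors' implementation: a sliding-window Pearson correlation between two region time series, with a moving-block bootstrap (a simplified stand-in for the Kudela et al., 2017 procedure) giving per-window confidence intervals, and the summary metric computed as the proportion of windows whose interval excludes zero. All names and parameter values are illustrative.

```python
import numpy as np

def sliding_window_dfc(x, y, width=30, n_boot=500, block=5, alpha=0.05, seed=0):
    """Sliding-window correlation between two region time series, with
    moving-block bootstrap confidence intervals per window (a simplified
    stand-in for the bootstrap procedure of Kudela et al., 2017)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est, lo, hi = [], [], []
    for start in range(n - width + 1):
        xw, yw = x[start:start + width], y[start:start + width]
        est.append(np.corrcoef(xw, yw)[0, 1])
        # Resample blocks of consecutive time points so that the
        # bootstrap respects temporal autocorrelation within the window.
        n_blocks = width // block
        boot = []
        for _ in range(n_boot):
            starts = rng.integers(0, width - block + 1, n_blocks)
            idx = np.concatenate([np.arange(s, s + block) for s in starts])
            boot.append(np.corrcoef(xw[idx], yw[idx])[0, 1])
        lo.append(np.quantile(boot, alpha / 2))
        hi.append(np.quantile(boot, 1 - alpha / 2))
    return np.array(est), np.array(lo), np.array(hi)

# Summary metric from the abstract: proportion of windows whose CI
# excludes zero (significantly positive or negative association).
rng = np.random.default_rng(1)
x = rng.standard_normal(160)
y = 0.5 * x + rng.standard_normal(160)
r, lo, hi = sliding_window_dfc(x, y)
print(f"proportion significant: {np.mean((lo > 0) | (hi < 0)):.2f}")
```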

    Cluster Failure Revisited: Impact of First Level Design and Data Quality on Cluster False Positive Rates

    Methodological research rarely generates broad interest, yet our work on the validity of cluster inference methods for functional magnetic resonance imaging (fMRI) created intense discussion of both the minutiae of our approach and its implications for the discipline. In the present work, we take on various critiques and further explore the limitations of our original study. We address concerns about the particular event-related designs we used, considering multiple event types and randomisation of events between subjects. We examine the lack of validity found with one-sample permutation (sign-flipping) tests, investigating a number of approaches to improve the false positive control of this widely used procedure. We found that the combination of a two-sided test and cleaning the data using ICA FIX resulted in nominal false positive rates for all datasets, meaning that data cleaning is important not only for resting-state fMRI but also for task fMRI. Finally, we discuss the implications of our work for the fMRI literature as a whole, estimating that at least 10% of fMRI studies have used the most problematic cluster inference method (P = 0.01 cluster defining threshold), and how individual studies can be interpreted in light of our findings. These additional results underscore our original conclusions on the importance of data sharing and thorough evaluation of statistical methods on realistic null data.
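    For readers unfamiliar with the sign-flipping procedure under discussion, here is a minimal sketch for a single summary statistic; real cluster inference operates on whole statistic images and cluster extents, so this toy version only shows the mechanics. The two-sided comparison mirrors the fix reported above, and the data values are made up.

```python
import numpy as np

def sign_flip_test(contrasts, n_perm=10000, seed=0):
    """One-sample permutation test via sign flipping (two-sided).
    Under the null of a distribution symmetric around zero, each
    subject's contrast can have its sign flipped at random."""
    rng = np.random.default_rng(seed)
    observed = contrasts.mean()
    flips = rng.choice([-1.0, 1.0], size=(n_perm, len(contrasts)))
    null_dist = (flips * contrasts).mean(axis=1)
    # Two-sided p-value: compare |observed| against the null distribution.
    return (np.abs(null_dist) >= abs(observed)).mean()

# Hypothetical per-subject contrast estimates (one value per subject).
data = np.array([0.8, 1.2, -0.1, 0.9, 0.4, 1.5, 0.7, -0.3, 1.1, 0.6])
print(f"p = {sign_flip_test(data):.4f}")
```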

    Disentangling causal webs in the brain using functional Magnetic Resonance Imaging: A review of current approaches

    In the past two decades, functional Magnetic Resonance Imaging has been used to relate neuronal network activity to cognitive processing and behaviour. Recently, this approach has been augmented by algorithms that allow us to infer causal links between the component populations of neuronal networks. Multiple inference procedures have been proposed to approach this research question, but so far each method has limitations when it comes to establishing whole-brain connectivity patterns. In this work, we discuss eight ways to infer causality in fMRI research: Bayesian Nets, Dynamical Causal Modelling, Granger Causality, Likelihood Ratios, LiNGAM, Patel's Tau, Structural Equation Modelling, and Transfer Entropy. We finish by formulating recommendations for future directions in this area.
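    Of the eight methods listed, Granger causality is the simplest to demonstrate. Below is a minimal, self-contained sketch not tied to any toolbox in the review: x Granger-causes y if lagged values of x improve a linear autoregressive prediction of y, assessed with an F statistic. The variable names and simulated coupling are illustrative.

```python
import numpy as np

def granger_f(x, y, lags=2):
    """F statistic for whether x Granger-causes y: does adding lagged x
    to an autoregression of y reduce the residual sum of squares?"""
    n = len(y)
    Y = y[lags:]
    # Lag matrices: column k holds the series delayed by k steps.
    ylags = np.column_stack([y[lags - k:n - k] for k in range(1, lags + 1)])
    xlags = np.column_stack([x[lags - k:n - k] for k in range(1, lags + 1)])
    ones = np.ones((n - lags, 1))
    Xr = np.hstack([ones, ylags])          # restricted: lags of y only
    Xf = np.hstack([ones, ylags, xlags])   # full: lags of y and x
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    df1, df2 = lags, (n - lags) - Xf.shape[1]
    return ((rss(Xr) - rss(Xf)) / df1) / (rss(Xf) / df2)

# Simulated pair where x drives y with a one-step delay.
rng = np.random.default_rng(0)
x = rng.standard_normal(300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.5 * rng.standard_normal()
print(f"F(x -> y) = {granger_f(x, y):.1f}")   # large: lagged x predicts y
print(f"F(y -> x) = {granger_f(y, x):.1f}")   # small: lagged y does not predict x
```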

    Inferential Modeling and Independent Component Analysis for Redundant Sensor Validation

    The calibration of redundant safety-critical sensors in nuclear power plants is a manual task that consumes valuable time and resources. Automated, data-driven techniques to monitor the calibration of redundant sensors have been developed over the last two decades but have not been fully implemented. Parity space methods, such as the Instrumentation and Calibration Monitoring Program (ICMP) method developed by the Electric Power Research Institute, and other empirically based inferential modeling techniques have been developed but have not become viable options. Existing solutions to the redundant sensor validation problem have several major flaws that restrict their applications. Parity space methods, such as ICMP, are not robust under low-redundancy conditions, and their operation becomes invalid when there are only two redundant sensors. Empirically based inferential modeling is only valid when the intrinsic correlations between predictor and response variables remain static during the model training and testing phases; it also commonly produces high-variance results and is not the optimal solution to the problem. This dissertation develops and implements independent component analysis (ICA) for redundant sensor validation. The ICA algorithm produces parameter estimates with sufficiently low residual variance compared to simple averaging, ICMP, and principal component regression (PCR) techniques. For stationary signals, it can detect and isolate sensor drifts for as few as two redundant sensors. It is fast and can be embedded into a real-time system, as demonstrated on a water level control system. Additionally, ICA has been merged with inferential modeling techniques such as PCR to reduce prediction error and spillover effects from data anomalies. ICA is easy to use, with only the window size needing specification. The effectiveness and robustness of the ICA technique are shown using actual nuclear power plant data. A bootstrap technique is used to estimate the prediction uncertainties and validate the method's usefulness. Bootstrap uncertainty estimates incorporate uncertainties from both the data and the model; thus, the uncertainty estimation is robust and varies from data set to data set. The ICA-based system is proven to be accurate and robust; however, classical ICA algorithms commonly fail when distributions are multi-modal, which most likely occurs during highly non-stationary transients. This research therefore also developed a unity check technique that indicates such failures and applies other, more robust techniques during transients. For linearly trending signals, a rotation transform is found useful, while standard averaging techniques are used during general transients.
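    To make the drift-detection idea concrete, here is a minimal sketch using scikit-learn's FastICA on synthetic data from three redundant sensors, one of which drifts. The dissertation's full system (ICMP comparisons, PCR merging, the unity check) is not reproduced, and all signals and parameters are made up.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1000)
process = np.sin(2 * np.pi * 0.4 * t)        # shared true process value
drift = 0.2 * np.clip(t - 5.0, 0.0, None)    # slow drift starting at t = 5

# Three redundant sensors measuring the same process; sensor 1 drifts.
X = np.column_stack([
    process + 0.05 * rng.standard_normal(t.size),
    process + drift + 0.05 * rng.standard_normal(t.size),
    process + 0.05 * rng.standard_normal(t.size),
])

# ICA separates the shared process signal from the drift component.
ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X)

# Heuristic: the drift component is the one most correlated with time;
# each sensor's loading on it (from the mixing matrix) flags the drifter.
k = int(np.argmax([abs(np.corrcoef(S[:, j], t)[0, 1]) for j in range(2)]))
loadings = np.abs(ica.mixing_[:, k])
print("drift loadings per sensor:", loadings.round(3))
print("suspected drifting sensor:", int(np.argmax(loadings)))
```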

    Imputation Estimators Partially Correct for Model Misspecification

    Inference problems with incomplete observations often aim at estimating population properties of unobserved quantities. One simple way to accomplish this estimation is to impute the unobserved quantities of interest at the individual level and then take an empirical average of the imputed values. We show that this simple imputation estimator can provide partial protection against model misspecification. We illustrate imputation estimators' robustness to model misspecification with three examples: mixture model-based clustering, estimation of genotype frequencies in population genetics, and estimation of Markovian evolutionary distances. In the final example, using a representative model misspecification, we demonstrate that in non-degenerate cases the imputation estimator asymptotically dominates the plug-in estimate. We conclude by outlining a Bayesian implementation of imputation-based estimation.
    Comment: major rewrite; beta-binomial example removed; model-based clustering added to the mixture model example; Bayesian approach now illustrated with the genetics example.
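    As a toy illustration of the impute-then-average idea (constructed here, not taken from the paper): in a two-component Gaussian mixture, the unobserved quantity is each observation's component membership; imputing its posterior probability and averaging pulls a misspecified mixing weight back toward the truth. All numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# Population: 30% from N(0,1), 70% from N(3,1); the estimand is the
# proportion belonging to the first component (truth: 0.3).
z = rng.random(5000) < 0.3
x = np.where(z, rng.normal(0.0, 1.0, 5000), rng.normal(3.0, 1.0, 5000))

# Misspecified model: correct component densities but a wrong mixing
# weight of 0.5. The plug-in estimate is that weight itself.
w_wrong = 0.5

def posterior_comp1(x, w):
    """P(component 1 | x) under the (misspecified) mixture model."""
    p1 = w * norm.pdf(x, 0.0, 1.0)
    p2 = (1.0 - w) * norm.pdf(x, 3.0, 1.0)
    return p1 / (p1 + p2)

# Imputation estimator: impute each observation's membership probability,
# then average. The data pull the wrong weight toward the truth.
imputed = posterior_comp1(x, w_wrong).mean()
print(f"plug-in (misspecified): {w_wrong:.3f}")
print(f"imputation estimator:   {imputed:.3f}   truth: 0.300")
```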

    Contributions to Statistical Reproducibility and Small-Sample Bootstrap

    This thesis consists of three contributions: an investigation of bootstrap methods for small samples, an overview of reproducibility, and advances on the topic of test reproducibility. These contributions are inspired by statistical practice in preclinical research. Small samples are a common feature of preclinical research. In this thesis, an extensive simulation study is carried out to explore whether bootstrap methods can perform well with such samples. The study compares four bootstrap methods: the nonparametric predictive inference bootstrap, Banks bootstrap, Hutson bootstrap, and Efron bootstrap. The thesis concludes that bootstrap methods can provide useful estimation and prediction inference for small samples, and some initial recommendations for practitioners are provided. There are no standardised definitions of reproducibility. This work contributes to the existing literature by classifying reproducibility definitions from the literature into five types and by providing an overview of reproducibility with a focus on issues related to preclinical research and on statistical reproducibility and its quantification. The research explores the variability of statistical methods from the statistical reproducibility perspective, treating reproducibility as a predictive inference problem. The nonparametric predictive inference (NPI) method, which focuses on the prediction of future observations based on existing data, is applied. In this work, statistical reproducibility is defined as the probability that, if the test were repeated under identical circumstances and with the same sample size, the same test outcome would be reached. The thesis presents contributions to NPI reproducibility for the t-test and the Wilcoxon-Mann-Whitney test. One prevailing pattern is that a test statistic falling close to the test threshold leads to low reproducibility. In a preclinical test scenario, the reproducibility of a final decision involving multiple pairwise comparisons is studied.
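    The reproducibility definition above suggests a direct simulation: repeat the experiment and record how often the same test decision recurs. The sketch below approximates the repetition by bootstrap resampling rather than the thesis's NPI machinery, with made-up data whose statistic sits near the threshold, the pattern the abstract says yields low reproducibility.

```python
import numpy as np
from scipy.stats import ttest_1samp

def test_reproducibility(sample, n_rep=5000, alpha=0.05, seed=0):
    """Estimate the probability that a repeated experiment of the same
    size reaches the same t-test decision, via bootstrap resampling
    (a stand-in for the NPI approach used in the thesis)."""
    rng = np.random.default_rng(seed)
    original = ttest_1samp(sample, 0.0).pvalue < alpha
    same = 0
    for _ in range(n_rep):
        resample = rng.choice(sample, size=len(sample), replace=True)
        same += (ttest_1samp(resample, 0.0).pvalue < alpha) == original
    return same / n_rep

# A small preclinical-style sample whose t statistic falls near the
# significance threshold tends to give low reproducibility.
borderline = np.array([0.1, 0.9, -0.4, 1.3, 0.6, -0.2, 1.0, 0.5])
print(f"estimated reproducibility: {test_reproducibility(borderline):.2f}")
```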

    No wisdom in the crowd: genome annotation at the time of big data - current status and future prospects

    Science and engineering rely on the accumulation and dissemination of knowledge to make discoveries and create new designs. Discovery-driven genome research rests on knowledge passed on via gene annotations. In response to the deluge of sequencing big data, standard annotation practice employs automated procedures that rely on majority rules. We argue that this hinders progress through the generation and propagation of errors, leading investigators into blind alleys. More subtly, this inductive process discourages the discovery of novelty, which remains essential in biological research and reflects the nature of biology itself. Annotation systems, rather than being repositories of facts, should be tools that support multiple modes of inference. By combining deduction, induction, and abduction, investigators can generate hypotheses when accurate knowledge is extracted from model databases. A key stance is to depart from the ‘the sequence tells the structure tells the function’ fallacy, placing function first. We illustrate our approach with examples of critical or unexpected pathways, using MicroScope to demonstrate how tools can be implemented following the principles we advocate. We end with a challenge to the reader.