
    Experimental Design Modulates Variance in BOLD Activation: The Variance Design General Linear Model

    Typical fMRI studies have focused on either the mean trend in the blood-oxygen-level-dependent (BOLD) time course or functional connectivity (FC). However, other statistics of the neuroimaging data may contain important information. Despite studies showing links between the variance in the BOLD time series (BV) and age and cognitive performance, a formal framework for testing these effects has not yet been developed. We introduce the Variance Design General Linear Model (VDGLM), a novel framework that facilitates the detection of variance effects. We designed the framework for general use in any fMRI study by modeling both mean and variance in BOLD activation as a function of experimental design. The flexibility of this approach allows the VDGLM to i) simultaneously make inferences about a mean or variance effect while controlling for the other and ii) test for variance effects that could be associated with multiple conditions and/or noise regressors. We demonstrate the use of the VDGLM in a working memory application and show that engagement in a working memory task is associated with whole-brain decreases in BOLD variance. Comment: 18 pages, 7 figures
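    The abstract above describes modeling both the mean and the variance of the BOLD signal as functions of the experimental design. Below is a minimal sketch of that idea for a single voxel, assuming a Gaussian likelihood with a log-linear variance link fit by maximum likelihood; the design matrices, link function, and function names are illustrative assumptions, not the authors' exact parameterization.

```python
# Minimal sketch of a mean-and-variance GLM in the spirit of the VDGLM.
# X (mean design), Z (variance design), and the log-linear variance link
# are illustrative assumptions, not the paper's exact model.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, y, X, Z):
    """Gaussian negative log-likelihood with design-dependent mean and variance."""
    p = X.shape[1]
    beta, gamma = params[:p], params[p:]
    mu = X @ beta                # mean modulated by experimental design
    log_var = Z @ gamma          # log-variance modulated by experimental design
    return 0.5 * np.sum(log_var + (y - mu) ** 2 / np.exp(log_var))

def fit_vdglm_like(y, X, Z):
    """Jointly estimate mean (beta) and variance (gamma) effects for one voxel."""
    x0 = np.zeros(X.shape[1] + Z.shape[1])
    res = minimize(neg_log_likelihood, x0, args=(y, X, Z), method="L-BFGS-B")
    return res.x[:X.shape[1]], res.x[X.shape[1]:]

# Toy example: block-design task regressor plus intercept for both mean and variance.
rng = np.random.default_rng(0)
T = 200
task = (np.arange(T) % 40 < 20).astype(float)
X = np.column_stack([np.ones(T), task])
Z = X.copy()
y = X @ np.array([1.0, 0.5]) + rng.normal(0, np.exp(0.5 * (Z @ np.array([0.0, -0.6]))))
beta_hat, gamma_hat = fit_vdglm_like(y, X, Z)
print(beta_hat, gamma_hat)  # a negative task gamma indicates reduced variance during task blocks
```

    This formulation lets a mean effect be tested while the variance regressors absorb condition-dependent variance, and vice versa, which is the trade-off the abstract highlights.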

    Bayesian Online Learning for Consensus Prediction

    Given a pre-trained classifier and multiple human experts, we investigate the task of online classification where model predictions are provided for free but querying humans incurs a cost. In this practical but under-explored setting, oracle ground truth is not available. Instead, the prediction target is defined as the consensus vote of all experts. Given that querying full consensus can be costly, we propose a general framework for online Bayesian consensus estimation, leveraging properties of the multivariate hypergeometric distribution. Based on this framework, we propose a family of methods that dynamically estimate expert consensus from partial feedback by producing a posterior over expert and model beliefs. Analyzing this posterior induces an interpretable trade-off between querying cost and classification performance. We demonstrate the efficacy of our framework against a variety of baselines on CIFAR-10H and ImageNet-16H, two large-scale crowdsourced datasets.
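    The abstract above describes estimating the experts' consensus from partial feedback, using the multivariate hypergeometric distribution to relate the votes queried so far to the full (unobserved) vote counts. Below is a minimal sketch of that idea, assuming a multinomial prior over full vote counts built from the classifier's probabilities and brute-force enumeration over count vectors; the helper name and this prior choice are illustrative assumptions, not the paper's exact method.

```python
# Hedged sketch of Bayesian consensus estimation from partial expert votes.
# The multinomial prior from model probabilities and the enumeration over
# full vote-count vectors are illustrative assumptions.
import numpy as np
from itertools import product
from scipy.stats import multinomial, multivariate_hypergeom

def consensus_posterior(model_probs, observed_votes, n_experts):
    """Posterior probability that each class is the consensus (plurality) vote.

    model_probs    : prior class probabilities from the pre-trained classifier
    observed_votes : per-class vote counts from the experts queried so far
    n_experts      : total number of experts in the pool
    """
    C = len(model_probs)
    n_queried = int(sum(observed_votes))
    post = np.zeros(C)
    # Enumerate every possible full vote-count vector over all experts.
    for counts in product(range(n_experts + 1), repeat=C):
        if sum(counts) != n_experts:
            continue
        prior = multinomial.pmf(counts, n_experts, model_probs)
        # Queried votes are a draw without replacement from the full pool,
        # so their likelihood given the full counts is multivariate hypergeometric.
        like = multivariate_hypergeom.pmf(observed_votes, counts, n_queried)
        post[int(np.argmax(counts))] += prior * like  # ties broken by argmax for simplicity
    return post / post.sum()

# Toy example: 3 classes, 5 experts, 2 queried so far (both voted class 0).
print(consensus_posterior([0.6, 0.3, 0.1], [2, 0, 0], n_experts=5))
```

    Inspecting this posterior after each query is what makes the cost/performance trade-off explicit: querying another expert is worthwhile only while the posterior over the consensus class remains uncertain.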