
    Region-Referenced Spectral Power Dynamics of EEG Signals: A Hierarchical Modeling Approach

    Functional brain imaging through electroencephalography (EEG) relies on the analysis and interpretation of high-dimensional, spatially organized time series. We propose to represent time-localized frequency-domain characterizations of EEG data as region-referenced functional data. This representation is coupled with a hierarchical modeling approach to multivariate functional observations. Within this familiar setting, we discuss how several prior models relate to structural assumptions about multivariate covariance operators. Finally, we propose an overarching modeling framework, based on infinite factorial decompositions, to balance flexibility and efficiency in estimation. The motivating application stems from a study of implicit auditory learning in which typically developing (TD) children and children with autism spectrum disorder (ASD) were exposed to a continuous speech stream. Using the proposed model, we examine differential band power dynamics as brain function is interrogated over the course of a computer-controlled experiment. Our work offers a novel look at previous findings in psychiatry and provides further insight into the understanding of ASD. Our approach to inference is fully Bayesian and implemented in a highly optimized Rcpp package.
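    As a hedged illustration of what a "time-localized frequency domain characterization" can look like (not the authors' Rcpp implementation), the sketch below computes windowed band power with NumPy; the window length, hop size, and band edges are arbitrary choices for the example:

```python
import numpy as np

def band_power(signal, fs, band, win_len, hop):
    """Average spectral power in a frequency band, per sliding window."""
    lo, hi = band
    powers = []
    for start in range(0, len(signal) - win_len + 1, hop):
        seg = signal[start:start + win_len] * np.hanning(win_len)
        freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
        psd = np.abs(np.fft.rfft(seg)) ** 2 / win_len
        mask = (freqs >= lo) & (freqs < hi)
        powers.append(psd[mask].mean())
    return np.array(powers)

# A pure 10 Hz sinusoid: power should concentrate in the
# alpha band (8-12 Hz), not the beta band (20-30 Hz).
fs = 256
t = np.arange(0, 4, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t)
alpha = band_power(x, fs, (8, 12), win_len=256, hop=128)
beta = band_power(x, fs, (20, 30), win_len=256, hop=128)
```

    A region-referenced functional datum in the paper's sense would stack such band-power curves across electrodes grouped into scalp regions.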

    A hierarchical model to estimate the abundance and biomass of salmonids by using removal sampling and biometric data from multiple locations

    We present a Bayesian hierarchical model to estimate the abundance and biomass of brown trout (Salmo trutta fario) using removal sampling and biometric data collected at several stream sections. The model accounts for (i) variability of the abundance with fish length (as a distribution mixture), (ii) spatial variability of the abundance, (iii) variability of the catchability with fish length (as a logit regression model), (iv) spatial variability of the catchability, and (v) residual variability of the catchability with fish. The measured variables of the model are the areas of the stream sections as well as the length and weight of the caught fish. We first test the model on a simulated dataset before applying it to a 3-location, 2-removal sampling dataset collected in the field. Fifteen model alternatives are compared on the field dataset with an index of complexity and fit. The selected model accounts for variability of the abundance with fish length and stream section and variability of the catchability with fish length. Under the selected model, 95% credible interval estimates of the abundances at the three stream sections are (0.46, 0.59), (0.90, 1.07), and (0.56, 0.69) fish/m²; the respective biomass estimates are (9.68, 13.58), (17.22, 22.71), and (12.69, 17.31) g/m².
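    As a point of reference for how removal sampling works in the simplest case, the sketch below implements the classical closed-form two-pass removal estimator under constant catchability — a much simpler, non-hierarchical baseline than the paper's Bayesian model:

```python
def two_pass_removal(c1, c2):
    """Closed-form abundance and catchability estimates from two removal
    passes over a closed population with constant catchability p:
        E[c1] = N * p,   E[c2] = N * (1 - p) * p
    =>  p_hat = (c1 - c2) / c1,   N_hat = c1**2 / (c1 - c2).
    Requires declining catches (c1 > c2)."""
    if c1 <= c2:
        raise ValueError("two-pass estimator needs c1 > c2")
    p_hat = (c1 - c2) / c1
    n_hat = c1 ** 2 / (c1 - c2)
    return n_hat, p_hat

# 100 fish caught on pass 1, 40 on pass 2:
n_hat, p_hat = two_pass_removal(100, 40)  # N_hat = 10000/60, p_hat = 0.6
```

    The paper's hierarchical model generalizes this by letting catchability vary with fish length and stream section, and by pooling information across sections.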

    Overcoming the data crisis in biodiversity conservation

    How can we track population trends when monitoring data are sparse? Population declines can go undetected despite ongoing threats; for example, only one of every 200 harvested species is monitored. This gap leads to uncertainty about the seriousness of declines and hampers effective conservation. Collecting more data is important, but we can also make better use of existing information. Prior knowledge of physiology, life history, and community ecology can be used to inform population models. Additionally, in multispecies models, information can be shared among taxa based on phylogenetic, spatial, or temporal proximity. By exploiting generalities across species that share evolutionary or ecological characteristics within Bayesian hierarchical models, we can fill crucial gaps in the assessment of species’ status with unparalleled quantitative rigor.
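    A minimal sketch of the "sharing information among taxa" idea: under a normal-normal hierarchical model, species-level trend estimates shrink toward the group mean in proportion to their sampling noise, so data-poor species borrow strength from well-monitored relatives. All numbers below are illustrative, not from the paper:

```python
def shrink(estimates, ses, mu, tau):
    """Posterior-mean trend per species under a normal-normal hierarchy:
    species trend ~ N(mu, tau^2), observed estimate ~ N(trend, se^2).
    Noisier estimates (large se) shrink harder toward the group mean mu."""
    out = []
    for y, se in zip(estimates, ses):
        w = tau ** 2 / (tau ** 2 + se ** 2)  # weight on the species' own data
        out.append(w * y + (1 - w) * mu)
    return out

# Two species both with a raw decline estimate of -0.3: the well-monitored
# one (se = 0.05) keeps it; the data-poor one (se = 0.5) is pulled toward
# the group-level trend mu = -0.1.
pooled = shrink([-0.3, -0.3], [0.05, 0.5], mu=-0.1, tau=0.1)
```

    In a full analysis mu and tau would themselves be estimated from all species jointly rather than fixed in advance.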

    Sequential Bayesian updating for Big Data

    The velocity, volume, and variety of big data present both challenges and opportunities for cognitive science. We introduce sequential Bayesian updating as a tool to mine these three core properties. In the Bayesian approach, we summarize the current state of knowledge regarding parameters in terms of their posterior distributions, and use these as prior distributions when new data become available. Crucially, we construct posterior distributions in such a way that we avoid having to recompute the likelihood of old data as new data become available, allowing the propagation of information without great computational demand. As a result, these Bayesian methods allow continuous inference on voluminous information streams in a timely manner. We illustrate the advantages of sequential Bayesian updating with data from the MindCrowd project, in which crowd-sourced data are used to study Alzheimer’s Dementia. We fit an extended LATER (Linear Approach to Threshold with Ergodic Rate) model to reaction time data from the project in order to separate two distinct aspects of cognitive functioning: speed of information accumulation and caution.
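    The core trick — reusing yesterday's posterior as today's prior without revisiting old data — can be shown with a conjugate normal-mean model (a deliberately simpler setting than the LATER model used in the paper):

```python
def update_normal(mu0, prec0, data, obs_prec):
    """Conjugate update for a normal mean with known observation precision.
    Returns the posterior mean and precision; old data never re-enter."""
    n = len(data)
    prec = prec0 + n * obs_prec
    mu = (prec0 * mu0 + obs_prec * sum(data)) / prec
    return mu, prec

batch1 = [1.2, 0.8, 1.1]
batch2 = [0.9, 1.3]

# Sequential: the posterior after batch1 becomes the prior for batch2.
m1, p1 = update_normal(0.0, 1.0, batch1, obs_prec=4.0)
m_seq, p_seq = update_normal(m1, p1, batch2, obs_prec=4.0)

# One-shot on all the data yields the identical posterior.
m_all, p_all = update_normal(0.0, 1.0, batch1 + batch2, obs_prec=4.0)
```

    Outside conjugate families the same idea is applied approximately, e.g. by propagating a parametric approximation to the posterior between batches.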

    Bayesian hierarchical modelling of size spectra

    A fundamental pattern in ecology is that smaller organisms are more abundant than larger organisms. This pattern is known as the individual size distribution (ISD), which is the frequency distribution of all individual body sizes in an ecosystem. The ISD is described by a power law, and a major goal of size spectra analyses is to estimate the exponent of the power law, λ. However, while numerous methods have been developed to do this, they have focused almost exclusively on estimating λ from single samples. Here, we develop an extension of the truncated Pareto distribution within the probabilistic modelling language Stan. We use it to estimate multiple λs simultaneously in a hierarchical modelling approach. The most important result is the ability to examine hypotheses related to size spectra, including the assessment of fixed and random effects, within a single Bayesian generalized mixed model. While the example here uses size spectra, the technique can also be generalized to any data that follow a power law distribution.
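    A minimal, non-Stan sketch of the single-sample building block: simulate body sizes from a bounded power law by inverse-CDF sampling, then recover λ by grid-search maximum likelihood (the paper's contribution is estimating many λs jointly in a hierarchical Stan model):

```python
import math
import random

def rtrunc_pareto(n, lam, xmin, xmax, rng):
    """Inverse-CDF samples from a bounded power law f(x) ∝ x**lam (lam != -1)."""
    a = lam + 1.0
    return [((1 - rng.random()) * xmin ** a + rng.random() * 0 + 0) ** 0
            for _ in range(0)] or [
        ((1 - u) * xmin ** a + u * xmax ** a) ** (1.0 / a)
        for u in (rng.random() for _ in range(n))
    ]

def loglik(lam, xs, xmin, xmax):
    """Log-likelihood of the bounded power law at exponent lam (lam != -1)."""
    a = lam + 1.0
    log_c = math.log(abs(a)) - math.log(abs(xmax ** a - xmin ** a))
    return len(xs) * log_c + lam * sum(math.log(x) for x in xs)

rng = random.Random(1)
xs = rtrunc_pareto(5000, -2.0, xmin=1.0, xmax=1000.0, rng=rng)
grid = [-3.0 + 0.01 * k for k in range(150)]  # -3.00 .. -1.51, skips lam = -1
lam_hat = max(grid, key=lambda l: loglik(l, xs, 1.0, 1000.0))  # near -2
```

    The hierarchical version replaces the single λ with sample-specific λs drawn from a shared distribution, which is what allows fixed and random effects on λ.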

    Beyond subjective and objective in statistics

    We argue that the words "objectivity" and "subjectivity" in statistics discourse are used in a mostly unhelpful way, and we propose to replace each of them with broader collections of attributes, with objectivity replaced by transparency, consensus, impartiality, and correspondence to observable reality, and subjectivity replaced by awareness of multiple perspectives and context dependence. The advantage of these reformulations is that the replacement terms do not oppose each other. Instead of debating over whether a given statistical method is subjective or objective (or normatively debating the relative merits of subjectivity and objectivity in statistical practice), we can recognize desirable attributes such as transparency and acknowledgment of multiple perspectives as complementary goals. We demonstrate the implications of our proposal with recent applied examples from pharmacology, election polling, and socioeconomic stratification.

    Dirichlet process mixture models for non-stationary data streams

    In recent years, a handful of studies have addressed inference algorithms for non-stationary data streams. Given their flexibility, Bayesian non-parametric models are good candidates for these scenarios. However, reliable streaming inference under the concept drift phenomenon is still an open problem for these models. In this work, we propose a variational inference algorithm for Dirichlet process mixture models. Our proposal deals with concept drift by applying exponential forgetting to the prior global parameters, allowing the learned model to adapt to drifts automatically. We perform experiments on both synthetic and real data, showing that the proposed model outperforms state-of-the-art variational methods in density estimation, clustering, and parameter tracking.
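    The forgetting mechanism can be illustrated in isolation — here on a streaming categorical model with a Dirichlet prior, rather than the paper's full variational DP mixture: pseudo-counts decay by a factor ρ before each update, so evidence from an old regime fades geometrically after a drift:

```python
def stream_estimate(stream, k, rho=0.95, alpha=1.0):
    """Posterior-mean category probabilities under exponential forgetting:
    sufficient statistics (pseudo-counts) decay by rho before each new
    observation, bounding the effective memory at roughly 1 / (1 - rho)."""
    counts = [0.0] * k
    history = []
    for obs in stream:
        counts = [rho * c for c in counts]  # forget old evidence
        counts[obs] += 1.0
        total = sum(counts) + k * alpha
        history.append([(c + alpha) / total for c in counts])
    return history

# Regime A emits category 0, then the stream drifts to category 1;
# the estimate should track the drift instead of averaging over it.
stream = [0] * 200 + [1] * 200
probs = stream_estimate(stream, k=2)
```

    Without forgetting (ρ = 1) the same estimator would still assign category 0 nearly half the mass at the end of the stream; with ρ < 1 the post-drift estimate concentrates on category 1.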

    Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future

    Clark offers a powerful description of the brain as a prediction machine, one that makes progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).