
    Modeling Binary Time Series Using Gaussian Processes with Application to Predicting Sleep States

    Motivated by the problem of predicting sleep states, we develop a mixed effects model for binary time series with a stochastic component represented by a Gaussian process. The fixed component captures the effects of covariates on the binary-valued response. The Gaussian process captures the residual variations in the binary response that are not explained by covariates and past realizations. We develop a frequentist modeling framework that provides efficient inference and more accurate predictions. Results demonstrate improved prediction rates over existing approaches such as logistic regression, generalized additive mixed models, models for ordinal data, gradient boosting, decision trees, and random forests. Using our proposed model, we show that previous sleep states and heart rates are significant predictors of future sleep states. Simulation studies also show that our proposed method is promising and robust. To handle the computational complexity, we use the Laplace approximation, golden section search, and successive parabolic interpolation. With this paper, we also submit an R package (HIBITS) that implements the proposed procedure. (Comment: Journal of Classification, 2018)
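    As an illustration of one of the numerical tools named in this abstract, the Python sketch below implements golden section search for maximizing a unimodal one-dimensional function, such as a Laplace-approximated marginal log-likelihood viewed as a function of a single Gaussian-process hyperparameter. The function name and the `approx_loglik` objective are hypothetical and are not part of the HIBITS package.

        import math

        def golden_section_maximize(f, lo, hi, tol=1e-6):
            """Maximize a unimodal function f on [lo, hi] by golden section search."""
            invphi = (math.sqrt(5.0) - 1.0) / 2.0        # 1/phi, about 0.618
            a, b = lo, hi
            c, d = b - invphi * (b - a), a + invphi * (b - a)
            fc, fd = f(c), f(d)
            while abs(b - a) > tol:
                if fc > fd:                               # maximum lies in [a, d]
                    b, d, fd = d, c, fc
                    c = b - invphi * (b - a)
                    fc = f(c)
                else:                                     # maximum lies in [c, b]
                    a, c, fc = c, d, fd
                    d = a + invphi * (b - a)
                    fd = f(d)
            return (a + b) / 2.0

        # Hypothetical use: tune a single GP hyperparameter (e.g. a length-scale)
        # by maximizing a Laplace-approximated marginal log-likelihood.
        # best_ell = golden_section_maximize(approx_loglik, 0.01, 10.0)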

    SensibleSleep: A Bayesian Model for Learning Sleep Patterns from Smartphone Events

    We propose a Bayesian model for extracting sleep patterns from smartphone events. Our method is able to identify individuals' daily sleep periods and their evolution over time, and provides an estimate of the probability of sleep and wake transitions. The model is fitted to more than 400 participants from two different datasets, and we verify the results against ground truth from dedicated armband sleep trackers. We show that the model is able to produce reliable sleep estimates with an accuracy of 0.89, both at the individual and at the collective level. Moreover, the Bayesian model is able to quantify uncertainty and encode prior knowledge about sleep patterns. Compared with existing smartphone-based systems, our method requires only screen on/off events, and is therefore much less intrusive in terms of privacy and more battery-efficient.
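    To make the idea concrete, here is a deliberately simplified Python sketch, not the paper's model: it infers an hourly probability of being asleep from screen-on counts by Bayes' rule, assuming, purely for illustration, Poisson event rates for the asleep and awake states and a user-supplied prior.

        import numpy as np
        from scipy.stats import poisson

        RATE_ASLEEP, RATE_AWAKE = 0.2, 4.0      # assumed screen events per hour (illustrative)

        def posterior_sleep_prob(events_per_hour, prior_sleep):
            """P(asleep | screen-on count) for each hour, by Bayes' rule."""
            events = np.asarray(events_per_hour)
            prior = np.asarray(prior_sleep, dtype=float)
            like_sleep = poisson.pmf(events, RATE_ASLEEP)
            like_awake = poisson.pmf(events, RATE_AWAKE)
            evidence = like_sleep * prior + like_awake * (1.0 - prior)
            return like_sleep * prior / evidence

        # Example: hourly screen-on counts from 22:00 to 06:00 with a flat prior.
        # posterior_sleep_prob([3, 1, 0, 0, 0, 0, 1, 2, 5], [0.5] * 9)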

    Classification of newborn EEG maturity with Bayesian averaging over decision trees

    EEG experts can assess a newborn’s brain maturity by visual analysis of age-related patterns in sleep EEG. It is highly desirable that the results of this assessment be accurate and reliable. However, expert analysis is limited in its capability to provide an estimate of the uncertainty in assessments. Bayesian inference has been shown to provide accurate estimates of uncertainty by using Markov chain Monte Carlo (MCMC) integration over the posterior distribution. MCMC makes it possible to approximate the desired distribution by sampling the areas of interest in which the density of the distribution is high. In practice, the posterior distribution can be multimodal, so existing MCMC techniques cannot provide proportional sampling from the areas of interest. The lack of prior information makes MCMC integration more difficult when the model parameter space is large and cannot be explored in detail within a reasonable time. In particular, the lack of information about EEG feature importance can affect the results of Bayesian assessment of EEG maturity. In this paper we explore how posterior information about EEG feature importance can be used to reduce the negative influence of disproportional sampling on the results of Bayesian assessment. We found that MCMC integration tends to oversample areas in which the model parameter space includes one or more features whose importance, counted in terms of their posterior use, is low. Using this finding, we propose a way to correct the results of MCMC integration, and we describe the results of testing the proposed method on a set of sleep EEG recordings.
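    A rough Python sketch of the two quantities discussed above, posterior-averaged predictions and posterior feature use across MCMC tree samples, is given below. It is illustrative only and assumes scikit-learn-style decision tree objects stand in for the sampled trees; it is not the authors' procedure.

        import numpy as np

        def bayesian_average(sampled_trees, X):
            """Posterior mean and spread of the class-1 probability across tree samples."""
            preds = np.stack([t.predict_proba(X)[:, 1] for t in sampled_trees])
            return preds.mean(axis=0), preds.std(axis=0)

        def posterior_feature_use(sampled_trees, n_features):
            """Fraction of sampled trees in which each feature appears in at least one split."""
            counts = np.zeros(n_features)
            for t in sampled_trees:
                # scikit-learn convention: leaf nodes store feature index -2
                used = np.unique(t.tree_.feature[t.tree_.feature >= 0])
                counts[used] += 1
            return counts / len(sampled_trees)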

    Stochasticity from function -- why the Bayesian brain may need no noise

    An increasing body of evidence suggests that the trial-to-trial variability of spiking activity in the brain is not mere noise, but rather the reflection of a sampling-based encoding scheme for probabilistic computing. Since the precise statistical properties of neural activity are important in this context, many models assume an ad-hoc source of well-behaved, explicit noise, either on the input or on the output side of single neuron dynamics, most often assuming an independent Poisson process in either case. However, these assumptions are somewhat problematic: neighboring neurons tend to share receptive fields, rendering both their input and their output correlated; at the same time, neurons are known to behave largely deterministically, as a function of their membrane potential and conductance. We suggest that spiking neural networks may, in fact, have no need for noise to perform sampling-based Bayesian inference. We study analytically the effect of auto- and cross-correlations in functionally Bayesian spiking networks and demonstrate how their effect translates to synaptic interaction strengths, rendering them controllable through synaptic plasticity. This allows even small ensembles of interconnected deterministic spiking networks to simultaneously and co-dependently shape their output activity through learning, enabling them to perform complex Bayesian computation without any need for noise, which we demonstrate in silico, both in classical simulation and in neuromorphic emulation. These results close a gap between the abstract models and the biology of functionally Bayesian spiking networks, effectively reducing the architectural constraints imposed on physical neural substrates required to perform probabilistic computing, be they biological or artificial.
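    For readers unfamiliar with the sampling-based view this abstract builds on, the Python sketch below shows the baseline picture: binary units with a logistic update rule performing Gibbs sampling from a Boltzmann distribution. The weights and biases are made up for illustration, and the sketch uses explicit random numbers; the paper's point is precisely that deterministic network dynamics can take over this role.

        import numpy as np

        rng = np.random.default_rng(0)
        W = np.array([[ 0.0,  1.2, -0.8],
                      [ 1.2,  0.0,  0.5],
                      [-0.8,  0.5,  0.0]])      # symmetric coupling, zero diagonal (made up)
        b = np.array([-0.2, 0.1, 0.0])          # biases (made up)

        def gibbs_sample(n_steps, z=None):
            """Sample binary states z from p(z) proportional to exp(z.T W z / 2 + b.T z)."""
            z = np.zeros(len(b)) if z is None else z.astype(float).copy()
            samples = []
            for _ in range(n_steps):
                for k in range(len(b)):
                    u = W[k] @ z + b[k]                          # membrane-potential-like drive
                    z[k] = float(rng.random() < 1.0 / (1.0 + np.exp(-u)))
                samples.append(z.copy())
            return np.array(samples)

        # Long-run state frequencies approximate the target Boltzmann distribution:
        # states = gibbs_sample(10_000)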