19 research outputs found

    A Bayesian model for identifying hierarchically organised states in neural population activity

    No full text
    Neural population activity in cortical circuits is not solely driven by external inputs, but is also modulated by endogenous states. These cortical states vary on multiple time-scales and also across areas and layers of the neocortex. To understand information processing in cortical circuits, we need to understand the statistical structure of internal states and their interaction with sensory inputs. Here, we present a statistical model for extracting hierarchically organized neural population states from multi-channel recordings of neural spiking activity. We model population states using a hidden Markov decision tree with state-dependent tuning parameters and a generalized linear observation model. Using variational Bayesian inference, we estimate the posterior distribution over parameters from population recordings of neural spike trains. On simulated data, we show that we can identify the underlying sequence of population states over time and reconstruct the ground-truth parameters. Using extracellular population recordings from visual cortex, we find that a model with two levels of population states outperforms both a generalized linear model without state-dependence and models which include only a binary state. Finally, modelling state-dependence with our model also improves the accuracy with which sensory stimuli can be decoded from the population response.
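    The abstract gives enough of the generative model to sketch a simplified instance. The snippet below samples from a flat (single-level) Markov-switching Poisson GLM with state-dependent tuning and a log link; the paper's full model instead organises the states hierarchically via a hidden Markov decision tree and fits them with variational Bayes. All dimensions, parameter values, and variable names here are illustrative assumptions, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative sizes (not from the paper): K latent population states,
        # N neurons, D stimulus features, T time bins.
        K, N, D, T = 3, 20, 5, 500

        # Markov dynamics over population states. This is a flat chain; the
        # paper organises states hierarchically via a hidden Markov decision tree.
        A = rng.dirichlet(alpha=np.ones(K) * 5.0, size=K)  # K x K transition matrix
        pi = np.ones(K) / K                                # initial state distribution

        # State-dependent tuning: per-state filters and baselines of a Poisson GLM.
        W = rng.normal(0.0, 0.3, size=(K, N, D))           # stimulus filters
        b = rng.normal(-1.0, 0.2, size=(K, N))             # baseline log-rates

        X = rng.normal(size=(T, D))                        # stimulus covariates

        # Sample a state sequence and spike counts from the generative model.
        z = np.empty(T, dtype=int)
        y = np.empty((T, N), dtype=int)
        z[0] = rng.choice(K, p=pi)
        for t in range(T):
            if t > 0:
                z[t] = rng.choice(K, p=A[z[t - 1]])
            rate = np.exp(W[z[t]] @ X[t] + b[z[t]])        # canonical log link
            y[t] = rng.poisson(rate)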

    Amortized inference in inverse problems

    No full text
    At the root of scientific discovery is the question of how to make sense of the world from empirical data. In practice, this question concerns how we can identify causal factors of a generative process from which we can only make noisy or limited observations. The task of finding these causal factors is called an inverse problem. We can find inverse problems in all scientific disciplines, ranging from physics, chemistry, and biology to medicine, psychology, and economics. Traditionally, inverse problems are tackled using methods from applied mathematics. Inherent to this tradition is a rigorous mathematical treatment of these problems. While mathematical rigor appears elegant and desirable at first, it often comes with simplifying assumptions and expensive computation. In recent years, machine learning has found applications in many computational fields due to the resurgence of artificial neural networks (ANNs). Central to this success is the idea that large sets of training data and automated non-linear feature extraction are a much more expressive approach to many problems than hand-designed features and algorithms. In this thesis, Amortized inference in inverse problems, we present an approach that aims to leverage the success of machine learning, and deep learning in particular, for applications to inverse problems. The approach, which we call Recurrent Inference Machines (RIMs), is a general-purpose framework for solving inverse problems. RIMs are parametric models that perform recurrent updates, mirroring the structure of an iterative algorithm. Throughout this work, we demonstrate applications of RIMs in various scientific disciplines such as medicine, astronomy, and seismology. We dedicate large parts of this thesis to applying RIMs to accelerated MRI, a problem that aims to significantly reduce measurement times in Magnetic Resonance Imaging. We further propose Invertible Recurrent Inference Machines (i-RIMs) as an evolution of RIMs. i-RIMs address the memory cost of training models on large-scale data by exploiting invertibility to run back-propagation with constant memory. Given current hardware constraints and data sizes, this allows us to build more expressive i-RIM models. Using an i-RIM, we won the single-coil track of the first fastMRI challenge. Here, we also describe the steps that led us to win the challenge.
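    From the abstract, the core of a RIM is a learned recurrent update that consumes the current estimate together with the gradient of the data likelihood and emits a correction. Below is a minimal sketch of that update loop for a toy linear inverse problem y = A x + noise under a Gaussian noise assumption; the class name, network sizes, step count, and usage are illustrative assumptions and far simpler than the models described in the thesis.

        import torch
        import torch.nn as nn

        class RIM(nn.Module):
            # Minimal Recurrent Inference Machine sketch for a linear inverse
            # problem y = x @ A.T + noise (hypothetical toy setup).
            def __init__(self, dim, hidden=64):
                super().__init__()
                self.cell = nn.GRUCell(2 * dim, hidden)  # consumes [x_t, grad_t]
                self.out = nn.Linear(hidden, dim)        # emits the update dx_t

            def forward(self, y, A, steps=8):
                x = torch.zeros(y.shape[0], A.shape[1])             # initial estimate
                h = torch.zeros(y.shape[0], self.cell.hidden_size)  # recurrent state
                for _ in range(steps):
                    # Gradient of the Gaussian data log-likelihood,
                    # proportional to A^T (y - A x):
                    grad = (y - x @ A.T) @ A
                    h = self.cell(torch.cat([x, grad], dim=-1), h)
                    x = x + self.out(h)  # recurrent update step
                return x

        # Hypothetical usage: recover x from noisy linear measurements.
        dim, batch = 16, 4
        A = torch.randn(dim, dim) / dim ** 0.5
        x_true = torch.randn(batch, dim)
        y = x_true @ A.T + 0.01 * torch.randn(batch, dim)
        x_hat = RIM(dim)(y, A)  # untrained; training would regress x_hat onto x_true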

    Invert to Learn to Invert

    No full text
