
    Neural Plausibility of Bayesian Inference

    Behavioral studies have shown that humans account for uncertainty in a way that is nearly optimal in the Bayesian sense. Probabilistic models based on Bayes' theorem have been widely used to understand human cognition and have been applied to behaviors ranging from perception and motor control to higher-level reasoning and inference. However, whether the brain actually uses Bayesian reasoning, or such reasoning is merely an approximate description of human behavior, remains an open question. In this thesis, I address this question by exploring the neural plausibility of Bayesian inference. I first present a spiking neural model for learning priors (beliefs) from experiences of the natural world. Through this model, I address how humans might learn the priors needed for the inferences they make in their daily lives, and I propose neural mechanisms for continuously learning and updating priors, cognitive processes that are critical for many aspects of higher-level cognition. Next, I propose neural mechanisms for performing Bayesian inference by combining the learned prior with a likelihood based on the observed information. In building these models, I address the issue of representing probability distributions in neural populations by deploying an efficient neural coding scheme, and I show how these representations can be used to learn beliefs (priors) over time and to perform inference using those beliefs. The final model generalizes to various psychological tasks, and I show that it converges to near-optimal priors with very few training examples. The model is validated on a life-span inference task, and its results match human performance on this task better than an ideal Bayesian model does, owing to the use of neuron tuning curves. This provides an initial step toward better understanding how Bayesian computations may be implemented in a biologically plausible neural network. Finally, I discuss the limitations and suggest future work on both theoretical and experimental fronts.
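    As a rough illustration of the kind of inference this thesis targets (not the spiking implementation itself), the sketch below computes a discrete Bayesian posterior for a life-span style task: given a person's current age, infer their total life span by combining a prior over life spans with a uniform-sampling likelihood. The Gaussian prior parameters and the age grid are illustrative assumptions, not values from the thesis.

        import numpy as np

        # Hypothesis space for total life span (illustrative grid, in years).
        ages = np.arange(1.0, 121.0)
        # Assumed Gaussian prior over life spans (mean 75, sd 15 are illustrative).
        prior = np.exp(-0.5 * ((ages - 75.0) / 15.0) ** 2)
        prior /= prior.sum()

        def lifespan_posterior(current_age):
            # Likelihood of encountering someone at this age, assuming ages are
            # sampled uniformly over a life: p(age | total) = 1/total if total >= age.
            likelihood = np.where(ages >= current_age, 1.0 / ages, 0.0)
            post = likelihood * prior
            return post / post.sum()

        post = lifespan_posterior(40.0)
        print(ages[np.argmax(post)])   # maximum a posteriori life span
        print(np.sum(ages * post))     # posterior-mean prediction

    The thesis model would represent the prior and posterior in spiking neural populations rather than as explicit arrays; this numerical version only shows the underlying prior-times-likelihood computation.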

    Dynamical Systems in Spiking Neuromorphic Hardware

    Dynamical systems are universal computers. They can perceive stimuli, remember, learn from feedback, plan sequences of actions, and coordinate complex behavioural responses. The Neural Engineering Framework (NEF) provides a general recipe for formulating models of such systems as coupled sets of nonlinear differential equations and compiling them onto recurrently connected spiking neural networks, akin to a programming language for spiking models of computation. The Nengo software ecosystem supports the NEF and compiles such models onto neuromorphic hardware. In this thesis, we analyze the theory driving the success of the NEF and expose several core principles underpinning its correctness, scalability, completeness, robustness, and extensibility. We also derive novel theoretical extensions to the framework that enable it to leverage a wide variety of dynamics in digital hardware far more effectively, and to exploit device-level physics in analog hardware. At the same time, we propose a novel set of spiking algorithms that recruit an optimal nonlinear encoding of time, which we call the Delay Network (DN). Backpropagation across stacked layers of DNs dramatically outperforms stacked Long Short-Term Memory (LSTM) networks, a state-of-the-art deep recurrent architecture, in accuracy and training time on a continuous-time memory task and a chaotic time-series prediction benchmark. The basic component of this network is shown to function on state-of-the-art spiking neuromorphic hardware, including Braindrop and Loihi. This implementation approaches the energy efficiency of the human brain in the former case and the precision of conventional computation in the latter.
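    The abstract does not include code; as a minimal sketch of the NEF/Nengo workflow it describes, the example below maps a simple one-dimensional integrator (dx/dt = u) onto a recurrently connected population of spiking neurons, assuming Nengo's standard Python API. The chosen system, neuron count, and parameter values are illustrative, not taken from the thesis.

        import nengo

        tau = 0.1  # synaptic time constant used to map the dynamics onto the network

        with nengo.Network() as model:
            stim = nengo.Node(lambda t: 1.0 if t < 0.5 else 0.0)  # step input u(t)
            x = nengo.Ensemble(n_neurons=100, dimensions=1)       # spiking population representing x
            # NEF mapping of dx/dt = u: scale the input by tau and feed the state back.
            nengo.Connection(stim, x, transform=tau, synapse=tau)
            nengo.Connection(x, x, synapse=tau)
            probe = nengo.Probe(x, synapse=0.01)                  # decoded estimate of x(t)

        with nengo.Simulator(model) as sim:
            sim.run(1.0)
        # sim.data[probe] holds the decoded state, which ramps while the input is on and then holds.

    The same recipe generalizes to other differential equations by changing the function computed on the recurrent connection, which is the sense in which the thesis treats the NEF as a compiler for spiking models of computation.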

    From sequences to cognitive structures : neurocomputational mechanisms

    Ph.D. Thesis. Understanding how the brain forms representations of structured information distributed in time is a challenging neuroscientific endeavour, necessitating computationally and neurobiologically informed study. Human neuroimaging evidence demonstrates engagement of a fronto-temporal network, including ventrolateral prefrontal cortex (vlPFC), during language comprehension. Corresponding regions are engaged when processing dependencies between word-like items in Artificial Grammar (AG) paradigms. However, the neurocomputations supporting dependency processing and sequential structure-building are poorly understood. This work aimed to clarify these processes in humans, integrating behavioural, electrophysiological and computational evidence. I devised a novel auditory AG task to assess simultaneous learning of dependencies between adjacent and non-adjacent items, incorporating learning aids including prosody, feedback, delineated sequence boundaries, staged pre-exposure, and variable intervening items. Behavioural data obtained in 50 healthy adults revealed strongly bimodal performance despite these cues. Notably, however, reaction times revealed sensitivity to the grammar even in low performers. Behavioural and intracranial electrode data were subsequently obtained in 12 neurosurgical patients performing this task. Despite chance behavioural performance, time-domain and time-frequency-domain electrophysiological analyses revealed selective responsiveness to sequence grammaticality in regions including vlPFC. I developed a novel neurocomputational model (VS-BIND: "Vector-symbolic Sequencing of Binding INstantiating Dependencies"), triangulating evidence to clarify putative mechanisms in the fronto-temporal language network. I then undertook multivariate analyses on the AG task neural data, revealing responses compatible with the presence of ordinal codes in vlPFC, consistent with VS-BIND. I also developed a novel method of causal analysis on multivariate patterns, representational Granger causality, capable of detecting the flow of distinct representations within the brain. This pointed to top-down transmission of syntactic predictions during the AG task, from vlPFC to auditory cortex, largely in the opposite direction to stimulus encodings, consistent with predictive coding accounts. It finally suggested roles for the temporoparietal junction and frontal operculum during grammaticality processing, congruent with prior literature. This work provides novel insights into the neurocomputational basis of cognitive structure-building, generating hypotheses for future study, and potentially contributing to AI and translational efforts. Funding: Wellcome Trust, European Research Council.
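    VS-BIND's internals are not specified in this abstract; the sketch below only illustrates the generic vector-symbolic operation such models build on: binding ordinal position vectors to item vectors with circular convolution (as in holographic reduced representations), superposing the bindings into a sequence trace, and unbinding to recover which item occupied a given position. Dimensionality, the vocabulary, and the sequence itself are illustrative assumptions.

        import numpy as np

        def cconv(a, b):
            # Circular convolution: the binding operator in holographic reduced representations.
            return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

        def inverse(a):
            # Approximate inverse of a vector, used for unbinding.
            return np.concatenate(([a[0]], a[1:][::-1]))

        d = 512
        rng = np.random.default_rng(0)
        rand_vec = lambda: rng.normal(0.0, 1.0 / np.sqrt(d), d)

        items = {name: rand_vec() for name in ["A", "B", "C"]}  # hypothetical word-like items
        positions = [rand_vec() for _ in range(3)]              # ordinal position codes

        # Encode the sequence A-B-C as a superposition of position-item bindings.
        trace = sum(cconv(positions[i], items[name]) for i, name in enumerate(["A", "B", "C"]))

        # Query: which item occupied position 1? Unbind and compare against the vocabulary.
        probe = cconv(trace, inverse(positions[1]))
        print(max(items, key=lambda name: float(np.dot(probe, items[name]))))  # expected: B

    Recovery is approximate and improves with dimensionality, which is one reason vector-symbolic accounts of sequencing rely on high-dimensional distributed codes.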