
    Adaptive Synaptic Failure Enables Sampling from Posterior Predictive Distributions in the Brain

    Bayesian interpretations of neural processing require that biological mechanisms represent and operate upon probability distributions in accordance with Bayes' theorem. Many have speculated that synaptic failure constitutes a mechanism of variational, i.e., approximate, Bayesian inference in the brain. Whereas models have previously used synaptic failure to sample over uncertainty in model parameters, we demonstrate that by adapting transmission probabilities to learned network weights, synaptic failure can sample not only over model uncertainty, but over complete posterior predictive distributions as well. Our results potentially explain the brain's ability to perform probabilistic searches and to approximate complex integrals. These operations are involved in numerous calculations, including likelihood evaluation and state value estimation for complex planning. (Comment: 23 pages, 5 figures. arXiv admin note: text overlap with arXiv:2111.0978)
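The sampling mechanism described above can be sketched in a few lines. The NumPy sketch below uses illustrative weights and a made-up rule tying transmission probability to weight magnitude (not the paper's actual adaptation scheme); it draws Monte Carlo samples from a toy predictive distribution by applying a fresh Bernoulli synaptic-failure mask on every forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy learned weights of a single linear layer (hypothetical values).
W = rng.normal(size=(4, 3))

# Illustrative choice: transmission probability grows with |w|, so
# strong synapses fail rarely and weak ones fail often.
p_transmit = np.abs(W) / np.abs(W).max()

def stochastic_forward(x):
    """One forward pass with Bernoulli synaptic failures."""
    mask = rng.random(W.shape) < p_transmit
    return (W * mask) @ x

x = np.ones(3)
# Each stochastic pass is one sample from the toy predictive distribution.
samples = np.array([stochastic_forward(x) for _ in range(2000)])
mean, std = samples.mean(axis=0), samples.std(axis=0)
```

Averaging many such passes recovers the expected prediction `(W * p_transmit) @ x`, while the spread across samples reflects the uncertainty injected by the failures.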

    Sensorimotor Learning Biases Choice Behavior: A Learning Neural Field Model for Decision Making

    According to a prominent view of sensorimotor processing in primates, selection and specification of possible actions are not sequential operations. Rather, a decision for an action emerges from competition between different movement plans, which are specified and selected in parallel. For action choices which are based on ambiguous sensory input, the frontoparietal sensorimotor areas are considered part of the common underlying neural substrate for selection and specification of action. These areas have been shown capable of encoding alternative spatial motor goals in parallel during movement planning, and show signatures of competitive value-based selection among these goals. Since the same network is also involved in learning sensorimotor associations, competitive action selection (decision making) should not only be driven by the sensory evidence and expected reward in favor of either action, but also by the subject's learning history of different sensorimotor associations. Previous computational models of competitive neural decision making used predefined associations between sensory input and corresponding motor output. Such hard-wiring does not allow modeling of how decisions are influenced by sensorimotor learning or by changing reward contingencies. We present a dynamic neural field model which learns arbitrary sensorimotor associations with a reward-driven Hebbian learning algorithm. We show that the model accurately simulates the dynamics of action selection with different reward contingencies, as observed in monkey cortical recordings, and that it correctly predicted the pattern of choice errors in a control experiment. With our adaptive model we demonstrate how network plasticity, which is required for association learning and adaptation to new reward contingencies, can influence choice behavior. 
The field model provides an integrated and dynamic account for the operations of sensorimotor integration, working memory and action selection required for decision making in ambiguous choice situations.
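As a rough illustration of the reward-driven Hebbian mechanism (not the neural field model itself, whose dynamics are continuous), the sketch below learns arbitrary stimulus-action associations via a three-term update: learning rate times reward times the Hebbian product of pre- and postsynaptic activity. The sizes, rates, target mapping, and epsilon-greedy selection are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n_stim, n_act = 3, 3
W = np.zeros((n_act, n_stim))          # association weights (hypothetical init)
target = np.array([2, 0, 1])           # arbitrary stimulus->action mapping to learn
lr, eps = 0.2, 0.1                     # learning rate and exploration (assumed values)

for trial in range(500):
    stim = rng.integers(n_stim)
    s = np.zeros(n_stim)
    s[stim] = 1.0
    # Epsilon-greedy choice stands in for the field model's competitive selection.
    if rng.random() < eps:
        act = rng.integers(n_act)
    else:
        act = int(np.argmax(W @ s))
    r = 1.0 if act == target[stim] else -0.1
    a = np.zeros(n_act)
    a[act] = 1.0
    W += lr * r * np.outer(a, s)       # reward-modulated Hebbian update

learned = np.argmax(W, axis=0)         # action with the strongest weight per stimulus
```

Because correct choices strengthen the rewarded association while errors weaken the chosen one, the learned mapping converges to the target contingency, and changing `target` mid-run would retrain the weights, mirroring the reward-contingency reversals in the abstract.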

    VLSI analogs of neuronal visual processing: a synthesis of form and function

    This thesis describes the development and testing of a simple visual system fabricated using complementary metal-oxide-semiconductor (CMOS) very large scale integration (VLSI) technology. This visual system is composed of three subsystems. A silicon retina, fabricated on a single chip, transduces light and performs signal processing in a manner similar to a simple vertebrate retina. A stereocorrespondence chip uses bilateral retinal input to estimate the location of objects in depth. A silicon optic nerve allows communication between chips by a method that preserves the idiom of action potential transmission in the nervous system. Each of these subsystems illuminates various aspects of the relationship between VLSI analogs and their neurobiological counterparts. The overall synthetic visual system demonstrates that analog VLSI can capture a significant portion of the function of neural structures at a systems level, and concomitantly, that incorporating neural architectures leads to new engineering approaches to computation in VLSI. The relationship between neural systems and VLSI is rooted in the shared limitations imposed by computing in similar physical media. The systems discussed in this text support the belief that the physical limitations imposed by the computational medium significantly affect the evolving algorithm. Since circuits are essentially physical structures, I advocate the use of analog VLSI as a powerful medium of abstraction, suitable for understanding and expressing the function of real neural systems. The working chip elevates the circuit description to a kind of synthetic formalism. The behaving physical circuit provides a formal test of theories of function that can be expressed in the language of circuits.

    Real-time classification of multivariate olfaction data using spiking neural networks

    Recent studies in bioinspired artificial olfaction, especially those detailing the application of spike-based neuromorphic methods, have led to promising developments towards overcoming the limitations of traditional approaches, such as complexity in handling multivariate data, computational and power requirements, poor accuracy, and substantial delay for processing and classification of odors. Rank-order-based olfactory systems provide an interesting approach for detection of target gases by encoding multivariate data generated by artificial olfactory systems into temporal signatures. However, the utilization of traditional pattern-matching methods and unpredictable shuffling of spikes in the rank-order impede the performance of the system. In this paper, we present an SNN-based solution for the classification of rank-order spiking patterns to provide continuous recognition results in real-time. The SNN classifier is deployed on a neuromorphic hardware system that enables massively parallel and low-power processing on incoming rank-order patterns. Offline learning is used to store the reference rank-order patterns, and an inbuilt nearest neighbor classification logic is applied by the neurons to provide recognition results. The proposed system was evaluated using two different datasets including rank-order spiking data from previously established olfactory systems. The continuous classification that was achieved required a maximum of 12.82% of the total pattern frame to provide 96.5% accuracy in identifying corresponding target gases. Recognition results were obtained at a nominal processing latency of 16 ms for each incoming spike. In addition to the clear advantages in terms of real-time operation and robustness to inconsistent rank-orders, the SNN classifier can also detect anomalies in rank-order patterns arising due to drift in sensing arrays.
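Rank-order coding can be illustrated with a minimal sketch: sensor channels "spike" in order of response strength, and classification is a nearest-neighbour match against stored reference orders. The gas names, response profiles, and position-wise matching score below are invented for illustration; the paper's classifier runs as spiking neurons on neuromorphic hardware rather than as a software comparison.

```python
import numpy as np

def rank_order(responses):
    """Encode a multivariate sensor frame as the order in which the
    channels would spike: stronger response -> earlier spike."""
    return np.argsort(-np.asarray(responses))

# Hypothetical mean sensor responses for two target gases (reference patterns).
references = {
    "gas_A": rank_order([0.9, 0.2, 0.7, 0.1]),
    "gas_B": rank_order([0.1, 0.8, 0.3, 0.9]),
}

def classify(responses):
    """Nearest-neighbour match over rank agreement, a stand-in for the
    neurons' inbuilt nearest-neighbor classification logic."""
    order = rank_order(responses)
    scores = {g: int(np.sum(order == ref)) for g, ref in references.items()}
    return max(scores, key=scores.get)

label = classify([0.85, 0.25, 0.6, 0.15])   # noisy version of gas_A's profile
```

Because only the ordering matters, moderate amplitude noise that preserves the ranks leaves the temporal signature, and hence the classification, unchanged.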

    Energy Efficient Neocortex-Inspired Systems with On-Device Learning

    Shifting the compute workloads from cloud toward edge devices can significantly improve the overall latency for inference and learning. However, this paradigm shift exacerbates the resource constraints on the edge devices. Neuromorphic computing architectures, inspired by neural processes, are natural substrates for edge devices. They offer co-located memory, in-situ training, energy efficiency, high memory density, and compute capacity in a small form factor. Owing to these features, in the recent past, there has been a rapid proliferation of hybrid CMOS/Memristor neuromorphic computing systems. However, most of these systems offer limited plasticity, target either spatial or temporal input streams, and are not demonstrated on large scale heterogeneous tasks. There is a critical knowledge gap in designing scalable neuromorphic systems that can support hybrid plasticity for spatio-temporal input streams on edge devices. This research proposes Pyragrid, a low latency and energy efficient neuromorphic computing system for processing spatio-temporal information natively on the edge. Pyragrid is a full-scale custom hybrid CMOS/Memristor architecture with analog computational modules and an underlying digital communication scheme. Pyragrid is designed for hierarchical temporal memory, a biomimetic sequence memory algorithm inspired by the neocortex. It features a novel synthetic synapses representation that enables dynamic synaptic pathways with reduced memory usage and interconnects. The dynamic growth in the synaptic pathways is emulated in the memristor device physical behavior, while the synaptic modulation is enabled through a custom training scheme optimized for area and power. Pyragrid features data reuse, in-memory computing, and event-driven sparse local computing to reduce data movement by ~44x and to improve system throughput and power efficiency by ~3x and ~161x, respectively, over a custom digital CMOS design.
The innate sparsity in Pyragrid results in overall robustness to noise and device failure, particularly when processing visual input and predicting time series sequences. Porting the proposed system on edge devices can enhance their computational capability, response time, and battery life.
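The claim that sparsity confers robustness to device failure can be illustrated with sparse distributed representations of the kind hierarchical temporal memory uses: even after a sizeable fraction of a pattern's active bits is lost, its overlap with the stored original remains far above chance. The sizes and failure count below are illustrative assumptions, not Pyragrid's parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

n_bits, n_active = 1024, 20            # HTM-style sparse code sizes (assumed)

def sdr():
    """Random sparse distributed representation: n_active of n_bits set."""
    v = np.zeros(n_bits, dtype=bool)
    v[rng.choice(n_bits, size=n_active, replace=False)] = True
    return v

stored = sdr()                          # a learned pattern
failed = stored.copy()
dead = rng.choice(np.flatnonzero(stored), size=5, replace=False)
failed[dead] = False                    # emulate failure of 25% of its synapses

overlap_self = int(np.sum(stored & failed))   # 15 of 20 active bits survive
overlap_rand = int(np.sum(stored & sdr()))    # chance overlap is near zero
robust = overlap_self > overlap_rand + 5      # damaged pattern still wins easily
```

With 20 active bits out of 1024, the expected chance overlap between unrelated codes is well under one bit, so losing a quarter of a pattern barely dents its separability.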

    Robust learning algorithms for spiking and rate-based neural networks

    Inspired by the remarkable properties of the human brain, the fields of machine learning, computational neuroscience and neuromorphic engineering have achieved significant synergistic progress in the last decade. Powerful neural network models rooted in machine learning have been proposed as models for neuroscience and for applications in neuromorphic engineering. However, the aspect of robustness is often neglected in these models. Both biological and engineered substrates show diverse imperfections that deteriorate the performance of computation models or even prohibit their implementation. This thesis describes three projects aiming at implementing robust learning with local plasticity rules in neural networks. First, we demonstrate the advantages of neuromorphic computations in a pilot study on a prototype chip. Thereby, we quantify the speed and energy consumption of the system compared to a software simulation and show how on-chip learning contributes to the robustness of learning. Second, we present an implementation of spike-based Bayesian inference on accelerated neuromorphic hardware. The model copes, via learning, with the disruptive effects of the imperfect substrate and benefits from the acceleration. Finally, we present a robust model of deep reinforcement learning using local learning rules. It shows how backpropagation combined with neuromodulation could be implemented in a biologically plausible framework. The results contribute to the pursuit of robust and powerful learning networks for biological and neuromorphic substrates.
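A common form of local plasticity combined with neuromodulation is a three-factor rule with eligibility traces; the sketch below (illustrative constants and sizes, not the thesis's models) shows how a purely local pre-post Hebbian trace is only written into the weights when a global reward signal arrives.

```python
import numpy as np

rng = np.random.default_rng(3)

n_pre, n_post = 5, 2
w = rng.normal(scale=0.1, size=(n_post, n_pre))
elig = np.zeros_like(w)                 # eligibility traces (local memory)
lr, decay = 0.05, 0.9                   # assumed constants

def step(pre, post, reward):
    """Three-factor rule: the local Hebbian term accumulates into an
    eligibility trace; a global neuromodulatory signal (reward) gates
    when the trace is actually written into the weights."""
    global elig, w
    elig = decay * elig + np.outer(post, pre)   # factor 1 x factor 2, local
    w += lr * reward * elig                     # factor 3 gates plasticity

pre = rng.random(n_pre)
post = rng.random(n_post)
w_before = w.copy()
step(pre, post, reward=0.0)             # no neuromodulator: weights unchanged
unchanged = np.allclose(w, w_before)
step(pre, post, reward=1.0)             # reward arrives: trace is consolidated
changed = not np.allclose(w, w_before)
```

The decaying trace lets a delayed reward credit the pre-post coincidences that preceded it, which is one way neuromodulated local rules can approximate the credit assignment that backpropagation performs globally.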

    Biologically Plausible Cortical Hierarchical-Classifier Circuit Extensions in Spiking Neurons

    Hierarchical categorization interleaved with sequence recognition of incoming stimuli in the mammalian brain is theorized to be performed by circuits composed of the thalamus and the six-layer cortex. Using these circuits, the cortex is thought to learn a ‘brain grammar’ composed of recursive sequences of categories. A thalamo-cortical, hierarchical-classification and sequence-learning “Core” circuit, implemented as a linear matrix simulation, was published by Rodriguez, Whitson & Granger in 2004. In the brain, these functions are implemented by cortical and thalamic circuits composed of recurrently connected, spiking neurons. The Neural Engineering Framework (NEF) (Eliasmith & Anderson, 2003) allows for the construction of large-scale biologically plausible neural networks. NEF models of the basal ganglia and the thalamus exist, but to the best of our knowledge there is no integrated, spiking-neuron, cortical-thalamic Core network model. We construct a more biologically plausible version of the hierarchical-classification function of the Core circuit using leaky integrate-and-fire neurons, which performs progressive visual classification of static image sequences, relying on neural activity levels to trigger the progressive classification of the stimulus. We proceed by implementing a recurrent NEF model of the cortical-thalamic Core circuit and then test the resulting model on the hierarchical categorization of images.
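The leaky integrate-and-fire neuron underlying such spiking models can be sketched with simple Euler integration (textbook form; the constants are illustrative, not taken from this work): subthreshold input decays away, while stronger constant input drives proportionally faster firing.

```python
import numpy as np

# Euler simulation of a leaky integrate-and-fire neuron.
dt, tau = 0.001, 0.02          # 1 ms step, 20 ms membrane time constant (assumed)
v_th, v_reset = 1.0, 0.0       # normalized threshold and reset
T = 1.0                        # simulate one second

def lif_spike_count(current):
    """Number of spikes emitted for a constant input current."""
    v, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        v += dt / tau * (current - v)  # leaky integration toward the input
        if v >= v_th:                  # threshold crossing -> spike and reset
            spikes += 1
            v = v_reset
    return spikes

rates = [lif_spike_count(c) for c in (0.5, 1.5, 3.0)]
```

A current below threshold produces no spikes at all, and the firing rate rises monotonically with input above threshold; the NEF builds on exactly this nonlinear rate response to encode and decode represented values across populations.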