
    A Cognitive Architecture Based on a Learning Classifier System with Spiking Classifiers

    © 2015, Springer Science+Business Media New York. Learning classifier systems (LCS) are population-based reinforcement learners that were originally designed to model various cognitive phenomena. This paper presents an explicitly cognitive LCS that uses spiking neural networks as classifiers, giving each classifier a measure of temporal dynamism. We employ a constructivist model of growth of both neurons and synaptic connections, which permits a genetic algorithm to automatically evolve sufficiently complex neural structures. The spiking classifiers are coupled with a temporally sensitive reinforcement learning algorithm, which allows the system to perform temporal state decomposition by appropriately rewarding “macro-actions”, created by chaining together multiple atomic actions. The combination of temporal reinforcement learning and neural information processing is shown to outperform benchmark neural classifier systems and to successfully solve a robotic navigation task.
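    The idea of rewarding a chained macro-action can be sketched as an SMDP-style discounted Q-update; the toy corridor environment, macro set, and learning constants below are hypothetical and are not taken from the paper.

```python
# Illustrative sketch (not the paper's implementation): an SMDP-style
# Q-update that rewards a macro-action formed by chaining atomic actions.
# The toy 1-D corridor, macro set, and constants are hypothetical.
from collections import defaultdict

GAMMA, ALPHA = 0.95, 0.1
GOAL = 4
MACROS = [("right",), ("right", "right"), ("left",)]   # chained atomic actions

def step(state, atomic):
    """Toy corridor: walk positions 0..GOAL; reward 1 on reaching GOAL."""
    state = state + 1 if atomic == "right" else max(0, state - 1)
    return state, (1.0 if state == GOAL else 0.0), state == GOAL

def run_macro(state, macro):
    """Execute the chained atomic actions, accumulating discounted reward."""
    total, discount, steps, done = 0.0, 1.0, 0, False
    for atomic in macro:
        state, reward, done = step(state, atomic)
        total += discount * reward
        discount *= GAMMA
        steps += 1
        if done:
            break
    return state, total, steps, done

q = defaultdict(float)
for _ in range(200):                       # greedy selection is enough for this toy
    state = 0
    while state != GOAL:
        macro = max(MACROS, key=lambda m: q[(state, m)])
        next_state, reward, steps, done = run_macro(state, macro)
        best_next = 0.0 if done else max(q[(next_state, m)] for m in MACROS)
        # Bootstrapping is discounted by the macro's duration, so longer
        # macro-actions are rewarded in a temporally consistent way.
        q[(state, macro)] += ALPHA * (reward + GAMMA ** steps * best_next - q[(state, macro)])
        state = next_state
```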

    Design for a Darwinian Brain: Part 1. Philosophy and Neuroscience

    Physical symbol systems are needed for open-ended cognition. A good way to understand physical symbol systems is by comparing thought to chemistry: both have systematicity, productivity, and compositionality. The state of the art in cognitive architectures for open-ended cognition is critically assessed. I conclude that a cognitive architecture that evolves symbol structures in the brain is a promising candidate to explain open-ended cognition. Part 2 of the paper presents such a cognitive architecture. Comment: Darwinian Neurodynamics. Submitted as a two-part paper to Living Machines 2013, Natural History Museum, London.

    A Digital Neuromorphic Architecture Efficiently Facilitating Complex Synaptic Response Functions Applied to Liquid State Machines

    Information in neural networks is represented as weighted connections, or synapses, between neurons. This poses a problem because the primary computational bottleneck for neural networks is the vector-matrix multiply in which inputs are multiplied by the network weights. Conventional processing architectures are not well suited to simulating neural networks, often requiring large amounts of energy and time. Additionally, synapses in biological neural networks are not binary connections, but exhibit a nonlinear response function as neurotransmitters are emitted and diffuse between neurons. Inspired by neuroscience principles, we present a digital neuromorphic architecture, the Spiking Temporal Processing Unit (STPU), capable of modeling arbitrarily complex synaptic response functions without requiring additional hardware components. We consider the paradigm of spiking neurons with temporally coded information, as opposed to the non-spiking, rate-coded neurons used in most neural networks. In this paradigm we examine liquid state machines applied to speech recognition and show how a liquid state machine with temporal dynamics maps onto the STPU, demonstrating the flexibility and efficiency of the STPU for instantiating neural algorithms. Comment: 8 pages, 4 figures. Preprint of 2017 IJCNN.
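    The notion of a synapse as a temporal response function, rather than a single weighted connection, can be sketched as a kernel convolved with an incoming spike train. The double-exponential kernel, time constants, and leaky integration below are illustrative assumptions, not STPU hardware behaviour.

```python
# Minimal sketch of a synapse modeled as a temporal response kernel rather
# than a single weighted connection. The double-exponential kernel, time
# constants, and leaky integration are illustrative, not STPU behaviour.
import numpy as np

dt = 1e-3                                    # 1 ms simulation step
t = np.arange(0.0, 0.05, dt)                 # 50 ms of kernel support
tau_rise, tau_decay = 1e-3, 8e-3
kernel = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
kernel /= kernel.max()                       # normalised synaptic response

rng = np.random.default_rng(0)
spikes = (rng.random(1000) < 0.02).astype(float)   # 1 s Poisson-like spike train

# Each input spike injects the whole temporal response, not a single pulse.
current = np.convolve(spikes, kernel)[: spikes.size]

# Leaky integration of the weighted synaptic current into a membrane potential.
weight, tau_mem, v = 0.5, 20e-3, 0.0
membrane = np.empty_like(current)
for i, c in enumerate(current):
    v += dt * (-v / tau_mem + weight * c)
    membrane[i] = v
```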

    A dynamic reconfigurable architecture for hybrid spiking and convolutional FPGA-based neural network designs

    This work presents a dynamically reconfigurable architecture for Neural Network (NN) accelerators implemented on Field-Programmable Gate Arrays (FPGAs) that can be applied in a variety of application scenarios. Although the concept of Dynamic Partial Reconfiguration (DPR) is increasingly used in NN accelerators, the resulting throughput is usually lower than that of purely static designs. This work presents a dynamically reconfigurable, energy-efficient accelerator architecture that does not sacrifice throughput performance. The proposed accelerator comprises reconfigurable processing engines and dynamically utilizes the device resources according to model parameters. Using the proposed architecture with DPR, different NN types and architectures can be realized on the same FPGA. Moreover, the proposed architecture maximizes throughput performance with design optimizations while considering the available resources on the hardware platform. We evaluate our design with different NN architectures for two different tasks. The first task is image classification on two distinct datasets, which requires switching between Convolutional Neural Network (CNN) architectures having different layer structures. The second task requires switching between NN architectures, namely a CNN architecture with high accuracy and throughput and a hybrid architecture that combines convolutional layers with an optimized Spiking Neural Network (SNN) architecture. We demonstrate throughput results from quickly reprogramming only a tiny part of the FPGA hardware using DPR. Experimental results show that the implemented designs achieve a 7× faster frame rate than current FPGA accelerators while being extremely flexible and using comparable resources.
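    The host-side view of DPR can be sketched as swapping a small partial bitstream for one reconfigurable region while the static shell stays in place. The region names, bitstream files, and the load_partial_bitstream helper below are hypothetical stand-ins, not a vendor API or the authors' tooling.

```python
# Hypothetical host-side sketch of DPR: a static shell stays on the FPGA and
# only a small partial bitstream for the processing-engine region is swapped
# when the workload changes. Region names, file names, and the
# load_partial_bitstream helper are invented, not a vendor API.
from dataclasses import dataclass

@dataclass
class PartialRegion:
    name: str
    bitstream: str                 # partial bitstream configuring this region

REGIONS = {
    "cnn_classifier": PartialRegion("pe_array", "cnn_engines.pbit"),
    "hybrid_cnn_snn": PartialRegion("pe_array", "hybrid_snn_engines.pbit"),
}

def load_partial_bitstream(region: PartialRegion) -> None:
    """Stand-in for the platform's partial-reconfiguration call."""
    print(f"reconfiguring {region.name} with {region.bitstream}")

def switch_task(task: str) -> None:
    """Reprogram only the reconfigurable processing engines for a new task."""
    load_partial_bitstream(REGIONS[task])

switch_task("cnn_classifier")      # image classification with a CNN
switch_task("hybrid_cnn_snn")      # hybrid convolutional + spiking design
```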

    Brain Disease Detection From EEGs: Comparing Spiking and Recurrent Neural Networks for Non-stationary Time Series Classification

    Modeling non-stationary time series data is a difficult problem in AI, because the statistical properties of the data change as the time series progresses. This complicates the classification of non-stationary time series, a method used in the detection of brain diseases from EEGs. Various techniques have been developed in the field of deep learning for tackling this problem, with recurrent neural network (RNN) approaches utilising Long Short-Term Memory (LSTM) architectures achieving a high degree of success. This study implements a new, spiking neural network-based approach to time series classification for the purpose of detecting three brain diseases from EEG datasets: epilepsy, alcoholism, and schizophrenia. The performance and training time of the spiking neural network classifier are compared to those of both a baseline RNN-LSTM EEG classifier and the current state-of-the-art RNN-LSTM EEG classifier architecture from the relevant literature. The SNN EEG classifier model developed in this study outperforms both the baseline and state-of-the-art RNN models in terms of accuracy, and is able to detect all three brain diseases with an accuracy of 100%, while requiring far fewer training data samples than recurrent neural network approaches. This represents the best performance reported in the literature for the task of EEG classification.
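    A minimal sketch of this general approach, assuming simple rate coding and a single layer of leaky integrate-and-fire neurons, is shown below; the encoding, weights, and thresholds are illustrative and are not the study's model.

```python
# Illustrative sketch only (not the study's model): threshold-based rate
# coding of an EEG window into spikes, one layer of leaky integrate-and-fire
# neurons, and classification by output spike counts.
import numpy as np

rng = np.random.default_rng(1)

def encode(window, levels=8):
    """Crude rate coding: larger normalised amplitudes spike more often."""
    norm = (window - window.min()) / (np.ptp(window) + 1e-9)
    return (rng.random((levels, window.size)) < norm).astype(float)

def lif_layer(spikes, weights, tau=10.0, threshold=1.0):
    """Leaky integrate-and-fire neurons; returns per-neuron spike counts."""
    v = np.zeros(weights.shape[0])
    counts = np.zeros(weights.shape[0])
    for step in range(spikes.shape[1]):
        v += weights @ spikes[:, step] - v / tau
        fired = v >= threshold
        counts += fired
        v[fired] = 0.0                       # reset membrane after a spike
    return counts

window = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * rng.standard_normal(256)
weights = 0.5 * rng.standard_normal((3, 8))  # 3 disease classes, 8 input channels
prediction = int(np.argmax(lif_layer(encode(window), weights)))
```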