
    Backpropagation for Continuous Theta Neuron Networks

    The theta neuron model is a spiking neuron model which, unlike traditional leaky integrate-and-fire neurons, can capture spike latencies, threshold adaptation, bistability of resting and tonic firing states, and more. Previous work on learning rules for networks of theta neurons includes the derivation of a spike-timing-based backpropagation algorithm for multilayer feedforward networks. However, that learning rule applies only to a fixed number of spikes per neuron and cannot account for the effects of synaptic dynamics. In this thesis a novel backpropagation learning rule for theta neuron networks is derived which incorporates synaptic dynamics, handles changing numbers of spikes per neuron, and does not explicitly depend on spike timing. The learning rule is successfully applied to XOR, cosine, and sinc function mappings, and comparisons are made with other learning rules for spiking neural networks. The algorithm achieves 97.8 percent training performance and 96.7 percent test performance on the Fisher iris dataset, which is comparable to other spiking neural network learning rules. It also achieves 99.0 percent training performance and 99.14 percent test performance on the Wisconsin breast cancer dataset, which is better than the compared spiking neural network learning rules.
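
    As context for the model class this thesis builds on, below is a minimal sketch of the canonical theta (Ermentrout-Kopell) neuron, whose phase variable emits a spike on crossing pi. The integration scheme, step size, and drive values are illustrative assumptions, not the thesis's settings.

        import numpy as np

        def simulate_theta_neuron(I, dt=1e-3, alpha=1.0, theta0=-np.pi):
            """Forward-Euler integration of the theta neuron phase equation
            dtheta/dt = (1 - cos theta) + alpha * (1 + cos theta) * I(t);
            a spike is registered whenever the phase crosses pi."""
            theta, spikes = theta0, []
            for k, i_t in enumerate(I):
                dtheta = (1.0 - np.cos(theta)) + alpha * (1.0 + np.cos(theta)) * i_t
                theta_next = theta + dt * dtheta
                if theta < np.pi <= theta_next:   # phase crossed pi: emit a spike
                    spikes.append(k * dt)
                    theta_next -= 2.0 * np.pi     # wrap the phase back to -pi
                theta = theta_next
            return spikes

        # Constant suprathreshold drive (I > 0) yields tonic firing; with I < 0
        # the same neuron has a stable rest state and fires only if its phase is
        # pushed past the unstable fixed point, which is the excitable regime.
        spike_times = simulate_theta_neuron(np.full(50_000, 1.0))
        print(f"{len(spike_times)} spikes in 50 time units")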

    Simulation and Design of Biological and Biologically-Motivated Computing Systems

    In the life sciences, there is a great need to understand biological systems for therapeutics, synthetic biology, and biomedical applications. However, the complex behaviors and dynamics of biological systems are hard to understand and design. Meanwhile, the design of traditional computer architectures faces challenges from power consumption, device reliability, and process variations. In recent years, the convergence of computer science, computer engineering, and life science has enabled new applications targeting the challenges of both the engineering and biological fields. On one hand, computer modeling and simulation provide quantitative analysis and prediction of the functions and behaviors of biological systems, and further facilitate the design of synthetic biological systems. On the other hand, bio-inspired devices and systems are designed for real-world applications by mimicking biological functions and behaviors. This dissertation develops techniques for modeling and analyzing the dynamic behaviors of biologically realistic genetic circuits and brain models, and for designing brain-inspired computing systems. The stability of genetic memory circuits is studied to understand their function and their potential applications in synthetic biology. Based on electrical-equivalent models of biochemical reactions, simulation techniques widely used for electronic systems are applied to provide quantitative analysis capabilities. In particular, system-theoretic techniques are used to study the dynamic behaviors of genetic memory circuits, where the notion of a stability boundary is employed to characterize the bistability of such circuits. To facilitate simulation-based studies of physiological and pathological behaviors in brain disorders, we construct large-scale brain models with detailed cellular mechanisms. By developing dedicated numerical techniques for brain simulation, the simulation speed is improved to the point that dynamic simulation of large thalamocortical models with more than one million multi-compartment neurons and hundreds of synapses on commodity computer servers becomes feasible. Simulation of such a large model produces biologically meaningful results, demonstrating the emergence of sigma and delta waves in the early and deep stages of sleep and suggesting the underlying cellular mechanisms that may be responsible for the generation of absence seizures. Brain-inspired computing paradigms may offer promising solutions to many challenges facing the mainstream von Neumann computer architecture. To this end, we develop a biologically inspired learning system amenable to VLSI implementation. The proposed solution consists of a digitized liquid state machine (LSM) and a spike-based learning rule, providing a fully biologically inspired learning paradigm. The key design parameters of this liquid state machine are optimized to maximize learning performance while considering hardware implementation cost. When applied to isolated-word speech recognition using the TI46 speech corpus, the performance of the proposed LSM rivals several existing state-of-the-art techniques, including the Hidden Markov Model based recognizer Sphinx-4.
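
    The abstract does not name the genetic memory circuit studied, but the bistability and stability-boundary analysis it describes can be illustrated on the classic Gardner-Collins genetic toggle switch as a stand-in. In the sketch below, the rate and cooperativity parameters are illustrative; two initial conditions on opposite sides of the stability boundary (the separatrix) relax to the circuit's two distinct memory states.

        import numpy as np

        def toggle_switch(state, alpha1=5.0, alpha2=5.0, beta=2.0, gamma=2.0):
            """Right-hand side of the Gardner-Collins genetic toggle switch:
            two genes whose products mutually repress each other's promoters."""
            u, v = state
            du = alpha1 / (1.0 + v**beta) - u
            dv = alpha2 / (1.0 + u**gamma) - v
            return np.array([du, dv])

        def settle(state, dt=0.01, steps=5000):
            """Forward-Euler integration until the circuit reaches steady state."""
            state = np.array(state, dtype=float)
            for _ in range(steps):
                state += dt * toggle_switch(state)
            return state

        # Initial conditions on opposite sides of the separatrix end up in the
        # two stable memory states; the symmetric fixed point between them is
        # a saddle, which is what makes the circuit a one-bit memory.
        print(settle([4.0, 0.5]))  # converges to the high-u / low-v state
        print(settle([0.5, 4.0]))  # converges to the low-u / high-v state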

    Energy Efficient Spiking Neuromorphic Architectures for Pattern Recognition

    There is growing concern over the reliability, power consumption, and performance of traditional von Neumann machines, especially when dealing with complex tasks like pattern recognition. In contrast, the human brain can address such problems with great ease. Brain-inspired neuromorphic computing has attracted much research interest, as it provides an appealing architectural solution to difficult tasks due to its energy efficiency, built-in parallelism, and potential scalability. Meanwhile, the inherent error resilience of neuro-computing offers promising opportunities to leverage approximate computing for additional energy and silicon area benefits. This thesis focuses on energy-efficient neuromorphic architectures that exploit parallel processing and approximate computing for pattern recognition. First, two parallel spiking neural architectures are presented. The first architecture is based on a spiking neural network with global inhibition (SNNGI), which integrates digital leaky integrate-and-fire spiking neurons to mimic their biological counterparts, along with the corresponding on-chip learning circuits implementing spike-timing-dependent plasticity rules. To achieve efficient parallelization, this work addresses a number of critical issues pertaining to memory organization, parallel processing, hardware reuse for different operating modes, and the tradeoffs between throughput, area, and power overheads for different configurations. For handwritten digit recognition, a promising training speedup of 13.5x and a recognition speedup of 25.8x over the serial SNNGI architecture are achieved. Despite its 120 MHz operating frequency, the 32-way parallel hardware design demonstrates a 59.4x training speedup over a 2.2 GHz general-purpose CPU. Besides the SNNGI, we also propose an architecture based on the liquid state machine (LSM), a recurrent spiking neural network. The LSM architecture is fully parallelized and consists of randomly connected digital neurons in a reservoir and a readout stage, the latter tuned by a bio-inspired learning rule. When evaluated on the TI46 speech benchmark, the FPGA LSM system demonstrates a runtime speedup of 88x over a 2.3 GHz AMD CPU. In addition, approximate computing contributes significantly to the overall energy reduction of the proposed architectures. In particular, addition occupies a considerable portion of power and area in the neuromorphic systems, especially in the LSM. By exploiting the built-in resilience of neuro-computing, we propose a real-time reconfigurable approximate adder for FPGA implementation to reduce energy consumption substantially. Although many mature approximate adders exist, these designs lose their advantages in area, power, and delay on the FPGA platform, so a novel approximate adder dedicated to the FPGA is necessary. The proposed adder is based on a carry-skip model which reduces carry propagation delay and power, and the resulting errors are controlled by a proposed error analysis method. A real-time adjustable precision mechanism is also integrated to further reduce dynamic power consumption. Implemented on a Virtex-6 FPGA, the proposed adder consumes 18.7% and 32.6% less power than the built-in Xilinx adder in two precision modes, respectively, and in both modes it is 1.32x faster and requires fewer FPGA resources.
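
    The abstract gives no implementation detail for the carry-skip model, but the general idea can be sketched at the bit level: sum fixed-width blocks exactly and replace the slow inter-block carry propagation with a cheap prediction. The 4-bit block width and the generate-only carry rule below are illustrative assumptions, not the thesis's exact design.

        def approx_add(a, b, width=16, block=4):
            """Approximate addition in the carry-skip spirit: operands are
            split into fixed-width blocks, each block is summed exactly, and
            the carry into the next block is *predicted* from the top bit
            pair of the current block (generate term only) instead of being
            propagated through the block's full carry chain."""
            mask = (1 << block) - 1
            result, carry = 0, 0
            for i in range(0, width, block):
                a_blk = (a >> i) & mask
                b_blk = (b >> i) & mask
                s = a_blk + b_blk + carry
                result |= (s & mask) << i
                # Generate-only prediction: carry-out is 1 only if both MSBs
                # of this block are 1; propagated carries are skipped.
                msb = block - 1
                carry = (a_blk >> msb) & (b_blk >> msb) & 1
            return result

        # Errors appear only when a true carry had to propagate across a
        # block boundary, as in this example (approximate vs. exact sum).
        a, b = 0x3C7A, 0x12B5
        print(hex(approx_add(a, b)), hex((a + b) & 0xFFFF))
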
    Besides the adders, firing-activity-based power gating for silent neurons and approximate Booth multipliers are also introduced. These three proposed schemes have been applied to our neuromorphic systems. The approximate errors incurred by these schemes are shown to be negligible, while energy reductions of up to 20% and 30.1% over exact training computation are achieved for the SNNGI and LSM systems, respectively.
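
    The firing-activity-based power gating is likewise described only at a high level. Below is a plausible software model of the idea under an assumed gating criterion: a neuron whose recent spike window is empty and which receives no input this step is treated as powered down and its update is skipped. The window length, threshold, and leak factor are illustrative, not the thesis's values.

        import numpy as np

        rng = np.random.default_rng(0)
        n, window = 8, 50
        v = np.zeros(n)                      # membrane potentials
        hist = np.zeros((n, window), bool)   # per-neuron recent spike history

        def step(in_current, leak=0.9, v_th=1.0, min_spikes=1):
            """One step of a leaky integrate-and-fire layer with gating: a
            neuron with fewer than `min_spikes` spikes in its history window
            and no input this step is treated as gated off, so its whole
            integrate/fire/reset update is skipped (the energy saving)."""
            active = (hist.sum(axis=1) >= min_spikes) | (in_current != 0)
            v[active] = leak * v[active] + in_current[active]
            fired = active & (v >= v_th)
            v[fired] = 0.0                   # reset neurons that spiked
            hist[:] = np.roll(hist, -1, axis=1)
            hist[:, -1] = fired
            return fired

        for _ in range(200):
            drive = rng.random(n) * (rng.random(n) < 0.3)  # sparse random drive
            drive[0] = 0.0                   # neuron 0 never driven: stays gated
            step(drive)
        print("gated (silent) neurons:", np.flatnonzero(hist.sum(axis=1) == 0))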