
    Algorithm Hardware Codesign for High Performance Neuromorphic Computing

    Driven by the massive adoption of the Internet of Things (IoT), embedded systems, and Cyber-Physical Systems (CPS), there is an increasing demand to apply machine intelligence in these power-limited scenarios. Though deep learning has achieved impressive performance on realistic and practical tasks such as anomaly detection, pattern recognition, and machine vision, the ever-increasing computational complexity and model size of Deep Neural Networks (DNN) make it challenging to deploy them in the aforementioned scenarios, where computation, memory, and energy resources are all limited. Early studies show that the energy efficiency of biological systems can be orders of magnitude higher than that of digital systems. Hence, taking inspiration from biological systems, neuromorphic computing and Spiking Neural Networks (SNN) have drawn attention as alternative solutions for energy-efficient machine intelligence. Though believed promising, neuromorphic computing is rarely used in real-world applications. A major problem is that the performance of SNNs is limited compared with DNNs due to the lack of efficient training algorithms. In an SNN, a neuron's output is a spike, represented mathematically by a Dirac delta function. Because of the non-differentiable nature of spikes, gradient descent cannot be used directly to train SNNs, so algorithm-level innovation is needed. Next, as an emerging computing paradigm, hardware- and architecture-level innovation is also required to support new algorithms and to explore the potential of neuromorphic computing. In this work, we present a comprehensive algorithm-hardware codesign for neuromorphic computing. On the algorithm side, we address the training difficulty. We first derive a flexible SNN model that retains critical neural dynamics, and then develop an algorithm to train the SNN to learn temporal patterns. Next, we apply the proposed algorithm to multivariate time series classification tasks to demonstrate its advantages. On the hardware level, we develop a systematic solution on FPGA, optimized for the proposed SNN model, to enable high-performance inference. In addition, we explore emerging devices and propose a memristor-based neuromorphic design, including neuron and synapse circuits that replicate important neural dynamics such as the filtering effect and adaptive thresholding.
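    The training obstacle described above (a spike, modeled as a Dirac delta, has no useful derivative) is the reason gradient descent cannot be applied directly. The abstract does not spell out the authors' training algorithm, so the sketch below only illustrates the general setting with a hypothetical leaky integrate-and-fire update and a smooth surrogate for the spike derivative, the kind of workaround such algorithms build on.

```python
import numpy as np

def lif_step(v, i_in, tau=20.0, v_th=1.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire layer (illustrative, not the paper's model)."""
    v = v + dt / tau * (-v + i_in)           # leaky integration of the input current
    spikes = np.where(v >= v_th, 1.0, 0.0)   # Dirac-like output: 1 at threshold crossings, else 0
    v = v * (1.0 - spikes)                   # hard reset of the neurons that fired
    return v, spikes

def surrogate_spike_grad(v, v_th=1.0, beta=10.0):
    """Smooth stand-in for d(spike)/d(v); the exact derivative is zero almost everywhere,
    so gradient-based training substitutes a bump like this around the threshold."""
    return beta / (2.0 * (1.0 + beta * np.abs(v - v_th)) ** 2)
```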

    A Compact Digital Gamma-tone Filter Processor

    Area consumption is one of the most important design constraints in the development of compact digital systems. Several authors have proposed making compact Cochlear Implant processors using Gamma-tone filter banks, which model aspects of the cochlea's spectral filtering. A good area-efficient design of the Gamma-tone Filter Bank could reduce the amount of circuitry, allowing patients to wear these cochlear implants more easily. Consequently, many authors have reduced the area by using the minimum number of registers when implementing this type of filter; however, critical paths limit their performance. Here a compact Gamma-tone Filter processor, formulated using the impulse invariant transformation together with a normalization method, is presented. The normalization method in the model guarantees the same precision for any filter order. In addition, area resources are kept low due to the implementation of a single Second Order Section (SOS) IIR stage for processing several SOS IIR stages and several channels at different times. Results show that the combination of the properties of the model and the implementation techniques generates a processor with high processing speed, using fewer resources than reported in the literature. Collaboration with Sanchez-Rivera, related to, but not funded by, EPSRC grant EP/G062609/
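    As a rough, generic illustration of the structure the abstract describes (not the authors' processor or their impulse-invariant coefficients), a gammatone channel can be built as a cascade of second-order IIR sections, and one physical SOS stage can serve several stages and channels by iterating over per-stage coefficients and states:

```python
import numpy as np

def biquad(x, b, a, state):
    """Direct-form II transposed second-order section; 'state' holds the two delay registers."""
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    z1, z2 = state
    for n, xn in enumerate(x):
        yn = b[0] * xn + z1
        z1 = b[1] * xn - a[1] * yn + z2
        z2 = b[2] * xn - a[2] * yn
        y[n] = yn
    return y, (z1, z2)

def gammatone_channel(x, sos, states):
    """Cascade of SOS stages (e.g., four biquads for an 8th-order gammatone channel)."""
    for k, (b, a) in enumerate(sos):
        x, states[k] = biquad(x, b, a, states[k])
    return x, states
```

    The Python loop over stages plays the role of the hardware time-multiplexing described in the abstract: a single SOS datapath is reused, which is what keeps the register count low.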

    Approximation and Optimization of an Auditory Model for Realization in VLSI Hardware

    The Auditory Image Model (AIM) is a software tool set developed to functionally model the role of the ear in the human hearing process. AIM includes detailed filter equations for the major functional portions of the ear. Currently, AIM is run on a workstation and requires 10 to 100 times real-time to process audio information and produce an auditory image. An all-digital approximation of AIM which is suitable for implementation in very large scale integrated circuits is presented. This document details the mathematical models of AIM and the approximations and optimizations used to simplify the filtering and signal processing accomplished by AIM. Included are the details of an efficient multi-rate architecture designed for sub-micron VLSI technology to carry out the approximated equations. Finally, simulation results are included which indicate that the architecture, when implemented in 0.8µm CMOS VLSI, will sustain real-time operation on a 32-channel system. The same tests also indicate that the chip will be approximately 3.3 mm² and consume approximately 18 mW. The details of a new and efficient method for computing an approximate logarithm (base two) on binary integers are also presented. The approximate logarithm algorithm is used to convert sound energy into millibels quickly and with low power. Additionally, the algorithm is easily extended to compute an approximate logarithm in base ten, which broadens the class of problems to which it may be applied.
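    The abstract does not give the exact algorithm, but a common way to approximate log2 of a binary integer (often attributed to Mitchell) takes the position of the leading one as the integer part and reads the remaining bits as a linear fraction; the base-ten variant then follows by multiplying by log10(2). The sketch below shows that generic scheme, not necessarily the thesis's circuit:

```python
def approx_log2(x, frac_bits=8):
    """Approximate log2 of a positive integer: leading-one position + linear mantissa."""
    assert x > 0
    k = x.bit_length() - 1                     # integer part: index of the leading one
    mantissa = (x - (1 << k)) / (1 << k)       # bits below the leading one, read as a fraction in [0, 1)
    frac = round(mantissa * (1 << frac_bits)) / (1 << frac_bits)  # fixed-point quantization, as hardware would
    return k + frac

def approx_log10(x, frac_bits=8):
    """Base-ten variant via log10(x) = log2(x) * log10(2) ~ 0.30103."""
    return approx_log2(x, frac_bits) * 0.30103

# Example: approx_log2(10) -> 3.25, versus the exact value 3.3219.
```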

    Brian Hears: Online Auditory Processing Using Vectorization Over Channels

    The human cochlea includes about 3000 inner hair cells that filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in “Brian Hears,” a library for the spiking neural network simulator package “Brian.” This approach allows us to use high-level programming languages such as Python, because with vectorized operations the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, whereas all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible implementations.
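    The key idea, vectorizing over frequency channels so that the interpreter overhead of each time step is shared by the whole filterbank, can be sketched in plain NumPy (an illustration of the principle, not the Brian Hears implementation):

```python
import numpy as np

def filterbank(x, b, a):
    """Run N first-order IIR channels in parallel on a mono signal x.
    b, a: length-N coefficient vectors, one entry per channel. The time recursion
    stays a Python loop, but every operation inside it updates all channels at once."""
    n_channels = b.shape[0]
    y = np.zeros((len(x), n_channels))
    state = np.zeros(n_channels)               # previous output of every channel
    for t, xt in enumerate(x):
        state = b * xt - a * state             # one vectorized update for all channels
        y[t] = state
    return y
```

    The same channel-wise vector update is also what makes parallelization on a GPU natural.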

    Identification of Linear / Nonlinear Systems via the Coyote Optimization Algorithm (COA)

    Classical techniques used in system identification, like the basic least mean square (LMS) method and its variants, suffer from instability problems and convergence to locally optimal solutions instead of the global solution. These problems can be reduced by applying optimization techniques inspired by nature. This paper applies the Coyote Optimization Algorithm (COA) to identify linear or nonlinear systems. In the case of linear system identification, infinite impulse response (IIR) filters are used to constitute the plants. In this work, the COA is applied to identify different plants, and its performance is investigated and compared with that based on the particle swarm optimization algorithm (PSOA), considered one of the simplest and most popular optimization algorithms. The performance is investigated for different cases, including same-order and reduced-order filter models. The acquired results illustrate the ability of the COA to obtain the lowest error between the proposed IIR filter and the actual system in most cases, and a statistical analysis is performed for the two algorithms. The COA is also used to optimize the identification of nonlinear systems based on Hammerstein models; for this purpose, it determines the parameters of the Hammerstein models of two examples previously identified in the literature using other algorithms. For further investigation, the performance of the COA is compared with that of other competitive heuristic algorithms. Most of the results demonstrate the effectiveness of the COA in system identification problems.
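    As a generic illustration of the identification setup (the COA's coyote-pack update rules are not reproduced here), the sketch below fits IIR model coefficients to an unknown plant by minimizing the mean squared output error with a simple population-based random search; the COA, the PSOA, or any other metaheuristic would replace the candidate-generation step.

```python
import numpy as np
from scipy.signal import lfilter

def mse_cost(theta, x, d, nb, na):
    """Mean squared error between the candidate IIR model output and the plant output d."""
    b = theta[:nb]
    a = np.concatenate(([1.0], theta[nb:nb + na]))   # monic denominator
    y = lfilter(b, a, x)
    return np.mean((d - y) ** 2)

def identify(x, d, nb=2, na=2, pop=30, iters=200, sigma=0.1, seed=0):
    """Toy population search: perturb the best candidate to form each new population."""
    rng = np.random.default_rng(seed)
    best = rng.normal(0.0, 0.5, nb + na)
    best_cost = mse_cost(best, x, d, nb, na)
    for _ in range(iters):
        candidates = best + sigma * rng.normal(size=(pop, nb + na))
        for c in candidates:
            cost = mse_cost(c, x, d, nb, na)
            if np.isfinite(cost) and cost < best_cost:   # skip unstable candidates
                best, best_cost = c, cost
    return best, best_cost
```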

    FROM CARDIAC OPTICAL IMAGING DATA TO BODY SURFACE ECG: A THREE DIMENSIONAL VENTRICLE MODEL

    Understanding the mechanisms behind unexplained abnormal heart rhythms is important for the diagnosis and prevention of arrhythmias. Many studies have investigated these mechanisms at the organ, tissue, cellular and molecular levels. Considerable information is available from tissue-level experiments that investigate local action potential properties and from optical imaging that observes activity propagation at the organ level. By combining those electrophysiological properties, in the present study we developed a simulation model that can help estimate the body surface potentials resulting from a specific electrical activity pattern within the myocardium. Potential uses of our model include: 1) visualization of an entire electrophysiological event, i.e., the surface potentials together with the associated source, which here is optical imaging data; 2) estimation of QT intervals resulting from local action potential property changes; and 3) improvement of defibrillation therapy by determining the optimal timing and location of shocks.
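    The forward step such a model relies on, mapping myocardial source activity (for example, activity derived from optical imaging maps) to body surface potentials, is commonly expressed as a linear lead-field relationship. The snippet below shows only that generic relationship with a placeholder transfer matrix, not the three-dimensional ventricle model described above:

```python
import numpy as np

# phi_body = A @ s : body-surface potentials as a linear combination of cardiac sources.
# A (n_electrodes x n_sources) is the volume-conductor transfer (lead-field) matrix; in a
# real model it comes from torso/ventricle geometry, whereas here it is random filler.
rng = np.random.default_rng(0)
n_electrodes, n_sources, n_samples = 64, 500, 1000
A = rng.normal(size=(n_electrodes, n_sources))    # placeholder transfer matrix
s = rng.normal(size=(n_sources, n_samples))       # source activity over time
phi_body = A @ s                                  # simulated body-surface potentials, one row per electrode
```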

    Speech enhancement using auditory filterbank.

    This thesis presents a novel subband noise reduction technique for speech enhancement, termed Adaptive Subband Wiener Filtering (ASWF), based on a critical-band gammatone filterbank. The ASWF is derived from a generalized Subband Wiener Filtering (SWF) equation and reduces noise according to the estimated signal-to-noise ratio (SNR) in each auditory channel and in each time frame. The design of a subband noise estimator, suitable for some real-life noise environments, is also presented. This denoising technique would be beneficial for auditory-based speech and audio applications, e.g., to enhance the robustness of sound processing in cochlear implants. Comprehensive objective and subjective tests demonstrated that the proposed technique effectively improves the perceptual quality of enhanced speech. The technique offers a time-domain noise reduction scheme using a linear filterbank structure and can be combined with other filterbank algorithms (such as those for speech recognition and coding) as a front-end processing step immediately after the analysis filterbank, to increase the robustness of the respective application. Dept. of Electrical and Computer Engineering. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2005 .G85. Source: Masters Abstracts International, Volume: 44-03, page: 1452. Thesis (M.A.Sc.)--University of Windsor (Canada), 2005
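    The per-channel, per-frame attenuation implied by the ASWF description follows the standard Wiener form G = SNR / (1 + SNR). The snippet below applies that rule given an assumed per-channel noise-power estimate; it is a generic illustration, not the thesis's estimator or filterbank:

```python
import numpy as np

def aswf_gains(frame_power, noise_power, floor=0.05):
    """Wiener-style gain per (channel, frame): G = SNR / (1 + SNR), floored to limit distortion.
    frame_power: (n_channels, n_frames) short-time power of each gammatone channel
    noise_power: (n_channels,) assumed noise power per channel, from some noise estimator"""
    snr = np.maximum(frame_power / noise_power[:, None] - 1.0, 0.0)  # rough a-priori SNR estimate
    return np.maximum(snr / (1.0 + snr), floor)

# Usage: scale each channel's time-domain frame by its gain, then sum the channels to resynthesize.
```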