335 research outputs found

    VLSI neural networks for computer vision


    Liquid State Machine with Dendritically Enhanced Readout for Low-power, Neuromorphic VLSI Implementations

    In this paper, we describe a new neuro-inspired, hardware-friendly readout stage for the liquid state machine (LSM), a popular model for reservoir computing. Compared to the parallel perceptron architecture trained by the p-delta algorithm, the state of the art in readout-stage performance, our readout architecture and learning algorithm attain better performance with significantly fewer synaptic resources, making them attractive for VLSI implementation. Inspired by the nonlinear properties of dendrites in biological neurons, our readout stage incorporates neurons having multiple dendrites with a lumped nonlinearity. The number of synaptic connections on each branch is significantly lower than the total number of connections from the liquid neurons, and the learning algorithm tries to find the best 'combination' of input connections on each branch to reduce the error. Hence, the learning involves network rewiring (NRW) of the readout network, similar to the structural plasticity observed in its biological counterparts. We show that, compared to a single perceptron using analog weights, this readout architecture can attain, even using the same number of binary-valued synapses, up to 3.3 times lower error on a two-class spike train classification problem and 2.4 times lower error on an input rate approximation task. Even with 60 times more synapses, a group of 60 parallel perceptrons cannot attain the performance of the proposed dendritically enhanced readout. An additional advantage of this method for hardware implementations is that the 'choice' of connectivity can easily be implemented by exploiting the address event representation (AER) protocols commonly used in current neuromorphic systems, where the connection matrix is stored in memory. Also, due to the use of binary synapses, our proposed method is more robust against statistical variations. Comment: 14 pages, 19 figures
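
    To make the branch-wise rewiring concrete, the following is a minimal Python sketch of the idea only, not the paper's implementation: the squared-sum branch nonlinearity, the greedy single-swap rule, and the names `dendritic_readout` and `nrw_step` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dendritic_readout(x, branches, nonlin=np.square):
    # Sum over branches of a lumped nonlinearity applied to the sum of the
    # liquid activations each branch is wired to; connections are binary
    # (present or absent), so a branch is just an index set into x.
    return sum(nonlin(x[idx].sum()) for idx in branches)

def readout_error(X, y, branches, nonlin=np.square):
    preds = np.array([dendritic_readout(x, branches, nonlin) for x in X])
    return np.mean((preds - y) ** 2)

def nrw_step(X, y, branches, n_liquid, nonlin=np.square):
    # One greedy network-rewiring step (illustrative, not the paper's exact
    # rule): propose replacing a single connection on a single branch with a
    # different liquid neuron and keep the swap only if the error drops.
    base = readout_error(X, y, branches, nonlin)
    b = rng.integers(len(branches))        # branch to rewire
    s = rng.integers(len(branches[b]))     # synapse slot on that branch
    trial = [idx.copy() for idx in branches]
    trial[b][s] = rng.integers(n_liquid)   # proposed new source neuron
    return trial if readout_error(X, y, trial, nonlin) < base else branches

# Toy usage: 100 liquid neurons, 4 branches of 8 binary synapses each.
n_liquid = 100
X, y = rng.standard_normal((50, n_liquid)), rng.standard_normal(50)
branches = [rng.choice(n_liquid, size=8, replace=False) for _ in range(4)]
for _ in range(200):
    branches = nrw_step(X, y, branches, n_liquid)
```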

    Efficient audio signal processing for embedded systems

    We investigated two design strategies that allow audio signals to be processed efficiently on embedded systems such as mobile phones and portable electronics. In the first strategy, we exploit properties of the human auditory system to process audio signals. We designed a sound enhancement algorithm to make piezoelectric loudspeakers sound "richer" and "fuller" using a combination of bass extension and dynamic range compression. We also developed an audio energy reduction algorithm for loudspeaker power management that suppresses signal energy below the masking threshold. In the second strategy, we use low-power analog circuits to process the signal before digitizing it. We designed an analog front-end for sound detection and implemented it on a field programmable analog array (FPAA). The sound classifier front-end can be used in a wide range of applications because programmable floating-gate transistors are employed to store the classifier weights. Moreover, we incorporated a feature selection algorithm to simplify the analog front-end: the machine learning algorithm AdaBoost is used to select the most relevant features for a particular sound detection application. We also designed the circuits to implement the AdaBoost-based analog classifier.

    Ph.D. Committee Chair: Anderson, David; Committee Member: Hasler, Jennifer; Committee Member: Hunt, William; Committee Member: Lanterman, Aaron; Committee Member: Minch, Bradley
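
    As a rough illustration of the AdaBoost-based feature selection described above, here is a short sketch under assumptions (the one-feature decision stumps, the fixed round count, and the name `adaboost_select_features` are illustrative, not the thesis code): each boosting round keeps the single feature whose stump best separates the re-weighted training samples.

```python
import numpy as np

def adaboost_select_features(X, y, n_rounds=5):
    # Each round fits one-feature decision stumps and keeps the (feature,
    # threshold, polarity) with the lowest weighted error, so the chosen
    # features are the most relevant for the detection task. y is in {-1, +1}.
    n, d = X.shape
    w = np.full(n, 1.0 / n)                    # sample weights
    selected = []
    for _ in range(n_rounds):
        best = None
        for f in range(d):
            for thr in np.unique(X[:, f]):
                for pol in (+1.0, -1.0):
                    pred = pol * np.sign(X[:, f] - thr)
                    pred[pred == 0] = pol      # points on the threshold go to one side
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, pred)
        err, f, pred = best
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)  # stump weight
        w *= np.exp(-alpha * y * pred)         # up-weight misclassified samples
        w /= w.sum()
        selected.append((f, alpha))
    return selected                            # (feature index, importance) pairs
```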

    Implementing radial basis function neural networks in pulsed analogue VLSI


    Design of Building Blocks for Trit Algorithm

    This thesis attempts to design the building blocks for the TRIT algorithm. PSPICE was used for simulation. The building blocks were laid out in Magic.

    Electrical Engineering

    Configurable Low Power Analog Multilayer Perceptron

    A configurable, low-power analog implementation of a multilayer perceptron (MLP) is presented in this work. It features a highly programmable system that allows the user to create an MLP neural network design of their choosing. In addition to this configurability, the neural network provides low-power operation via the analog circuitry in its neurons. The main MLP system is made up of 12 neurons that can be configured into any number of layers and neurons per layer until all available resources are utilized. The MLP network is fabricated in a standard 0.13 μm CMOS process, occupying approximately 1 mm² of on-chip area. The MLP system is analyzed in several different configurations, all achieving a figure of merit greater than 1 tera-operation per second per watt. This work offers a high-speed, low-power, and scalable alternative to configurable digital neural networks.
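
    For a rough sense of what "configurable" means here, the sketch below partitions a fixed pool of 12 neurons into layers in software; the function names, the tanh activation, and the neuron-budget check are illustrative assumptions rather than a description of the chip.

```python
import numpy as np

def configure_mlp(layer_sizes, n_neurons=12, rng=np.random.default_rng(0)):
    # Software analogue of the chip's configurability: allocate the fixed pool
    # of neurons (12 on-chip) across the requested hidden and output layers.
    if sum(layer_sizes[1:]) > n_neurons:
        raise ValueError("configuration needs more neurons than are available")
    return [rng.standard_normal((n_out, n_in + 1))   # +1 column for the bias
            for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def mlp_forward(x, weights, act=np.tanh):
    # Each layer computes a weighted sum (plus bias) followed by a saturating
    # nonlinearity, mirroring what an analog neuron computes.
    for W in weights:
        x = act(W @ np.append(x, 1.0))
    return x

# Example: 4 inputs -> 8 hidden neurons -> 2 outputs (uses 10 of the 12 neurons).
weights = configure_mlp([4, 8, 2])
print(mlp_forward(np.array([0.1, -0.3, 0.5, 0.2]), weights))
```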

    Hardware neural systems for applications: a pulsed analog approach


    Neural networks : analog VLSI implementation and learning algorithms
