
    Power Aware Learning for Class AB Analogue VLSI Neural Network

    Recent research into Artificial Neural Networks (ANNs) has highlighted the potential of compact analogue ANN hardware cores in embedded mobile devices, where the power consumption of the ANN hardware is a significant implementation issue. This paper proposes a learning mechanism for low-power class AB analogue ANNs that not only tunes the network for minimum error, but also adaptively learns to reduce power consumption. Our experiments show substantial reductions in the power budget (30% to 50%) for a variety of example networks as a result of this power-aware learning.
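    One plausible reading of such a mechanism, sketched below as a minimal example under our own assumptions (not the paper's actual learning rule): add a power penalty to the training loss so that gradient descent trades task error against consumption. Here the power drawn by a class AB analogue core is approximated by an L1 penalty on the weights; the toy linear network and the trade-off parameter `lam` are illustrative only.

```python
# Minimal sketch of power-aware learning (assumed formulation):
# minimize task error plus a proxy for analogue power consumption.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task standing in for the network's target behaviour.
X = rng.normal(size=(64, 4))
w_true = np.array([0.5, -1.0, 0.25, 0.0])
y = X @ w_true

w = rng.normal(scale=0.1, size=4)
lam, lr = 0.01, 0.05   # power trade-off and learning rate (both assumed)

for _ in range(500):
    err = X @ w - y
    task_grad = X.T @ err / len(X)   # gradient of the mean squared error
    power_grad = lam * np.sign(w)    # gradient of the L1 power proxy
    w -= lr * (task_grad + power_grad)

print("learned weights:", np.round(w, 3))
print("power proxy, sum(|w|):", float(np.sum(np.abs(w))))
```

    Raising `lam` drives more weights toward zero, shrinking the power proxy at some cost in accuracy; this only mirrors the abstract's idea in spirit, not its actual circuit-level power model.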

    Neuromorphic analogue VLSI

    Neuromorphic systems emulate the organization and function of nervous systems. They are usually composed of analogue electronic circuits that are fabricated in the complementary metal-oxide-semiconductor (CMOS) medium using very large-scale integration (VLSI) technology. However, these neuromorphic systems are not another kind of digital computer in which abstract neural networks are simulated symbolically in terms of their mathematical behavior. Instead, they directly embody, in the physics of their CMOS circuits, analogues of the physical processes that underlie the computations of neural systems. The significance of neuromorphic systems is that they offer a method of exploring neural computation in a medium whose physical behavior is analogous to that of biological nervous systems and that operates in real time irrespective of size. The implications of this approach are both scientific and practical. The study of neuromorphic systems provides a bridge between levels of understanding. For example, it provides a link between the physical processes of neurons and their computational significance. In addition, the synthesis of neuromorphic systems transposes our knowledge of neuroscience into practical devices that can interact directly with the real world in the same way that biological nervous systems do.
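    To make the "physics as computation" point concrete, here is a textbook-level sketch (our illustration, not from the abstract): the drain current of a MOS transistor in weak inversion is exponential in its gate voltage, the same functional form as the Boltzmann activation of an ion channel, which is why analogue CMOS can directly embody neural dynamics. All constants are assumed, typical values.

```python
# Illustration: the shared exponential physics of subthreshold CMOS
# and ion-channel activation (textbook-level, values assumed).
import math

def subthreshold_current(v_gs, i0=1e-12, kappa=0.7, u_t=0.025):
    """Weak-inversion drain current: I = I0 * exp(kappa * Vgs / UT).
    i0 (leakage scale), kappa (gate coupling) and u_t (thermal voltage,
    about 25 mV at room temperature) are typical illustrative values."""
    return i0 * math.exp(kappa * v_gs / u_t)

def channel_open_fraction(v_m, v_half=0.0, slope=0.010):
    """Boltzmann activation of an ion channel: the same exponential form."""
    return 1.0 / (1.0 + math.exp(-(v_m - v_half) / slope))

for v in (0.1, 0.2, 0.3):
    print(f"Vgs={v:.1f} V -> I={subthreshold_current(v):.2e} A, "
          f"open fraction at Vm={v:.1f} V: {channel_open_fraction(v):.3f}")
```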

    Memory and information processing in neuromorphic systems

    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power by increasing the number of cores within a digital processor, neuromorphic engineers and scientists can complement this effort by building processor architectures in which memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems that implement more biologically realistic models of neurons and synapses, together with a suite of adaptation and learning mechanisms analogous to those found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems.
    Comment: Submitted to Proceedings of the IEEE; a review of recently proposed neuromorphic computing platforms and systems.
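    A minimal sketch of the "memory co-located with processing" idea (our illustration; no specific architecture from the survey is implied): each synapse stores its own weight and updates it with a purely local rule, so computation never has to shuttle state across a shared memory bus between a separate memory and a separate processor.

```python
# Sketch: leaky integrate-and-fire neurons with synapse-local weight
# storage and a local Hebbian-style update (illustrative model only).
import numpy as np

rng = np.random.default_rng(1)

n_pre, n_post = 8, 4
w = rng.uniform(0.0, 0.5, size=(n_pre, n_post))  # weights live "at" the synapses
v = np.zeros(n_post)                             # membrane potentials
tau, v_th, eta = 0.9, 1.0, 0.01                  # leak, threshold, learning rate

for step in range(100):
    pre_spikes = (rng.random(n_pre) < 0.2).astype(float)  # Poisson-like input
    v = tau * v + pre_spikes @ w                          # leaky integration
    post_spikes = (v >= v_th).astype(float)
    v[post_spikes > 0] = 0.0                              # reset after a spike
    # Local update: each synapse only sees its own pre/post spike pair.
    w += eta * np.outer(pre_spikes, post_spikes)
    w = np.clip(w, 0.0, 1.0)

print("mean weight after local learning:", round(float(w.mean()), 3))
```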

    VLSI neural networks for computer vision


    Pulse stream VLSI circuits and techniques for the implementation of neural networks


    Programmable neural logic

    Circuits of threshold elements (Boolean-input, Boolean-output neurons) have been shown to be surprisingly powerful. Useful functions such as XOR, ADD and MULTIPLY can be implemented by such circuits more efficiently than by traditional AND/OR circuits. In view of this, we have designed and built a programmable threshold element. The weights are stored on polysilicon floating gates, providing long-term retention without refresh. A weight value is increased using tunneling and decreased via hot-electron injection. Each weight is stored on a single transistor, allowing the development of dense arrays of threshold elements. A 16-input programmable neuron was fabricated in the standard 2 μm double-poly analog process available from MOSIS. We also designed and fabricated the multiple-threshold element introduced in [5]. It has the advantage of reducing the layout area from O(n^2) to O(n), where n is the number of variables, for a broad class of Boolean functions, in particular symmetric Boolean functions such as PARITY. A long-term goal of this research is to incorporate programmable single- and multiple-threshold elements as building blocks in field-programmable gate arrays.
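    As a behavioural sketch of these elements (our software model; the fabricated parts hold their weights on floating gates): a threshold element fires when the weighted Boolean input sum reaches its threshold, and a multiple-threshold element toggles its output at each threshold crossing, which is what makes symmetric functions such as PARITY cheap. The function names below are ours.

```python
# Behavioural model (illustrative) of single- and multiple-threshold elements.
def threshold(inputs, weights, t):
    """Threshold element: fires iff the weighted Boolean sum reaches t."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= t)

def xor(x1, x2):
    # Two-level threshold circuit for XOR: "at least one" AND NOT "both".
    g_or = threshold([x1, x2], [1, 1], 1)
    g_and = threshold([x1, x2], [1, 1], 2)
    return threshold([g_or, g_and], [1, -1], 1)

def multi_threshold(inputs, weights, thresholds):
    """Multiple-threshold element: the output toggles each time the
    weighted sum crosses another threshold."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return sum(1 for t in thresholds if s >= t) % 2

def parity(bits):
    # Unit weights with thresholds 1..n realize PARITY in O(n) area.
    n = len(bits)
    return multi_threshold(bits, [1] * n, range(1, n + 1))

for a in (0, 1):
    for b in (0, 1):
        assert xor(a, b) == a ^ b
assert parity([1, 0, 1, 1]) == 1
print("XOR and PARITY via threshold elements verified")
```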

    Hardware Learning in Analogue VLSI Neural Networks
