Optimal neural computations require analog processors
This paper discusses limitations of hardware implementations of neural networks. The authors start by presenting neural structures and their biological inspirations, noting the simplifications that lead to artificial neural networks. The focus then shifts to hardware-imposed constraints. They present recent results for three alternatives for parallel implementation of neural networks: digital circuits, threshold gate circuits, and analog circuits. The area and delay are related to the neurons' fan-in and to the precision of their synaptic weights. The main conclusion is that hardware-efficient solutions require analog computation, and two alternatives are suggested: (i) cope with the limitations imposed by silicon by speeding up the computation of the elementary silicon neurons; (ii) investigate solutions that would allow the use of the third dimension (e.g. optical interconnections).
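As a minimal illustration of the threshold-gate model this abstract compares against (a sketch with illustrative values, not code from the paper): a threshold gate fires when a weighted sum of its inputs reaches a threshold, its fan-in is the number of inputs, and limited synaptic-weight precision can be mimicked by fixed-point quantization.

```python
# Threshold-gate neuron: output = 1 iff the weighted input sum reaches theta.
# Weights are quantized to a few fractional bits to mimic limited synaptic
# precision; all values here are illustrative, not from the paper.

def quantize(w, bits):
    """Round a weight to a fixed-point value with `bits` fractional bits."""
    step = 2.0 ** -bits
    return round(w / step) * step

def threshold_gate(inputs, weights, theta, bits=4):
    ws = [quantize(w, bits) for w in weights]
    s = sum(x * w for x, w in zip(inputs, ws))
    return 1 if s >= theta else 0

# Fan-in = 3: with 4-bit weights, 0.30 -> 0.3125 and 0.40 -> 0.375.
print(threshold_gate([1, 0, 1], [0.30, 0.55, 0.40], theta=0.5))  # -> 1
print(threshold_gate([1, 0, 0], [0.30, 0.55, 0.40], theta=0.5))  # -> 0
```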
Electronic neuroprocessors
The JPL Center for Space Microelectronics Technology (CSMT) is actively pursuing research in neural network theory, algorithms, and electronic as well as optoelectronic neural network hardware implementations, to explore their strengths and application potential for a variety of NASA, DoD, and commercial problems where conventional computing techniques are extremely time-consuming, cumbersome, or simply non-existent. An overview of the JPL electronic neural network hardware development activities and some of the striking applications of the JPL electronic neuroprocessors is presented.
Neuro-memristive Circuits for Edge Computing: A review
The volume, veracity, variability, and velocity of data produced from the
ever-increasing network of sensors connected to Internet pose challenges for
power management, scalability, and sustainability of cloud computing
infrastructure. Increasing the data processing capability of edge computing
devices at lower power requirements can reduce several overheads for cloud
computing solutions. This paper provides a review of neuromorphic
CMOS-memristive architectures that can be integrated into edge computing
devices. We discuss why neuromorphic architectures are useful for edge
devices and present the advantages, drawbacks, and open problems in the field of
neuro-memristive circuits for edge computing.
Analog readout for optical reservoir computers
Reservoir computing is a new, powerful and flexible machine learning
technique that is easily implemented in hardware. Recently, by using a
time-multiplexed architecture, hardware reservoir computers have reached
performance comparable to digital implementations. Operating speeds allowing
for real-time information processing have been reached using optoelectronic
systems. At present the main performance bottleneck is the readout layer, which
uses slow, digital postprocessing. We have designed an analog readout suitable
for time-multiplexed optoelectronic reservoir computers, capable of working in
real time. The readout has been built and tested experimentally on a standard
benchmark task. Its performance is better than non-reservoir methods, with
ample room for further improvement. The present work thereby overcomes one of
the major limitations for the future development of hardware reservoir
computers. (Comment: to appear in NIPS 201)
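To make the role of the readout layer concrete (a toy sketch, not the authors' optoelectronic setup): a reservoir computer drives a fixed random recurrent network with the input, and the readout is just a linear combination of the reservoir states, trained offline by ridge regression. All sizes, dynamics, and the task below are illustrative assumptions.

```python
import numpy as np

# Toy reservoir computer with a linear readout layer -- the component the
# paper moves from slow digital postprocessing to analog hardware.
rng = np.random.default_rng(0)
N, T = 50, 500
W_in = rng.uniform(-0.5, 0.5, N)             # input weights
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # spectral radius < 1 for stability

u = rng.uniform(-1, 1, T)                    # random input signal
target = np.roll(u, 3)                       # toy task: recall input 3 steps back

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])         # reservoir state update
    states[t] = x

# Readout weights via ridge regression over the collected states.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N), states.T @ target)
pred = states @ W_out
print(np.sqrt(np.mean((pred - target) ** 2)))  # training RMSE on the toy task
```

The point of the sketch is that the readout is a single weighted sum per time step, which is why it is a natural candidate for an analog implementation.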
Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective
On metrics of density and power efficiency, neuromorphic technologies have
the potential to surpass mainstream computing technologies in tasks where
real-time functionality, adaptability, and autonomy are essential. While
algorithmic advances in neuromorphic computing are proceeding successfully, the
potential of memristors to improve neuromorphic computing has not yet borne
fruit, primarily because they are often used as a drop-in replacement for
conventional memory. However, interdisciplinary approaches anchored in machine
learning theory suggest that multifactor plasticity rules matching neural and
synaptic dynamics to the device capabilities can take better advantage of
memristor dynamics and their stochasticity. Furthermore, such plasticity rules
generally show much higher performance than classical Spike-Timing-Dependent
Plasticity (STDP) rules. This chapter reviews recent developments in learning
with spiking neural network models and their possible implementation with
memristor-based hardware.
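For reference, the classical pair-based STDP rule that serves as the baseline here can be sketched in a few lines: the synapse is potentiated when the presynaptic spike precedes the postsynaptic one, and depressed otherwise, with exponentially decaying magnitude. The time constants and amplitudes below are common illustrative defaults, not values from the chapter.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms).

    Pair-based STDP: potentiation if the pre spike precedes the post spike,
    depression if it follows it; magnitude decays exponentially with |dt|.
    """
    dt = t_post - t_pre
    if dt > 0:       # pre before post -> potentiation
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:     # post before pre -> depression
        return -a_minus * math.exp(dt / tau)
    return 0.0

print(stdp_dw(10.0, 15.0))   # pre 5 ms before post: positive weight change
print(stdp_dw(15.0, 10.0))   # post 5 ms before pre: negative weight change
```

Multifactor rules of the kind the chapter advocates add further terms (e.g. neuromodulatory or device-state factors) on top of this pairwise dependence.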
Artificial neural network models for digital implementation
The last decade has witnessed a revival and new surge in the field of artificial neural network research. This is a thoroughly interdisciplinary area, covering neuroscience, physics, mathematics, economics, and electronics. Although artificial neural networks have found diverse applications in pattern recognition, signal processing, communications, control systems, and optimization, among others, this is still a research field with many open problems in theory, applications, and implementations. Compared with the development of neural network theories, hardware implementation has lagged behind. To take full advantage of neural networks, dedicated hardware implementations are definitely needed. Today, harnessing VLSI technology to produce efficient implementations of neural networks may be the key to the future growth and ultimate success of neural network techniques. This dissertation deals with the development of neural network models suitable for digital VLSI implementation. Since state-of-the-art VLSI technology is basically a digital implementation medium, which offers many advantages over its analog counterpart, artificial neural networks must be adapted to an all-digital model in order to benefit from these advanced technologies. In this dissertation, new models of multilayer feedforward neural networks with single-term powers-of-two weights, quantized neurons, and simplified activation functions are proposed to facilitate digital hardware implementation. Dedicated training algorithms and design procedures for these models are also developed. To demonstrate the feasibility of the proposed models, performance analysis and simulation results are provided, and VHDL and FPGA designs are implemented.
It has been shown that these proposed models can achieve almost the same performance as the original multilayer feedforward networks while obtaining significant improvements in digital hardware implementation in terms of silicon area and operation speed. By using the models developed in this dissertation, a digital implementation approach to multilayer feedforward neural networks becomes very attractive.
Dept. of Electrical and Computer Engineering. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis1996 .T355. Source: Dissertation Abstracts International, Volume: 59-08, Section: B, page: 4353. Adviser: H. K. Kwan. Thesis (Ph.D.)--University of Windsor (Canada), 1996.
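The single-term power-of-two weight scheme the dissertation proposes can be sketched as follows (an illustrative reconstruction, not the dissertation's code): each weight is snapped to a signed power of two, so every multiplication in the forward pass reduces to a bit shift in digital hardware. The exponent range below is an assumed example.

```python
import math

def pow2_quantize(w, k_min=-4, k_max=0):
    """Snap w to a signed power of two: sign(w) * 2**k, k in [k_min, k_max].

    Rounding is done on the exponent (nearest in the log domain); the range
    [k_min, k_max] stands in for a fixed hardware shift range and is an
    illustrative choice, not a value from the dissertation.
    """
    if w == 0:
        return 0.0
    k = round(math.log2(abs(w)))
    k = max(k_min, min(k_max, k))
    return math.copysign(2.0 ** k, w)

print(pow2_quantize(0.30))   # -> 0.25  (2**-2)
print(pow2_quantize(-0.70))  # -> -0.5  (-2**-1)
print(pow2_quantize(10.0))   # -> 1.0   (clipped at k_max = 0)
```

With weights of this form, `x * w` becomes `x >> 2` (plus a sign flip when needed) in fixed-point hardware, which is the source of the area and speed gains the abstract reports.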