MDAC synapse for analog neural networks
Efficient weight storage and multiplication are important design challenges that must be addressed in analog neural network implementations. Many schemes that treat storage and multiplication separately have previously been reported for implementing synapses. We present a novel synapse circuit that integrates weight storage and multiplication into a single, compact multiplying digital-to-analog converter (MDAC). The circuit has a small layout area (5400 μm² in a 1.5-μm process) and exhibits good linearity over its entire input range. We have fabricated several synapses and characterized their responses. Average maximum INL and DNL values of 0.2 LSB and 0.4 LSB, respectively, have been measured. We also report on the performance of an analog recurrent neural network that uses these new synapses.
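A minimal behavioral sketch in Python of the idea (not the fabricated circuit; the 6-bit signed weight code, the toy nonlinearity, and the endpoint-fit INL/DNL extraction are assumptions for illustration): an MDAC synapse multiplies an analog input by a stored digital weight, and INL/DNL are read off its transfer curve.

```python
import numpy as np

N_BITS = 6                         # hypothetical weight resolution
FS = 2 ** (N_BITS - 1)             # full scale for a signed weight code

def mdac_synapse(v_in, code):
    """Ideal MDAC behavior: output = analog input x stored digital weight."""
    assert -FS <= code < FS
    return v_in * code / FS

def inl_dnl(measured):
    """Endpoint-fit INL and DNL, in LSBs, from one output level per code."""
    n = len(measured)
    step = (measured[-1] - measured[0]) / (n - 1)      # ideal (endpoint) step
    ideal = measured[0] + np.arange(n) * step
    inl = (measured - ideal) / step                    # deviation of each level
    dnl = (np.diff(measured) - step) / step            # deviation of each step
    return np.max(np.abs(inl)), np.max(np.abs(dnl))

codes = np.arange(-FS, FS)
out = np.array([mdac_synapse(1.0, c) for c in codes])
out += 0.002 * np.sin(codes / 5.0)                     # toy nonlinearity
print("max |INL|, |DNL| in LSB:", inl_dnl(out))
```

Here INL is each output level's deviation from the endpoint-fit line and DNL is each step's deviation from the ideal step, both in LSBs, which is how figures like the reported 0.2 LSB and 0.4 LSB are typically defined.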
Proposal for a dedicated-hardware implementation of competitive neural networks using analog integrated circuit techniques
This work proposes a technique for implementing the basic structures of a Competitive Neural Network in hardware, based on analog circuit techniques.
The proposal addresses one of the most interesting classes of Artificial Neural Networks (ANNs): competitive neural networks, which are strongly biologically inspired. The fundamental equations that describe their behavior were derived from interdisciplinary studies, most involving neurophysiological observations; the study of the biological neuron, for example, leads to the classical membrane equation.
The implementation technique is based on analog methods, which yield a more compact design and permit real-time processing, since the analog computational circuit updates the states of all neurons, interconnected in parallel, simultaneously and continuously.
It is shown that the fundamental equations governing competitive neural networks map directly onto basic electronic components, and can therefore be implemented with the simple components to which they correspond.
To this end, software simulations illustrate the behavior of the fundamental equations of this class of network, and that behavior is compared with electrical simulations of the equivalent circuits derived from the same equations. Both simulations also exhibit one of the most important characteristics of competitive network models, known as Short-Term Memory (STM).
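To make the STM property concrete, here is a minimal Python sketch of the classical shunting membrane equation with on-center/off-surround competition (parameters are assumed values; the work itself uses HSPICE on the equivalent circuits). With linear feedback, the total activity settles near B − A after the input is withdrawn, and the relative input pattern is retained.

```python
import numpy as np

A, B = 0.2, 1.0        # decay rate and upper saturation (assumed values)

def simulate(I, T=30.0, dt=1e-3, t_off=5.0):
    """Euler integration of the shunting equation with linear feedback f(x) = x:
    dx_i/dt = -A*x_i + (B - x_i)*(I_i + x_i) - x_i * sum_{j != i} x_j"""
    x = np.zeros_like(I)
    for step in range(int(T / dt)):
        inp = I if step * dt < t_off else np.zeros_like(I)  # input removed at t_off
        x += dt * (-A * x + (B - x) * (inp + x) - x * (x.sum() - x))
    return x

x_final = simulate(np.array([0.30, 0.50, 0.45]))
print(x_final, x_final.sum())   # total activity settles near B - A = 0.8, and the
                                # relative input pattern is retained: that is STM
```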
Finally, a typical application in pattern clustering using synaptic weights is presented to demonstrate an implementation based on the techniques described throughout the work. The application is demonstrated through an electrical simulation carried out with the HSPICE simulator and confirms the correct performance of the proposal.
Multiplexed gradient descent: Fast online training of modern datasets on hardware neural networks without backpropagation
We present multiplexed gradient descent (MGD), a gradient descent framework
designed to easily train analog or digital neural networks in hardware. MGD
utilizes zero-order optimization techniques for online training of hardware
neural networks. We demonstrate its ability to train neural networks on modern
machine learning datasets, including CIFAR-10 and Fashion-MNIST, and compare
its performance to backpropagation. Assuming realistic timescales and hardware
parameters, our results indicate that these optimization techniques can train a
network on emerging hardware platforms orders of magnitude faster than the
wall-clock time of training via backpropagation on a standard GPU, even in the
presence of imperfect weight updates or device-to-device variations in the
hardware. We additionally describe how it can be applied to existing hardware
as part of chip-in-the-loop training, or integrated directly at the hardware
level. Crucially, the MGD framework is highly flexible, and its gradient
descent process can be optimized to compensate for specific hardware
limitations such as slow parameter-update speeds or limited input bandwidth.
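As a concrete illustration, the following Python sketch implements the simultaneous-perturbation style of zero-order update that MGD-type training builds on (the toy quadratic cost stands in for an on-hardware loss measurement, and the step sizes are assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(w):
    """Stand-in for an on-hardware loss measurement (a toy quadratic here)."""
    return float(np.sum((w - np.array([1.0, -2.0, 0.5])) ** 2))

def mgd_step(w, eta=0.05, delta=1e-2):
    """Perturb every parameter simultaneously; the resulting cost change
    yields an estimate of the gradient without any backpropagation."""
    p = rng.choice([-1.0, 1.0], size=w.shape)   # one random sign per weight
    dC = cost(w + delta * p) - cost(w)          # a single extra cost evaluation
    return w - eta * (dC / delta) * p           # descend along the estimate

w = np.zeros(3)
for _ in range(2000):
    w = mgd_step(w)
print(w)   # approaches [1.0, -2.0, 0.5] using only cost measurements
```

Because each update needs only cost evaluations, the same loop applies whether the parameters live in software, on a chip in the loop, or directly in analog hardware.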
An Investigation into Neuromorphic ICs using Memristor-CMOS Hybrid Circuits
The memristance of a memristor depends on the amount of charge that has flowed through it, and when the current stops, the device retains its state. Memristors are therefore well suited to implementing memory units. They find wide application in neuromorphic circuits because they make it possible to couple memory and processing, in contrast to traditional von Neumann digital architectures, where the two are separate. Neural networks have a layered structure in which information passes from one layer to the next, and each layer offers a high degree of parallelism. CMOS-memristor neural network accelerators exploit this parallelism, together with analog computation, to speed up neural networks. In this project we first investigated the current state of the art in memristor programming circuits, simulating various programming circuits and basic neuromorphic circuits. The next phase revolved around designing basic building blocks from which neural networks can be composed: a memristor-bridge synaptic weighting block and an operational-transconductance-amplifier (OTA) based summing block were designed first, followed by activation-function blocks that introduce controlled non-linearity. Blocks for a basic rectified linear unit and a novel implementation of the hyperbolic tangent (tanh) function have been proposed. An artificial neural network was designed from these blocks to validate and test their performance. We also used the same fundamental blocks to build the basic layers of Convolutional Neural Networks, which are heavily used in image-processing applications; the core convolutional block was designed and exercised as an image-processing kernel to test its performance.
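A behavioral Python sketch of the building blocks named above, assuming the common memristor-bridge topology in which four memristances form two voltage dividers whose difference gives a signed weight (the resistance values and the ReLU activation are illustrative, not taken from the thesis):

```python
def bridge_weight(M1, M2, M3, M4):
    """Signed synaptic weight of a memristor bridge, in (-1, 1)."""
    return M2 / (M1 + M2) - M4 / (M3 + M4)

def synapse(v_in, M1, M2, M3, M4):
    """Differential bridge output: V_out = w * V_in."""
    return bridge_weight(M1, M2, M3, M4) * v_in

def neuron(v_ins, bridges, act=lambda s: max(0.0, s)):
    """Sum the weighted inputs (the role of the OTA summing block) and apply
    an activation (here ReLU, one of the activation blocks described above)."""
    s = sum(synapse(v, *M) for v, M in zip(v_ins, bridges))
    return act(s)

# Example: two inputs, one positive and one negative effective weight (kOhm).
bridges = [(10.0, 30.0, 20.0, 20.0),   # w = 0.75 - 0.50 = +0.25
           (30.0, 10.0, 20.0, 20.0)]   # w = 0.25 - 0.50 = -0.25
print(neuron([1.0, 0.5], bridges))     # ReLU(0.25*1.0 - 0.25*0.5) = 0.125
```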
Analogue neuromorphic systems.
This thesis addresses a new area of science and technology, that of neuromorphic
systems, namely the problems and prospects of analogue neuromorphic systems. The
subject is divided into three chapters.
Chapter 1 is an introduction. It formulates the emerging problem of building highly computationally costly systems for nonlinear information processing (such as artificial neural networks and artificial intelligence systems), and shows that analogue technology could make a vital contribution to the creation of such systems. The basic principles for constructing analogue neuromorphic systems are formulated, and the importance of the principle of orthogonality for future highly efficient, complex information-processing systems is emphasised.
Chapter 2 reviews the basics of neural and neuromorphic systems and surveys the present situation in this field of research, covering both the experimental and the theoretical knowledge gained to date. The chapter provides the background needed for a correct interpretation of the results reported in Chapter 3 and for a realistic decision on the direction of future work.
Chapter 3 describes my own experimental and computational results within the framework of the subject, obtained at De Montfort University. These include the building of (i) an analogue polynomial approximator/interpolator/extrapolator, (ii) a synthesiser of orthogonal functions, (iii) an analogue real-time video filter (performing homomorphic filtering), (iv) an adaptive polynomial compensator for the geometrical distortions of CRT monitors, and (v) an analogue parallel-learning neural network (backpropagation algorithm).
Thus, this thesis makes a dual contribution to the chosen field: it summarises present knowledge on the possibility of utilising analogue technology in current and future computational systems, and it reports new results within the framework of the subject. The main conclusion is that, owing to their promising power characteristics, small size, and high tolerance to degradation, analogue neuromorphic systems will play an increasingly important role in future computational systems (in particular, in systems of artificial intelligence).
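As a small illustration of the orthogonality principle emphasised above (illustrative Python, not the thesis hardware): with an orthogonal basis, each coefficient of a polynomial approximation is an independent projection, so each coefficient stage of an analogue synthesiser can be tuned without disturbing the others.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 2001)
target = np.exp(x)                       # function to approximate

# Legendre polynomials are orthogonal on [-1, 1]; dense uniform samples
# approximate that continuous inner product well.
basis = [np.polynomial.legendre.Legendre.basis(k)(x) for k in range(5)]

# Orthogonality => each coefficient is its own projection; there is no joint
# solve, and re-tuning one coefficient leaves the others untouched.
coeffs = [(target @ p) / (p @ p) for p in basis]
approx = sum(c * p for c, p in zip(coeffs, basis))
print("max |error|:", np.max(np.abs(target - approx)))
```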
Enabling high-performance, mixed-signal approximate computing
For decades, the semiconductor industry enjoyed exponential improvements in microprocessor power and performance through the device scaling of successive technology generations. Scaling limitations at sub-micron technologies, however, have ceased to provide these historical performance improvements within a limited power budget. While device scaling provides a larger number of transistors per chip, for the same chip area a growing percentage of the chip must be powered off at any given time due to power constraints. The architecture community has therefore focused on energy-efficient design and is looking to specialized hardware for gains in performance. This focus on energy efficiency, along with increasingly less reliable transistors at scaled geometries, has led to research in approximate computing, where accuracy is traded for energy efficiency when precise computation is not required. There is a growing body of approximation-tolerant applications that, for example, compute on noisy or incomplete data, such as real-world sensor inputs, or make approximations to reduce the computational load of analyzing cumbersome data sets. These applications span domains such as machine learning, image processing, robotics, and financial analysis, among others.
Since the advent of the modern processor, computing models have largely presumed the attribute of accuracy. A willingness to relax accuracy requirements, however, with the goal of gaining energy efficiency, warrants a re-investigation of the potential of analog computing. Analog hardware offers the opportunity for fast, low-power computation, but it presents challenges in the form of accuracy. Where analog compute blocks have been applied to solve fixed-function problems, general-purpose computing has relied on digital hardware implementations that provide generality and programmability. The work presented in this thesis aims to answer the following questions: Can analog circuits be successfully integrated into general-purpose computing to provide performance and energy savings? And what is required to address the historical analog challenges of inaccuracy, limited programmability, and a lack of generality to enable such an approach?
This thesis investigates a neural approach as a means of addressing those challenges and enabling the use of analog circuits in general-purpose, high-performance computing. The first piece of the work applies analog circuits at the microarchitecture level in the form of an analog neural branch predictor. Branch prediction tolerates imprecision, as roll-back mechanisms correct mispredictions and application-level accuracy remains unaffected. We show that analog circuits enable the implementation of a highly accurate neural prediction algorithm that is infeasible to implement in the digital domain. The second piece presents a neural accelerator that targets approximation-tolerant code. Analog neural acceleration provides an application speedup of 3.3x and energy savings of 12.1x, with a quality loss below 10% for all but one approximation-tolerant benchmark. These results show that, using a neural approach, analog circuits can be applied to provide performance and energy efficiency in high-performance, general-purpose computing.
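For context, the style of neural prediction algorithm usually meant here is the perceptron branch predictor, whose per-branch dot product is exactly what analog summation makes cheap. A minimal Python sketch follows (the table size, history length, and threshold are assumptions, with THETA following the commonly cited rule of roughly 1.93h + 14):

```python
import numpy as np

HIST_LEN, N_ROWS = 16, 128
THETA = 45                                # ~1.93*HIST_LEN + 14, a common choice

weights = np.zeros((N_ROWS, HIST_LEN + 1), dtype=int)
history = np.ones(HIST_LEN, dtype=int)    # global branch history, +/-1 encoded

def predict(pc):
    w = weights[pc % N_ROWS]
    y = w[0] + int(w[1:] @ history)       # the dot product analog cores sum as currents
    return y, y >= 0                      # predict taken if y >= 0

def update(pc, taken):
    """Train on a misprediction, or when confidence |y| is below THETA."""
    global history
    y, pred = predict(pc)
    t = 1 if taken else -1
    if pred != taken or abs(y) <= THETA:
        weights[pc % N_ROWS][0] += t
        weights[pc % N_ROWS][1:] += t * history
    history = np.concatenate([history[1:], [t]])
```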
Overcoming Noise and Variations In Low-Precision Neural Networks
This work explores the impact of various design and training choices on the resilience of a neural network subjected to noise and/or device variations. Simulations were performed under the expectation that the network would be implemented on analog hardware, where random noise arises within the circuit and device characteristics vary between fabricated parts. The results show how noise can be added during the training process to reduce the impact of post-training noise. Architectural choices for the neural network also directly affect the performance variation between devices: the simulated networks were more robust to noise with a minimal architecture using fewer layers, and when more neurons are needed for better fitting, networks with more neurons in shallow layers and fewer in deeper layers closer to the output tend to perform better. The work also demonstrates that activation functions with lower slopes do a better job of suppressing noise in the network, and that accuracy can be made more consistent by introducing sparsity. To that end, different methods for generating sparse architectures for smaller neural networks are evaluated, and a new method is proposed that consistently outperforms the most common methods used in larger, deeper networks.
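A minimal Python sketch of the central idea, training-time noise injection (the Gaussian weight-noise model, the single low-slope tanh layer, and all hyperparameters are assumptions for illustration, not the thesis setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task through a single low-slope tanh layer; sigma models
# random analog weight perturbations.
X = rng.standard_normal((256, 8))
Y = np.tanh(0.5 * (X @ rng.standard_normal((8, 1))))

def train(sigma_train, steps=3000, lr=0.5):
    """Gradient descent with fresh weight noise injected on every forward pass."""
    w = np.zeros((8, 1))
    for _ in range(steps):
        w_n = w + sigma_train * rng.standard_normal(w.shape)  # injected noise
        z = 0.5 * (X @ w_n)
        err = np.tanh(z) - Y
        w -= lr * 0.5 * X.T @ (err * (1.0 - np.tanh(z) ** 2)) / len(X)
    return w

def mse_under_noise(w, sigma_dev=0.1, trials=200):
    """Average test error across simulated noisy device instances."""
    total = 0.0
    for _ in range(trials):
        w_n = w + sigma_dev * rng.standard_normal(w.shape)    # deployment noise
        total += np.mean((np.tanh(0.5 * (X @ w_n)) - Y) ** 2)
    return total / trials

# Compare robustness of noise-free vs. noise-injected training under the
# same deployment noise level.
for s in (0.0, 0.1):
    print(f"train sigma={s}: MSE at deployment sigma=0.1 ->",
          mse_under_noise(train(s)))
```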