14 research outputs found

    Design and implementation of multipattern generators in analog VLSI

    In recent years, computational biologists have shown through simulation that small neural networks with fixed connectivity are capable of producing multiple output rhythms in response to transient inputs. It is believed that such networks may play a key role in certain biological behaviors such as dynamic gait control. In this paper, we present a novel method for designing continuous-time recurrent neural networks (CTRNNs) that contain multiple embedded limit cycles, and we show that it is possible to switch the networks between these embedded limit cycles with simple transient inputs. We also describe the design and testing of a fully integrated four-neuron CTRNN chip that is used to implement the neural network pattern generators. We provide two example multipattern generators and show that the measured waveforms from the chip agree well with numerical simulations.
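
    A minimal numerical sketch of the CTRNN dynamics the paper builds on (the standard equations tau_i dy_i/dt = -y_i + sum_j w_ij sigma(y_j + theta_j) + I_i; the weights, time constants, and switching pulse below are illustrative assumptions, not the values embedded on the chip):

```python
import numpy as np

def simulate_ctrnn(W, tau, theta, I_ext, y0, dt=1e-3, T=5.0):
    """Euler-integrate tau_i * dy_i/dt = -y_i + sum_j W_ij*sigma(y_j + theta_j) + I_i(t)."""
    sigma = lambda x: 1.0 / (1.0 + np.exp(-x))          # logistic activation
    y = np.array(y0, dtype=float)
    trace = np.empty((int(T / dt), y.size))
    for k in range(trace.shape[0]):
        dy = (-y + W @ sigma(y + theta) + I_ext(k * dt)) / tau
        y = y + dt * dy
        trace[k] = y
    return trace

# Illustrative 4-neuron network; a brief current pulse plays the role of the
# transient input that switches the network between embedded rhythms.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 4.0, size=(4, 4))                   # hypothetical weight matrix
tau = np.full(4, 0.1)                                    # 100 ms time constants
theta = np.zeros(4)
pulse = lambda t: np.array([2.0, 0.0, 0.0, 0.0]) if 1.0 < t < 1.2 else np.zeros(4)
trace = simulate_ctrnn(W, tau, theta, pulse, y0=np.zeros(4))
print(trace[-1])                                         # neuron states at the end of the run
```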

    Analog VLSI neural network with digital perturbative learning


    Multiplexed gradient descent: Fast online training of modern datasets on hardware neural networks without backpropagation

    We present multiplexed gradient descent (MGD), a gradient descent framework designed to easily train analog or digital neural networks in hardware. MGD utilizes zero-order optimization techniques for online training of hardware neural networks. We demonstrate its ability to train neural networks on modern machine learning datasets, including CIFAR-10 and Fashion-MNIST, and compare its performance to backpropagation. Assuming realistic timescales and hardware parameters, our results indicate that these optimization techniques can train a network on emerging hardware platforms orders of magnitude faster than the wall-clock time of training via backpropagation on a standard GPU, even in the presence of imperfect weight updates or device-to-device variations in the hardware. We additionally describe how it can be applied to existing hardware as part of chip-in-the-loop training, or integrated directly at the hardware level. Crucially, the MGD framework is highly flexible, and its gradient descent process can be optimized to compensate for specific hardware limitations such as slow parameter-update speeds or limited input bandwidth.
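
    A software sketch of the kind of zero-order, perturbative update MGD relies on (a generic simultaneous-perturbation estimator applied to a toy cost; the function names, step sizes, and toy problem are assumptions for illustration, not the authors' hardware implementation):

```python
import numpy as np

def perturbative_step(params, cost_fn, delta=1e-2, lr=0.1, rng=None):
    """One zero-order update: perturb every parameter at once, infer a gradient
    estimate from the change in a single scalar cost, and step along the perturbation."""
    rng = rng or np.random.default_rng()
    direction = rng.choice([-1.0, 1.0], size=params.shape)   # random +/-1 perturbation
    g_est = (cost_fn(params + delta * direction) - cost_fn(params)) / delta
    return params - lr * g_est * direction

# Toy problem: fit y = w.x using only the scalar cost a hardware network would expose.
rng = np.random.default_rng(1)
X = rng.normal(size=(64, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true
cost = lambda w: float(np.mean((X @ w - y) ** 2))

w = np.zeros(3)
for _ in range(2000):
    w = perturbative_step(w, cost, rng=rng)
print(w)   # approaches w_true without ever computing an analytic gradient
```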

    FPGA implementation of a LSTM Neural Network

    This work aims to produce a custom hardware implementation of a Long Short-Term Memory neural network. The Python model, the Verilog description, and the RTL synthesis are complete. Only the benchmarking and the integration of a learning system remain to be done.
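
    For reference, a pure-NumPy sketch of the LSTM cell equations such a hardware implementation has to realize (the standard formulation; the layer sizes and random weights below are placeholders, not the thesis's actual model):

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b hold the input, recurrent and bias weights
    for the input (i), forget (f), output (o) and candidate (g) gates, stacked."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    z = W @ x + U @ h_prev + b                  # pre-activations, shape (4*hidden,)
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c_prev + i * g                      # new cell state
    h = o * np.tanh(c)                          # new hidden state
    return h, c

# Placeholder dimensions and weights.
n_in, n_hidden = 8, 16
rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (4 * n_hidden, n_in))
U = rng.normal(0, 0.1, (4 * n_hidden, n_hidden))
b = np.zeros(4 * n_hidden)

h = c = np.zeros(n_hidden)
for x in rng.normal(size=(10, n_in)):           # a 10-step input sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)                                   # (16,)
```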

    Analog Signal Processor for Adaptive Antenna Arrays

    An analog circuit for beamforming in a mobile Ku-band satellite TV antenna array has been implemented. The circuit performs continuous-time gradient descent using simultaneous perturbation gradient estimation. Simulations were performed using the Agilent ADS circuit simulator. Field tests were performed in a realistic scenario using a satellite signal. The results were comparable to the simulation predictions and to results obtained using a digital implementation of a similar stochastic approximation algorithm.
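
    A rough software illustration of simultaneous-perturbation gradient descent applied to beamforming weights (a generic SPSA loop on a toy narrowband uniform-linear-array model; the array geometry, noise level, and step sizes are assumptions, not the published circuit):

```python
import numpy as np

rng = np.random.default_rng(0)
n_ant, n_snap = 8, 256
steer = lambda theta: np.exp(1j * np.pi * np.arange(n_ant) * np.sin(theta))  # lambda/2 ULA

# Toy received data: a desired signal arriving from 20 degrees plus receiver noise.
s = (rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)) / np.sqrt(2)
X = np.outer(steer(np.deg2rad(20)), s) + 0.3 * (rng.normal(size=(n_ant, n_snap))
                                                + 1j * rng.normal(size=(n_ant, n_snap)))

def neg_power(w):
    """Negative array output power under a norm constraint (the scalar minimized here)."""
    w = w / np.linalg.norm(w)
    return -np.mean(np.abs(np.conj(w) @ X) ** 2)

w = np.ones(n_ant, dtype=complex)
delta, lr = 1e-2, 0.05
for _ in range(500):
    d = rng.choice([-1.0, 1.0], n_ant) + 1j * rng.choice([-1.0, 1.0], n_ant)
    g = (neg_power(w + delta * d) - neg_power(w - delta * d)) / (2 * delta)
    w = w - lr * g * d                           # descend along the simultaneous perturbation
w = w / np.linalg.norm(w)
print(np.abs(np.vdot(w, steer(np.deg2rad(20)))) / np.sqrt(n_ant))  # near 1 => beam on target
```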

    Continuous-valued probabilistic neural computation in VLSI


    Pulse-stream binary stochastic hardware for neural computation the Helmholtz Machine


    Palmo: a novel pulsed based signal processing technique for programmable mixed-signal VLSI

    In this thesis a new signal processing technique is presented. This technique exploits the use of pulses as the signalling mechanism. This Palmo signalling method applied to signal processing is novel, combining the advantages of both digital and analogue techniques. Pulsed signals are robust, inherently low-power, easily regenerated, and easily distributed across and between chips. The Palmo cells used to perform analogue operations on the pulsed signals are compact, fast, simple and programmable.
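
    The abstract does not spell out how the Palmo cells compute; purely as a loose illustration of pulse-based signalling in general (not the Palmo circuits themselves), values can be carried as pulse widths and recovered by integration:

```python
import numpy as np

# Illustrative pulse-width encoding only; the actual Palmo cells are analog
# circuits whose internal operation is not described in this abstract.
def encode_pwm(values, n_slots=100):
    """Encode values in [0, 1] as binary pulse trains whose width is proportional to the value."""
    t = np.arange(n_slots) / n_slots
    return (t[None, :] < np.asarray(values)[:, None]).astype(float)

def decode_pwm(pulses):
    """Recover each value by integrating (averaging) its pulse, as an analog integrator would."""
    return pulses.mean(axis=1)

x = [0.25, 0.6, 0.9]
p = encode_pwm(x)
print(decode_pwm(p))            # ~[0.25, 0.6, 0.9]
print(decode_pwm(p).sum())      # summing pulse-encoded signals ~1.75
```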

    Robust analog neuron and emerging technologies for neuromorphic architectures

    Recent evolutions in microelectronics demand particular care during circuit design. In aggressive technology nodes down to a few tens of nanometres, power-consumption constraints become dominant, so designers are turning to heterogeneous multi-core architectures that include highly energy-efficient hardware accelerators. Variability has also become a major issue: maintaining a circuit's specification is hard without paying an overhead in area and/or power, so accelerators also need to be robust against fabrication defects. Neuromorphic architectures, and spiking neural networks in particular, address both robustness and power through their massively parallel, hybrid computation scheme, and their ability to tackle a broad range of applications at low energy cost makes them good candidates for next-generation accelerators. The thesis has two parts. The first consists of the specification and design of a robust analog neuron and its integration into a neuro-inspired hardware accelerator for computational purposes: this low-power mathematical operator was sized and laid out in a 65 nm process, integrated into two circuits, and characterized in one of them, demonstrating the feasibility of elementary mathematical operations. The second part estimates the longer-term impact of emerging technologies on this kind of architecture, studying a move to a very advanced technology node, the opportunities offered by Through-Silicon Vias, and the use of resistive memories based on phase change or on a conductive filament.
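
    As context for the computational role of such a neuron, a minimal leaky integrate-and-fire model in software (a textbook rate-coding illustration with made-up parameters, not the 65 nm analog design from the thesis):

```python
def lif_spike_count(i_in, tau=20e-3, r=1e9, v_th=0.5, v_reset=0.0, dt=1e-4, T=1.0):
    """Spikes emitted in T seconds by a leaky integrate-and-fire neuron driven by a
    constant input current i_in (amperes); the rate grows monotonically with i_in."""
    v, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        v += dt * (-v + r * i_in) / tau       # leaky integration towards r * i_in
        if v >= v_th:
            v, spikes = v_reset, spikes + 1   # fire and reset
    return spikes

# Rate coding: the spike count is a monotonic function of the input current,
# which is what lets a spiking neuron act as an elementary arithmetic operator.
for i_nA in (0.6, 0.8, 1.0):                  # illustrative currents, in nanoamperes
    print(i_nA, "nA ->", lif_spike_count(i_nA * 1e-9), "spikes in 1 s")
```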