18 research outputs found

    Analog CMOS integration of clique-based neural networks

    Artificial neural networks solve problems that classical processors cannot handle without a huge amount of resources; multiple-signal analysis and classification are examples. Moreover, artificial neural networks are increasingly integrated on-chip, either to extend the computational abilities of processors or to process data in embedded systems, where circuit area and energy consumption are critical parameters. However, the number of connections between neurons is very high, and the weighted connections and complex activation functions make circuit integration difficult. These limitations are common to most artificial neural network models and thus hinder the integration of networks with a high number of neurons (hundreds of them or more). Clique-based neural networks are a model that reduces the network density, in terms of connections between neurons, while offering an information storage capacity greater than that of standard models such as Hopfield neural networks. This model is therefore suited to implementing a high number of neurons on chip, leading to low-complexity and low-energy circuits. In this document, we introduce a mixed-signal circuit implementing clique-based neural networks, together with several generic network architectures that accommodate any number of neurons. We can therefore implement clique-based neural networks of up to thousands of neurons consuming little energy. To validate the proposed implementation, we have fabricated a 30-neuron clique-based neural network prototype integrated on chip in a Si 65-nm CMOS 1-V supply process. The circuit shows decoding performance similar to that of the theoretical model and executes the message recovery process in 58 ns. Moreover, the entire network occupies a silicon area of 16,470 µm² and consumes 145 µW, yielding a measured energy consumption per neuron of at most 423 fJ. These results show that the fabricated circuit is ten times more efficient than a digital equivalent circuit in terms of occupied silicon area and latency.
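The retrieval mechanism of a clique-based network can be sketched in software. This is a toy model, not the analog circuit: cluster count, cluster size, and iteration count below are illustrative assumptions. Each stored message selects one neuron per cluster and fully interconnects them (a clique); retrieval iterates neuron scoring and winner-take-all within each cluster.

```python
import numpy as np

# Toy network: C clusters of L neurons each (assumed sizes for illustration).
C, L = 4, 8
W = np.zeros((C, L, C, L), dtype=int)   # binary connection matrix

def store(message):
    """Store a message (one neuron index per cluster) as a clique:
    fully interconnect the chosen neurons across clusters."""
    for c1, n1 in enumerate(message):
        for c2, n2 in enumerate(message):
            if c1 != c2:
                W[c1, n1, c2, n2] = 1

def retrieve(partial, n_iter=4):
    """Recover erased symbols (None) by iterating neuron scoring
    and winner-take-all within each cluster."""
    active = np.zeros((C, L), dtype=int)
    for c, n in enumerate(partial):
        if n is None:
            active[c] = 1               # unknown symbol: all candidates active
        else:
            active[c, n] = 1
    for _ in range(n_iter):
        # each neuron counts contributions from currently active neurons
        scores = np.einsum('cldm,dm->cl', W, active)
        active = (scores == scores.max(axis=1, keepdims=True)).astype(int)
    return [int(np.argmax(active[c])) for c in range(C)]

store([1, 5, 2, 7])
print(retrieve([1, 5, None, None]))     # → [1, 5, 2, 7]
```

The sparse, binary connectivity and the simple max-based activation are what make this model amenable to compact analog integration.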

    Design of analog subthreshold encoded neural network circuit in sub-100nm CMOS

    No full text
    Encoded Neural Networks (ENNs) associate a low-complexity algorithm with a storage capacity much larger than that of Hopfield Neural Networks (HNNs) for the same number of nodes. They are thus promising for implementing large-scale neural networks mimicking the functioning of the human brain. Implementing such a network on chip requires reducing the power consumption of the nodes to the femtojoule range to be comparable with human-brain figures. Moreover, the circuit area must be reduced as much as possible. To address these challenges, this paper proposes a subthreshold analog ENN designed for the ST 65-nm CMOS process. The designed circuit accepts a supply voltage between 0.3 V and 0.86 V with currents below 300 nA, yielding an energy consumption of 32 fJ per decoding per node. The ENN converges only 21 ns after being stimulated. Finally, the node occupies a surface area of only 9.5 µm².

    Distributed Artificial Intelligence Integrated Circuits For Ultra-Low-Power Smart Sensors

    No full text
    Wireless sensor networks (WSN) can be defined as networks of autonomous devices that cooperatively sense and/or act on physical or environmental conditions. To make these sensors smarter, Artificial Intelligence (AI) is used not only to process sensed data but also to address challenges such as security and energy-aware routing. This paper presents a solution using AI in distributed sensor networks that aims to tackle these challenges.

    Ultra-Low-Energy Mixed-Signal IC Implementing Encoded Neural Networks

    No full text
    Encoded Neural Networks (ENNs) associate a low-complexity algorithm with a storage capacity much larger than that of Hopfield Neural Networks (HNNs) for the same number of nodes. Moreover, they have a lower connection density than HNNs, allowing low-complexity circuit integration. The implementation of such a network requires low-complexity elements to take full advantage of the assets of the model. This paper proposes an analog implementation of ENNs and shows that this type of implementation is suitable for building networks of thousands of nodes. To validate the proposed implementation, a prototype ENN of 30 computation nodes is designed, fabricated, and tested on-chip in the ST 65-nm 1-V supply complementary metal-oxide-semiconductor (CMOS) process. The circuit shows decoding performance similar to that of the theoretical model and decodes one message in 58 ns. Moreover, the entire network occupies a silicon area of 16,470 µm² and consumes 145 µW, yielding a measured energy consumption per synaptic event per computation node of 68 fJ.

    Analog implementation of encoded neural networks

    No full text
    Encoded neural networks mix the principles of associative memories and error-correcting decoders. Their storage capacity has been shown to be much larger than that of Hopfield Neural Networks. This paper introduces an analog implementation of this new type of network. The proposed circuit has been designed for the 1-V supply ST CMOS 65-nm process. It consumes 1165 times less energy than a digital equivalent circuit while being 2.7 times more efficient in terms of combined speed and surface.

    Energy Efficient Associative Memory Based on Neural Cliques

    No full text
    Traditional memories use an address to index the stored data. Associative memories rely on a different principle: a part of previously stored data is used to retrieve the remaining part. They are widely used, for instance, in network routers for packet forwarding. A classical way to implement such memories is Content-Addressable Memory (CAM). Since its operation is fully parallel, the response is obtained in a single clock cycle; however, this comes at the cost of energy consumption. This work proposes to use a recent type of neural network as a novel way to implement associative memories. Thanks to an efficient retrieval algorithm guided by the information being searched, they are a good candidate for low-power associative memory. Compared to a CAM-based system, an analog implementation of a 12-kb neuro-inspired memory designed for 65-nm CMOS technology offers 48% energy savings.
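The associative-memory principle above can be illustrated with a toy lookup. A hardware CAM compares the search key against every stored entry at once; the software sketch below emulates that with a scan, and the routing-table field names are illustrative assumptions.

```python
def associative_lookup(memory, known):
    """Return stored words whose known fields match the query.
    A real CAM performs these comparisons on all entries in parallel;
    here the parallel match is emulated by a scan."""
    return [w for w in memory if all(w[f] == v for f, v in known.items())]

# Entries are retrieved by content (a partial record), not by address.
memory = [{"dst": "10.0.0.0/8", "port": 1},
          {"dst": "192.168.0.0/16", "port": 2}]
print(associative_lookup(memory, {"dst": "192.168.0.0/16"}))
# → [{'dst': '192.168.0.0/16', 'port': 2}]
```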

    Keyword Spotting System using Low-complexity Feature Extraction and Quantized LSTM

    No full text
    Long Short-Term Memory (LSTM) neural networks offer state-of-the-art results for processing sequential data and address applications such as keyword spotting. Mel Frequency Cepstral Coefficients (MFCC) are the most common features used to train this neural network model. However, the complexity of MFCC coupled with highly optimized machine learning accelerators usually makes the MFCC feature extraction the most power-consuming block of the system. This paper presents a low-complexity feature extraction method using a filter bank composed of 16 channels with a quality factor of 1.3 to compute a spectrogram. It shows that we can achieve 89.45% accuracy on 12 classes of the Google Speech Command Dataset using an LSTM network of 64 hidden units with weights and activations quantized to 9 bits and inputs quantized to 8 bits.
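The filter-bank front end described above can be approximated in software. This is a sketch under assumptions: log-spaced center frequencies, second-order Butterworth band-passes, and the framing parameters are choices made for illustration, not the paper's exact design; only the channel count (16) and the quality factor Q = fc/BW = 1.3 come from the abstract.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def filterbank_spectrogram(x, fs, n_ch=16, q=1.3,
                           f_lo=100.0, f_hi=7000.0, frame=400, hop=160):
    """Band-pass the signal through n_ch channels with Q = fc/BW,
    rectify, and average per frame to obtain a coarse spectrogram."""
    centers = np.geomspace(f_lo, f_hi, n_ch)    # assumed log spacing
    spec = []
    for fc in centers:
        bw = fc / q
        lo = max(fc - bw / 2, 1.0)
        hi = min(fc + bw / 2, fs / 2 - 1.0)     # clamp below Nyquist
        sos = butter(2, [lo, hi], btype='bandpass', fs=fs, output='sos')
        y = np.abs(sosfilt(sos, x))             # rectified channel output
        spec.append([y[i:i + frame].mean()      # mean magnitude per frame
                     for i in range(0, len(y) - frame + 1, hop)])
    return np.array(spec)                       # shape (n_ch, n_frames)
```

Compared with MFCC, this avoids the FFT, mel projection, log, and DCT stages, which is what motivates the complexity reduction claimed above.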

    Antidictionary-Based Cardiac Arrhythmia Classification for Smart ECG Sensors

    No full text
    Cardiovascular diseases can be detected early by analyzing the electrocardiogram of a patient using wearable systems. In the context of smart sensors, detecting arrhythmias with good accuracy and ultra-low power consumption is required for long-term monitoring. This paper presents a novel cardiac arrhythmia classification method based on antidictionaries. The features are sequences of consecutive slopes generated from event-driven processing of the input signal. The proposed system shows an average detection accuracy of 98% while offering ultra-low complexity. This antidictionary-based method is also particularly suited to imbalanced datasets, since the antidictionaries are created exclusively from heartbeats classified as normal beats.
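The antidictionary idea can be sketched minimally in software. An antidictionary is the set of minimal forbidden words: strings that never occur in the normal training sequences although their proper substrings do. A beat whose symbol sequence contains a forbidden word is flagged. The two-symbol slope alphabet and word lengths below are toy assumptions; the paper's event-driven slope extraction is not reproduced.

```python
def build_antidictionary(normal_seqs, max_len=4):
    """Collect minimal forbidden words up to max_len: strings absent
    from every normal sequence whose prefix and suffix both occur."""
    seen = set()
    for s in normal_seqs:
        for n in range(1, max_len + 1):
            for i in range(len(s) - n + 1):
                seen.add(s[i:i + n])
    alphabet = sorted({c for s in normal_seqs for c in s})
    ad = set()
    for n in range(2, max_len + 1):
        for prefix in (w for w in seen if len(w) == n - 1):
            for c in alphabet:
                w = prefix + c
                # minimal: w is absent, but its longest prefix/suffix occur
                if w not in seen and w[1:] in seen:
                    ad.add(w)
    return ad

def is_abnormal(slope_seq, ad):
    """Flag a beat if its slope sequence contains any forbidden word."""
    return any(f in slope_seq for f in ad)

# toy slope alphabet: 'u' = rising segment, 'd' = falling segment
ad = build_antidictionary(["ududud", "uddudd"])
print(is_abnormal("uuud", ad))   # → True: 'uu' never occurs in normal beats
```

Because the antidictionary is built only from normal beats, no labeled arrhythmia examples are needed at training time, which is why the method tolerates imbalanced datasets.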

    A fully flexible circuit implementation of clique-based neural networks in 65-nm CMOS

    No full text
    Clique-based neural networks implement low-complexity functions working with a reduced connectivity between neurons. Thus, they address very specific applications operating with a very low energy budget. This paper proposes a flexible and iterative neural architecture able to implement multiple types of clique-based neural networks of up to 3968 neurons. The circuit has been integrated in an ST 65-nm CMOS ASIC and validated in the context of ECG classification. The network core reacts in 83 ns to a stimulation and occupies a 0.21 mm² silicon area.

    A 65-nm CMOS 7fJ per synaptic event clique-based neural network in scalable architecture

    No full text
    Clique-based neural networks are less complex than commonly used neural network models. They have a limited connectivity and are composed of simple functions. They are thus adapted to implementing neuro-inspired computation units operating under severe energy constraints. This paper presents an ST 65-nm CMOS ASIC implementation of a 30-neuron clique-based neural network circuit. With a 0.8-V power supply and a 150-nA unitary current, the neuron energy consumption is only 7 fJ per synaptic event, i.e. 1330 times less energy than a state-of-the-art neuron. The network occupies a 41,820 µm² silicon area.