37 research outputs found

    Circuit-Level Evaluation of the Generation of Truly Random Bits with Superparamagnetic Tunnel Junctions

    Many emerging alternative models of computation require massive numbers of random bits, but generating them at low energy is currently a challenge. The superparamagnetic tunnel junction, a spintronic device based on the same technology as spin torque magnetoresistive random access memory, has recently been proposed as a solution: this device naturally switches between two easy-to-measure resistance states, due only to thermal noise. Reading the state of the junction therefore provides random bits directly, without the need for write operations. In this work, we evaluate a circuit solution for reading the state of a superparamagnetic tunnel junction. We find that the circuit may induce a small read-disturb effect for scaled superparamagnetic tunnel junctions, but this effect is naturally corrected by the whitening process needed to ensure the quality of the generated random bits. These results suggest that superparamagnetic tunnel junctions could generate truly random bits at 20 fJ/bit, including overheads, orders of magnitude below CMOS-based solutions.
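
    The abstract does not specify the whitening scheme; as a minimal sketch, the Python below applies two standard correctors to a biased bit stream such as a slightly read-disturbed junction might produce: a von Neumann extractor and XOR-of-pairs whitening. The bias value and function names are illustrative assumptions, not taken from the paper.

        import random

        def biased_bits(n, p_one=0.54):
            # Model a junction whose read circuit slightly favors one state
            # (assumed bias; the read-disturb effect in the paper is small).
            return [1 if random.random() < p_one else 0 for _ in range(n)]

        def von_neumann(bits):
            # Map 01 -> 0, 10 -> 1, discard 00/11: removes bias exactly for
            # independent bits, at the cost of variable throughput.
            return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

        def xor_pairs(bits):
            # XOR adjacent bits: squares the bias (2p-1 -> (2p-1)^2) with a
            # fixed 2:1 compression ratio.
            return [a ^ b for a, b in zip(bits[::2], bits[1::2])]

        raw = biased_bits(100_000)
        for name, s in [("raw", raw), ("von Neumann", von_neumann(raw)), ("xor", xor_pairs(raw))]:
            print(f"{name:12s} mean = {sum(s) / len(s):.3f}  n = {len(s)}")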

    Embracing the Unreliability of Memory Devices for Neuromorphic Computing

    The emergence of resistive non-volatile memories opens the way to highly energy-efficient near- or in-memory computation. However, this type of computation is not compatible with conventional error-correcting codes (ECC) and has to deal with device unreliability. Inspired by the architecture of animal brains, we present a manufactured differential hybrid CMOS/RRAM memory architecture, suitable for neural network implementation, that functions without formal ECC. We also show that using low-energy but error-prone programming conditions only slightly reduces network accuracy.
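
    The abstract does not detail the differential scheme; a common approach stores each bit in a pair of complementary RRAM devices and senses the sign of their conductance difference, which tolerates wide device variability without ECC. The sketch below is a minimal model under that assumption; the conductance levels and variability are illustrative, not the paper's measured values.

        import random

        def program_pair(bit, g_low=5e-6, g_high=50e-6, sigma=0.5):
            # Store the bit as complementary conductances (G_left, G_right);
            # a log-normal spread models variability under low-energy,
            # error-prone programming conditions (values assumed).
            noisy = lambda g: g * random.lognormvariate(0, sigma)
            return (noisy(g_high), noisy(g_low)) if bit else (noisy(g_low), noisy(g_high))

        def differential_read(pair):
            # The sense amplifier only compares the two devices; no absolute
            # reference current is needed.
            return 1 if pair[0] > pair[1] else 0

        bits = [random.getrandbits(1) for _ in range(100_000)]
        errors = sum(differential_read(program_pair(b)) != b for b in bits)
        print(f"bit error rate: {errors / len(bits):.1e}")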

    Low voltage 25Gbps silicon Mach-Zehnder modulator in the O-band

    In this work, a 25 Gbps silicon push-pull Mach-Zehnder modulator operating in the O-band (1260 nm - 1360 nm) of optical communications and fabricated on a 300 mm platform is presented. The measured modulation efficiency (VπLπ) was between 0.95 V·cm and 1.15 V·cm, comparable to state-of-the-art modulators in the C-band, which enabled operation with a driving voltage of 3.3 Vpp, compatible with BiCMOS technology. An extinction ratio of 5 dB and on-chip insertion losses of 3.6 dB were demonstrated at 25 Gbps. European project Plat4m (FP7-2012-318178); European project Cosmicc (H2020-ICT-27-2015-688516); French Industry Ministry Nano2017 program.
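
    As a quick plausibility check (not from the paper), VπLπ links the half-wave voltage Vπ to the phase-shifter length L via Vπ = VπLπ / L. The snippet below assumes an illustrative phase-shifter length to estimate Vπ and the fraction of a π phase shift available from the 3.3 Vpp drive.

        L = 0.4  # assumed phase-shifter length in cm (illustrative, not from the paper)
        for vpi_lpi in (0.95, 1.15):          # measured VpiLpi range, in V.cm
            v_pi = vpi_lpi / L                # voltage needed for a pi phase shift
            fraction = 3.3 / v_pi             # fraction of pi reached at 3.3 Vpp
            print(f"VpiLpi = {vpi_lpi:.2f} V.cm -> Vpi = {v_pi:.2f} V, "
                  f"3.3 Vpp drives {fraction:.2f} * pi")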

    Design of digital neuromorphic circuits exploiting emerging nano-devices

    While electronics has prospered inexorably for several decades, its leading source of progress will come to an end in the coming years: since the 1970s, the performance of electronic circuits has relied almost exclusively on improving transistors, a device with the extraordinary property that shrinking its dimensions improves all of its characteristics, but fundamental physical limits now prevent further scaling. Nevertheless, microelectronics is currently offering a major breakthrough: in recent years, memory technologies have undergone incredible progress, opening the way for multiple research avenues in embedded systems. A major capability for the coming years will be the integration of different technologies on the same chip: new emerging memory devices such as Resistive Random Access Memory (RRAM) or spin torque magnetic tunnel junctions (ST-MRAM) can be embedded at the core of CMOS circuits, an opportunity to completely rethink current circuit architectures around naturally intelligent in-memory computing. One promising research direction is to draw inspiration from the biological brain, which accomplishes complex and varied tasks while consuming very little energy. This thesis examines three brain-inspired paradigms for exploiting these memory devices: Bayesian reasoning, binarized neural networks, and population coding of neurons, an approach that further exploits the intrinsic behavior of the devices. Each of these approaches explores different aspects of in-memory computing.

    Exploring learning techniques for edge AI taking advantage of NVMs

    The relatively recent development and remarkable results of Artificial Neural Networks (ANNs) are due to the construction of gigantic databases and algorithmic innovations requiring large hardware resources, which result in equally substantial energy consumption. As Artificial Intelligence (AI) is embedded into more and more connected objects, ranging from medical implants to autonomous cars, it is clear that the algorithmic and hardware solutions available in data centres will not be able to cover all AI integration needs. The field of microelectronics has been working for several years on the development of emerging memory technologies with the aim of integrating Non-Volatile Memory (NVM) within computing units. In a conventional processor architecture, such co-integration between the computation units and the memory would simplify the memory hierarchy and increase the bandwidth between computation and data access. In this study, we explore the potential of two non-volatile memory technologies, HfO2-based FeRAM [1] and OxRAM [2], for enabling on-chip learning systems. Notably, the quasi-infinite read endurance of OxRAM devices and their poor write endurance make them suitable for inference-only applications, whereas the reported large write endurance of FeRAM devices would effectively allow moving training on-chip as well. Ultimately, migrating inference and learning from data centres to edge devices will allow them to adapt to the evolution of input data, to specialize each device to its user, to retain private data, and to offer faster service. To validate the feasibility of this approach, we designed a test chip in the 22nm FDSOI technology node. The primary objective of this chip is to demonstrate a hybrid FeRAM/OxRAM memory circuit capable of storing the synaptic weights of a Neural Network (NN) during the learning/inference phases, while accelerating NN training at the edge. By incorporating synaptic metaplasticity in Binarized Neural Networks [3], the chip also addresses the issue of catastrophic forgetting. The chip consists of two sub-cores, each comprising four 16 kbit FeRAM arrays and one 16 kbit OxRAM array. One FeRAM array and the OxRAM array can be operated simultaneously. The circuit leverages the OxRAM array to build a near-memory computing inference engine that accelerates the inference/feedforward pass of training, while the FeRAM arrays store an 8-bit quantized version of the floating-point weights optimized during training.
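
    The abstract describes binarized inference backed by higher-precision stored weights; the sketch below illustrates that generic training scheme, with a simplified metaplasticity-style consolidation term in the spirit of ref. [3]. It is not the chip's actual circuit; all hyperparameters and names are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def binarize(w):
            # Sign binarization: the weights the OxRAM near-memory inference
            # engine would hold for the feedforward pass.
            return np.where(w >= 0, 1.0, -1.0)

        # Hidden higher-precision weights, updated during training (stored as
        # 8-bit quantized values in FeRAM on the chip; plain floats here).
        W = rng.normal(0.0, 0.1, size=(8, 4))

        def train_step(x, t, lr=0.05, m=1.0):
            global W
            y = x @ binarize(W)              # forward pass with binary weights
            grad = np.outer(x, y - t)        # MSE gradient, straight-through to W
            # Metaplasticity (simplified): attenuate updates that would further
            # grow an already-large hidden weight, protecting consolidated
            # synapses against catastrophic forgetting.
            strengthen = np.sign(-grad) == np.sign(W)
            scale = np.where(strengthen, 1.0 - np.tanh(m * np.abs(W)) ** 2, 1.0)
            W -= lr * scale * grad

        for _ in range(200):                 # toy usage on random data
            train_step(rng.normal(size=8), t=np.ones(4))
        print(binarize(W))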

    Neural-like computing with populations of superparamagnetic basis functions

    Population coding, where populations of artificial neurons process information collectively, can facilitate robust data processing, but requires high circuit overheads. Here, the authors realize this approach with reduced circuit area and power consumption by utilizing neurons based on superparamagnetic tunnel junctions.
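
    As a hedged illustration of population coding in general (not the paper's spintronic circuits), the sketch below encodes a scalar with a population of noisy tuned units and decodes it as an activity-weighted average of preferred values; the Gaussian tuning curves and noise level are assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        # Each unit responds most strongly near its preferred value; in the
        # paper, noisy superparamagnetic junctions play this role.
        preferred = np.linspace(-1.0, 1.0, 32)

        def encode(x, width=0.25, noise=0.2):
            rates = np.exp(-((x - preferred) / width) ** 2)
            return rates + noise * rng.normal(size=preferred.size)

        def decode(responses):
            # Population-vector readout: activity-weighted mean of preferences.
            r = np.clip(responses, 0.0, None)
            return (r @ preferred) / r.sum()

        estimates = [decode(encode(0.3)) for _ in range(1000)]
        print(f"decoded {np.mean(estimates):.3f} +/- {np.std(estimates):.3f} (true 0.3)")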

    Experimental Demonstration of Multilevel Resistive Random Access Memory Programming for up to Two Months Stable Neural Networks Inference Accuracy

    In recent years, artificial intelligence has reached significant milestones with the development of deep neural networks, but it suffers from a major limitation: its considerable energy consumption. [1] This limitation is primarily due to the energy cost of exchanging information between computation and memory units. [2,3] Memristors, also called resistive random access memories (RRAMs) in industrial laboratories, now provide an opportunity to increase the energy efficiency of AI dramatically. In contrast to complementary metal-oxide-semiconductor (CMOS)-based memories such as static or dynamic random access memories, which store one bit per unit cell, they can be programmed to intermediate states between their lowest and highest resistance values, allowing the synaptic weights of a neural network to be memorized in a particularly compact manner. [4] In addition, using the fundamental laws of electric circuits, arrays of memristors can implement deep learning's most basic operation, multiply and accumulate (MAC): the multiply operation corresponds to Ohm's law, whereas the accumulate operation corresponds to Kirchhoff's current law. This type of "in-memory" computation consumes less power than equivalent digital implementations [5-9]: the computation is performed directly within memory, allowing the suppression of the energy associated with weight movement. [4,10,11] Moreover, nonvolatility offers an instant on/off feature: memristor-based systems can perform inference immediately after being turned on, allowing the power supply to be cut entirely as soon as the system is not in use.
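
    The Ohm's-law/Kirchhoff's-law MAC described above maps directly onto a small numerical model. The sketch below simulates a generic differential crossbar with assumed conductance ranges and multilevel quantization; it illustrates the principle, not the programming method demonstrated in the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        G_MIN, G_MAX, LEVELS = 1e-6, 100e-6, 16   # assumed device parameters

        def weights_to_conductances(W):
            # Map signed weights onto differential pairs (G+, G-), quantized
            # to the assumed number of programmable RRAM levels.
            scale = np.abs(W).max()
            g = np.abs(W) / scale * (G_MAX - G_MIN) + G_MIN
            q = np.round((g - G_MIN) / (G_MAX - G_MIN) * (LEVELS - 1))
            g = G_MIN + q / (LEVELS - 1) * (G_MAX - G_MIN)
            return np.where(W >= 0, g, G_MIN), np.where(W < 0, g, G_MIN), scale

        def crossbar_mac(x, g_pos, g_neg, scale, v_read=0.2):
            v = x * v_read                  # inputs applied as row voltages (Ohm's law)
            i = v @ g_pos - v @ g_neg       # column currents sum (Kirchhoff's law)
            return i / (v_read * (G_MAX - G_MIN)) * scale

        W, x = rng.normal(size=(8, 4)), rng.normal(size=8)
        print("exact   :", x @ W)
        print("crossbar:", crossbar_mac(x, *weights_to_conductances(W)))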