
    Overcoming device unreliability with continuous learning in a population coding based computing system

    The brain, which uses redundancy and continuous learning to overcome the unreliability of its components, provides a promising path to building computing systems that are robust to the unreliability of their constituent nanodevices. In this work, we illustrate this path with a computing system based on population coding, in which magnetic tunnel junctions implement both neurons and synaptic weights. We show that equipping such a system with continuous learning enables it to recover from the loss of neurons and makes it possible to use unreliable synaptic weights (i.e. low-energy-barrier magnetic memories). There is a tradeoff between power consumption and precision, because low-energy-barrier memories consume less energy than high-barrier ones. For a given precision, there is an optimal number of neurons and an optimal energy barrier for the weights that lead to minimum power consumption.
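    As a rough illustration of this principle (not the authors' device model), the Python sketch below encodes a scalar with a population of Gaussian tuning curves and keeps training a linear readout online, so decoding recovers after some neurons are lost. All parameters (64 neurons, the learning rate, the 25% neuron loss) are assumptions chosen for the example.

        # Illustrative sketch (not the authors' magnetic tunnel junction model): a
        # scalar is encoded by a population of neurons with Gaussian tuning curves,
        # and a linear readout keeps learning online so it recovers after neuron loss.
        import numpy as np

        rng = np.random.default_rng(0)
        n_neurons = 64
        centers = np.linspace(-1, 1, n_neurons)   # preferred value of each neuron
        width = 0.15

        def encode(x, alive):
            # Population response to input x; lost neurons output zero.
            return np.exp(-(x - centers) ** 2 / (2 * width ** 2)) * alive

        alive = np.ones(n_neurons)
        w = np.zeros(n_neurons)                   # readout weights (the "synapses")
        lr = 0.05

        for step in range(20000):
            if step == 10000:                     # sudden loss of 25% of the neurons
                alive[rng.choice(n_neurons, n_neurons // 4, replace=False)] = 0.0
            x = rng.uniform(-1, 1)
            r = encode(x, alive)
            w += lr * (x - w @ r) * r             # continuous (online LMS) learning

        test_x = np.linspace(-0.9, 0.9, 50)
        err = np.mean([(w @ encode(x, alive) - x) ** 2 for x in test_x])
        print(f"decoding error after neuron loss and relearning: {err:.4f}")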

    Quantum reservoir neural network implementation on a Josephson mixer

    Quantum reservoir computing is a promising approach to quantum neural networks, capable of solving hard learning tasks on both classical and quantum input data. However, current approaches based on qubits are limited by low connectivity. We propose an implementation of a quantum reservoir that obtains a large number of densely connected neurons by using parametrically coupled quantum oscillators instead of physically coupled qubits. We analyse a specific hardware implementation based on superconducting circuits. Our results give the coupling and dissipation requirements of the system and show how they affect the performance of the quantum reservoir. Beyond quantum reservoir computing, the use of parametrically coupled bosonic modes holds promise for realizing large quantum neural network architectures.
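    The underlying reservoir-computing protocol can be illustrated with a classical stand-in: a fixed random dynamical system is driven by the input and only a linear readout is trained. The sketch below uses an echo-state network and a short-term-memory task purely for illustration; it does not model the Josephson mixer or its parametric couplings, and all sizes and scalings are assumptions.

        # Illustrative sketch of the reservoir-computing protocol with a classical
        # echo-state network as a stand-in (not a model of the quantum hardware):
        # the reservoir dynamics are fixed and random, only the readout is trained.
        import numpy as np

        rng = np.random.default_rng(1)
        n_res, n_steps = 100, 2000
        W = rng.normal(0, 1, (n_res, n_res))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the dynamics stable
        w_in = rng.uniform(-0.5, 0.5, n_res)

        u = rng.uniform(-1, 1, n_steps)           # random input sequence
        target = np.roll(u, 3)                    # task: recall the input 3 steps ago

        x = np.zeros(n_res)
        states = np.zeros((n_steps, n_res))
        for t in range(n_steps):
            x = np.tanh(W @ x + w_in * u[t])      # fixed, untrained dynamics
            states[t] = x

        # Only the linear readout is trained, here by ridge regression.
        reg = 1e-6
        w_out = np.linalg.solve(states.T @ states + reg * np.eye(n_res),
                                states.T @ target)
        pred = states @ w_out
        print("readout mean squared error:", np.mean((pred[100:] - target[100:]) ** 2))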

    Circuit-Level Evaluation of the Generation of Truly Random Bits with Superparamagnetic Tunnel Junctions

    Many emerging alternative models of computation require massive numbers of random bits, but generating them at low energy is currently a challenge. The superparamagnetic tunnel junction, a spintronic device based on the same technology as spin-torque magnetoresistive random access memory, has recently been proposed as a solution: this device naturally switches between two easy-to-measure resistance states due to thermal noise alone. Reading the state of the junction therefore provides random bits without the need for write operations. In this work, we evaluate a circuit solution for reading the state of a superparamagnetic tunnel junction. We find that the circuit may induce a small read-disturb effect for scaled superparamagnetic tunnel junctions, but this effect is naturally corrected by the whitening process needed to ensure the quality of the generated random bits. These results suggest that superparamagnetic tunnel junctions could generate truly random bits at 20 fJ/bit, including overheads, orders of magnitude below CMOS-based solutions.
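    The role of whitening can be illustrated with a toy model: if read disturb biases the junction reads slightly, a simple corrector restores an unbiased output at reduced throughput. The bias value and the choice of a von Neumann corrector below are assumptions for illustration, not the paper's circuit or whitening scheme.

        # Illustrative toy model (not the paper's read circuit): junction reads are
        # treated as slightly biased random bits, standing in for a read-disturb
        # effect, and a von Neumann corrector removes the bias during whitening.
        import numpy as np

        rng = np.random.default_rng(2)
        p_one = 0.54                              # assumed bias from read disturb
        raw = (rng.random(200_000) < p_one).astype(np.uint8)

        def von_neumann(bits):
            # Pair consecutive bits: 01 -> 0, 10 -> 1, 00 and 11 are discarded.
            pairs = bits[: len(bits) // 2 * 2].reshape(-1, 2)
            keep = pairs[:, 0] != pairs[:, 1]
            return pairs[keep, 0]

        white = von_neumann(raw)
        print(f"raw bias: {raw.mean():.3f}, whitened bias: {white.mean():.3f}, "
              f"yield: {len(white) / len(raw):.2f} output bits per raw bit")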

    RF signal classification in hardware with an RF spintronic neural network

    Extracting information from radiofrequency (RF) signals using artificial neural networks at low energy cost is a critical need for a wide range of applications. Here we show how to leverage the intrinsic dynamics of spintronic nanodevices called magnetic tunnel junctions to process multiple analogue RF inputs in parallel and perform synaptic operations. Furthermore, we achieve classification of RF signals with experimental data from magnetic tunnel junctions acting as neurons and synapses, with the same accuracy as an equivalent software neural network. These results are a key step towards embedded radiofrequency artificial intelligence.
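    A minimal software-equivalent sketch of such a task (not the hardware chain) is shown below: RF-like signals are reduced to band-power features, loosely analogous to what resonant magnetic tunnel junctions extract, and classified with a small logistic-regression readout. All signal and band parameters are assumptions.

        # Illustrative software-equivalent sketch (not the hardware chain): RF-like
        # signals are reduced to band-power features, loosely analogous to what
        # resonant junctions extract, and classified by a logistic-regression
        # readout. All signal and band parameters below are assumptions.
        import numpy as np

        rng = np.random.default_rng(3)
        fs, n_samp, n_per_class = 1e9, 1024, 200
        t = np.arange(n_samp) / fs
        bands = [(50e6, 150e6), (150e6, 250e6), (250e6, 350e6), (350e6, 450e6)]

        def make_signal(f_center):
            # Noisy tone: the class is defined by its carrier frequency.
            return np.sin(2 * np.pi * f_center * t) + 0.5 * rng.normal(size=n_samp)

        def band_powers(sig):
            spec = np.abs(np.fft.rfft(sig)) ** 2
            freqs = np.fft.rfftfreq(n_samp, 1 / fs)
            return np.array([spec[(freqs >= lo) & (freqs < hi)].sum()
                             for lo, hi in bands])

        X = np.array([band_powers(make_signal(f))
                      for f in [100e6] * n_per_class + [300e6] * n_per_class])
        y = np.array([0] * n_per_class + [1] * n_per_class)
        X = (X - X.mean(0)) / X.std(0)            # normalise the features

        w, b, lr = np.zeros(len(bands)), 0.0, 0.1
        for _ in range(500):                      # plain logistic regression
            p = 1 / (1 + np.exp(-(X @ w + b)))
            w -= lr * X.T @ (p - y) / len(y)
            b -= lr * np.mean(p - y)
        print(f"classification accuracy: {np.mean(((X @ w + b) > 0) == y):.2f}")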

    Multilayer spintronic neural networks with radio-frequency connections

    Spintronic nano-synapses and nano-neurons perform complex cognitive computations with high accuracy thanks to their rich, reproducible and controllable magnetization dynamics. These dynamical nanodevices could transform artificial intelligence hardware, provided that they can implement state-of-the-art deep neural networks. However, there is today no scalable way to connect them in multilayers. Here we show that the flagship nano-components of spintronics, magnetic tunnel junctions, can be connected into multilayer neural networks in which they implement both synapses and neurons thanks to their magnetization dynamics, and communicate by processing, transmitting and receiving radio-frequency (RF) signals. We build a hardware spintronic neural network composed of nine magnetic tunnel junctions connected in two layers, and show that it natively classifies nonlinearly separable RF inputs with an accuracy of 97.7%. Using physical simulations, we demonstrate that a large network of nanoscale junctions can achieve state-of-the-art identification of drones from their RF transmissions, without digitization and consuming only a few milliwatts, a gain of more than four orders of magnitude in power consumption compared to currently used techniques. This study lays the foundation for deep, dynamical, spintronic neural networks.
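    As a purely software illustration of why a second layer matters, the sketch below trains a tiny two-layer network of comparable size on a nonlinearly separable toy dataset (concentric rings, an assumption); it stands in for, and does not model, the RF hardware or its training procedure.

        # Purely software illustration (it does not model the RF hardware): a tiny
        # two-layer network learns a nonlinearly separable toy task, here
        # concentric rings, which a single linear layer cannot solve.
        import numpy as np

        rng = np.random.default_rng(4)
        n = 400
        radius = np.concatenate([rng.uniform(0.0, 0.5, n // 2),
                                 rng.uniform(0.8, 1.3, n // 2)])
        theta = rng.uniform(0, 2 * np.pi, n)
        X = np.stack([radius * np.cos(theta), radius * np.sin(theta)], axis=1)
        y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

        # 2 inputs -> 6 hidden tanh neurons -> 1 sigmoid output, trained by backprop.
        W1 = rng.normal(0, 1, (2, 6)); b1 = np.zeros(6)
        W2 = rng.normal(0, 1, (6, 1)); b2 = np.zeros(1)
        lr = 1.0
        for _ in range(5000):
            h = np.tanh(X @ W1 + b1)
            p = 1 / (1 + np.exp(-(h @ W2 + b2).ravel()))
            d_out = (p - y)[:, None] / n          # cross-entropy gradient at the output
            dW2 = h.T @ d_out; db2 = d_out.sum(0)
            d_h = (d_out @ W2.T) * (1 - h ** 2)
            dW1 = X.T @ d_h; db1 = d_h.sum(0)
            W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

        acc = np.mean((p > 0.5) == y)
        print(f"training accuracy on the ring task: {acc:.2f}")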