86 research outputs found

    Stochastic Spin-Orbit Torque Devices as Elements for Bayesian Inference

    Probabilistic inference from real-time input data is becoming increasingly popular and may be one of the potential pathways to enabling cognitive intelligence. In fact, preliminary research has revealed that stochastic functionalities also underlie the spiking behavior of neurons in cortical microcircuits of the human brain. In tune with such observations, neuromorphic and other unconventional computing platforms have recently started adopting computational units that generate outputs probabilistically, depending on the magnitude of the input stimulus. In this work, we experimentally demonstrate a spintronic device that offers a direct mapping to the functionality of such a controllable stochastic switching element. We show that the probabilistic switching of Ta/CoFeB/MgO heterostructures in the presence of spin-orbit torque and thermal noise can be harnessed to enable probabilistic inference in a wide range of unconventional computing scenarios. This work can potentially pave the way for hardware that directly mimics the computational units of Bayesian inference.
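
    The switching element described above can be thought of as a tunable random bit: the probability of switching rises with the strength of the applied stimulus. Below is a minimal sketch of that behavior, assuming a sigmoidal probability-versus-current curve; switching_probability and its parameters i_c and beta are illustrative placeholders, not fits to the reported Ta/CoFeB/MgO device.

```python
import numpy as np

rng = np.random.default_rng(0)

def switching_probability(i_pulse, i_c=1.0, beta=5.0):
    # Sigmoidal dependence of switching probability on the normalized pulse
    # amplitude; i_c and beta are illustrative parameters, not measured values.
    return 1.0 / (1.0 + np.exp(-beta * (i_pulse - i_c)))

def empirical_switch_rate(i_pulse, n_trials=1000):
    # Repeat the same pulse many times and count how often the element switches.
    p = switching_probability(i_pulse)
    return float(np.mean(rng.random(n_trials) < p))

# Sweep the input stimulus: the observed switch rate tracks the programmed
# probability, which is what makes the element usable as a controllable random bit.
for i in np.linspace(0.5, 1.5, 5):
    print(f"I = {i:.2f} I_c  ->  P(switch) ~ {empirical_switch_rate(i):.2f}")
```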

    Stochastic Domain Wall-Magnetic Tunnel Junction Artificial Neurons for Noise-Resilient Spiking Neural Networks

    The spatiotemporal nature of neuronal behavior in spiking neural networks (SNNs) makes SNNs promising for edge applications that require high energy efficiency. To realize SNNs in hardware, spintronic neuron implementations can bring advantages of scalability and energy efficiency. Domain wall (DW) based magnetic tunnel junction (MTJ) devices are well suited for probabilistic neural networks given their intrinsic integrate-and-fire behavior with tunable stochasticity. Here, we present a scaled DW-MTJ neuron with voltage-dependent firing probability. The measured behavior was used to simulate an SNN that attains accuracy during learning comparable to that of an equivalent, but more complicated, multi-weight (MW) DW-MTJ device. The validation accuracy during training was also shown to be comparable to that of an ideal leaky integrate-and-fire (LIF) device. However, during inference, the binary DW-MTJ neuron outperformed the other devices after Gaussian noise was introduced to the Fashion-MNIST classification task. This work shows that DW-MTJ devices can be used to construct noise-resilient networks suitable for neuromorphic computing on the edge.
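
    As a rough illustration of a binary neuron whose firing probability depends on its input voltage, the toy sketch below integrates a noisy input and fires stochastically; firing_probability, the leak factor, and the input values are assumptions for illustration, not the measured DW-MTJ response or the Fashion-MNIST setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def firing_probability(v, v_half=0.5, k=10.0):
    # Assumed monotonic (sigmoidal) firing probability versus integrated input;
    # v_half and k are placeholders rather than the measured device response.
    return 1.0 / (1.0 + np.exp(-k * (v - v_half)))

class BinaryStochasticNeuron:
    """Leaky integrator that fires probabilistically based on its state."""

    def __init__(self, n, leak=0.9):
        self.v = np.zeros(n)   # integrated input per neuron
        self.leak = leak

    def step(self, current):
        self.v = self.leak * self.v + current
        spikes = rng.random(self.v.shape) < firing_probability(self.v)
        self.v[spikes] = 0.0   # reset neurons that fired
        return spikes.astype(float)

# Drive a small layer with Gaussian-noise-corrupted input, mirroring the
# noise-robustness test described in the abstract (toy data, not Fashion-MNIST).
layer = BinaryStochasticNeuron(4)
x = np.array([0.2, 0.4, 0.6, 0.8])
for t in range(5):
    noisy = x + rng.normal(0.0, 0.1, size=x.shape)
    print(t, layer.step(noisy))
```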

    Leveraging Probabilistic Switching in Superparamagnets for Temporal Information Encoding in Neuromorphic Systems

    Brain-inspired computing - leveraging neuroscientific principles underpinning the unparalleled efficiency of the brain in solving cognitive tasks - is emerging as a promising pathway to solving several algorithmic and computational challenges faced by deep learning today. Nonetheless, current research in neuromorphic computing is driven by our well-developed notions of running deep learning algorithms on computing platforms that perform deterministic operations. In this article, we argue that taking a different route of performing temporal information encoding in probabilistic neuromorphic systems may help solve some of the current challenges in the field. The article considers superparamagnetic tunnel junctions as a potential pathway to enable a new generation of brain-inspired computing that combines the facets and associated advantages of two complementary insights from computational neuroscience: how information is encoded and how computing occurs in the brain. Hardware-algorithm co-design analysis demonstrates 97.41% accuracy of a state-compressed 3-layer spintronics-enabled stochastic spiking network on the MNIST dataset, with high spiking sparsity due to temporal information encoding.
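
    To make temporal information encoding with probabilistic switching concrete, the sketch below encodes an input intensity as the time of the first stochastic switching event, so that stronger inputs spike earlier on average; the mapping in per_step_switch_probability and the parameter beta are hypothetical choices, not the co-design analysed in the article.

```python
import numpy as np

rng = np.random.default_rng(2)

def per_step_switch_probability(x, beta=4.0, t_max=50):
    # Assumed relation between input intensity and the per-timestep switching
    # probability of a superparamagnetic element; beta is an illustrative scale.
    return 1.0 - np.exp(-beta * np.clip(x, 0.0, None) / t_max)

def first_switch_time(x, t_max=50):
    # Encode the intensity x as the (stochastic) time of the first switching
    # event: stronger inputs tend to switch earlier, giving a latency code.
    p = per_step_switch_probability(x, t_max=t_max)
    for t in range(1, t_max + 1):
        if rng.random() < p:
            return t
    return t_max  # no event inside the coding window

for x in (0.1, 0.5, 1.0):
    times = [first_switch_time(x) for _ in range(200)]
    print(f"x = {x:.1f}  ->  mean first-switch time ~ {np.mean(times):.1f} steps")
```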

    A spintronic Huxley-Hodgkin-analogue neuron implemented with a single magnetic tunnel junction

    Spiking neural networks aim to emulate the brain's properties to achieve similar parallelism and high processing power. A caveat of these neural networks is the high computational cost of emulating them, while current proposals for analogue implementations are energy inefficient and not scalable. We propose a device based on a single magnetic tunnel junction to perform neuron firing for spiking neural networks without the need for any resetting procedure. We leverage two physical mechanisms, magnetism and thermal effects, to obtain a bio-realistic spiking behavior analogous to the Huxley-Hodgkin model of the neuron. The device is also able to emulate the simpler leaky integrate-and-fire model. Numerical simulations using experiment-based parameters demonstrate firing frequencies in the MHz to GHz range under constant input at room temperature. The compactness, scalability, low cost, CMOS compatibility, and power efficiency of magnetic tunnel junctions advocate for their broad use in hardware implementations of spiking neural networks.
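
    For reference, the leaky integrate-and-fire behavior that the device is said to also emulate can be written down in a few lines; this is the standard textbook model with generic parameter values, not a model of the junction's magnetization or thermal dynamics.

```python
import numpy as np

def simulate_lif(i_input, dt=1e-4, tau=1e-3, v_rest=0.0, v_th=1.0, v_reset=0.0, r=1.0):
    # Textbook leaky integrate-and-fire dynamics: tau * dv/dt = -(v - v_rest) + R * I.
    # The parameters are generic illustrative values, not fitted to the MTJ device.
    v = v_rest
    spike_times = []
    for step, i in enumerate(i_input):
        v += dt / tau * (-(v - v_rest) + r * i)
        if v >= v_th:
            spike_times.append(step * dt)
            v = v_reset  # explicit reset; the MTJ neuron is reported to avoid this step
    return spike_times

# A constant supra-threshold drive yields periodic firing.
i_const = np.full(10_000, 1.5)          # 1 s of constant input at dt = 0.1 ms
spikes = simulate_lif(i_const)
print(f"{len(spikes)} spikes, mean rate ~ {len(spikes) / (len(i_const) * 1e-4):.0f} Hz")
```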

    Spiking Dynamics in Dual Free Layer Perpendicular Magnetic Tunnel Junctions

    Spintronic devices have recently attracted a lot of attention in the field of unconventional computing due to their non-volatility for short- and long-term memory, fast non-linear response, and relatively small footprint. Here we report how the voltage-driven magnetization dynamics of dual free layer perpendicular magnetic tunnel junctions enable the emulation of spiking neurons in hardware. The output spiking rate was controlled by varying the dc bias voltage across the device. The field-free operation of this two-terminal device and its robustness against an externally applied magnetic field make it a suitable candidate to mimic neuron response in a dense neural network (NN). The small energy consumption of the device (4-16 pJ/spike) and its scalability are important benefits for embedded applications. This compact perpendicular magnetic tunnel junction structure could finally bring spiking neural networks (SNNs) to sub-100 nm size elements.
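
    A back-of-the-envelope sketch of what the quoted 4-16 pJ/spike figure implies for average power, assuming a simple linear spike-rate-versus-bias relation; spike_rate_hz, v_onset, and gain are hypothetical placeholders rather than measured characteristics of the reported junctions.

```python
def spike_rate_hz(v_bias, v_onset=0.3, gain=2.0e6):
    # Hypothetical monotonic spiking-rate response to the dc bias voltage;
    # v_onset and gain are placeholders, not measured device parameters.
    return max(0.0, gain * (v_bias - v_onset))

def average_power_w(v_bias, energy_per_spike_j):
    # Average dissipated power is simply spike rate times energy per spike.
    return spike_rate_hz(v_bias) * energy_per_spike_j

# Use the 4-16 pJ/spike range quoted in the abstract to bound the power budget
# of a single neuron spiking at the rate set by an assumed 0.5 V bias.
for e_spike in (4e-12, 16e-12):
    p = average_power_w(0.5, e_spike)
    print(f"{e_spike * 1e12:.0f} pJ/spike -> ~{p * 1e6:.1f} uW per neuron")
```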