Building Reservoir Computing Hardware Using Low Energy-Barrier Magnetics
Biologically inspired recurrent neural networks, such as reservoir computers,
are of interest for designing spatio-temporal data processors in hardware
because of their simple learning scheme and deep connections to Kalman
filters. In this work, we use in-depth simulation studies to discuss a way to
construct hardware reservoir computers using an analog stochastic neuron cell
built from a low energy-barrier magnet based magnetic tunnel junction and a few
transistors. This allows us to implement a physical embodiment of the
mathematical model of reservoir computers. Compact implementation of reservoir
computers using such devices may enable building compact, energy-efficient
signal processors for standalone or in-situ machine cognition in edge devices.
Comment: To be presented at the International Conference on Neuromorphic Systems 202
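As a software illustration of the reservoir computing scheme this abstract builds in hardware, here is a minimal echo-state-network sketch (the 100-neuron reservoir, toy sine input, and all parameters are illustrative assumptions, not taken from the paper). The key point is the "simple learning scheme" the abstract mentions: the reservoir weights stay fixed and random, and only a linear readout is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 100-neuron reservoir processing a scalar time series.
N, T = 100, 500
W_in = rng.uniform(-0.5, 0.5, size=(N, 1))      # fixed random input weights
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))       # scale spectral radius below 1

u = np.sin(0.1 * np.arange(T))[:, None]         # toy input signal
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])            # reservoir state update
    states[t] = x

# Train only the linear readout (ridge regression) to predict the next sample.
y = np.roll(u[:, 0], -1)
W_out = np.linalg.solve(states.T @ states + 1e-6 * np.eye(N), states.T @ y)
pred = states @ W_out
```

In a hardware realization such as the one the abstract proposes, the fixed random reservoir would be implemented by the stochastic magnetic neuron cells, while only the lightweight readout needs training.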
Bayesian Sensor Fusion with Fast and Low Power Stochastic Circuits
As the physical limits of Moore's law are being reached, research efforts are exploring computation paradigms that depart from standard approaches in pursuit of further performance improvements. The BAMBI project (Bottom-up Approaches to Machines dedicated to Bayesian Inference) aims at developing hardware dedicated to probabilistic computation, which extends the logic computation realised by Boolean gates in current computer chips. Such probabilistic computing devices would make it possible to solve a wide range of Artificial Intelligence applications faster and at a lower energy cost, especially when decisions must be taken from incomplete data in an uncertain environment. This paper describes an architecture in which very simple operators compute on a time coding of probability values as stochastic signals. Simulation tests and a reconfigurable logic hardware implementation demonstrated the feasibility and performance of the proposed inference machine. Hardware results show that this architecture can quickly solve Bayesian sensor fusion problems and is very efficient in terms of energy consumption.
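The idea of computing on probabilities coded as stochastic signals can be sketched in a few lines (a generic stochastic-computing illustration, not the BAMBI architecture itself; the probability values 0.6 and 0.3 are arbitrary assumptions): a probability is encoded as the rate of a random bitstream, and a single AND gate on two independent streams yields a stream whose rate is the product of the input probabilities, i.e. the fusion of independent evidence.

```python
import numpy as np

rng = np.random.default_rng(1)

def bitstream(p, n=100_000):
    """Encode probability p as a random bitstream whose mean rate is p."""
    return rng.random(n) < p

# Two independent sensor likelihoods encoded as stochastic signals.
a = bitstream(0.6)
b = bitstream(0.3)

# One AND gate fuses them: the output rate approximates p_a * p_b = 0.18,
# the (unnormalized) product of independent likelihoods.
fused = a & b
estimate = fused.mean()
```

This is why such architectures can use "very simple operators": multiplication of probabilities costs a single logic gate, at the price of longer streams for higher precision.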
Dual sampling neural network: Learning without explicit optimization
A new theory toward realizing brain-like artificial intelligence: the hint lies in the "fluctuations" of the brain's synapses (Kyoto University press release, 2022-10-24). Artificial intelligence using neural networks has achieved remarkable success. However, optimization procedures of the learning algorithms require global and synchronous operations on variables, making it difficult to realize neuromorphic hardware, a promising candidate for low-cost and energy-efficient artificial intelligence. The optimization of learning algorithms also fails to explain the recently observed criticality of the brain. Cortical neurons show a critical power law implying the best balance between expressivity and robustness of the neural code; optimization instead gives less robust codes without the criticality. To solve these two problems simultaneously, we propose a model neural network, the dual sampling neural network, in which both neurons and synapses are commonly represented as probabilistic bits, as in the brain. The network can learn external signals without explicit optimization and stably retain memories while all entities are stochastic, because seemingly optimized macroscopic behavior emerges from the microscopic stochasticity. The model reproduces various experimental results, including the critical power law. Providing a conceptual framework for computation by microscopic stochasticity without macroscopic optimization, the model will be a fundamental tool for developing scalable neuromorphic devices and revealing neural computation and learning.
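The central claim, that stable macroscopic behavior can emerge from fully stochastic microscopic entities, can be illustrated with a generic stochastic associative network (a textbook stochastic Hopfield-style sketch, not the paper's dual sampling model; the network size, inverse temperature, and update count are arbitrary assumptions): every neuron is a probabilistic bit that fires with a sigmoid probability of its local field, yet the network reliably retrieves a stored memory.

```python
import numpy as np

rng = np.random.default_rng(3)

# Store one pattern with a Hebbian rule.
N = 64
pattern = rng.choice([-1.0, 1.0], size=N)
W = np.outer(pattern, pattern) / N
np.fill_diagonal(W, 0.0)

beta = 4.0                                              # inverse temperature
s = np.where(rng.random(N) < 0.75, pattern, -pattern)   # noisy initial state

# Asynchronous Gibbs updates: each neuron is a probabilistic bit.
for _ in range(200):
    i = rng.integers(N)
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * (W[i] @ s)))
    s[i] = 1.0 if rng.random() < p_up else -1.0

overlap = float(s @ pattern) / N   # close to 1 when the memory is retrieved
```

Every single update is random, yet the overlap with the stored pattern approaches 1: the deterministic-looking retrieval is an emergent macroscopic property of the stochastic dynamics, which is the flavor of mechanism the abstract describes.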
Double-Free-Layer Stochastic Magnetic Tunnel Junctions with Synthetic Antiferromagnets
Stochastic magnetic tunnel junctions (sMTJ) using low-barrier nanomagnets
have shown promise as fast, energy-efficient, and scalable building blocks for
probabilistic computing. Despite recent experimental and theoretical progress,
sMTJs exhibiting the ideal characteristics necessary for probabilistic bits
(p-bits) are still lacking. Ideally, sMTJs should offer (a) voltage-bias
independence, preventing read disturbance; (b) uniform randomness in the
magnetization angle between the free layers; and (c) fast fluctuations that
require no external magnetic fields while remaining robust to magnetic field
perturbations. Here, we propose a new design satisfying all of these
requirements, using double-free-layer sMTJs with synthetic antiferromagnets
(SAF). We evaluate the proposed sMTJ design with experimentally benchmarked
spin-circuit models accounting for transport physics, coupled with the
stochastic Landau-Lifshitz-Gilbert equation for magnetization dynamics. We find
that the use of low-barrier SAF layers reduces dipolar coupling, achieving
uncorrelated fluctuations at zero magnetic field that survive up to diameters
exceeding ( nm) if the nanomagnets can be made thin enough
(- nm). The double-free-layer structure retains bias independence,
and the circular nature of the nanomagnets provides near-uniform randomness
with fast fluctuations. Combining our full sMTJ model with advanced transistor
models, we estimate the energy to generate a random bit as 3.6 fJ,
with fluctuation rates of 3.3 GHz per p-bit. Our results will guide
the experimental development of superior stochastic magnetic tunnel junctions
for large-scale and energy-efficient probabilistic computation for problems
relevant to machine learning and artificial intelligence.
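The behavior of a p-bit of the kind this abstract targets is often abstracted, above the device level, as a binary stochastic neuron (a generic sketch under that common abstraction, not the paper's spin-circuit/sLLG device model; the sample counts are arbitrary assumptions): the output is +1 with probability (1 + tanh(I))/2 for input I, so an unbiased p-bit fluctuates 50/50 and a bias input tilts its time average toward tanh(I).

```python
import numpy as np

rng = np.random.default_rng(2)

def p_bit(I, n=200_000):
    """Binary stochastic neuron abstraction of an sMTJ-based p-bit:
    returns +1 with probability (1 + tanh(I)) / 2, else -1."""
    return np.sign(np.tanh(I) - rng.uniform(-1.0, 1.0, n))

m0 = p_bit(0.0).mean()   # unbiased: average magnetization near 0
m1 = p_bit(1.0).mean()   # biased: average approaches tanh(1)
```

In hardware, each sample of this function corresponds to one fluctuation of the sMTJ, so the ~GHz fluctuation rates and few-fJ energies reported above set how cheaply such random bits can be drawn.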