
    Nontrivial spatial dependence of the spin torques in L10 FePt-based tunneling junctions

    We present an ab initio study of the spin-transfer torque in Fe/MgO/FePt/Fe magnetic tunnel junctions. We consider an FePt film with a thickness of up to six unit cells, either in direct contact with the MgO spacer or with an intercalated ultrathin Fe seed layer. We find that in the FePt layer the torque is not attenuated as strongly as in the case of pure Fe. Moreover, in FePt the torque alternates sign at the Fe and Pt atomic planes throughout the stack for all FePt thicknesses considered. Finally, when Fe is intercalated between MgO and L10 FePt, the torque is sharply attenuated, and it is transferred to FePt only for an Fe seed layer that is less than two atomic planes thick. We attribute these features to the different spatial profiles of the exchange and correlation field and the induced nonequilibrium spin accumulation. The calculated tunneling magnetoresistance of the Fe/MgO/FePt/Fe junctions studied is enhanced with respect to that of Fe/MgO/Fe, while it is reduced by Fe intercalation. Our work shows that L10 FePt junctions can be promising candidates for current-operated magnetic devices and that the magnetic texture at the atomic scale has an important effect on the spin-transfer torque.
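In ab initio studies of this kind, the local spin-transfer torque is commonly evaluated from the nonequilibrium spin accumulation and the exchange-correlation field; the generic form below is standard in the field, not necessarily this paper's exact expression:

```latex
% Local spin-transfer torque density: the bias-induced (nonequilibrium)
% spin accumulation \delta\mathbf{m} precesses about the local
% exchange--correlation field \mathbf{B}_{\mathrm{xc}}
\mathbf{T}(\mathbf{r}) = \delta\mathbf{m}(\mathbf{r}) \times \mathbf{B}_{\mathrm{xc}}(\mathbf{r})
```

In this picture, the plane-by-plane sign alternation reported above reflects the different local exchange-correlation fields and spin accumulations at the Fe and Pt atomic planes.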

    Exploiting multiple timescales in hierarchical echo state networks

    Echo state networks (ESNs) are a powerful form of reservoir computing that only require training of linear output weights while the internal reservoir is formed of fixed randomly connected neurons. With a correctly scaled connectivity matrix, the neurons’ activity exhibits the echo-state property and responds to the input dynamics with certain timescales. Tuning the timescales of the network can be necessary for certain tasks, and some environments require multiple timescales for an efficient representation. Here we explore the timescales in hierarchical ESNs, where the reservoir is partitioned into two smaller linked reservoirs with distinct properties. Over three different tasks (NARMA10, a reconstruction task in a volatile environment, and psMNIST), we show that by selecting the hyper-parameters of each partition such that they focus on different timescales, we achieve a significant performance improvement over a single ESN. Through a linear analysis, and under the assumption that the timescales of the first partition are much shorter than the second’s (typically corresponding to optimal operating conditions), we interpret the feedforward coupling of the partitions in terms of an effective representation of the input signal, provided by the first partition to the second, whereby the instantaneous input signal is expanded into a weighted combination of its time derivatives. Furthermore, we propose a data-driven approach to optimise the hyper-parameters through a gradient descent optimisation method that is an online approximation of backpropagation through time. We demonstrate the application of the online learning rule across all the tasks considered.
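The hierarchical-ESN idea can be sketched as follows. This is a minimal illustration, not the authors' code: the reservoir sizes, input scaling, spectral radii, and leak rates are illustrative assumptions, and the fast-to-slow feedforward coupling mirrors the partitioning described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius, leak):
    # Fixed random input and recurrent weights; only a linear readout is trained
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.normal(0.0, 1.0, (n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # echo-state scaling
    return W_in, W, leak

def run(W_in, W, leak, inputs):
    # Leaky-integrator update: the leak rate sets the partition's timescale
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

inputs = rng.uniform(0.0, 0.5, 200)

# A fast first partition (short timescales) feeds a slow second partition,
# which then sees an expanded representation of the input signal
fast = make_reservoir(1, 50, spectral_radius=0.8, leak=0.9)
states_fast = run(*fast, inputs)
slow = make_reservoir(50, 50, spectral_radius=0.9, leak=0.2)
states_slow = run(*slow, states_fast)

# A linear (e.g. ridge-regression) readout would be trained on the joint states
features = np.hstack([states_fast, states_slow])
```

Concatenating both partitions' states lets the readout draw on both short and long timescales at once.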

    Machine learning using magnetic stochastic synapses

    The impressive performance of artificial neural networks has come at the cost of high energy usage and CO2 emissions. Unconventional computing architectures, with magnetic systems among the candidates, have potential as alternative energy-efficient hardware, but still face implementation challenges such as stochastic behaviour. Here, we present a methodology for exploiting the traditionally detrimental stochastic effects in magnetic domain-wall motion in nanowires. We demonstrate functional binary stochastic synapses alongside a gradient learning rule that allows their training with applicability to a range of stochastic systems. The rule, utilising the mean and variance of the neuronal output distribution, finds a trade-off between synaptic stochasticity and energy efficiency depending on the number of measurements of each synapse. For single measurements, the rule results in binary synapses with minimal stochasticity, sacrificing potential performance for robustness. For multiple measurements, synaptic distributions are broad, approximating better-performing continuous synapses. This observation allows us to choose design principles depending on the desired performance and the device's operational speed and energy cost. We verify performance on physical hardware, showing it is comparable to a standard neural network.
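The single-versus-multiple-measurement trade-off can be sketched with a toy stochastic binary synapse. This is an illustration only, not the device physics or the paper's exact learning rule; the sigmoid read probability is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def read_synapse(w, n_measurements):
    # Each read returns +1 with probability sigmoid(w), else -1.
    # Averaging n reads approximates a continuous effective weight in [-1, 1].
    p = 1.0 / (1.0 + np.exp(-w))
    reads = rng.random(n_measurements) < p
    return 2.0 * reads.mean() - 1.0

single = read_synapse(2.0, 1)       # hard binary output: exactly -1 or +1
averaged = read_synapse(2.0, 1000)  # near-continuous, at an energy/time cost
```

More reads per synapse push the effective weight toward the continuous value 2*sigmoid(w) - 1, mirroring the performance-versus-cost trade-off described above.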

    Neuromorphic computation with a single magnetic domain wall

    Machine learning techniques are commonly used to model complex relationships but implementations on digital hardware are relatively inefficient due to poor matching between conventional computer architectures and the structures of the algorithms they are required to simulate. Neuromorphic devices, and in particular reservoir computing architectures, utilize the inherent properties of physical systems to implement machine learning algorithms and so have the potential to be much more efficient. In this work, we demonstrate that the dynamics of individual domain walls in magnetic nanowires are suitable for implementing the reservoir computing paradigm in hardware. We modelled the dynamics of a domain wall placed between two anti-notches in a nickel nanowire using both a 1D collective coordinates model and micromagnetic simulations. When driven by an oscillating magnetic field, the domain wall exhibits non-linear dynamics within the potential well created by the anti-notches that are analogous to those of the Duffing oscillator. We exploit the domain wall dynamics for reservoir computing by modulating the amplitude of the applied magnetic field to inject time-multiplexed input signals into the reservoir, and show how this allows us to perform machine learning tasks including: the classification of (1) sine and square waves; (2) spoken digits; and (3) non-temporal 2D toy data and handwritten digits. Our work lays the foundation for the creation of nanoscale neuromorphic devices in which individual magnetic domain walls are used to perform complex data analysis tasks.
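The driving scheme can be sketched with a plain Duffing oscillator standing in for the 1D collective-coordinates model of the domain wall; all parameter values below are illustrative assumptions, and the amplitude modulation mimics the time-multiplexed input described above.

```python
import numpy as np

def duffing_step(x, v, t, drive, dt=0.01, delta=0.2, alpha=-1.0, beta=1.0, omega=1.2):
    # One step of x'' + delta*x' + alpha*x + beta*x**3 = drive*cos(omega*t),
    # integrated with semi-implicit Euler for better numerical stability
    a = -delta * v - alpha * x - beta * x ** 3 + drive * np.cos(omega * t)
    v = v + dt * a
    x = x + dt * v
    return x, v

def reservoir_response(inputs, steps_per_input=200, dt=0.01):
    # Time multiplexing: each input sets the field amplitude for one window,
    # and the oscillator (domain wall) position is sampled at the window's end
    x, v, t = 0.1, 0.0, 0.0
    samples = []
    for u in inputs:
        for _ in range(steps_per_input):
            x, v = duffing_step(x, v, t, drive=0.3 * u, dt=dt)
            t += dt
        samples.append(x)
    return np.array(samples)

states = reservoir_response(np.sin(np.linspace(0.0, 4.0 * np.pi, 20)) + 1.0)
```

A trained linear readout over such samples would then perform the classification tasks listed above.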

    Voltage-controlled superparamagnetic ensembles for low-power reservoir computing

    We propose thermally driven, voltage-controlled superparamagnetic ensembles as low-energy platforms for hardware-based reservoir computing. In the proposed devices, thermal noise is used to drive the ensembles' magnetization dynamics, while control of their net magnetization states is provided by strain-mediated voltage inputs. Using an ensemble of CoFeB nanodots as an example, we use analytical models and micromagnetic simulations to demonstrate how such a device can function as a reservoir and perform two benchmark machine learning tasks (spoken digit recognition and chaotic time series prediction) with competitive performance. Our results indicate robust performance on timescales from microseconds to milliseconds, potentially allowing such a reservoir to be tuned to perform a wide range of real-time tasks, from decision making in driverless cars (fast) to speech recognition (slow). The low energy consumption expected for such a device makes it an ideal candidate for use in edge computing applications that require low latency and power. The authors thank the Engineering and Physical Sciences Research Council (Grant Nos. EP/S009647/1 and EP/V006339/1) for financial support. The project leading to this application has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 861618 (SpinENGINE).
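The microsecond-to-millisecond tunability follows from standard superparamagnet physics, the Néel-Arrhenius law; the attempt frequency and barrier values below are illustrative assumptions, not device parameters from the paper.

```python
import numpy as np

def mean_dwell_time(barrier_over_kT, attempt_freq=1e9):
    # Neel-Arrhenius law: tau = (1/f0) * exp(E_b / k_B T).
    # A strain-mediated voltage input that modulates the energy barrier E_b
    # therefore tunes the magnetization timescale exponentially.
    return np.exp(barrier_over_kT) / attempt_freq

fast = mean_dwell_time(7.0)   # lowered barrier: dwell time of order a microsecond
slow = mean_dwell_time(14.0)  # raised barrier: dwell time of order a millisecond
```

A modest change in the barrier (here 7 k_B T) shifts the dwell time by roughly three orders of magnitude, which is what lets one reservoir cover both fast and slow tasks.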

    The Landau–Lifshitz equation in atomistic models

    The Landau–Lifshitz (LL) equation, originally proposed at the macrospin level, is increasingly used in Atomistic Spin Dynamics (ASD) models. These models are based on a spin Hamiltonian featuring atomic spins of fixed length, with the exchange introduced using the Heisenberg formalism. ASD models are proving a powerful approach to the fundamental understanding of ultrafast magnetization dynamics, including the prediction of the thermally induced magnetization switching phenomenon in which the magnetization is reversed using an ultrafast laser pulse in the absence of an externally applied field. This paper outlines the ASD model approach and considers the role and limitations of the LL equation in this context.
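For reference, the damped LL equation as commonly written in ASD models takes the following form (generic notation, not necessarily this paper's):

```latex
% Atomistic Landau--Lifshitz equation for a fixed-length spin S_i
\frac{\partial \mathbf{S}_i}{\partial t}
  = -\gamma\, \mathbf{S}_i \times \mathbf{H}_i
    - \gamma \lambda\, \mathbf{S}_i \times \left(\mathbf{S}_i \times \mathbf{H}_i\right),
\qquad
\mathbf{H}_i = -\frac{1}{\mu_s}\frac{\partial \mathcal{H}}{\partial \mathbf{S}_i},
\qquad
\mathcal{H} = -\sum_{i<j} J_{ij}\, \mathbf{S}_i \cdot \mathbf{S}_j + \dots
```

The first term describes precession about the effective field derived from the Heisenberg-exchange Hamiltonian, the second damping of strength λ; both terms are perpendicular to S_i, so the fixed spin length assumed by the model is preserved.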

    A perspective on physical reservoir computing with nanomagnetic devices

    Neural networks have revolutionized the area of artificial intelligence and introduced transformative applications to almost every scientific field and industry. However, this success comes at a great price; the energy requirements for training advanced models are unsustainable. One promising way to address this pressing issue is by developing low-energy neuromorphic hardware that directly supports the algorithm's requirements. The intrinsic non-volatility, non-linearity, and memory of spintronic devices make them appealing candidates for neuromorphic devices. Here, we focus on the reservoir computing paradigm, a recurrent network with a simple training algorithm suitable for computation with spintronic devices since they can provide the properties of non-linearity and memory. We review technologies and methods for developing neuromorphic spintronic devices and conclude with critical open issues to address before such devices become widely used.

    Quantifying the computational capability of a nanomagnetic reservoir computing platform with emergent magnetization dynamics

    Devices based on arrays of interconnected magnetic nano-rings with emergent magnetization dynamics have recently been proposed for use in reservoir computing applications, but for them to be computationally useful it must be possible to optimise their dynamical responses. Here, we use a phenomenological model to demonstrate that such reservoirs can be optimised for classification tasks by tuning hyperparameters that control the scaling and input rate of data into the system using rotating magnetic fields. We use task-independent metrics to assess the rings' computational capabilities at each set of these hyperparameters and show how these metrics correlate directly to performance in spoken and written digit recognition tasks. We then show that these metrics, and performance in tasks, can be further improved by expanding the reservoir's output to include multiple, concurrent measures of the ring arrays' magnetic states.
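One widely used task-independent metric of the kind mentioned is linear memory capacity: how well linear readouts of the reservoir state can reconstruct past inputs, summed over delays. The toy random reservoir below is an illustrative stand-in, not the ring-array model.

```python
import numpy as np

rng = np.random.default_rng(2)

def memory_capacity(states, inputs, max_delay=10, ridge=1e-6):
    # Sum over delays k of the squared correlation between u(t - k) and its
    # best linear reconstruction from the reservoir state at time t
    N = states.shape[1]
    mc = 0.0
    for k in range(1, max_delay + 1):
        X, y = states[k:], inputs[:-k]
        w = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)  # ridge readout
        mc += np.corrcoef(X @ w, y)[0, 1] ** 2
    return mc

# Toy stand-in reservoir: a small random leaky-tanh network driven by noise
u = rng.uniform(-1.0, 1.0, 2000)
W = rng.normal(0.0, 1.0, (30, 30))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale to spectral radius 0.9
w_in = rng.uniform(-1.0, 1.0, 30)
x = np.zeros(30)
S = []
for ut in u:
    x = np.tanh(w_in * ut + W @ x)
    S.append(x.copy())
mc = memory_capacity(np.array(S), u)
```

Sweeping hyperparameters (here, the spectral radius or input scaling) and plotting such metrics against task accuracy is how metric-to-performance correlations of the sort reported above are established.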

    Dynamically-driven emergence in a nanomagnetic system

    Emergent behaviors occur when simple interactions between a system's constituent elements produce properties that the individual elements do not exhibit in isolation. This article reports tunable emergent behaviors observed in domain wall (DW) populations of arrays of interconnected magnetic ring-shaped nanowires under an applied rotating magnetic field. DWs interact stochastically at ring junctions to create mechanisms of DW population loss and gain. These combine to give a dynamic, field-dependent equilibrium DW population that is a robust and emergent property of the array, despite highly varied local magnetic configurations. The magnetic ring arrays' properties (e.g., non-linear behavior, "fading memory" to changes in field, fabrication repeatability, and scalability) suggest they are an interesting candidate system for realizing reservoir computing (RC), a form of neuromorphic computing, in hardware. By way of example, simulations of ring arrays performing RC approach 100% success in classifying spoken digits for single speakers.
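The field-dependent equilibrium can be caricatured with a simple rate model; this is an illustrative toy, not the paper's model, and the constant nucleation gain, pairwise annihilation loss, and coefficient values are all assumptions.

```python
import numpy as np

def dw_population(n0, gain, loss, rotations=200):
    # Per field rotation: nucleation adds DWs at a constant rate, while
    # pairwise annihilation at ring junctions removes them (~ loss * n^2).
    # The population relaxes toward sqrt(gain / loss) regardless of n0.
    n = float(n0)
    history = []
    for _ in range(rotations):
        n = max(n + gain - loss * n * n, 0.0)
        history.append(n)
    return np.array(history)

# Very different starting populations converge to the same equilibrium,
# illustrating an emergent property that is independent of initial state
high = dw_population(100.0, gain=5.0, loss=0.01)
low = dw_population(0.0, gain=5.0, loss=0.01)
```

Making the gain and loss coefficients functions of the applied field amplitude would reproduce the field-dependent equilibrium and the "fading memory" of field changes described above.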
