From Vertices to Vortices in magnetic nanoislands
Recent studies in magnetic nanolithography show that a variety of complex
magnetic states emerge as a function of a single magnetic island's aspect
ratio. We propose a model which, in addition to fitting experiments, predicts
magnetic states with continuous symmetry at particular aspect ratios and
reveals a duality between vortex and vertex states. Our model thus opens new
means of engineering novel types of artificial spin system and of applying
complex magnetic textures in devices and computing.
Spectral fingerprinting: microstate readout via remanence ferromagnetic resonance in artificial spin ice
Artificial spin ices (ASIs) are magnetic metamaterials comprising geometrically tiled,
strongly-interacting nanomagnets. There is significant interest in these systems, spanning
the fundamental physics of many-body systems to potential applications in neuromorphic
computation, logic, and recently reconfigurable magnonics. Magnonics-focused studies on ASI
have to date concentrated on the in-field GHz spin-wave response, convoluting effects from
applied field, nanofabrication imperfections (‘quenched disorder’) and microstate-dependent
dipolar field landscapes. Here, we investigate zero-field measurements of the spin-wave
response and demonstrate their ability to provide a ‘spectral fingerprint’ of the system
microstate. Removing the applied field allows deconvolution of distinct contributions to
reversal dynamics from the spin-wave spectra, directly measuring dipolar field strength and
quenched disorder as well as net magnetisation. We demonstrate the efficacy and sensitivity
of this approach by measuring ASI in three microstates with identical (zero) magnetisation,
indistinguishable via magnetometry. The zero-field spin-wave response provides distinct
spectral fingerprints of each state, allowing rapid, scalable microstate readout. As
artificial spin systems progress toward device implementation, zero-field functionality is
crucial to minimise the power consumption associated with electromagnets. Several proposed
hardware neuromorphic computation schemes hinge on leveraging dynamic measurement of ASI
microstates to perform computation, for which spectral fingerprinting provides a potential
solution.
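As an illustration of the readout principle above, here is a minimal sketch (not the authors' analysis code) in which a measured zero-field spin-wave spectrum is assigned to the reference microstate whose spectral fingerprint it most closely matches; the frequency grid, mode positions and state labels are hypothetical placeholders.

    import numpy as np

    def classify_microstate(spectrum, reference_spectra):
        """Assign a zero-field spin-wave spectrum to the reference microstate
        whose spectral fingerprint it most closely resembles."""
        def normalise(s):
            s = np.asarray(s, dtype=float)
            return (s - s.mean()) / (s.std() + 1e-12)
        target = normalise(spectrum)
        # Pearson-like similarity between normalised spectra
        scores = {label: float(np.dot(target, normalise(ref))) / len(target)
                  for label, ref in reference_spectra.items()}
        return max(scores, key=scores.get), scores

    # Hypothetical example: three zero-magnetisation microstates with distinct
    # mode structure, plus a noisy re-measurement of one of them.
    rng = np.random.default_rng(0)
    f = np.linspace(2, 14, 600)  # frequency axis in GHz

    def lorentzian(f0, w=0.3):
        return w**2 / ((f - f0)**2 + w**2)

    refs = {"state A": lorentzian(6.1) + lorentzian(9.8),
            "state B": lorentzian(5.4) + lorentzian(10.6),
            "state C": lorentzian(7.0) + lorentzian(8.9)}
    measured = refs["state B"] + 0.05 * rng.standard_normal(f.size)

    label, scores = classify_microstate(measured, refs)
    print(label)  # -> "state B"
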
Reconfigurable Training and Reservoir Computing in an Artificial Spin-Vortex Ice via Spin-Wave Fingerprinting
Strongly-interacting artificial spin systems are moving beyond mimicking
naturally-occurring materials to emerge as versatile functional platforms, from
reconfigurable magnonics to neuromorphic computing. Typically, artificial spin
systems comprise nanomagnets with a single magnetisation texture: collinear
macrospins or chiral vortices. By tuning nanoarray dimensions we achieve
macrospin/vortex bistability and demonstrate a four-state metamaterial
spin-system 'Artificial Spin-Vortex Ice' (ASVI). ASVI can host Ising-like
macrospins with strong ice-like vertex interactions, and weakly-coupled
vortices with low stray dipolar-field. Vortices and macrospins exhibit
starkly-differing spin-wave spectra with analogue-style mode-amplitude control
and mode-frequency shifts of Δf = 3.8 GHz.
The enhanced bi-textural microstate space gives rise to emergent physical
memory phenomena, with ratchet-like vortex training and history-dependent
nonlinear fading memory when driven through global field cycles. We employ
spin-wave microstate fingerprinting for rapid, scalable readout of vortex and
macrospin populations and leverage this for spin-wave reservoir computation.
ASVI performs linear and non-linear mapping transformations of diverse input
signals as well as chaotic time-series forecasting. Energy costs of machine
learning are spiralling unsustainably; developing low-energy neuromorphic
computation hardware such as ASVI is crucial to achieving a zero-carbon
computational future.
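The reservoir-computing step can be pictured with the following minimal sketch (not the authors' pipeline): the array is treated as a black-box reservoir whose response to a driving sequence is recorded as feature vectors, and only a linear ridge-regression readout is trained, here on a toy one-step forecasting task. Because the real spin-wave fingerprints are not reproduced here, a placeholder nonlinear map with fading memory stands in for the measured ASVI response.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy input series to forecast one step ahead (a stand-in for the global
    # field-cycle drive applied to the ASVI).
    t = np.arange(1200)
    u = np.sin(0.05 * t) * np.cos(0.017 * t)

    # Placeholder for the measured spin-wave fingerprints: in the experiment each
    # entry would be the spectrum recorded after the n-th field cycle. Here a
    # fixed random nonlinear map with fading memory plays that role.
    n_features = 50
    W_in = rng.normal(scale=0.5, size=n_features)
    W_res = rng.normal(scale=0.1, size=(n_features, n_features))
    x, states = np.zeros(n_features), []
    for u_n in u:
        x = np.tanh(W_in * u_n + W_res @ x)  # nonlinear, history-dependent
        states.append(x.copy())
    X = np.array(states)

    # Train a ridge-regression readout to forecast u(n + 1) from the state at n.
    split, ridge = 800, 1e-4
    X_train, y_train = X[:split], u[1:split + 1]
    X_test, y_test = X[split:-1], u[split + 1:]
    W_out = np.linalg.solve(X_train.T @ X_train + ridge * np.eye(n_features),
                            X_train.T @ y_train)
    pred = X_test @ W_out
    print("test NMSE:", np.mean((pred - y_test) ** 2) / np.var(y_test))
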
Optimising network interactions through device agnostic models
Physically implemented neural networks hold the potential to achieve the
performance of deep learning models by exploiting the innate physical
properties of devices as computational tools. This exploration of physical
processes for computation also requires considering their intrinsic dynamics,
which can serve as valuable resources for processing information. However, existing
computational methods are unable to extend the success of deep learning
techniques to parameters influencing device dynamics, which often lack a
precise mathematical description. In this work, we formulate a universal
framework to optimise interactions with dynamic physical systems in a fully
data-driven fashion. The framework adopts neural stochastic differential
equations as differentiable digital twins, effectively capturing both
deterministic and stochastic behaviours of devices. Employing differentiation
through the trained models provides the essential mathematical estimates for
optimising a physical neural network, harnessing the intrinsic temporal
computation abilities of its physical nodes. To accurately model real devices'
behaviours, we formulate neural-SDE variants that can operate under a variety
of experimental settings. Our work demonstrates the framework's applicability
through simulations and physical implementations of interacting dynamic
devices, while highlighting the importance of accurately capturing system
stochasticity for the successful deployment of a physically defined neural
network.
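A minimal sketch of the digital-twin idea, assuming a PyTorch environment and a simple Euler-Maruyama integrator rather than any particular neural-SDE library (network sizes and the device data are hypothetical placeholders): drift and diffusion networks are fitted to recorded device trajectories, after which the twin is differentiable end to end, so the same autograd pass can optimise the signals sent to the physical node.

    import torch
    import torch.nn as nn

    class NeuralSDE(nn.Module):
        """Digital twin dy = f(y, u) dt + g(y, u) dW with learned drift f and noise g."""

        def __init__(self, dim=2, hidden=32):
            super().__init__()
            self.f = nn.Sequential(nn.Linear(dim + 1, hidden), nn.Tanh(),
                                   nn.Linear(hidden, dim))                 # drift
            self.g = nn.Sequential(nn.Linear(dim + 1, hidden), nn.Tanh(),
                                   nn.Linear(hidden, dim), nn.Softplus())  # noise scale

        def forward(self, y0, u, dt=0.01):
            """Euler-Maruyama rollout; u has shape (steps, batch, 1)."""
            y, path = y0, [y0]
            for u_t in u:
                inp = torch.cat([y, u_t], dim=-1)
                dW = torch.randn_like(y) * dt ** 0.5
                y = y + self.f(inp) * dt + self.g(inp) * dW
                path.append(y)
            return torch.stack(path)

    # Hypothetical training loop against recorded device trajectories: fit the
    # twin so that simulated rollouts match the measurements.
    steps, batch, dim = 100, 16, 2
    u = torch.randn(steps, batch, 1)               # drive applied to the device
    targets = torch.randn(steps + 1, batch, dim)   # placeholder measured responses
    twin = NeuralSDE(dim)
    opt = torch.optim.Adam(twin.parameters(), lr=1e-3)
    for _ in range(100):
        opt.zero_grad()
        loss = ((twin(targets[0], u) - targets) ** 2).mean()
        loss.backward()
        opt.step()
    # Once trained, the twin is fully differentiable, so the same autograd pass
    # can instead optimise the inputs or parameters sent to the real device.
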
Task-adaptive physical reservoir computing
Reservoir computing is a neuromorphic architecture that may offer viable solutions to the
growing energy costs of machine learning. In software-based machine learning, computing
performance can be readily reconfigured to suit different computational tasks by tuning
hyperparameters. This critical functionality is missing in 'physical' reservoir computing
schemes that exploit nonlinear and history-dependent responses of physical systems for data
processing. Here we overcome this issue with a 'task-adaptive' approach to physical
reservoir computing. By leveraging a thermodynamical phase space to reconfigure key
reservoir properties, we optimize computational performance across a diverse task set. We
use the spin-wave spectra of the chiral magnet Cu2OSeO3 that hosts skyrmion, conical and
helical magnetic phases, providing on-demand access to different computational reservoir
responses. The task-adaptive approach is applicable to a wide variety of physical systems,
which we show in other chiral magnets via above (and near) room-temperature demonstrations
in Co8.5Zn8.5Mn3 (and FeGe).
Neuromorphic Few-Shot Learning: Generalization in Multilayer Physical Neural Networks
Neuromorphic computing leverages the complex dynamics of physical systems for
computation. The field has recently undergone an explosion in the range and
sophistication of implementations, with rapidly improving performance.
Neuromorphic schemes typically employ a single physical system, limiting the
dimensionality and range of available dynamics - restricting strong performance
to a few specific tasks. This is a critical roadblock facing the field,
inhibiting the power and versatility of neuromorphic schemes.
Here, we present a solution. We engineer a diverse suite of nanomagnetic
arrays and show how tuning microstate space and geometry enables a broad range
of dynamics and computing performance. We interconnect arrays in parallel,
series and multilayered neural network architectures, where each network node
is a distinct physical system. This networked approach grants extremely high
dimensionality and enriched dynamics, enabling meta-learning on small training
sets and strong performance across a broad task set. We showcase network
performance via few-shot learning, rapidly adapting on the fly to previously
unseen tasks.
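A minimal sketch, under stated assumptions (this is not the authors' implementation), of how networking fixed physical nodes supports few-shot adaptation: each nanomagnetic array is modelled as a frozen random nonlinear map, nodes are combined in parallel and in series to raise dimensionality, and only a small linear readout is refit from a handful of examples of a new task.

    import numpy as np

    rng = np.random.default_rng(2)

    def make_node(n_in, n_out):
        """Stand-in for one nanomagnetic array: a fixed, untrained nonlinear map."""
        W = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_in, n_out))
        return lambda x: np.tanh(x @ W)

    # A small multilayer network of physical nodes: two parallel nodes feed a third.
    node_a, node_b = make_node(1, 40), make_node(1, 40)
    node_c = make_node(80, 60)

    def network_features(u):
        """u: (n_samples, 1) inputs -> concatenated outputs of all nodes."""
        h = np.concatenate([node_a(u), node_b(u)], axis=1)   # parallel layer
        return np.concatenate([h, node_c(h)], axis=1)        # plus series layer

    def few_shot_readout(u_shots, y_shots, ridge=1e-3):
        """Fit only a linear readout from a handful of (input, target) pairs."""
        X = network_features(u_shots)
        return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y_shots)

    # Hypothetical new task seen through only 10 examples: y = sin(3u).
    u_shots = rng.uniform(-1, 1, size=(10, 1))
    W_out = few_shot_readout(u_shots, np.sin(3 * u_shots))
    u_test = np.linspace(-1, 1, 200).reshape(-1, 1)
    pred = network_features(u_test) @ W_out
    print("test MSE:", np.mean((pred - np.sin(3 * u_test)) ** 2))
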
Task-adaptive physical reservoir computing
Reservoir computing is a neuromorphic architecture that potentially offers
viable solutions to the growing energy costs of machine learning. In
software-based machine learning, neural network properties and performance can
be readily reconfigured to suit different computational tasks by changing
hyperparameters. This critical functionality is missing in 'physical'
reservoir computing schemes that exploit nonlinear and history-dependent memory
responses of physical systems for data processing. Here, we experimentally
present a 'task-adaptive' approach to physical reservoir computing, capable of
reconfiguring key reservoir properties (nonlinearity, memory-capacity and
complexity) to optimise computational performance across a broad range of
tasks. As a model case of this, we use the temperature- and
magnetic-field-controlled spin-wave response of Cu2OSeO3, which hosts skyrmion,
conical and helical magnetic phases, providing on-demand access to a host of
different physical reservoir responses. We quantify phase-tunable reservoir
performance, characterise the corresponding reservoir properties and discuss
the correlation between these in physical reservoirs. This task-adaptive
approach overcomes key prior limitations of physical reservoirs, opening
opportunities to apply thermodynamically stable and metastable phase control
across a wide variety of physical reservoir systems, as we show its
transferable nature using above- (near-) room-temperature demonstrations with
Co8.5Zn8.5Mn3 (FeGe).
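The reservoir properties named above can be quantified with standard reservoir-computing metrics; the sketch below (illustrative only, not tied to the Cu2OSeO3 data) estimates linear short-term memory capacity, i.e. the sum over delays k of R^2 between the delayed input u(n - k) and its linear reconstruction from the recorded reservoir states.

    import numpy as np

    def memory_capacity(X, u, max_delay=20, ridge=1e-6):
        """Linear short-term memory capacity of a reservoir.

        X : (n_steps, n_features) recorded reservoir states (e.g. spectral features)
        u : (n_steps,) random input sequence that drove the reservoir
        Returns the sum over delays k of R^2 between u(n - k) and its linear
        reconstruction from the states (a held-out split would be used in practice).
        """
        n, d = X.shape
        total = 0.0
        for k in range(1, max_delay + 1):
            Xk, yk = X[k:], u[:-k]            # reconstruct the input k steps back
            W = np.linalg.solve(Xk.T @ Xk + ridge * np.eye(d), Xk.T @ yk)
            r = np.corrcoef(Xk @ W, yk)[0, 1]
            total += r ** 2
        return total

    # Hypothetical usage with a placeholder leaky reservoir standing in for the
    # phase-dependent spin-wave response.
    rng = np.random.default_rng(3)
    u = rng.uniform(-1, 1, 2000)
    W_in, W_res = rng.normal(size=30), 0.1 * rng.normal(size=(30, 30))
    x, states = np.zeros(30), []
    for u_n in u:
        x = 0.7 * x + 0.3 * np.tanh(W_in * u_n + W_res @ x)
        states.append(x.copy())
    print("memory capacity:", memory_capacity(np.array(states), u))
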
Ultrastrong Magnon-Magnon Coupling and Chiral Symmetry Breaking in a 3D Magnonic Metamaterial
Strongly-interacting nanomagnetic arrays are ideal systems for exploring the
frontiers of magnonic control. They provide functional reconfigurable platforms
and attractive technological solutions across storage, GHz communications and
neuromorphic computing. Typically, these systems are primarily constrained by
their range of accessible states and the strength of magnon coupling phenomena.
Increasingly, magnetic nanostructures have explored the benefits of expanding
into three dimensions. This has broadened the horizons of magnetic microstate
spaces and functional behaviours, but precise control of 3D states and dynamics
remains challenging.
Here, we introduce a 3D magnonic metamaterial, compatible with
widely-available fabrication and characterisation techniques. By combining
independently-programmable artificial spin-systems strongly coupled in the
z-plane, we construct a reconfigurable 3D metamaterial with an exceptionally
high 16^N microstate space and intense static and dynamic magnetic coupling.
The system exhibits a broad range of emergent phenomena including ultrastrong
magnon-magnon coupling with high normalised coupling rates and magnon-magnon
cooperativity up to C = 126.4, GHz mode shifts in zero applied field, and
chirality-selective magneto-toroidal microstate programming with corresponding
magnonic spectral control.
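For orientation on the figures of merit quoted above, the following are the standard coupled-oscillator relations commonly used in magnon-magnon coupling studies (not expressions taken from the paper itself): the hybridised mode frequencies, the cooperativity and the normalised coupling rate are

    f_{\pm} = \frac{f_1 + f_2}{2} \pm \sqrt{\left(\frac{f_1 - f_2}{2}\right)^{2} + g^{2}},
    \qquad
    C = \frac{g^{2}}{\kappa_1 \kappa_2},
    \qquad
    \eta = \frac{g}{f_{\mathrm{res}}},

where f_1 and f_2 are the uncoupled mode frequencies, g is the coupling strength extracted from the anti-crossing, \kappa_1 and \kappa_2 are the mode half-linewidths, C is the cooperativity (C = 126.4 above) and \eta is the normalised coupling rate, with values of order 0.1 and above conventionally taken to mark the ultrastrong-coupling regime.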