A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning
Reservoir computing (RC), first applied to temporal signal processing, is a
recurrent neural network in which neurons are randomly connected. Once
initialized, the connection strengths remain unchanged. Such a simple structure
turns RC into a non-linear dynamical system that maps low-dimensional inputs
into a high-dimensional space. The model's rich dynamics, linear separability,
and memory capacity then enable a simple linear readout to generate adequate
responses for various applications. RC spans areas far beyond machine learning,
since it has been shown that the complex dynamics can be realized in various
physical hardware implementations and biological devices. This yields greater
flexibility and shorter computation time. Moreover, the neuronal responses
triggered by the model's dynamics shed light on understanding brain mechanisms
that also exploit similar dynamical processes. While the literature on RC is
vast and fragmented, here we conduct a unified review of RC's recent
developments from machine learning to physics, biology, and neuroscience. We
first review the early RC models, and then survey the state-of-the-art models
and their applications. We further introduce studies that use RC to model brain mechanisms. Finally, we offer new perspectives on RC development, including reservoir design, the unification of coding frameworks, physical RC implementations, and the interaction between RC, cognitive neuroscience, and evolution.
Comment: 51 pages, 19 figures, IEEE Access
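To make the reservoir architecture concrete, below is a minimal echo state network sketch in plain numpy. It only illustrates the idea summarized above (fixed random input and recurrent weights, a trained linear readout); the network sizes, spectral-radius scaling, ridge regularization, and the toy one-step-prediction task are assumptions chosen for this example, not details taken from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the survey).
n_in, n_res, washout = 1, 200, 100

# Fixed random input and recurrent weights; only the readout is trained.
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # scale spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u; return the state trajectory."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a quasi-periodic signal.
t = np.linspace(0, 50, 2000)
u = np.sin(t) * np.cos(0.3 * t)
X = run_reservoir(u[:-1])[washout:]            # reservoir states
y = u[1:][washout:]                            # one-step-ahead targets

# Linear readout via ridge regression (the only trained parameters).
reg = 1e-6
W_out = np.linalg.solve(X.T @ X + reg * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```

Only W_out is fitted, which is what keeps training cheap compared with backpropagation through time in a conventional recurrent network.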
Simulation Intelligence: Towards a New Generation of Scientific Methods
The original "Seven Motifs" set forth a roadmap of essential methods for the
field of scientific computing, where a motif is an algorithmic method that
captures a pattern of computation and data movement. We present the "Nine
Motifs of Simulation Intelligence", a roadmap for the development and
integration of the essential algorithms necessary for a merger of scientific
computing, scientific simulation, and artificial intelligence. We call this
merger simulation intelligence (SI), for short. We argue the motifs of
simulation intelligence are interconnected and interdependent, much like the
components within the layers of an operating system. Using this metaphor, we
explore the nature of each layer of the simulation intelligence operating
system stack (SI-stack) and the motifs therein: (1) Multi-physics and
multi-scale modeling; (2) Surrogate modeling and emulation; (3)
Simulation-based inference; (4) Causal modeling and inference; (5) Agent-based
modeling; (6) Probabilistic programming; (7) Differentiable programming; (8)
Open-ended optimization; (9) Machine programming. We believe that coordinated
efforts between motifs offer immense opportunities to accelerate scientific
discovery, from solving inverse problems in synthetic biology and climate
science, to directing nuclear energy experiments and predicting emergent
behavior in socioeconomic settings. We elaborate on each layer of the SI-stack,
detailing state-of-the-art methods, presenting examples to highlight challenges
and opportunities, and advocating for specific ways to advance the motifs and
the synergies from their combinations. Advancing and integrating these
technologies can enable a robust and efficient hypothesis-simulation-analysis
type of scientific method, which we introduce with several use cases for
human-machine teaming and automated science.
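As a small illustration of one motif from the list above, simulation-based inference, here is a sketch of rejection-based approximate Bayesian computation in numpy. The toy Poisson "simulator", the prior range, and the tolerance are invented for this example and are not taken from the paper; they only show the pattern of inferring parameters by comparing simulated and observed summary statistics.

```python
import numpy as np

rng = np.random.default_rng(2)

# A toy "simulator": draws n noisy observations given an unknown rate parameter.
def simulate(theta, n=50):
    return rng.poisson(theta, size=n)

# Pretend these are the real observations (generated here with theta = 4.0).
observed = simulate(4.0)
obs_mean = observed.mean()

# Rejection ABC: sample parameters from a prior, run the simulator, and keep
# parameters whose summary statistic lands close to the observed one.
n_draws, tolerance = 20000, 0.1
prior_draws = rng.uniform(0.0, 10.0, size=n_draws)
kept = [th for th in prior_draws
        if abs(simulate(th).mean() - obs_mean) < tolerance]

print(f"accepted {len(kept)} draws; posterior mean ~ {np.mean(kept):.2f}")
```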
Dynamical Systems in Spiking Neuromorphic Hardware
Dynamical systems are universal computers. They can perceive stimuli, remember, learn from feedback, plan sequences of actions, and coordinate complex behavioural responses. The Neural Engineering Framework (NEF) provides a general recipe to formulate models of such systems as coupled sets of nonlinear differential equations and compile them onto recurrently connected spiking neural networks, akin to a programming language for spiking models of computation. The Nengo software ecosystem supports the NEF and compiles such models onto neuromorphic hardware. In this thesis, we analyze the theory driving the success of the NEF and expose several core principles underpinning its correctness, scalability, completeness, robustness, and extensibility. We also derive novel theoretical extensions to the framework that enable it to far more effectively leverage a wide variety of dynamics in digital hardware, and to exploit device-level physics in analog hardware. At the same time, we propose a novel set of spiking algorithms that recruit an optimal nonlinear encoding of time, which we call the Delay Network (DN). Backpropagation across stacked layers of DNs dramatically outperforms stacked Long Short-Term Memory (LSTM) networks, a state-of-the-art deep recurrent architecture, in both accuracy and training time on a continuous-time memory task and a chaotic time-series prediction benchmark. The basic component of this network is shown to function on state-of-the-art spiking neuromorphic hardware, including Braindrop and Loihi. This implementation approaches the energy efficiency of the human brain in the former case and the precision of conventional computation in the latter.
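As a rough illustration of the representation and transformation principles behind the NEF (nonlinear encoding of a variable by a neural population, linear least-squares decoding of functions of that variable), here is a minimal numpy sketch. The rectified-linear rate neurons, population size, and the target function x**2 are illustrative assumptions; the spiking dynamics, synaptic filters, and hardware compilation handled by Nengo are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 100

# Encoding parameters: random encoders, gains, and biases (rectified-linear neurons).
encoders = rng.choice([-1.0, 1.0], size=n_neurons)
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

def rates(x):
    """Population firing rates for scalar inputs x (shape: [len(x), n_neurons])."""
    drive = gains * np.outer(x, encoders) + biases
    return np.maximum(drive, 0.0)

# Solve for linear decoders that approximate f(x) = x**2 over the represented range.
x_train = np.linspace(-1, 1, 200)
A = rates(x_train)
target = x_train ** 2
decoders, *_ = np.linalg.lstsq(A, target, rcond=None)

# The decoded estimate of x**2, read out from neural activity alone.
x_test = np.array([-0.8, -0.2, 0.5, 0.9])
print(rates(x_test) @ decoders)   # approximately x_test ** 2
```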
Modern applications of machine learning in quantum sciences
In these Lecture Notes, we provide a comprehensive introduction to the most
recent advances in the application of machine learning methods in quantum
sciences. We cover the use of deep learning and kernel methods in supervised,
unsupervised, and reinforcement learning algorithms for phase classification,
representation of many-body quantum states, quantum feedback control, and quantum circuit optimization. Moreover, we introduce and discuss more specialized topics such as differentiable programming, generative models, the statistical approach to machine learning, and quantum machine learning.
Comment: 268 pages, 87 figures. Comments and feedback are very welcome.
Figures and tex files are available at
https://github.com/Shmoo137/Lecture-Note
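As a self-contained glimpse of one covered topic, representing a many-body quantum state with a neural network, the sketch below evaluates a restricted-Boltzmann-machine wavefunction ansatz in numpy. The system size and the random (untrained) parameters are placeholders; the variational optimization of such parameters, which the lecture notes discuss, is not shown here.

```python
import numpy as np

rng = np.random.default_rng(3)

n_spins, n_hidden = 8, 16

# Random (untrained) RBM parameters; in practice these are optimized variationally.
a = rng.normal(scale=0.1, size=n_spins)              # visible biases
b = rng.normal(scale=0.1, size=n_hidden)             # hidden biases
W = rng.normal(scale=0.1, size=(n_hidden, n_spins))  # couplings

def amplitude(s):
    """Unnormalized amplitude psi(s) for a spin configuration s in {-1, +1}^n."""
    theta = b + W @ s
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

# Evaluate the ansatz on a couple of basis states.
up = np.ones(n_spins)
neel = np.array([1, -1] * (n_spins // 2), dtype=float)
print(amplitude(up), amplitude(neel))
```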
Compensação digital de distorções da fibra em sistemas de comunicação óticos de longa distância [Digital compensation of fiber distortions in long-haul optical communication systems]
The continuous increase in traffic demand in long-haul communications has motivated
network operators to look for receiver-side techniques that mitigate the nonlinear
effects resulting from signal-signal and signal-noise interaction, thereby pushing
the current capacity boundaries. Machine learning techniques are an active research
topic, with proven results across a wide range of applications. This dissertation
studies nonlinear impairments in long-haul coherent optical links and the current
state of the art in DSP techniques for impairment mitigation, as well as the
integration of machine learning strategies into optical networks. Starting with a
simplified fiber model impaired only by ASE noise, we studied how to integrate an
ANN-based symbol estimator into the signal pipeline, validating the implementation
by matching its theoretical performance. We then moved to a nonlinear proof of
concept by incorporating nonlinear phase noise (NLPN) into the fiber link. Finally,
we evaluated the performance of the estimator under realistic simulations of
single- and multi-channel links over both SSFM and NZDSF fibers. The results
indicate that, even though finding the best architecture can be hard, nonlinear
symbol estimator networks have the potential to surpass more conventional DSP strategies.
Mestrado em Engenharia Eletrónica e Telecomunicações (Master's in Electronics and Telecommunications Engineering)
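To make the validation baseline concrete, here is a rough numpy/scipy sketch of the ASE-only setup the abstract describes: symbols sent over an additive white Gaussian noise channel, recovered by minimum-distance decisions, and checked against the closed-form error rate. The QPSK modulation, SNR value, and symbol count are illustrative choices; in the dissertation an ANN-based estimator replaces the hard-decision stage.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(4)

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / np.sqrt(2.0))

n_symbols = 200_000
es_n0_db = 8.0
es_n0 = 10 ** (es_n0_db / 10)

# Unit-energy QPSK constellation and a random transmit sequence.
constellation = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
tx = constellation[rng.integers(0, 4, size=n_symbols)]

# ASE-like impairment modeled as complex AWGN with total variance N0 = Es / (Es/N0).
n0 = 1.0 / es_n0
noise = np.sqrt(n0 / 2) * (rng.normal(size=n_symbols) + 1j * rng.normal(size=n_symbols))
rx = tx + noise

# Minimum-distance (hard) symbol decisions.
decisions = constellation[np.argmin(np.abs(rx[:, None] - constellation[None, :]), axis=1)]
ser_sim = np.mean(decisions != tx)

# Theoretical QPSK symbol error rate over AWGN.
p_dim = q_func(np.sqrt(es_n0))
ser_theory = 1 - (1 - p_dim) ** 2
print(f"simulated SER = {ser_sim:.4e}, theory = {ser_theory:.4e}")
```

Matching the simulated and theoretical curves in this way is the kind of sanity check that confirms the pipeline is wired correctly before nonlinear effects are added.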
Understanding Quantum Technologies 2022
Understanding Quantum Technologies 2022 is a creative-commons ebook that
provides a unique 360-degree overview of quantum technologies, from science and
technology to geopolitical and societal issues. It covers quantum physics
history, quantum physics 101, gate-based quantum computing, quantum computing
engineering (including quantum error corrections and quantum computing
energetics), quantum computing hardware (all qubit types, including quantum
annealing and quantum simulation paradigms, history, science, research,
implementation and vendors), quantum enabling technologies (cryogenics, control
electronics, photonics, components fabs, raw materials), quantum computing
algorithms, software development tools and use cases, unconventional computing
(potential alternatives to quantum and classical computing), quantum
telecommunications and cryptography, quantum sensing, quantum technologies
around the world, the societal impact of quantum technologies, and even quantum fake sciences. The main audience is computer science engineers, developers, and IT specialists, as well as quantum scientists and students who want to acquire a global view of how quantum technologies work, particularly quantum computing. This version is an extensive update to the 2021 edition published in October 2021.
Comment: 1132 pages, 920 figures, Letter format
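As a minimal illustration of the gate-based computing model the book surveys, the following toy numpy statevector calculation prepares a two-qubit Bell state with a Hadamard followed by a CNOT; it is not tied to any particular toolchain or vendor discussed in the book.

```python
import numpy as np

# Single-qubit and two-qubit gates as unitary matrices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>, apply H to the first qubit, then CNOT (first qubit controls the second).
state = np.zeros(4)
state[0] = 1.0
state = np.kron(H, I2) @ state
state = CNOT @ state

# Measurement probabilities: 1/2 for |00> and 1/2 for |11> (the Bell state).
print(np.round(np.abs(state) ** 2, 3))
```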