Nearly extensive sequential memory lifetime achieved by coupled nonlinear neurons
Many cognitive processes rely on the ability of the brain to hold sequences
of events in short-term memory. Recent studies have revealed that such memory
can be read out from the transient dynamics of a network of neurons. However,
the memory performance of such a network in buffering past information has only
been rigorously estimated in networks of linear neurons. When signal gain is
kept low, so that neurons operate primarily in the linear part of their
response nonlinearity, the memory lifetime is bounded by the square root of the
network size. In this work, I demonstrate that it is possible to achieve a
memory lifetime almost proportional to the network size, "an extensive memory
lifetime", when the nonlinearity of neurons is appropriately utilized. The
analysis of neural activity revealed that nonlinear dynamics prevented the
accumulation of noise by partially removing noise in each time step. With this
error-correcting mechanism, I demonstrate that a memory lifetime of nearly
extensive order can be achieved.
Comment: 21 pages, 5 figures; the manuscript has been accepted for publication in Neural Computation
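The error-correcting role of nonlinearity described above can be illustrated with a toy chain of noisy units. This is a minimal sketch, not the paper's model: the gain of 3, the noise level, and the chain length are illustrative assumptions. A purely linear chain accumulates noise step by step, while a saturating nonlinearity pushes the state back toward an attractor at each step, so the stored bit survives far longer:

```python
import math
import random

def propagate(bit, steps, f, noise=0.3, seed=0):
    """Pass a +/-1 signal through `steps` noisy updates, applying f each time."""
    rng = random.Random(seed)
    x = float(bit)
    for _ in range(steps):
        x = f(x + rng.gauss(0.0, noise))
    return x

def linear(x):
    return x

def saturating(x):
    return math.tanh(3.0 * x)  # high gain pushes the state back toward +/-1

# Fraction of trials in which the sign of the stored bit survives 50 steps.
trials = [(b, s) for s in range(200) for b in (+1, -1)]
lin_correct = sum((propagate(b, 50, linear, seed=s) > 0) == (b > 0) for b, s in trials)
sat_correct = sum((propagate(b, 50, saturating, seed=s) > 0) == (b > 0) for b, s in trials)
print(lin_correct, sat_correct, "correct of", len(trials))
```

With these settings the saturating chain recalls the stored sign in nearly every trial, while the linear chain degrades substantially: noise that the nonlinearity would have absorbed is instead accumulated.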
A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning
Reservoir computing (RC), first applied to temporal signal processing, is a
recurrent neural network in which neurons are randomly connected. Once
initialized, the connection strengths remain unchanged. Such a simple structure
turns RC into a non-linear dynamical system that maps low-dimensional inputs
into a high-dimensional space. The model's rich dynamics, linear separability,
and memory capacity then enable a simple linear readout to generate adequate
responses for various applications. RC spans areas far beyond machine learning,
since it has been shown that the complex dynamics can be realized in various
physical hardware implementations and biological devices. This yields greater
flexibility and shorter computation time. Moreover, the neuronal responses
triggered by the model's dynamics shed light on understanding brain mechanisms
that also exploit similar dynamical processes. While the literature on RC is
vast and fragmented, here we conduct a unified review of RC's recent
developments from machine learning to physics, biology, and neuroscience. We
first review the early RC models, and then survey the state-of-the-art models
and their applications. We further introduce studies on modeling the brain's
mechanisms with RC. Finally, we offer new perspectives on RC development,
including reservoir design, the unification of coding frameworks, physical RC
implementations, and the interaction between RC, cognitive neuroscience, and
evolution.
Comment: 51 pages, 19 figures, IEEE Access
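The core RC recipe the survey describes, a fixed random recurrent network feeding a trained linear readout, fits in a short sketch. The reservoir size, weight scalings, delayed-recall task, and online LMS readout update below are illustrative assumptions, not any specific model from the survey:

```python
import math
import random

rng = random.Random(1)
N, DELAY = 30, 3  # reservoir size; task: recall the input from 3 steps back

# Fixed random weights: never trained, the defining trait of reservoir computing.
# Gaussian entries with std 0.9/sqrt(N) give a spectral radius near 0.9.
W_in = [rng.uniform(-0.5, 0.5) for _ in range(N)]
W = [[rng.gauss(0.0, 0.9 / math.sqrt(N)) for _ in range(N)] for _ in range(N)]

def step(state, u):
    """One reservoir update: nonlinear mix of the input and recurrent feedback."""
    return [math.tanh(W_in[i] * u + sum(W[i][j] * s for j, s in enumerate(state)))
            for i in range(N)]

# Train only the linear readout, here with a simple online LMS rule.
w_out, lr = [0.0] * N, 0.05
state, inputs = [0.0] * N, []
for t in range(4000):
    u = rng.uniform(-1, 1)
    inputs.append(u)
    state = step(state, u)
    if t >= DELAY:
        err = inputs[t - DELAY] - sum(w * s for w, s in zip(w_out, state))
        w_out = [w + lr * err * s for w, s in zip(w_out, state)]

# Evaluate recall on fresh input; always predicting 0 would give MSE = Var(u) = 1/3.
se = 0.0
for _ in range(300):
    u = rng.uniform(-1, 1)
    inputs.append(u)
    state = step(state, u)
    se += (inputs[-1 - DELAY] - sum(w * s for w, s in zip(w_out, state))) ** 2
mse = se / 300
print("delayed-recall test MSE:", round(mse, 4))
```

Only `w_out` is ever updated; the reservoir weights stay exactly as initialized, which is what makes training cheap and lets the same recipe transfer to physical substrates.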
Research and Education in Computational Science and Engineering
Over the past two decades the field of computational science and engineering
(CSE) has penetrated both basic and applied research in academia, industry, and
laboratories to advance discovery, optimize systems, support decision-makers,
and educate the scientific and engineering workforce. Informed by centuries of
theory and experiment, CSE performs computational experiments to answer
questions that neither theory nor experiment alone is equipped to answer. CSE
provides scientists and engineers of all persuasions with algorithmic
inventions and software systems that transcend disciplines and scales. Carried
on a wave of digital technology, CSE brings the power of parallelism to bear on
troves of data. Mathematics-based advanced computing has become a prevalent
means of discovery and innovation in essentially all areas of science,
engineering, technology, and society; and the CSE community is at the core of
this transformation. However, a combination of disruptive
developments---including the architectural complexity of extreme-scale
computing, the data revolution that engulfs the planet, and the specialization
required to follow the applications to new frontiers---is redefining the scope
and reach of the CSE endeavor. This report describes the rapid expansion of CSE
and the challenges to sustaining its bold advances. The report also presents
strategies and directions for CSE research and education for the next decade.
Comment: Major revision, to appear in SIAM Review
Memory formation in matter
Memory formation in matter is a theme of broad intellectual relevance; it
sits at the interdisciplinary crossroads of physics, biology, chemistry, and
computer science. Memory connotes the ability to encode, access, and erase
signatures of past history in the state of a system. Once the system has
completely relaxed to thermal equilibrium, it is no longer able to recall
aspects of its evolution. Memory of initial conditions or previous training
protocols will be lost. Thus many forms of memory are intrinsically tied to
far-from-equilibrium behavior and to transient response to a perturbation. This
general behavior arises in diverse contexts in condensed matter physics and
materials: phase change memory, shape memory, echoes, memory effects in
glasses, return-point memory in disordered magnets, as well as related contexts
in computer science. Yet, as opposed to the situation in biology, there is
currently no common categorization and description of the memory behavior that
appears to be prevalent throughout condensed-matter systems. Here we focus on
material memories. We will describe the basic phenomenology of a few of the
known behaviors that can be understood as constituting a memory. We hope that
this will be a guide towards developing the unifying conceptual underpinnings
for a broad understanding of memory effects that appear in materials.
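Return-point memory, one of the behaviors listed above, can be demonstrated with a minimal Preisach-type model: a population of bistable hysterons, each flipping up and down at its own pair of field thresholds. This is a standard textbook construction, with the thresholds and field values here chosen arbitrarily. After traversing an inner field sub-loop, returning to a previous turning point restores the magnetization exactly:

```python
import random

rng = random.Random(7)
# Each hysteron flips up at field h >= beta and down at h <= alpha (alpha < beta);
# in between, it keeps its current state. Stored as [alpha, beta, state].
hysterons = []
for _ in range(500):
    a, b = sorted((rng.uniform(-1, 1), rng.uniform(-1, 1)))
    hysterons.append([a, b, -1])

def apply_field(h):
    for hy in hysterons:
        if h >= hy[1]:
            hy[2] = +1
        elif h <= hy[0]:
            hy[2] = -1

def magnetization():
    return sum(hy[2] for hy in hysterons)

apply_field(0.8)                # drive the field up
apply_field(-0.3)               # partway back down: a turning point
m_turning = magnetization()
apply_field(0.5)                # traverse an inner sub-loop...
m_inner = magnetization()
apply_field(-0.3)               # ...and return to the same turning point
m_return = magnetization()
print(m_turning, m_inner, m_return)
```

The first and last magnetizations coincide exactly: every hysteron flipped during the sub-loop is flipped back on the return, which is the sense in which the material "remembers" its turning points.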
Hyper-Learning with Deep Artificial Neurons
Two problems have plagued artificial neural networks since their birth in the mid-20th century. The first is a tendency to lose previously acquired knowledge when there is a large shift in the underlying data distribution, a phenomenon provocatively known as catastrophic forgetting. The second is an inability to know-what-they-don’t-know, resulting in excessively confident behavior even in uncertain or novel conditions. This text provides an in-depth history of these obstacles, complete with formal problem definitions and literature reviews. Most importantly, the solutions proposed herein demonstrate that these challenges can be overcome with the right architectures and training objectives. As this text will show, a thorough investigation of these topics necessitated several distinct approaches, each of which, considered in isolation, offers evidence that these problems are likely temporary obstacles on the path to true human-level intelligence. Lastly, we present a new learning framework called Hyper-Learning, which might allow both of these problems to be mitigated by a single architecture when coupled with the right training algorithm.
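Catastrophic forgetting, the first of the two problems above, can be reproduced with even a linear classifier. The two toy tasks below are hypothetical: task B relabels the same input distribution as task A, so plain sequential training on B overwrites everything learned on A:

```python
import math
import random

rng = random.Random(0)

def make_task(flip):
    """Two Gaussian blobs on the x-axis; `flip` reverses which side is class 1."""
    data = []
    for _ in range(400):
        right = rng.random() < 0.5
        x = ((2.0 if right else -2.0) + rng.gauss(0, 1), rng.gauss(0, 1))
        data.append((x, int(right != flip)))
    return data

def sgd(model, data, epochs=5, lr=0.1):
    """Plain SGD on logistic loss, sample by sample."""
    w1, w2, b = model
    for _ in range(epochs):
        for (x1, x2), y in data:
            p = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))
            g = p - y
            w1 -= lr * g * x1
            w2 -= lr * g * x2
            b -= lr * g
    return [w1, w2, b]

def accuracy(model, data):
    w1, w2, b = model
    return sum(((w1 * x1 + w2 * x2 + b) > 0) == (y == 1)
               for (x1, x2), y in data) / len(data)

task_a, task_b = make_task(flip=False), make_task(flip=True)
model = sgd([0.0, 0.0, 0.0], task_a)
acc_before = accuracy(model, task_a)   # high: the model fits task A
model = sgd(model, task_b)             # continue training on task B only
acc_after = accuracy(model, task_a)    # collapses: task A has been forgotten
print(acc_before, acc_after)
```

Nothing in plain SGD protects the old solution; mitigations of the kind this text studies (regularization toward old weights, replay, modular architectures) all add such protection in one form or another.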
Analog Photonics Computing for Information Processing, Inference and Optimisation
This review presents an overview of the current state-of-the-art in photonics
computing, which leverages photons, photons coupled with matter, and
optics-related technologies for effective and efficient computational purposes.
It covers the history and development of photonics computing and modern
analogue computing platforms and architectures, focusing on optimization tasks
and neural network implementations. The authors examine special-purpose
optimizers, mathematical descriptions of photonics optimizers, and their
various interconnections. Disparate applications are discussed, including
direct encoding, logistics, finance, phase retrieval, machine learning, neural
networks, probabilistic graphical models, and image processing, among many
others. The main directions of technological advancement and associated
challenges in photonics computing are explored, along with an assessment of its
efficiency. Finally, the paper discusses prospects and the field of optical
quantum computing, providing insights into the potential applications of this
technology.
Comment: Invited submission by Journal of Advanced Quantum Technologies; accepted version 5/06/202
Fast fluorescence lifetime imaging and sensing via deep learning
Error on title page – year of award is 2023.
Fluorescence lifetime imaging microscopy (FLIM) has become a valuable tool in diverse disciplines. This thesis presents deep learning (DL) approaches to addressing two major challenges in FLIM: slow and complex data analysis and the high photon budget for precisely quantifying fluorescence lifetimes. DL's ability to extract high-dimensional features from data has revolutionized optical and biomedical imaging analysis. This thesis contributes several novel DL FLIM algorithms that significantly expand FLIM's scope.
Firstly, a hardware-friendly pixel-wise DL algorithm is proposed for fast FLIM data analysis. The algorithm has a simple architecture yet can effectively resolve multi-exponential decay models. The calculation speed and accuracy outperform conventional methods significantly.
Secondly, a DL algorithm is proposed to improve FLIM image spatial resolution, obtaining high-resolution (HR) fluorescence lifetime images from low-resolution (LR) images. A computational framework is developed to generate large-scale semi-synthetic FLIM datasets to address the challenge of the lack of sufficient high-quality FLIM datasets. This algorithm offers a practical approach to obtaining HR FLIM images quickly for FLIM systems.
Thirdly, a DL algorithm is developed to analyze FLIM images with only a few photons per pixel, named Few-Photon Fluorescence Lifetime Imaging (FPFLI) algorithm. FPFLI uses spatial correlation and intensity information to robustly estimate the fluorescence lifetime images, pushing this photon budget to a record-low level of only a few photons per pixel.
Finally, a time-resolved flow cytometry (TRFC) system is developed by integrating an advanced CMOS single-photon avalanche diode (SPAD) array and a DL processor. The SPAD array, using a parallel light detection scheme, shows an excellent photon-counting throughput. A quantized convolutional neural network (QCNN) algorithm is designed and implemented on a field-programmable gate array as an embedded processor. The processor resolves fluorescence lifetimes against disturbing noise, showing unparalleled high accuracy, fast analysis speed, and low power consumption.
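The photon-budget problem that FPFLI targets shows up already in the simplest non-DL baseline: for a mono-exponential decay, the maximum-likelihood lifetime estimate is just the mean photon arrival time, and its standard error scales as the lifetime divided by the square root of the photon count. The 2.5 ns lifetime and photon counts below are hypothetical values for illustration:

```python
import random

rng = random.Random(3)
TAU = 2.5  # assumed ground-truth fluorescence lifetime, in ns

def estimate_lifetime(n_photons):
    """MLE for a mono-exponential decay: the mean photon arrival time."""
    arrivals = [rng.expovariate(1.0 / TAU) for _ in range(n_photons)]
    return sum(arrivals) / n_photons

few = estimate_lifetime(10)        # few-photon regime: std error ~ TAU / sqrt(10)
many = estimate_lifetime(100_000)  # generous photon budget: very tight estimate
print(f"10 photons: {few:.2f} ns   100k photons: {many:.2f} ns   truth: {TAU} ns")
```

Per-pixel estimates from a handful of photons are therefore very noisy on their own, which is why FPFLI leans on spatial correlation and intensity information across pixels rather than fitting each pixel independently.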
Design Optimization of Wind Energy Conversion Systems with Applications
Modern horizontal-axis wind turbines, with power capacities reaching 15 MW and rotor diameters of more than 235 meters, are under continuous development with the aim of minimizing the unit cost of energy production (total annual cost divided by annual energy produced). These advances in this competitive source of clean energy have motivated numerous research contributions to wind-industry technology worldwide. This book provides important information on the optimum design of wind energy conversion systems (WECS), with a comprehensive and self-contained treatment of the design fundamentals of wind turbines. Section I deals with optimal production of energy, multidisciplinary optimization of wind turbines, aerodynamic and structural dynamic optimization, and aeroelasticity of the rotating blades. Section II considers operational monitoring, reliability, and optimal control of wind turbine components.
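The objective quoted above, unit cost of energy as total annual cost over annual energy produced, reduces to a one-line calculation once a capacity factor is assumed. The cost and capacity-factor figures below are hypothetical, not taken from the book:

```python
def unit_energy_cost(annual_cost_usd, rated_mw, capacity_factor):
    """Unit cost of energy = total annual cost / annual energy produced."""
    annual_mwh = rated_mw * capacity_factor * 8760.0  # 8760 hours in a year
    return annual_cost_usd / annual_mwh

# Hypothetical figures for a 15 MW turbine at a 45% capacity factor.
cost = unit_energy_cost(annual_cost_usd=3_600_000, rated_mw=15.0, capacity_factor=0.45)
print(f"{cost:.2f} USD/MWh")
```

Every design variable the book optimizes, rotor aerodynamics, structural mass, reliability, control, ultimately acts on this ratio through either the numerator (cost) or the denominator (energy capture).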
Machine Learning-based Predictive Maintenance for Optical Networks
Optical networks provide the backbone of modern telecommunications by connecting the world faster than ever before. However, such networks are susceptible to several failures (e.g., optical fiber cuts, malfunctioning optical devices), which might result in degraded network operation, massive data loss, and network disruption. It is challenging to accurately and quickly detect and localize such failures due to the complexity of such networks, the time required to identify and pinpoint a fault using conventional approaches, and the lack of proactive, efficient fault management mechanisms. Therefore, it is highly beneficial to perform fault management in optical communication systems in order to reduce the mean time to repair, to meet service level agreements more easily, and to enhance network reliability. In this thesis, these challenges and needs are tackled by investigating the use of machine learning (ML) techniques for implementing efficient proactive fault detection, diagnosis, and localization schemes for optical communication systems. In particular, the adoption of ML methods for solving the following problems is explored:
- Degradation prediction of semiconductor lasers
- Lifetime (mean time to failure) prediction of semiconductor lasers
- Remaining useful life (the length of time a machine is likely to operate before it requires repair or replacement) prediction of semiconductor lasers
- Optical fiber fault detection, localization, characterization, and identification for different optical network architectures
- Anomaly detection in optical fiber monitoring
Such ML approaches outperform the conventionally employed methods for all the investigated use cases, achieving better prediction accuracy and earlier prediction or detection capability.
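As a far simpler baseline than the ML models studied in the thesis, remaining useful life can be estimated by fitting the trend of a degradation indicator and extrapolating to a failure threshold. The drifting bias-current indicator, threshold, drift rate, and noise level below are synthetic assumptions for illustration:

```python
import random

rng = random.Random(5)
# Synthetic degradation: a laser's bias current drifts upward over operating
# hours; the device is declared failed when the indicator crosses THRESHOLD.
THRESHOLD, DRIFT, NOISE = 1.20, 0.0003, 0.005   # hypothetical units
hours = list(range(0, 500, 10))
current = [1.0 + DRIFT * t + rng.gauss(0, NOISE) for t in hours]

# Fit current ~ a*t + b by ordinary least squares, then extrapolate.
n = len(hours)
mx, my = sum(hours) / n, sum(current) / n
a = (sum((t - mx) * (c - my) for t, c in zip(hours, current))
     / sum((t - mx) ** 2 for t in hours))
b = my - a * mx
failure_time = (THRESHOLD - b) / a        # when the fit crosses the threshold
rul = failure_time - hours[-1]            # remaining useful life from now
print(f"predicted failure at ~{failure_time:.0f} h, RUL ~{rul:.0f} h")
```

Linear extrapolation only works when degradation really is linear and well observed; the appeal of the ML methods in the thesis is precisely that they handle noisier, nonlinear degradation patterns and earlier prediction horizons.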
Advances in Reinforcement Learning
Reinforcement Learning (RL) is a very dynamic area in terms of both theory and application. This book brings together many different aspects of current research in the fields associated with RL, which has been growing rapidly, producing a wide variety of learning algorithms for different applications. Across 24 chapters, it covers a very broad range of topics in RL and their application in autonomous systems. A set of chapters provides a general overview of RL, while the other chapters focus mostly on applications of RL paradigms: Game Theory, Multi-Agent Theory, Robotics, Networking Technologies, Vehicular Navigation, Medicine, and Industrial Logistics.
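A minimal instance of the paradigm the book surveys is tabular Q-learning on a toy chain world; all hyperparameters here are arbitrary illustrative choices:

```python
import random

rng = random.Random(2)
N_STATES, GOAL = 6, 5            # states 0..5 on a chain; reward only at the right end
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[s][a], action 0 = left, 1 = right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

def greedy(s):
    # Break exact ties randomly so the untrained agent still moves around.
    return rng.randrange(2) if Q[s][0] == Q[s][1] else (1 if Q[s][1] > Q[s][0] else 0)

for _ in range(500):                             # episodes
    s = 0
    while s != GOAL:
        a = rng.randrange(2) if rng.random() < EPS else greedy(s)
        s2 = min(max(s + (1 if a else -1), 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # The Q-learning update: bootstrap from the best action in the next state.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

policy = [greedy(s) for s in range(N_STATES - 1)]
print("greedy policy (1 = move right):", policy)
```

With enough episodes the greedy policy should move right in every non-goal state; the learned values fall off geometrically with distance to the reward, set by the discount factor GAMMA.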