Analog Photonics Computing for Information Processing, Inference and Optimisation
This review presents an overview of the current state-of-the-art in photonics
computing, which leverages photons, photons coupled with matter, and
optics-related technologies for effective and efficient computational purposes.
It covers the history and development of photonics computing and modern
analogue computing platforms and architectures, focusing on optimization tasks
and neural network implementations. The authors examine special-purpose
optimizers, mathematical descriptions of photonics optimizers, and their
various interconnections. Disparate applications are discussed, including
direct encoding, logistics, finance, phase retrieval, machine learning, neural
networks, probabilistic graphical models, and image processing, among many
others. The main directions of technological advancement and associated
challenges in photonics computing are explored, along with an assessment of its
efficiency. Finally, the paper discusses prospects and the field of optical
quantum computing, providing insights into the potential applications of this
technology.

Comment: Invited submission by Journal of Advanced Quantum Technologies; accepted version 5/06/202
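Many of the photonic special-purpose optimizers discussed in this literature target Ising-type energy functions. As a purely classical, illustrative sketch (not any specific photonic scheme), the following greedy single-spin-flip descent minimizes a small Ising energy; the fully connected ferromagnetic instance is invented for the example:

```python
def ising_energy(J, s):
    """Ising energy E = -sum_{i<j} J[i][j] * s[i] * s[j] for spins s in {-1, +1}."""
    n = len(s)
    return -sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))

def greedy_descent(J, s):
    """Repeatedly flip any spin misaligned with its local field until a local minimum."""
    s = list(s)
    improved = True
    while improved:
        improved = False
        for i in range(len(s)):
            h = sum(J[i][j] * s[j] for j in range(len(s)) if j != i)  # local field
            if s[i] * h < 0:  # flipping lowers the energy by 2*|h|
                s[i] = -s[i]
                improved = True
    return s
```

For an all-ferromagnetic coupling matrix, the descent aligns every spin, reaching the ground-state energy -n(n-1)/2.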
Global exponential periodicity of nonlinear neural networks with multiple time-varying delays
Global exponential periodicity of nonlinear neural networks with multiple time-varying delays is investigated. Such neural networks cannot be written in vector-matrix form because of the multiple delays. It is noted that, although neural networks with multiple time-varying delays have been investigated by the Lyapunov-Krasovskii functional method in the literature, sufficient conditions in linear matrix inequality form have not been obtained. Two sets of sufficient conditions in linear matrix inequality form are established via Lyapunov-Krasovskii functionals and linear matrix inequalities to ensure that two arbitrary solutions of the neural network with multiple delays attract each other exponentially. This is a key prerequisite for proving the existence, uniqueness, and global exponential stability of periodic solutions. Some examples are provided to demonstrate the effectiveness of the established results. We compare the established theoretical results with previous ones and show that the previous results are not applicable to the systems in these examples.
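The attraction property described above can be illustrated numerically. The scalar delayed system below, with made-up parameters chosen so that the decay rate dominates the delayed gains (a > b + c), is only a toy stand-in for the networks analyzed in the paper; two solutions started from different constant histories approach each other exponentially:

```python
import math

def simulate(history, a=2.0, b=0.4, c=0.4, tau1=0.3, tau2=0.6, dt=0.01, T=15.0):
    """Euler integration of x'(t) = -a*x(t) + b*tanh(x(t-tau1)) + c*tanh(x(t-tau2))
    with a constant initial history on [-tau2, 0]."""
    d1, d2 = int(tau1 / dt), int(tau2 / dt)
    xs = [history] * max(d1, d2)  # stored trajectory, seeded with the history
    x = history
    for _ in range(int(T / dt)):
        k = len(xs)
        x = x + dt * (-a * x + b * math.tanh(xs[k - d1]) + c * math.tanh(xs[k - d2]))
        xs.append(x)
    return x
```

Starting from the constant histories +1 and -1, the two trajectories end up essentially indistinguishable, consistent with the exponential mutual attraction the LMI conditions guarantee.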
Piecewise pseudo almost periodic solutions of interval general BAM neural networks with mixed time-varying delays and impulsive perturbations
This paper is concerned with piecewise pseudo almost periodic solutions of a class of interval general BAM neural networks with mixed time-varying delays and impulsive perturbations. By adopting the exponential dichotomy of linear differential equations and the fixed point theory of contraction mappings, sufficient conditions for the existence of piecewise pseudo almost periodic solutions of these networks are obtained. By adopting differential inequality techniques and mathematical induction, the global exponential stability of the piecewise pseudo almost periodic solutions is discussed. An example is given to illustrate the effectiveness of the results obtained in the paper.
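The contraction-mapping step can be sketched in isolation: Banach's fixed-point theorem guarantees that iterating a contraction converges geometrically to its unique fixed point. The map T(x) = cos(x)/2 (Lipschitz constant at most 1/2) is an invented stand-in for the solution operator used in the paper:

```python
import math

def fixed_point(T, x0, iters=60):
    """Picard iteration x_{n+1} = T(x_n); for a contraction with constant q < 1
    the error shrinks by at least a factor q per step."""
    x = x0
    for _ in range(iters):
        x = T(x)
    return x
```

After 60 iterations the residual is far below machine-level tolerance, since the initial gap of 0.5 is multiplied by at most (1/2)^60.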
Global Mittag-Leffler stability of Caputo fractional-order fuzzy inertial neural networks with delay
This paper deals with the global Mittag-Leffler stability (GMLS) of Caputo fractional-order fuzzy inertial neural networks with time delay (CFOFINND). Based on Lyapunov stability theory and global fractional Halanay inequalities, the existence of a unique equilibrium point and the GMLS of CFOFINND have been established. A numerical example is given to illustrate the effectiveness of our results.
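Mittag-Leffler stability means trajectories are bounded by the Mittag-Leffler function E_alpha(-lambda * t^alpha), which decays like a stretched exponential. A truncated-series evaluation (a numerical sketch, with the truncation order chosen ad hoc) shows this decay and recovers the classical identity E_1(z) = e^z:

```python
import math

def mittag_leffler(alpha, z, terms=100):
    """Truncated series E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1)."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))
```

For 0 < alpha <= 1, E_alpha(-t^alpha) is positive and monotonically decreasing in t, which is the envelope that GMLS results impose on the state norm.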
Brain Computations and Connectivity [2nd edition]
This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read on the Oxford Academic platform and offered as a free PDF download from OUP and selected open access locations.
Brain Computations and Connectivity is about how the brain works. In order to understand this, it is essential to know what is computed by different brain systems; and how the computations are performed.
The aim of this book is to elucidate what is computed in different brain systems; and to describe current biologically plausible computational approaches and models of how each of these brain systems computes.
Understanding the brain in this way has enormous potential for understanding ourselves better in health and in disease. Potential applications of this understanding are to the treatment of the brain in disease; and to artificial intelligence which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions.
This book is pioneering in taking this approach to brain function: to consider what is computed by many of our brain systems, and how it is computed. It updates the earlier book, Rolls (2021) Brain Computations: What and How, Oxford University Press, with much new evidence, including the connectivity of the human brain.
Brain Computations and Connectivity will be of interest to all scientists interested in brain function and how the brain works, whether they are from neuroscience, or from medical sciences including neurology and psychiatry, or from the area of computational science including machine learning and artificial intelligence, or from areas such as theoretical physics.
Towards NeuroAI: Introducing Neuronal Diversity into Artificial Neural Networks
Throughout history, the development of artificial intelligence, particularly artificial neural networks, has been open to and constantly inspired by an increasingly deep understanding of the brain; the neocognitron, the pioneering work behind convolutional neural networks, is one such example. Per the motivation of the emerging field of NeuroAI, a great amount of neuroscience knowledge can help catalyze the next generation of AI by endowing networks with more powerful capabilities. As we know, the human brain has numerous morphologically and functionally different neurons, while artificial neural networks are almost exclusively built on a single neuron type. In the human brain, neuronal diversity is an enabling factor for all kinds of biological intelligent behaviors. Since an artificial network is a miniature of the human brain, introducing neuronal diversity should be valuable in addressing essential problems of artificial networks such as efficiency, interpretability, and memory. In this Primer, we first discuss the preliminaries of biological neuronal diversity and the characteristics of information transmission and processing in a biological neuron. Then, we review studies on designing new neurons for artificial networks. Next, we discuss the gains that neuronal diversity can bring to artificial networks, with exemplary applications in several important fields. Lastly, we discuss the challenges and future directions of neuronal diversity in exploring the potential of NeuroAI.
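One minimal way to experiment with the idea of neuronal diversity is a dense layer in which each unit applies its own activation function. The layer below, with made-up weights and a mix of ReLU, tanh, and sine units, is only an illustrative sketch, not a design taken from the Primer:

```python
import math

def relu(z):
    return max(0.0, z)

def diverse_layer(x, weights, biases, activations):
    """Dense layer where every neuron may use a different activation function."""
    out = []
    for w_row, b, act in zip(weights, biases, activations):
        z = sum(w * xi for w, xi in zip(w_row, x)) + b  # pre-activation
        out.append(act(z))
    return out
```

Swapping the list of activations changes the neuron types without touching the rest of the layer, which is the kind of drop-in heterogeneity the Primer surveys.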
Nonlinear Systems
Nonlinearity is a challenging notion for theoretical modeling, technical analysis, and numerical simulation in physics and mathematics, as well as in many other fields, since highly correlated nonlinear phenomena, evolving over a large range of time scales and length scales, control the underlying systems and processes in their spatiotemporal evolution. Indeed, available data, be they physical, biological, or financial, and technologically complex systems and stochastic systems, such as mechanical or electronic devices, can be managed from the same conceptual approach, both analytically and through computer simulation, using effective nonlinear dynamics methods. The aim of this Special Issue is to highlight papers that show the dynamics, control, optimization, and applications of nonlinear systems. This has recently become an increasingly popular subject, with impressive growth concerning applications in engineering, economics, biology, and medicine, and can be considered a veritable contribution to the literature. Original papers relating to the objective presented above are especially welcome. Potential topics include, but are not limited to: stability analysis of discrete and continuous dynamical systems; nonlinear dynamics in biological complex systems; stability and stabilization of stochastic systems; mathematical models in statistics and probability; synchronization of oscillators and chaotic systems; optimization methods of complex systems; reliability modeling and system optimization; and computation and control over networked systems.
Deep learning applied to computational mechanics: A comprehensive review, state of the art, and the classics
Three recent breakthroughs due to AI in arts and science serve as motivation: an award-winning digital image, protein folding, and fast matrix multiplication.
Many recent developments in artificial neural networks, particularly deep
learning (DL), applied and relevant to computational mechanics (solid, fluids,
finite-element technology) are reviewed in detail. Both hybrid and pure machine
learning (ML) methods are discussed. Hybrid methods combine traditional PDE
discretizations with ML methods either (1) to help model complex nonlinear
constitutive relations, (2) to nonlinearly reduce the model order for efficient
simulation (turbulence), or (3) to accelerate the simulation by predicting
certain components in the traditional integration methods. Here, methods (1) and (2) relied on the Long Short-Term Memory (LSTM) architecture, with method (3) relying on convolutional neural networks. Pure ML methods to solve (nonlinear) PDEs are represented by Physics-Informed Neural Network (PINN) methods, which can be combined with attention mechanisms to address discontinuous solutions.
Both LSTM and attention architectures, together with modern and generalized
classic optimizers to include stochasticity for DL networks, are extensively
reviewed. Kernel machines, including Gaussian processes, are provided to
sufficient depth for more advanced works such as shallow networks with infinite
width. The review does not address only experts: readers are assumed to be familiar with computational mechanics, but not with DL, whose concepts and applications are built up from the basics, aiming to bring first-time learners quickly to the
forefront of research. History and limitations of AI are recounted and
discussed, with particular attention at pointing out misstatements or
misconceptions of the classics, even in well-known references. Positioning and
pointing control of a large-deformable beam is given as an example.

Comment: 275 pages, 158 figures. Appeared online on 2023.03.01 at CMES-Computer Modeling in Engineering & Science
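As a compact illustration of the classic optimizers such a review covers, heavy-ball (momentum) gradient descent on a one-dimensional quadratic can be sketched in a few lines; the learning rate and momentum values here are arbitrary choices for the toy problem, not recommendations drawn from the review:

```python
def gd_momentum(grad, x0, lr=0.05, beta=0.9, steps=300):
    """Heavy-ball gradient descent: v <- beta*v + grad(x); x <- x - lr*v."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v + grad(x)
        x = x - lr * v
    return x
```

On the quadratic (x - 3)^2 the iterates converge to the minimizer x = 3; stochastic variants used for DL networks replace grad(x) with a noisy minibatch estimate.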
Connectome-Constrained Artificial Neural Networks
In biological neural networks (BNNs), structure provides a set of guard rails by which function is constrained to solve tasks effectively, handle multiple stimuli simultaneously, adapt to noise and input variations, and conserve energy. Such features are desirable for artificial neural networks (ANNs), which are, unlike their organic counterparts, practically unbounded and, in many cases, initialized with random weights or arbitrary structural elements. In this dissertation, we consider an inductive base case for imposing BNN constraints onto ANNs. We select explicit connectome topologies from the fruit fly (one of the smallest BNNs) and impose these onto a multilayer perceptron (MLP) and a reservoir computer (RC), in order to craft “fruit fly neural networks” (FFNNs). We study the impact on performance, variance, and prediction dynamics of using FFNNs compared to non-FFNN models on odour classification, chaotic time-series prediction, and multifunctionality tasks. From a series of four experimental studies, we observe that the fly olfactory brain is aligned towards recalling and making predictions from chaotic input data, with a capacity for executing two mutually exclusive tasks from distinct initial conditions, and with low sensitivity to hyperparameter fluctuations that can lead to chaotic behaviour. We also observe that the clustering coefficient of the fly network, and its particular non-zero weight positions, are important for reducing model variance. These findings suggest that BNNs have distinct advantages over arbitrarily-weighted ANNs, notably from their structure alone. More work with connectomes drawn from across species will be useful in finding shared topological features which can further enhance ANNs, and machine learning overall.
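The core construction, imposing a fixed connectome adjacency pattern onto otherwise random weights, can be sketched as follows; the tiny mask is invented for illustration and is in no way the fruit-fly connectome used in the dissertation:

```python
import random

def masked_weights(mask, seed=0):
    """Gaussian weights wherever the connectome mask allows a synapse, zero elsewhere,
    so the network's topology (and sparsity) is fixed by the mask alone."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) if mask[i][j] else 0.0
             for j in range(len(mask[0]))]
            for i in range(len(mask))]
```

Training would then update only the unmasked entries, preserving the non-zero weight positions that the dissertation finds important for reducing model variance.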
Finite-time decentralized event-triggered feedback control for generalized neural networks with mixed interval time-varying delays and cyber-attacks
This article investigates the finite-time decentralized event-triggered feedback control problem for generalized neural networks (GNNs) with mixed interval time-varying delays and cyber-attacks. A decentralized event-triggered method reduces the network transmission load and decides whether sensor measurements should be sent out. The cyber-attacks, which occur at random, are described using Bernoulli distributed variables. By the Lyapunov-Krasovskii stability theory, we apply an integral inequality with an exponential function to estimate the derivative of the Lyapunov-Krasovskii functionals (LKFs). We present new sufficient conditions in the form of linear matrix inequalities. The main objective of this research is to investigate the stochastic finite-time boundedness of GNNs with mixed interval time-varying delays and cyber-attacks by providing a decentralized event-triggered method and feedback controller. Finally, a numerical example is constructed to demonstrate the effectiveness and advantages of the provided control scheme.
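The transmission rule behind event-triggered control can be sketched on a scalar toy plant: a measurement is sent (and the controller's held value updated) only when the deviation from the last transmitted value crosses a threshold. All parameters below are invented for the toy example and are unrelated to the GNN conditions derived in the article:

```python
def event_triggered_run(sigma=0.2, eps=0.01, dt=0.01, T=5.0, a=1.0, k=3.0):
    """Euler simulation of x' = a*x - k*xhat, where xhat is the last transmitted
    measurement, updated only when the event-trigger condition fires."""
    x, xhat, events = 1.0, 1.0, 1  # initial state transmitted once
    steps = int(T / dt)
    for _ in range(steps):
        if abs(x - xhat) > sigma * abs(x) + eps:  # relative-plus-absolute trigger
            xhat = x                              # transmit the measurement
            events += 1
        x = x + dt * (a * x - k * xhat)
    return x, events, steps
```

The open-loop plant is unstable (a > 0), yet the event-triggered feedback drives the state near zero while transmitting far fewer measurements than there are sample instants, which is precisely the load reduction the decentralized scheme aims at.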