Transformations in the Scale of Behaviour and the Global Optimisation of Constraints in Adaptive Networks
The natural energy minimisation behaviour of a dynamical system can be interpreted as a simple optimisation process, finding a locally optimal resolution of problem constraints. In human problem solving, high-dimensional problems are often made much easier by inferring a low-dimensional model of the system in which search is more effective. But this approach seems to require top-down domain knowledge; it is not one amenable to the spontaneous energy minimisation behaviour of a natural dynamical system. However, in this paper we investigate the ability of distributed dynamical systems to improve their constraint resolution ability over time by self-organisation. We use a 'self-modelling' Hopfield network with a novel type of associative connection to illustrate how slowly changing relationships between system components can result in a transformation into a new system which is a low-dimensional caricature of the original system. The energy minimisation behaviour of this new system is significantly more effective at globally resolving the original system constraints. This model uses only very simple, fully-distributed positive feedback mechanisms that are relevant to other 'active linking' and adaptive networks. We discuss how this neural network model helps us to understand transformations and emergent collective behaviour in various non-neural adaptive networks, such as social, genetic and ecological networks.
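The energy minimisation behaviour invoked above can be made concrete with a minimal Hopfield sketch (a generic network, not the paper's self-modelling variant): under asynchronous updates with symmetric weights and zero self-coupling, the energy E = -½ sᵀWs never increases, so the dynamics settles into a locally optimal resolution of the constraints encoded in W. Network size and weight values below are illustrative.

```python
import numpy as np

def hopfield_energy(W, s):
    """Hopfield energy E = -1/2 s^T W s (zero self-coupling assumed)."""
    return -0.5 * s @ W @ s

def relax(W, s, sweeps=20, rng=None):
    """Asynchronous updates; each single-neuron flip can only lower (or keep) the energy."""
    rng = rng or np.random.default_rng(0)
    n = len(s)
    for _ in range(sweeps):
        for i in rng.permutation(n):
            h = W[i] @ s                   # local field on neuron i
            if h != 0:
                s[i] = 1 if h > 0 else -1  # align neuron with its field
    return s

rng = np.random.default_rng(1)
n = 50
W = rng.normal(size=(n, n))
W = (W + W.T) / 2        # symmetric couplings (required for the energy argument)
np.fill_diagonal(W, 0)   # no self-interaction
s = rng.choice([-1, 1], size=n)

e0 = hopfield_energy(W, s)
s = relax(W, s)
e1 = hopfield_energy(W, s)   # e1 <= e0: the dynamics descends the energy landscape
```

Each accepted flip changes the energy by -(s_new - s_old)·h ≤ 0, which is the sense in which the network "spontaneously" resolves constraints.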
Modeling and control of complex dynamic systems: Applied mathematical aspects
The concept of complex dynamic systems arises in many varieties, including the areas of energy generation, storage and distribution, ecosystems, gene regulation and health delivery, safety and security systems, telecommunications, transportation networks, and the rapidly emerging research topics that seek to understand and analyse such systems. Such systems are often concurrent and distributed, because they have to react to various kinds of events, signals, and conditions. They may be characterized by uncertainties, time delays, stochastic perturbations, hybrid dynamics, distributed dynamics, chaotic dynamics, and a large number of algebraic loops. This special issue provides a platform for researchers to report their recent results on various mathematical methods and techniques for modelling and control of complex dynamic systems, and to identify critical issues and challenges for future investigation in this field. The special issue attracted one hundred and eighteen submissions, of which twenty-eight were selected through a rigorous review procedure.
Regularization, early-stopping and dreaming: a Hopfield-like setup to address generalization and overfitting
In this work we approach attractor neural networks from a machine learning perspective: we look for optimal network parameters by applying gradient descent over a regularized loss function. Within this framework, the optimal neuron-interaction matrices turn out to be a class of matrices which correspond to Hebbian kernels revised by a reiterated unlearning protocol. Remarkably, the extent of such unlearning is proved to be related to the regularization hyperparameter of the loss function and to the training time. Thus, we can design strategies to avoid overfitting that are formulated in terms of regularization and early-stopping tuning. The generalization capabilities of these attractor networks are also investigated: analytical results are obtained for random synthetic datasets; the emerging picture is then corroborated by numerical experiments that highlight the existence of several regimes (i.e., overfitting, failure and success) as the dataset parameters are varied.

Comment: 29 pages, 10 figures, 4 appendices
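The interplay between regularization strength and the Hebbian kernel can be illustrated with a simplified linear stand-in (not the paper's actual loss or model). Minimising a quadratic reconstruction loss over the coupling matrix J with an L2 penalty, gradient descent converges to the closed form J* = M(M + λI)⁻¹, where M is a normalised Hebbian correlation matrix: large λ keeps J close to a rescaled Hebbian kernel, while λ → 0 drives J towards the projector ("fully unlearned") rule, echoing the regularization/unlearning correspondence described in the abstract. All sizes and hyperparameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 20, 5                       # neurons, stored patterns (illustrative)
xi = rng.choice([-1.0, 1.0], size=(N, P))
M = xi @ xi.T / (N * P)            # normalised Hebbian correlation matrix

lam, eta = 0.1, 0.5                # L2 strength, learning rate
J = np.zeros((N, N))
for _ in range(2000):
    # gradient of L(J) = 1/(2NP) sum_mu ||xi_mu - J xi_mu||^2 + lam/2 ||J||_F^2
    grad = J @ (M + lam * np.eye(N)) - M
    J -= eta * grad

# closed-form minimiser: interpolates between Hebbian (large lam) and projector (lam -> 0)
J_closed = M @ np.linalg.inv(M + lam * np.eye(N))
```

Here the training time plays the early-stopping role: truncating the descent leaves J closer to the raw Hebbian matrix, while running to convergence yields the fully regularized kernel.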
Statistical physics of neural systems
The ability to process and store information is considered a characteristic trait of intelligent systems. In biological neural networks, learning is strongly believed to take place at the synaptic level, in terms of modulation of synaptic efficacy. It can thus be interpreted as the expression of a collective phenomenon, emerging when neurons connect to each other to constitute a complex network of interactions. In this work, we represent learning as an optimization problem, actually implementing a local search, in the synaptic space, for specific configurations, known as solutions, that make a neural network able to accomplish a series of different tasks. For instance, we would like the network to adapt the strength of its synaptic connections in order to be capable of classifying a series of objects, by assigning to each object its corresponding class label. Supported by a series of experiments, it has been suggested that synapses may exploit only a very small number of synaptic states for encoding information. It is known that this feature makes learning in neural networks a challenging task. Extending the large deviation analysis performed in the extreme case of binary synaptic couplings, in this work we prove the existence of regions of the phase space where solutions are organized in extremely dense clusters. This picture turns out to be invariant to the tuning of all the parameters of the model. Solutions within the clusters are more robust to noise, thus enhancing the learning performance. This has inspired the design of new learning algorithms and has clarified the effectiveness of previously proposed ones. We further provide quantitative evidence that the gain achievable when considering a greater number of available synaptic states for encoding information is consistent only up to a very few bits. This is in line with the above-mentioned experimental results. Besides the challenging aspect of low-precision synaptic connections, it is also known that the neuronal environment is extremely noisy. Whether stochasticity can enhance or worsen the learning performance is currently a matter of debate. In this work, we consider a neural network model where the synaptic connections are random variables, sampled according to a parametrized probability distribution. We prove that this source of stochasticity naturally drives the system towards regions of the phase space with a high density of solutions. These regions are directly accessible by means of gradient descent strategies over the parameters of the synaptic coupling distribution. We further set up a statistical physics analysis, through which we show that solutions in the dense regions are characterized by robustness and good generalization performance. Stochastic neural networks are also capable of building abstract representations of input stimuli and then generating new input samples according to the inferred statistics of the input signal. In this regard, we propose a new learning rule, called Delayed Correlation Matching (DCM), which, by relying on the matching between time-delayed activity correlations, makes a neural network able to store patterns of neuronal activity. When considering hidden neuronal states, the DCM learning rule is also able to train Restricted Boltzmann Machines as generative models. In this work, we further require the DCM learning rule to fulfil some biological constraints, such as locality, sparseness of the neural coding and Dale's principle. While retaining all these biological requirements, the DCM learning rule has been shown to be effective for different network topologies, both in on-line learning regimes and in the presence of correlated patterns. We further show that it is also able to prevent the creation of spurious attractor states.
Numerical Methods That Preserve a Lyapunov Function for Ordinary Differential Equations
The paper studies numerical methods that preserve a Lyapunov function of a dynamical system, i.e., numerical approximations whose energy decreases, just like in the original differential equation. With this aim, a discrete gradient method is implemented for the numerical integration of a system of ordinary differential equations. In principle, this procedure yields first-order methods, but the analysis paves the way for the design of higher-order methods. As a case in point, the proposed method is applied to the Duffing equation without external forcing, considering that, in this case, preserving the Lyapunov function is more important than the accuracy of particular trajectories. Results are validated by means of numerical experiments, where the discrete gradient method is compared to standard Runge-Kutta methods. As predicted by the theory, discrete gradient methods preserve the Lyapunov function, whereas conventional methods fail to do so, since either periodic solutions appear or the energy does not decrease. Moreover, the discrete gradient method outperforms conventional schemes, in terms of computational cost, when these do preserve the Lyapunov function; thus, the proposed method is promising.

This work has been partially supported by Project PID2020-116898RB-I00 from the Ministerio de Ciencia e Innovación of Spain and Project UMA20-FEDERJA-045 from the Programa Operativo FEDER de Andalucía. Partial funding for open access charge: Universidad de Málaga.
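The idea can be sketched for the unforced, damped Duffing oscillator ẍ + δẋ + αx + βx³ = 0, whose Lyapunov function V(x, v) = ½v² + ½αx² + ¼βx⁴ satisfies dV/dt = -δv² ≤ 0. A discrete gradient scheme of the Itoh-Abe (coordinate-increment) type reproduces this decay exactly, up to the fixed-point solver tolerance: this is a generic sketch of the technique, with illustrative parameter values not taken from the paper.

```python
alpha, beta, delta = 1.0, 1.0, 0.2   # illustrative Duffing parameters

def V(x, v):
    """Lyapunov function of the unforced Duffing oscillator."""
    return 0.5 * v**2 + 0.5 * alpha * x**2 + 0.25 * beta * x**4

def step(x0, v0, h, iters=100, tol=1e-13):
    """One Itoh-Abe discrete gradient step, solved by fixed-point iteration.

    The discrete gradient (g1, g2) satisfies the mean-value identity
    V(x1, v1) - V(x0, v0) = g1*(x1 - x0) + g2*(v1 - v0),
    which forces V to drop by exactly h*delta*g2**2 per step.
    """
    x1, v1 = x0, v0
    for _ in range(iters):
        # closed-form Itoh-Abe discrete gradient components for this polynomial V
        g1 = 0.5 * alpha * (x0 + x1) + 0.25 * beta * (x0 + x1) * (x0**2 + x1**2)
        g2 = 0.5 * (v0 + v1)
        x_new = x0 + h * g2
        v_new = v0 + h * (-g1 - delta * g2)
        done = abs(x_new - x1) + abs(v_new - v1) < tol
        x1, v1 = x_new, v_new
        if done:
            break
    return x1, v1

x, v, h = 1.5, 0.0, 0.05
energies = [V(x, v)]
for _ in range(200):
    x, v = step(x, v, h)
    energies.append(V(x, v))
# energies is non-increasing: the scheme preserves the Lyapunov structure
```

An explicit Euler or Runge-Kutta step applied to the same system has no such identity, which is why, as the abstract notes, conventional schemes can spuriously gain energy or produce periodic orbits.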
Analog Photonics Computing for Information Processing, Inference and Optimisation
This review presents an overview of the current state of the art in photonics computing, which leverages photons, photons coupled with matter, and optics-related technologies for effective and efficient computational purposes. It covers the history and development of photonics computing and modern analogue computing platforms and architectures, focusing on optimization tasks and neural network implementations. The authors examine special-purpose optimizers, mathematical descriptions of photonics optimizers, and their various interconnections. Disparate applications are discussed, including direct encoding, logistics, finance, phase retrieval, machine learning, neural networks, probabilistic graphical models, and image processing, among many others. The main directions of technological advancement and associated challenges in photonics computing are explored, along with an assessment of its efficiency. Finally, the paper discusses prospects and the field of optical quantum computing, providing insights into the potential applications of this technology.

Comment: Invited submission by Journal of Advanced Quantum Technologies; accepted version 5/06/202
Traveling Salesman Problem
This book is a collection of current research on the application of evolutionary algorithms and other optimization algorithms to solving the Traveling Salesman Problem (TSP). It brings together researchers with applications in Artificial Immune Systems, Genetic Algorithms, Neural Networks and the Differential Evolution Algorithm. Hybrid systems, like Fuzzy Maps, Chaotic Maps and Parallelized TSP, are also presented. Most importantly, this book presents both theoretical and practical applications of the TSP, which will be a vital tool for researchers and graduate-entry students in the fields of applied Mathematics, Computing Science and Engineering.
Unconventional computing platforms and nature-inspired methods for solving hard optimisation problems
The search for novel hardware beyond the traditional von Neumann architecture has given rise to a modern area of unconventional computing requiring the efforts of mathematicians, physicists and engineers. Many analogue physical systems, including networks of nonlinear oscillators, lasers, condensates, and superconducting qubits, are proposed and realised to address challenging computational problems from various areas of social and physical sciences and technology. Understanding the underlying physical process by which the system finds the solutions to such problems often leads to new optimisation algorithms. This thesis focuses on studying gain-dissipative systems and nature-inspired algorithms that form a hybrid architecture that may soon rival classical hardware.
Chapter 1 lays the necessary foundation and explains various interdisciplinary terms that are used throughout the dissertation. In particular, connections between the optimisation problems and spin Hamiltonians are established, their computational complexity classes are explained, and the most prominent physical platforms for spin Hamiltonian implementation are reviewed.
Chapter 2 demonstrates a large variety of behaviours encapsulated in networks of polariton condensates, which are a vivid example of a gain-dissipative system we use throughout the thesis. We explain how the variations of experimentally tunable parameters allow the networks of polariton condensates to represent different oscillator models. We derive analytic expressions for the interactions between two spatially separated polariton condensates and show various synchronisation regimes for periodic chains of condensates. An odd number of condensates at the vertices of a regular polygon leads to a spontaneous formation of a giant multiply-quantised vortex at the centre of a polygon. Numerical simulations of all studied configurations of polariton condensates are performed with a mean-field approach with some theoretically proposed physical phenomena supported by the relevant experiments.
Chapter 3 examines the potential of polariton graphs to find the low-energy minima of the spin Hamiltonians. By associating a spin with a condensate phase, the minima of the XY model are achieved for simple configurations of spatially-interacting polariton condensates. We argue that such an implementation of gain-dissipative simulators limits their applicability to the classes of easily solvable problems, since the parameters of a particular Hamiltonian depend on the node occupancies, which are not known a priori. To overcome this difficulty, we propose to adjust pumping intensities and coupling strengths dynamically. We further suggest theoretically how the discrete Ising and q-state planar Potts models, with or without external fields, can be simulated using gain-dissipative platforms. The underlying operational principle originates from a combination of resonant and non-resonant pumping. Spatial anisotropy of pump and dissipation profiles enables effective control of the sign and intensity of the coupling strength between any two neighbouring sites, which we demonstrate with a two-dimensional square lattice of polariton condensates. For an accurate minimisation of discrete and continuous spin Hamiltonians, we propose a fully controllable polaritonic XY-Ising machine based on a network of geometrically isolated polariton condensates.
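The XY minimisation target of this chapter can be stated compactly: find phases θᵢ minimising H = -Σ J_ij cos(θᵢ - θⱼ). As a classical point of reference (plain gradient descent on the phases, not the condensate dynamics of the thesis), a small ferromagnetic example relaxes to the aligned ground state; the couplings and step sizes below are illustrative.

```python
import numpy as np

def xy_energy(J, theta):
    """XY Hamiltonian H = -sum_{i<j} J_ij cos(theta_i - theta_j)."""
    n = len(theta)
    return -sum(J[i, j] * np.cos(theta[i] - theta[j])
                for i in range(n) for j in range(i + 1, n))

def descend(J, theta, eta=0.1, steps=2000):
    """Gradient descent: dH/dtheta_i = sum_{j != i} J_ij sin(theta_i - theta_j)."""
    n = len(theta)
    for _ in range(steps):
        grad = np.array([sum(J[i, j] * np.sin(theta[i] - theta[j])
                             for j in range(n) if j != i)
                         for i in range(n)])
        theta = theta - eta * grad
    return theta

rng = np.random.default_rng(2)
n = 3
J = np.ones((n, n)) - np.eye(n)   # ferromagnetic triangle (illustrative)
theta = rng.uniform(-1.0, 1.0, n)
theta = descend(J, theta)
energy = xy_energy(J, theta)      # aligned ground state gives H = -3 here
```

A gain-dissipative simulator effectively performs this descent in hardware, with the caveat raised in the chapter: when the effective J_ij depend on node occupancies, the Hamiltonian being minimised is not the one originally specified.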
In Chapter 4, we look at classical computing rivals and study nature-inspired methods for optimising spin Hamiltonians. Based on the operational principles of gain-dissipative machines, we develop a novel class of gain-dissipative algorithms for the optimisation of discrete and continuous problems, and show their performance in comparison with traditional optimisation techniques. Besides looking at traditional heuristic methods for Ising minimisation, such as Hopfield-Tank neural networks and parallel tempering, we consider a recent physics-inspired algorithm, namely chaotic amplitude control, and the exact commercial solver Gurobi. For a proper evaluation of physical simulators, we further discuss the importance of detecting easy instances of hard combinatorial optimisation problems. The Ising model for certain interaction matrices that are commonly used for evaluating the performance of unconventional computing machines and assumed to be exponentially hard, including the Möbius ladder graphs and Mattis spin glasses, is shown to be solvable in polynomial time.
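The operational principle behind gain-dissipative Ising optimisation can be caricatured with soft amplitude dynamics: each node carries a real amplitude x_i subject to saturable gain, couplings steer the network towards low Ising energy, and projecting sign(x_i) yields the spins. The snippet below is a heavily simplified, illustrative sketch (fixed gain, plain Euler integration), not the thesis's actual algorithm; a small ferromagnetic ring is used so that the ground-state energy is known exactly.

```python
import numpy as np

def ising_energy(J, s):
    """Ising energy H = sum_{i<j} J_ij s_i s_j."""
    n = len(s)
    return sum(J[i, j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))

# 4-node ferromagnetic ring: J_ij = -1 on edges; ground state all-aligned, H = -4
n = 4
J = np.zeros((n, n))
for i in range(n):
    J[i, (i + 1) % n] = J[(i + 1) % n, i] = -1.0

gamma, eps, dt = 1.0, 0.5, 0.01
rng = np.random.default_rng(0)
x = rng.uniform(-0.1, 0.1, n)          # small random initial amplitudes

for _ in range(5000):
    # saturable gain pushes |x_i| towards a fixed amplitude;
    # the coupling term -eps * J @ x descends the soft Ising energy
    x += dt * (x * (gamma - x**2) - eps * J @ x)

s = np.sign(x).astype(int)             # project amplitudes onto Ising spins
energy = ising_energy(J, s)            # reaches the ground-state energy, -4
```

The mechanism is visible in the linearised dynamics: near zero amplitude, the eigenmode of J with the most favourable energy grows fastest, so the low-energy configuration is selected as the amplitudes build up.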
In Chapter 5, we discuss possible future applications of unconventional computing platforms, including the emulation of search algorithms such as PageRank, the realisation of a proof-of-work protocol for blockchain technology, and reservoir computing.