Random Neural Networks and Optimisation
In this thesis we introduce new models and learning algorithms for the Random
Neural Network (RNN), and we develop RNN-based and other approaches for the
solution of emergency management optimisation problems.
With respect to RNN developments, two novel supervised learning algorithms are
proposed. The first is a gradient descent algorithm for an RNN extension model
that we have introduced, the RNN with synchronised interactions (RNNSI), which
is inspired by the synchronised firing activity observed in brain neural circuits.
The second algorithm is based on modelling the signal-flow equations in RNN as a
nonnegative least squares (NNLS) problem. NNLS is solved using a limited-memory
quasi-Newton algorithm specifically designed for the RNN case.
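The thesis's limited-memory quasi-Newton solver is not reproduced here, but the NNLS formulation itself can be illustrated with a generic projected-gradient sketch in Python (the matrix A and vector b below are made-up stand-ins for the RNN signal-flow system, not taken from the thesis):

```python
import numpy as np

def nnls_pg(A, b, iters=5000):
    """Solve min ||Ax - b||^2 subject to x >= 0 by projected gradient descent."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)         # gradient of the least-squares loss
        x = np.maximum(x - grad / L, 0)  # gradient step, then project onto x >= 0
    return x

# Made-up toy system standing in for the RNN signal-flow equations.
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
x = nnls_pg(A, b)
```

The projection `np.maximum(..., 0)` is what distinguishes NNLS from ordinary least squares; a quasi-Newton method replaces the fixed-step gradient with curvature-aware updates.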
Regarding the investigation of emergency management optimisation problems,
we examine combinatorial assignment problems that require fast, distributed and
close-to-optimal solutions under information uncertainty. We consider three
problems with these characteristics: the assignment of emergency units to
incidents with injured civilians (AEUI), the assignment of assets to tasks
under execution uncertainty (ATAU), and the deployment of a robotic network to
establish communication with trapped civilians (DRNCTC).
AEUI is solved by training an RNN tool with instances of the optimisation problem
and then using the trained RNN for decision making; training is achieved using
the developed learning algorithms. For the solution of the ATAU problem, we introduce
two different approaches. The first is based on mapping parameters of the
optimisation problem to RNN parameters, and the second on solving a sequence of
minimum cost flow problems on appropriately constructed networks with estimated
arc costs. For the exact solution of the DRNCTC problem, we develop a mixed-integer
linear programming formulation, which is based on network flows. Finally, we design
and implement distributed heuristic algorithms for the deployment of robots
when the civilian locations are known or uncertain.
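As a minimal stand-in for one step of the minimum-cost-flow approach, a single assignment of units to incidents with hypothetical estimated costs can be solved with the Hungarian method (the cost matrix below is invented for illustration; the thesis solves a sequence of richer flow problems with estimated arc costs):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical estimated costs: cost[u][i] = travel time of unit u to incident i.
cost = np.array([
    [4.0, 1.0, 3.0],
    [2.0, 0.0, 5.0],
    [3.0, 2.0, 2.0],
])
# Minimum-cost one-to-one assignment of units to incidents.
units, incidents = linear_sum_assignment(cost)
total_cost = cost[units, incidents].sum()
```

In the full flow formulation, capacities and demands on an appropriately constructed network generalise this one-to-one matching.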
Curriculum learning for multilevel budgeted combinatorial problems
Learning heuristics for combinatorial optimization problems through graph
neural networks has recently shown promising results on some classic NP-hard
problems. These are single-level optimization problems with only one player.
Multilevel combinatorial optimization problems are their generalization,
encompassing situations with multiple players taking decisions sequentially. By
framing them in a multi-agent reinforcement learning setting, we devise a
value-based method to learn to solve multilevel budgeted combinatorial problems
involving two players in a zero-sum game over a graph. Our framework is based
on a simple curriculum: if an agent knows how to estimate the value of
instances with budgets up to B, then solving instances with budget B + 1 can
be done in polynomial time regardless of the direction of the optimization by
checking the value of every possible afterstate. Thus, in a bottom-up approach,
we generate datasets of heuristically solved instances with increasingly larger
budgets to train our agent. We report results close to optimality on graphs up
to nodes and a speedup on average compared to the quickest
exact solver known for the Multilevel Critical Node problem, a max-min-max
trilevel problem that has been shown to be at least -hard
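The bottom-up curriculum can be sketched with a toy exhaustive version of the afterstate check, using a made-up graph and a Critical Node-style objective (minimise the size of the largest connected component); in the authors' method, a learned value network for budget B - 1 replaces the recursive call:

```python
# Toy graph as an adjacency map; a made-up stand-in for the problem instances.
ADJ = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}

def largest_component(removed):
    """Size of the largest connected component after removing a node set."""
    seen, best = set(), 0
    for start in ADJ:
        if start in removed or start in seen:
            continue
        stack, comp = [start], 0
        while stack:
            v = stack.pop()
            if v in seen or v in removed:
                continue
            seen.add(v)
            comp += 1
            stack.extend(ADJ[v] - removed)
        best = max(best, comp)
    return best

def value(removed, budget):
    """Curriculum recursion: a budget-b instance is solved by checking every
    afterstate, each of which is a budget-(b-1) instance."""
    if budget == 0:
        return largest_component(removed)
    candidates = [v for v in ADJ if v not in removed]
    return min(value(removed | {v}, budget - 1) for v in candidates)

best = value(frozenset(), 2)
```

Here a single minimising player is shown; in the two-player zero-sum setting each level alternates between min and max, which is why the afterstate check works regardless of the direction of the optimization.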
Combined optimization algorithms applied to pattern classification
Accurate classification by minimizing the error on test samples is the main
goal in pattern classification. Combinatorial optimization is a well-known
method for solving minimization problems; however, only a few examples of
classifiers are described in the literature where combinatorial optimization is
used in pattern classification. Recently, there has been a growing interest
in combining classifiers and improving the consensus of results for a greater
accuracy. In the light of the "No Free Lunch Theorems", we analyse the combination
of simulated annealing, a powerful combinatorial optimization method
that produces high quality results, with the classical perceptron algorithm.
This combination is called the LSA machine. Our analysis aims at finding paradigms
for problem-dependent parameter settings that ensure high classification
results. Our computational experiments on a large number of benchmark
problems lead to results that either outperform or are at least competitive with
results published in the literature. Apart from parameter settings, our analysis
focuses on a difficult problem in the theory of computation, namely the network
complexity problem. The depth-versus-size problem of neural networks is one of
the hardest problems in theoretical computing, with very little progress over
the past decades. In order to investigate this problem, we introduce a new
recursive learning method for training hidden layers in constant depth circuits.
Our findings make contributions to a) the field of Machine Learning, as the
proposed method is applicable in training feedforward neural networks, and to
b) the field of circuit complexity by proposing an upper bound for the number
of hidden units sufficient to achieve a high classification rate. One of the major
findings of our research is that the size of the network can be bounded by
the input size of the problem, with an approximate upper bound of 8 + √(2^n/n)
threshold gates being sufficient for a small error rate, where n := log|S_L|
and S_L is the training set.
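A rough sketch of the simulated-annealing-plus-perceptron idea behind the LSA machine (the data set, temperature schedule, and perturbation scale below are invented, and plain misclassification counting stands in for the thesis's objective):

```python
import math
import random

random.seed(0)

# Invented toy data, linearly separable: label is the sign of x0 + x1 - 1.
DATA = [((0.0, 0.0), -1), ((2.0, 0.0), 1), ((0.0, 2.0), 1), ((0.3, 0.3), -1)]

def errors(w):
    """Number of misclassified samples for weights w = (w0, w1, bias)."""
    return sum(1 for (x0, x1), y in DATA
               if y * (w[0] * x0 + w[1] * x1 + w[2]) <= 0)

def anneal(steps=3000, temp=2.0, cooling=0.999):
    """Simulated annealing over perceptron weights: random perturbations,
    with worse moves accepted at Boltzmann probability."""
    w = [random.uniform(-1, 1) for _ in range(3)]
    best, best_err = list(w), errors(w)
    for _ in range(steps):
        cand = [wi + random.gauss(0, 0.3) for wi in w]
        delta = errors(cand) - errors(w)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            w = cand                      # accept improvement or uphill move
        temp *= cooling                   # geometric cooling schedule
        if errors(w) < best_err:
            best, best_err = list(w), errors(w)
    return best, best_err

w, err = anneal()
```

The annealing component escapes the local minima that plain perceptron updates can get stuck in when the data are noisy or non-separable.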
A Survey on Influence Maximization: From an ML-Based Combinatorial Optimization Perspective
Influence Maximization (IM) is a classical combinatorial optimization
problem, which can be widely used in mobile networks, social computing, and
recommendation systems. It aims at selecting a small number of users so as to
maximize the influence spread across the online social network. Because of
its potential commercial and academic value, there are a lot of researchers
focusing on studying the IM problem from different perspectives. The main
challenge comes from the NP-hardness of the IM problem and the #P-hardness of
estimating the influence spread; traditional algorithms for tackling
them can be categorized into two classes: heuristic algorithms and
approximation algorithms. However, heuristic algorithms offer no theoretical
guarantee, and the theoretical design of approximation algorithms is close to
its limit, making it almost impossible to optimize and improve their
performance further. With the rapid development of artificial intelligence,
technology based on Machine Learning (ML) has achieved remarkable results
in many fields. In view of this, in recent years, a number of new methods have
emerged to solve combinatorial optimization problems by using ML-based
techniques. These methods have the advantages of fast solving speed and strong
generalization to unseen graphs, providing a brand-new direction
for solving combinatorial optimization problems. Therefore, we abandon the
traditional algorithms based on iterative search and review the recent
development of ML-based methods, especially Deep Reinforcement Learning, to
solve the IM problem and other variants in social networks. We focus on
summarizing the relevant background knowledge, basic principles, common
methods, and applied research. Finally, the challenges that need to be solved
urgently in future IM research are pointed out.
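For contrast with the ML-based methods the survey reviews, the traditional greedy baseline with Monte-Carlo spread estimation under an independent-cascade model can be sketched as follows (the graph, propagation probability, and seed-set size are made up):

```python
import random

random.seed(1)

# Made-up directed social graph and a uniform propagation probability.
GRAPH = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
P = 0.5

def simulate_spread(seeds, trials=500):
    """Monte-Carlo estimate of influence spread under independent cascade."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in GRAPH[u]:
                    # Each newly active node gets one chance to activate v.
                    if v not in active and random.random() < P:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

def greedy_im(k):
    """Pick k seeds, each maximizing the estimated marginal spread."""
    seeds = set()
    for _ in range(k):
        best = max((v for v in GRAPH if v not in seeds),
                   key=lambda v: simulate_spread(seeds | {v}))
        seeds.add(best)
    return seeds

seeds = greedy_im(2)
```

The #P-hardness mentioned above is precisely why the spread must be estimated by simulation here; ML-based methods instead learn to predict good seeds directly, trading the greedy guarantee for speed and generalization.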