444 research outputs found
Deterministic Annealing and Nonlinear Assignment
For combinatorial optimization problems that can be formulated as Ising or
Potts spin systems, the Mean Field (MF) approximation yields a versatile and
simple ANN heuristic, Deterministic Annealing. For assignment problems the
situation is more complex -- the natural analog of the MF approximation lacks
the simplicity present in the Potts and Ising cases. In this article the
difficulties associated with this issue are investigated, and the options for
solving them discussed. Improvements to existing Potts-based MF-inspired
heuristics are suggested, and the possibilities for defining a proper
variational approach are scrutinized.Comment: 15 pages, 3 figure
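To make the Potts mean-field heuristic referred to above concrete, here is a minimal sketch of deterministic annealing applied to graph 3-colouring (a classic Potts-encodable problem). This is an illustrative toy, not the article's own formulation: the graph, annealing schedule and random seed are all assumptions. Each vertex holds a soft colour assignment, the mean-field update is a softmax of the local field, and the temperature is lowered until the assignments saturate.

```python
import math
import random

random.seed(1)

# Illustrative graph: two triangles sharing vertex 2 (3-colourable).
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]
n, q = 5, 3  # number of vertices, number of Potts colours

def neighbours(i):
    return [y for (x, y) in edges if x == i] + [x for (x, y) in edges if y == i]

# Soft assignments v[i][a], initialised near-uniform with a small random tilt.
v = [[1.0 / q + 0.01 * random.random() for _ in range(q)] for _ in range(n)]

T = 2.0
while T > 0.01:
    for i in range(n):
        # Mean-field local field: soft count of neighbours on each colour.
        h = [sum(v[j][a] for j in neighbours(i)) for a in range(q)]
        # Mean-field (softmax) update at temperature T.
        z = [math.exp(-h[a] / T) for a in range(q)]
        s = sum(z)
        v[i] = [za / s for za in z]
    T *= 0.9  # geometric annealing schedule

colour = [max(range(q), key=lambda a: v[i][a]) for i in range(n)]
conflicts = sum(colour[x] == colour[y] for (x, y) in edges)
print("colouring:", colour, "conflicts:", conflicts)
```

At high temperature the assignments stay near-uniform; as T decreases, the softmax sharpens and the small initial asymmetries are amplified into a hard colouring.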
Analog Photonics Computing for Information Processing, Inference and Optimisation
This review presents an overview of the current state-of-the-art in photonics
computing, which leverages photons, photons coupled with matter, and
optics-related technologies for effective and efficient computational purposes.
It covers the history and development of photonics computing and modern
analogue computing platforms and architectures, focusing on optimization tasks
and neural network implementations. The authors examine special-purpose
optimizers, mathematical descriptions of photonics optimizers, and their
various interconnections. Disparate applications are discussed, including
direct encoding, logistics, finance, phase retrieval, machine learning, neural
networks, probabilistic graphical models, and image processing, among many
others. The main directions of technological advancement and associated
challenges in photonics computing are explored, along with an assessment of its
efficiency. Finally, the paper discusses prospects and the field of optical
quantum computing, providing insights into the potential applications of this
technology.
Comment: Invited submission by Journal of Advanced Quantum Technologies;
accepted version 5/06/202
Random Neural Networks and Optimisation
In this thesis we introduce new models and learning algorithms for the Random
Neural Network (RNN), and we develop RNN-based and other approaches for the
solution of emergency management optimisation problems.
With respect to RNN developments, two novel supervised learning algorithms are
proposed. The first is a gradient descent algorithm for an RNN extension model
that we have introduced, the RNN with synchronised interactions (RNNSI), which
was inspired by the synchronised firing activity observed in brain neural circuits.
The second algorithm is based on modelling the signal-flow equations in RNN as a
nonnegative least squares (NNLS) problem. NNLS is solved using a limited-memory
quasi-Newton algorithm specifically designed for the RNN case.
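The NNLS formulation mentioned above can be illustrated with a toy solver. The projected-gradient sketch below is a stand-in for the thesis's limited-memory quasi-Newton method, not a reproduction of it, and the matrix A and vector b are illustrative: it minimises ||Ax - b||^2 subject to x >= 0 by taking a gradient step and projecting onto the nonnegative orthant.

```python
# Nonnegative least squares: minimise ||Ax - b||^2 subject to x >= 0,
# via projected gradient descent. Illustrative A and b.
A = [[1.0, 2.0], [3.0, 1.0], [1.0, 1.0]]
b = [3.0, 4.0, 2.0]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def nnls_pg(A, b, steps=2000, lr=0.01):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        r = [ai - bi for ai, bi in zip(matvec(A, x), b)]  # residual Ax - b
        # Gradient of the least-squares objective: A^T (Ax - b).
        grad = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        # Gradient step, then projection onto the constraint x >= 0.
        x = [max(0.0, xj - lr * g) for xj, g in zip(x, grad)]
    return x

x = nnls_pg(A, b)
print([round(xi, 4) for xi in x])  # converges to [1.0, 1.0] for this A, b
```

For this A and b the unconstrained least-squares solution is already nonnegative (x = [1, 1] fits b exactly), so the projection never binds; with an active nonnegativity constraint the same iteration clips the affected coordinates to zero.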
Regarding the investigation of emergency management optimisation problems,
we examine combinatorial assignment problems that require fast, distributed and
close-to-optimal solutions under information uncertainty. We consider three different
problems with the above characteristics associated with the assignment of
emergency units to incidents with injured civilians (AEUI), the assignment of assets
to tasks under execution uncertainty (ATAU), and the deployment of a robotic
network to establish communication with trapped civilians (DRNCTC).
AEUI is solved by training an RNN tool with instances of the optimisation problem
and then using the trained RNN for decision making; training is achieved using
the developed learning algorithms. For the solution of the ATAU problem, we introduce
two different approaches. The first is based on mapping parameters of the
optimisation problem to RNN parameters, and the second on solving a sequence of
minimum cost flow problems on appropriately constructed networks with estimated
arc costs. For the exact solution of the DRNCTC problem, we develop a mixed-integer
linear programming formulation, which is based on network flows. Finally, we design
and implement distributed heuristic algorithms for the deployment of robots
when the civilian locations are known or uncertain.
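The minimum cost flow approach to assignment described above can be sketched as follows. This is an illustrative toy, not the thesis's construction: assets and tasks form the two sides of a bipartite network with estimated arc costs, a source and sink are added with unit capacities, and the optimal assignment is found by successive shortest paths (Bellman-Ford, since residual arcs may have negative cost).

```python
# Assignment of assets to tasks as a min-cost flow on a bipartite network
# with estimated arc costs; solved by successive shortest augmenting paths.
def min_cost_assignment(cost):
    n = len(cost)
    src, sink = 2 * n, 2 * n + 1  # nodes: assets 0..n-1, tasks n..2n-1
    arcs = []  # each arc [u, v, capacity, cost], paired with its reverse

    def add(u, v, cap, c):
        arcs.append([u, v, cap, c])
        arcs.append([v, u, 0, -c])  # residual reverse arc

    for i in range(n):
        add(src, i, 1, 0)           # one unit per asset
        add(n + i, sink, 1, 0)      # one unit per task
        for j in range(n):
            add(i, n + j, 1, cost[i][j])  # estimated assignment cost

    total = 0
    for _ in range(n):  # push n units, one per shortest path
        dist = [float("inf")] * (2 * n + 2)
        prev = [None] * (2 * n + 2)
        dist[src] = 0
        for _ in range(2 * n + 1):  # Bellman-Ford over residual arcs
            for k, (u, v, cap, c) in enumerate(arcs):
                if cap > 0 and dist[u] + c < dist[v]:
                    dist[v] = dist[u] + c
                    prev[v] = k
        total += dist[sink]
        v = sink
        while v != src:  # augment one unit along the shortest path
            k = prev[v]
            arcs[k][2] -= 1
            arcs[k ^ 1][2] += 1  # paired arcs sit at indices 2m, 2m+1
            v = arcs[k][0]
    return total

print(min_cost_assignment([[4, 1, 3], [2, 0, 5], [3, 2, 2]]))  # → 5
```

Solving a sequence of such problems with updated cost estimates, as the abstract describes, amounts to re-running this routine as the arc costs are revised.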
A neurodynamic optimization approach to constrained pseudoconvex optimization
Guo, Zhishan. Thesis (M.Phil.) -- Chinese University of Hong Kong, 2011. Includes bibliographical references (p. 71-82). Abstracts in English and Chinese.
Contents:
Chapter 1 Introduction: Constrained Pseudoconvex Optimization; Recurrent Neural Networks; Thesis Organization
Chapter 2 Literature Review: Pseudoconvex Optimization; Recurrent Neural Networks
Chapter 3 Model Description and Convergence Analysis: Model Descriptions; Global Convergence
Chapter 4 Numerical Examples: Gaussian Optimization; Quadratic Fractional Programming; Nonlinear Convex Programming
Chapter 5 Real-time Data Reconciliation: Introduction; Theoretical Analysis and Performance Measurement; Examples
Chapter 6 Real-time Portfolio Optimization: Introduction; Model Description; Theoretical Analysis; Illustrative Examples
Chapter 7 Conclusions and Future Works: Concluding Remarks; Future Works
Appendix A: Publication List
Bibliography
I. On a Family of Generalized Colorings. II. Some Contributions to the Theory of Neural Networks. III. Embeddings of Ultrametric Spaces
This thesis comprises three apparently very independent parts. However, there is a unity behind them which I would like to sketch very briefly.
Formally, graphs are in the background of most chapters, and so is the duality of local versus global. The first part is concerned with globally coloring graphs under some local assumptions. Algorithmically this is an intrinsically difficult task, and neural networks, the topic of the second part, can be used to approach intractable problems. Simple local interactions with emergent collective behavior are one of the essential features of these networks. Their current models are similar to some of those encountered in statistical mechanics, like spin glasses. In the third part, we study ultrametricity, a concept recently rediscovered by theoretical physicists in the analysis of spin glasses. Ultrametricity can be expressed as a local constraint on the shape of each triangle of the given metric space.
Unless otherwise stated, results in the first and second parts are essentially original. Since the third part represents joint work with Michael Aschbacher, Eric Baum and Richard Wilson, I should perhaps try to outline my contribution, though paternity of collective results is somewhat fuzzy. While working on neural networks and spin glasses, Eric and I got interested in ultrametricity. Several of us had found an initial polynomial upper bound, but the final result of "n + 1" was first reached independently by Michael and Richard. I think I obtained Theorems 4.5, 6.1, 6.3 (using an idea of Eric), 6.4, 6.5, 6.6 and 6.7 (with Richard and helpful references from Bruce Rothschild and Olga Taussky), and participated in some other results.
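The local triangle characterisation of ultrametricity mentioned above (d(x, z) <= max(d(x, y), d(y, z)) for every triple, equivalently: every triangle is isosceles with its two longest sides equal) can be checked directly. The 2-adic distance below is a standard textbook example of an ultrametric, not taken from the thesis.

```python
from itertools import permutations

def is_ultrametric(points, d):
    # Strong triangle inequality checked on every ordered triple.
    return all(d(x, z) <= max(d(x, y), d(y, z))
               for x, y, z in permutations(points, 3))

def d2(a, b):
    # 2-adic style distance on integers: d(a, b) = 2^-v, where 2^v is
    # the largest power of 2 dividing a - b.
    if a == b:
        return 0.0
    v, m = 0, abs(a - b)
    while m % 2 == 0:
        m //= 2
        v += 1
    return 2.0 ** -v

print(is_ultrametric(range(8), d2))                       # True
print(is_ultrametric(range(4), lambda a, b: abs(a - b)))  # False: d(0, 2) > max(d(0, 1), d(1, 2))
```

The Euclidean counterexample makes the locality of the constraint visible: a single bad triangle (here 0, 1, 2) is enough to destroy ultrametricity.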