Adaptive traffic signal control using approximate dynamic programming
This thesis presents a study of an adaptive traffic signal controller for real-time operation. An approximate dynamic programming (ADP) algorithm is developed for controlling traffic signals at isolated intersections and in distributed traffic networks. The approach is derived from the premise that classic dynamic programming is computationally difficult to solve, and that approximation is the second-best option for establishing sequential decision-making for complex processes. The proposed ADP algorithm substantially reduces the computational burden by using a linear approximation function to replace the exact value function of the dynamic programming solution. Machine-learning techniques are used to improve the approximation progressively. Not knowing the ideal response for the approximation to learn from, we use the paradigm of unsupervised learning, and reinforcement learning in particular. Temporal-difference learning and perturbation learning are investigated as appropriate candidates within this family. We find in computer simulation that the proposed method achieves substantial reductions in vehicle delay in comparison with optimised fixed-time plans, and is competitive with other adaptive methods in computational efficiency and in effectiveness at managing varying traffic. Our results show that substantial benefits can be gained by increasing the frequency at which signal plans are revised. The proposed ADP algorithm is compatible with a range of discrete temporal resolutions, from 0.5 to 5 seconds per time step. This study demonstrates the readiness of the proposed approach for real-time operation at isolated intersections and its potential for distributed network control.
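As a purely illustrative sketch of the core idea (not the thesis's actual controller), the snippet below combines a linear value-function approximation with temporal-difference learning on an invented single-queue toy model; the features, queue dynamics, and hyperparameters are all assumptions.

```python
import random

# Illustrative toy only: a linear value function V(s) ~ w . phi(s) improved by
# TD(0) learning, echoing the ADP idea of replacing the exact
# dynamic-programming value function with a learned linear approximation.
# The queue dynamics, features, and hyperparameters below are all invented.

def features(queue_len):
    """Feature vector phi(s): a bias term plus the normalised queue length."""
    return [1.0, queue_len / 10.0]

def value(w, s):
    return sum(wi * fi for wi, fi in zip(w, features(s)))

def td0_linear(episodes=200, alpha=0.1, gamma=0.9, seed=0):
    rng = random.Random(seed)
    w = [0.0, 0.0]                          # weights of the linear approximation
    for _ in range(episodes):
        s = rng.randint(0, 10)              # initial queue length
        for _ in range(20):
            reward = -s                     # cost proxy: vehicles left waiting
            # serve ~2 vehicles per step, 0-2 new arrivals (invented dynamics)
            s_next = min(10, max(0, s - 2) + rng.randint(0, 2))
            delta = reward + gamma * value(w, s_next) - value(w, s)  # TD error
            for i, fi in enumerate(features(s)):
                w[i] += alpha * delta * fi  # stochastic gradient-style update
            s = s_next
    return w

w = td0_linear()
# Longer queues accumulate more delay, so the learned queue-length weight
# turns out negative: V decreases as the queue grows.
```

The progressive improvement of `w` with each observed transition is what lets such a controller be revised frequently at low cost, in line with the abstract's point about revision frequency.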
Novel sampling techniques for reservoir history matching optimisation and uncertainty quantification in flow prediction
Modern reservoir management has an increasing focus on accurately predicting the likely range of field recoveries. A variety of assisted history matching techniques have been developed across the research community concerned with this topic. These techniques are based on obtaining multiple models that closely reproduce the historical flow behaviour of a reservoir. The resulting set of history-matched models is then used to quantify uncertainty in predicting the future performance of the reservoir and to provide economic evaluations of different field development strategies. The key step in this workflow is to employ algorithms that sample the parameter space in an efficient but appropriate manner. The choice of algorithm affects how fast a model is obtained and how well the model fits the production data. The sampling techniques developed to date include, among others, gradient-based methods, evolutionary algorithms, and the ensemble Kalman filter (EnKF).
This thesis has investigated and further developed the following sampling and inference techniques: Particle Swarm Optimisation (PSO), Hamiltonian Monte Carlo, and Population Markov Chain Monte Carlo. The techniques investigated are capable of navigating the parameter space and producing history-matched models that can be used to quantify uncertainty in the forecasts in a faster and more reliable way. The analysis of these techniques, compared with the Neighbourhood Algorithm (NA), has shown how the different techniques affect the predicted recovery from petroleum systems and the benefits of the developed methods over the NA.
The history matching problem is multi-objective in nature, with the production data possibly consisting of multiple types, coming from different wells, and collected at different times. Multiple objectives can be constructed from these data and explicitly optimised in a multi-objective scheme. The thesis has extended PSO to handle multi-objective history matching problems in which a number of possibly conflicting objectives must be satisfied simultaneously. The benefits and efficiency of the innovative multi-objective particle swarm optimisation scheme (MOPSO) are demonstrated on synthetic reservoirs. It is shown that the MOPSO procedure can provide a substantial improvement in finding a diverse set of good-fitting models with fewer of the very costly forward simulation runs than the standard single-objective case, depending on how the objectives are constructed.
The thesis has also shown how to tackle a large number of unknown parameters through the coupling of high-performance global optimisation algorithms, such as PSO, with model reduction techniques such as kernel principal component analysis (PCA), for parameterising spatially correlated random fields. The results of the PSO-PCA coupling applied to a recent SPE benchmark history matching problem demonstrate that the approach is indeed applicable to practical problems. A comparison of PSO with the EnKF data assimilation method has been carried out and concluded that both methods obtain comparable results on the example case. This reinforces the need to use a range of assisted history matching algorithms for more confidence in predictions.
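The following is a minimal, generic particle swarm optimisation sketch of the kind of global optimiser discussed above; the sphere misfit function stands in for a reservoir simulator, and all hyperparameter values (`w`, `c1`, `c2`) are conventional textbook choices rather than those used in the thesis.

```python
import random

# Minimal generic PSO sketch. Each particle tracks a personal best, the swarm
# tracks a global best, and velocities blend inertia with attraction to both.
# The quadratic "misfit" below is an invented stand-in for a costly
# reservoir-simulation misfit; bounds and hyperparameters are illustrative.

def pso(misfit, dim=2, n_particles=20, iters=200, seed=1,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                      # per-particle best position
    pbest_f = [misfit(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]         # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            f = misfit(x[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = x[i][:], f
    return gbest, gbest_f

# Usage: minimise a toy quadratic misfit in place of a simulator run.
best, best_f = pso(lambda p: sum(t * t for t in p))
```

Because each `misfit` call corresponds to one forward simulation, the number of calls is the natural cost measure, which is why the thesis emphasises reducing the number of costly simulation runs.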
Evolutionary Dynamic Multi-Objective Optimisation: A survey
Numerical methods for control-based continuation of relaxation oscillations
This is the final version, available on open access from Springer via the DOI in this record.
Data Availability Statement: Data sharing is not applicable to this article, as no datasets were generated or analysed during the current study.
Control-based continuation (CBC) is an experimental method that can reveal the stable and unstable dynamics of physical systems. It extends the path-following principles of numerical continuation to experiments and provides systematic dynamical analyses without the need for mathematical modelling. CBC has seen considerable success in studying the bifurcation structure of mechanical systems. Nevertheless, the method is not practical for studying relaxation oscillations: large numbers of Fourier modes are required to describe them, and the length of the experiment increases significantly when many Fourier modes are used, as the system must be run to convergence many times. Furthermore, relaxation oscillations often arise in autonomous systems, for which an appropriate phase constraint is required. To overcome these challenges, we introduce an adaptive B-spline discretisation that can produce a parsimonious description of responses that would otherwise require many Fourier modes. We couple this to a novel phase constraint that phase-locks the control target and the solution phase. Results are demonstrated on simulations of a slow-fast synthetic gene network and an Oregonator model. Our methods extend CBC to a much broader range of systems than have been studied so far, opening up a range of novel experimental opportunities on slow-fast systems.
Funding: Engineering and Physical Sciences Research Council (EPSRC); European Union Horizon 2020; Royal Academy of Engineering (RAE).
Optimisation for efficient deep learning
Over the past 10 years there has been a huge advance in the performance of deep neural networks on many supervised learning tasks. Over this period these models have redefined the state of the art numerous times on many classic machine vision and natural language processing benchmarks. Deep neural networks have also found their way into many real-world applications, including chatbots, art generation, voice-activated virtual assistants, surveillance, and medical diagnosis systems. Much of the improved performance of these models can be attributed to an increase in scale, which in turn has raised computation and energy costs.
In this thesis we detail approaches of how to reduce the cost of deploying deep neural networks in various settings. We first focus on training efficiency, and to that end we present two optimisation techniques that produce high accuracy models without extensive tuning. These optimisers only have a single fixed maximal step size hyperparameter to cross-validate and we demonstrate that they outperform other comparable methods in a wide range of settings. These approaches do not require the onerous process of finding a good learning rate schedule, which often requires training many versions of the same network, hence they reduce the computation needed. The first of these optimisers is a novel bundle method designed for the interpolation setting. The second demonstrates the effectiveness of a Polyak-like step size in combination with an online estimate of the optimal loss value in the non-interpolating setting.
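As a hedged illustration of the second optimiser's key ingredient, the sketch below applies a Polyak step size to a toy one-dimensional quadratic; here the optimal loss value `f_star` is simply known to be zero, whereas the thesis pairs the rule with an online estimate of that value (the estimator is not reproduced, and all names are invented).

```python
# Toy illustration of a Polyak step size, eta = (f(x) - f*) / ||grad f(x)||^2,
# on a 1-D quadratic with known optimum f* = 0. The thesis combines this kind
# of rule with an online estimate of f*, which is not reproduced here.

def polyak_gd(x0=5.0, f_star=0.0, steps=50):
    f = lambda x: (x - 1.0) ** 2            # toy loss, minimiser at x = 1
    grad = lambda x: 2.0 * (x - 1.0)
    x = x0
    for _ in range(steps):
        g = grad(x)
        if g == 0.0:                        # already at the optimum
            break
        eta = (f(x) - f_star) / (g * g)     # Polyak step size
        x -= eta * g
    return x
```

For this quadratic the rule yields a constant step eta = 1/4, so the distance to the minimiser halves every iteration; no learning-rate schedule needs to be tuned, which is the efficiency argument made above.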
Next, we turn our attention to training efficient binary networks with both binary parameters and activations. With the right implementation, fully binary networks are highly efficient at inference time, as they can replace the majority of operations with cheaper bit-wise alternatives. This makes them well suited to lightweight or embedded applications. Due to the discrete nature of these models, conventional training approaches are not viable. We present a simple and effective alternative to the existing optimisation techniques for these models.
International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book
The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions.
This book comprises the full conference program. It contains the scientific program, both in survey form and in full detail, together with information on the social program, the venue, special meetings, and more.
Reverse engineering a gene network using an asynchronous parallel evolution strategy.
RIGHTS: This article is licensed under the BioMed Central licence at http://www.biomedcentral.com/about/license, which is similar to the Creative Commons Attribution Licence: the work may be copied, distributed, displayed, adapted, and used commercially, provided the original author is credited and the licence terms are made clear on any reuse or distribution.
BACKGROUND: The use of reverse engineering methods to infer gene regulatory networks by fitting mathematical models to gene expression data is becoming increasingly popular and successful. However, increasing model complexity means that more powerful global optimisation techniques are required for model fitting. The parallel Lam Simulated Annealing (pLSA) algorithm has been used in such approaches, but recent research has shown that island Evolutionary Strategies can produce faster, more reliable results. However, no parallel island Evolutionary Strategy (piES) has yet been demonstrated to be effective for this task.
RESULTS: Here, we present synchronous and asynchronous versions of the piES algorithm and apply them to a real reverse engineering problem: inferring parameters of the gap gene network. We find that the asynchronous piES exhibits very little communication overhead and shows significant speed-up for up to 50 nodes: the piES running on 50 nodes is nearly 10 times faster than the best serial algorithm. We compare the asynchronous piES to pLSA on the same test problem, measuring the time required to reach particular levels of residual error, and find that it converges much faster than pLSA across all optimisation conditions tested.
CONCLUSIONS: Our results demonstrate that the piES is consistently faster and more reliable than the pLSA algorithm on this problem, and scales better with increasing numbers of nodes. In addition, the piES is especially well suited to further improvements and adaptations. Firstly, the algorithm's fast initial descent and high reliability make it a good candidate for use in a global/local hybrid search algorithm. Secondly, it has the potential to be used as part of a hierarchical evolutionary algorithm that takes advantage of modern multi-core computing architectures.
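A serial (1+λ) evolution strategy, sketched below on a toy quadratic objective, illustrates the class of algorithm the paper parallelises; the island and asynchronous machinery and the gap gene model itself are not reproduced, and the step-size adaptation is a simplified 1/5th-success-style heuristic rather than the paper's scheme.

```python
import random

# Serial (1+lambda) evolution strategy on an invented quadratic objective.
# Each generation samples lambda Gaussian mutations of the parent and keeps
# the best offspring only if it is at least as good; the mutation scale sigma
# grows on success and shrinks on failure (simplified 1/5th-style adaptation).

def one_plus_lambda_es(objective, x0, sigma=1.0, lam=8, iters=200, seed=2):
    rng = random.Random(seed)
    parent, parent_f = x0[:], objective(x0)
    for _ in range(iters):
        offspring = [[xi + rng.gauss(0.0, sigma) for xi in parent]
                     for _ in range(lam)]
        scores = [objective(o) for o in offspring]
        best = min(range(lam), key=lambda i: scores[i])
        if scores[best] <= parent_f:        # (1+lambda): parent survives unless beaten
            parent, parent_f = offspring[best], scores[best]
            sigma *= 1.1                    # widen search after a success
        else:
            sigma *= 0.9                    # narrow search after a failure
    return parent, parent_f

# Usage: minimise a toy quadratic starting away from its optimum.
best, best_f = one_plus_lambda_es(lambda p: sum(t * t for t in p), [3.0, 3.0])
```

In the parallel island variant studied in the paper, many such populations run concurrently and exchange individuals; the asynchronous version avoids waiting for the slowest node, which is the source of its low communication overhead.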
Optimisation heuristics for solving technician and task scheduling problems
Motivated by an underlying industrial demand, solving intractable technician and task scheduling problems through heuristic and metaheuristic approaches has long been an active research area within the academic community. Many solution methodologies proposed in the literature have either been developed to solve a particular variant of the technician and task scheduling problem or are only appropriate for a specific scale of the problem. The motivation of this research is to find general-purpose heuristic approaches that can solve variants of technician and task scheduling problems, at scale, balancing time efficiency and solution quality. The unique challenges include finding heuristics that are robust, easily adapted to deal with extra constraints, and scalable, so as to solve problems that are indicative of the real world.
The research presented in this thesis describes three heuristic methodologies that have been designed and implemented: (1) the intelligent decision heuristic (which considers multiple team configuration scenarios and job allocations simultaneously), (2) the look-ahead heuristic (characterised by its ability to consider the impact of allocation decisions on subsequent stages of the scheduling process), and (3) the greedy randomised heuristic (which has a flexible allocation approach and is computationally efficient).
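In the spirit of a greedy randomised heuristic, the toy sketch below assigns jobs by repeatedly picking at random from a restricted candidate list of the longest unassigned jobs and placing each on the least-loaded technician. The job data, function names, and makespan objective are invented for illustration; the thesis's real instances additionally carry skills, teaming, precedence, and time-window constraints.

```python
import random

# Toy greedy-randomised construction. Jobs are drawn at random from a
# restricted candidate list (RCL) of the longest unassigned jobs and placed
# on the currently least-loaded technician. Job data and the makespan
# objective are invented; this is a sketch of the idea, not the thesis method.

def greedy_randomised_schedule(durations, n_techs, rcl_size=3, seed=3):
    rng = random.Random(seed)
    load = [0.0] * n_techs                  # accumulated work per technician
    assignment = {}                         # job index -> technician index
    remaining = list(range(len(durations)))
    while remaining:
        remaining.sort(key=lambda j: -durations[j])   # longest jobs first
        job = rng.choice(remaining[:rcl_size])        # random pick from the RCL
        tech = min(range(n_techs), key=lambda t: load[t])
        assignment[job] = tech
        load[tech] += durations[job]
        remaining.remove(job)
    return assignment, max(load)            # schedule and its makespan

# Usage: six jobs shared between two technicians.
assignment, makespan = greedy_randomised_schedule([4, 3, 3, 2, 2, 2], 2)
```

Randomising within the candidate list, rather than always taking the single greedy choice, yields different schedules on repeated runs, so the cheap construction can be restarted many times and the best schedule kept.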
Datasets used to test the three heuristic methodologies include real-world problem instances, instances from the literature, problem instances extended from the literature to include extra constraints, and, finally, instances created using a data generator. The datasets cover a broad array of real-world constraints (skill requirements, teaming, priority, precedence, unavailable days, outsourcing, time windows, and location) on a range of problem sizes (5-2500 jobs) to thoroughly investigate the scalability and robustness of the heuristics.
The key findings are that the constraints a problem features and the size of the problem heavily influence the design and behaviour of the solution approach used. The contributions of this research are: benchmark datasets indicative of the real world in terms of both the constraints included and the problem size; the data generators developed, which enable the creation of data to investigate particular problem aspects; a mathematical formulation of the multi-period technician routing and scheduling problem; and, finally, the heuristics developed, which have proved to be robust and scalable solution methodologies.
Differential-Algebraic Equations
Differential-algebraic equations (DAEs) are today an independent field of research, which is gaining in importance and becoming of increasing interest for applications and for mathematics itself. This workshop took stock of about 25 years of investigation into DAEs, and the research aims for the future were intensively discussed.