Random Neural Networks and Optimisation
In this thesis we introduce new models and learning algorithms for the Random
Neural Network (RNN), and we develop RNN-based and other approaches for the
solution of emergency management optimisation problems.
With respect to RNN developments, two novel supervised learning algorithms are
proposed. The first is a gradient descent algorithm for an RNN extension model
that we introduce, the RNN with synchronised interactions (RNNSI), which
was inspired by the synchronised firing activity observed in brain neural circuits.
The second algorithm is based on modelling the signal-flow equations in RNN as a
nonnegative least squares (NNLS) problem. NNLS is solved using a limited-memory
quasi-Newton algorithm specifically designed for the RNN case.
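As a small illustration of the kind of problem the second algorithm addresses (not the thesis's limited-memory quasi-Newton solver itself), the following sketch solves a generic nonnegative least squares problem with SciPy's NNLS routine; the system A, b here is synthetic and purely hypothetical, standing in for the RNN signal-flow equations:

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic stand-in for the signal-flow equations: recover a nonnegative
# solution x of A x = b (A, b are hypothetical, not RNN-derived).
rng = np.random.default_rng(0)
A = rng.random((20, 5))
x_true = np.array([0.5, 0.0, 1.2, 0.0, 0.3])
b = A @ x_true

x, residual = nnls(A, b)   # min ||A x - b||_2  subject to  x >= 0
```

Because the synthetic system is consistent and its nonnegative solution is unique, the active-set NNLS solver recovers it essentially exactly.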
Regarding the investigation of emergency management optimisation problems,
we examine combinatorial assignment problems that require fast, distributed and
close-to-optimal solutions under information uncertainty. We consider three
problems with these characteristics: the assignment of
emergency units to incidents with injured civilians (AEUI), the assignment of assets
to tasks under execution uncertainty (ATAU), and the deployment of a robotic
network to establish communication with trapped civilians (DRNCTC).
AEUI is solved by training an RNN tool with instances of the optimisation problem
and then using the trained RNN for decision making; training is achieved using
the developed learning algorithms. For the solution of the ATAU problem, we introduce
two different approaches: the first is based on mapping parameters of the
optimisation problem to RNN parameters, and the second on solving a sequence of
minimum cost flow problems on appropriately constructed networks with estimated
arc costs. For the exact solution of the DRNCTC problem, we develop a mixed-integer
linear programming formulation based on network flows. Finally, we design
and implement distributed heuristic algorithms for the deployment of robots
when the civilian locations are known or uncertain.
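The minimum cost flow subroutine mentioned for the second ATAU approach can be illustrated on a toy assignment network (hypothetical assets, tasks and arc costs, solved with NetworkX's generic solver rather than the thesis's construction):

```python
import networkx as nx

# Toy assignment-as-flow instance: two assets supply one unit each (negative
# demand), two tasks require one unit each; arc weights are hypothetical
# estimated costs.
G = nx.DiGraph()
G.add_node("asset1", demand=-1)
G.add_node("asset2", demand=-1)
G.add_node("task1", demand=1)
G.add_node("task2", demand=1)
G.add_edge("asset1", "task1", capacity=1, weight=2)
G.add_edge("asset1", "task2", capacity=1, weight=5)
G.add_edge("asset2", "task1", capacity=1, weight=4)
G.add_edge("asset2", "task2", capacity=1, weight=1)

flow = nx.min_cost_flow(G)          # assigns each asset to one task
cost = nx.cost_of_flow(G, flow)     # total estimated cost of the assignment
```

On this instance the optimal flow sends asset1 to task1 and asset2 to task2, for a total cost of 3.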
On the controllability of fermentation systems
This thesis concerns the controllability of fermentation processes, which are
often described by unstructured process models. A control system can
be used to reduce the effect of uncertainties and disturbances.
A process is called controllable if a control system satisfying suitably defined control
objectives can be found. Controllability measures based on linear process models are
identified. The idealised control objective for perfect control allows fast evaluation
of the controllability measures. These measures are applied to compare different
designs of a continuous fermentation process by identifying the controllability properties
of the process design.
The operational mode of fed batch fermentations is inherently dynamic. General
control system design methods are not readily applicable to such systems. This work
presents an approach for the design of robust controllers suitable for these processes.
The control objective is to satisfy a set of robustness constraints for a given set of
model uncertainties and disturbances.
The optimal operation and design problems are combined into a single optimal control
problem. The controller design is integrated into the process design problem
formulation. In this way the control system and the process are designed simultaneously.
Different problem formulations are investigated. The proposed approach is
demonstrated on complex fermentation models. The resulting operating strategies
are controllable with respect to the aims of control.
Efficient information collection in stochastic optimisation
This thesis focuses on a class of information collection problems in stochastic optimisation. Algorithms in this area often need to measure the performance of several potential solutions and use the collected information in their search for high-performance solutions, but only have a limited budget for measuring. A simple approach that allocates simulation time equally over all potential solutions may waste time collecting additional data for alternatives that can quickly be identified as non-promising. Instead, algorithms should adapt their measurement strategy to iteratively examine the statistical evidence collected thus far and focus computational effort on the most promising alternatives. This thesis develops new efficient methods of collecting information to be used in stochastic optimisation problems.
First, we investigate an efficient measurement strategy used for the solution selection procedure of two-stage linear stochastic programs. In the solution selection procedure, finite computational resources must be allocated among numerous potential solutions to estimate their performances and identify the best solution. We propose a two-stage sampling approach that exploits a Wasserstein-based screening rule and an optimal computing budget allocation technique to improve the efficiency of obtaining a high-quality solution. Numerical results show our method provides good trade-offs between computational effort and solution performance.
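The budget allocation side of this idea can be sketched with a textbook OCBA-style split (the Wasserstein screening rule and the thesis's specific allocation are not reproduced here; all numbers and the reference-design choice are illustrative assumptions):

```python
import numpy as np

def ocba_allocation(means, stds, budget):
    """Split a simulation budget across designs (minimisation), OCBA-style:
    designs that are close to the current best and noisy get more samples.
    Illustrative sketch only, not the allocation rule used in the thesis."""
    means = np.asarray(means, dtype=float)
    stds = np.asarray(stds, dtype=float)
    b = int(np.argmin(means))                      # current-best design
    others = [i for i in range(len(means)) if i != b]
    r = others[0]                                  # reference non-best design
    delta = means - means[b]                       # estimated optimality gaps
    ratio = np.empty(len(means))
    for i in others:
        ratio[i] = (stds[i] / delta[i]) ** 2 / (stds[r] / delta[r]) ** 2
    ratio[b] = stds[b] * np.sqrt(sum((ratio[i] / stds[i]) ** 2 for i in others))
    return budget * ratio / ratio.sum()

alloc = ocba_allocation([1.0, 2.0, 3.0], [1.0, 1.0, 1.0], 100.0)
```

With equal noise levels, the design whose mean is closer to the best receives more of the budget than the clearly worse one.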
Then, we address the information collection problems that are encountered in the search for robust solutions. Specifically, we use an evolutionary strategy to solve a class of simulation optimisation problems with computationally expensive black-box functions. We implement an archive sample approximation method to reduce the required number of evaluations. The main challenge in the application of this method is determining the locations of additional samples drawn in each generation to enrich the information in the archive and minimise the approximation error. We propose novel sampling strategies that use the Wasserstein metric to estimate the possible benefit of a potential sample location on the approximation error. An empirical comparison with several previously proposed archive-based sample approximation methods demonstrates the superiority of our approaches.
In the final part of this thesis, we propose an adaptive sampling strategy for the rollout algorithm to solve the clinical trial scheduling and resource allocation problem under uncertainty. The proposed sampling strategy exploits the variance reduction technique of common random numbers and the empirical Bernstein inequality in a statistical racing procedure, which balances the exploration and exploitation of the rollout algorithm. Moreover, we present an augmented approach that utilises a heuristic-based grouping rule to enhance simulation efficiency by breaking down the overall action selection problem into selection problems over small groups. The numerical results show that the proposed method provides competitive results within a reasonable amount of computational time.
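The empirical Bernstein inequality used in the racing procedure gives a confidence radius that shrinks with the sample count and with low observed variance; a minimal sketch of one standard form of that radius (the thesis's exact racing rule is not reproduced):

```python
import math

def empirical_bernstein_radius(samples, value_range, delta):
    """Empirical Bernstein confidence radius, in the form
    sqrt(2*V*ln(3/delta)/n) + 3*R*ln(3/delta)/n, where V is the (biased)
    sample variance and R bounds the range of the observations."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    log_term = math.log(3.0 / delta)
    return math.sqrt(2.0 * var * log_term / n) + 3.0 * value_range * log_term / n

# Racing idea (minimisation): eliminate an action whose mean minus its radius
# already exceeds the best action's mean plus its radius.
r_n10 = empirical_bernstein_radius([0.5] * 10, 1.0, 0.05)
r_n40 = empirical_bernstein_radius([0.5] * 40, 1.0, 0.05)
```

As expected, the radius contracts as more replications are collected, which is what lets a racing procedure retire clearly inferior actions early.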
Computational and Near-Optimal Trade-Offs in Renewable Electricity System Modelling
In the decades to come, the European electricity system must undergo an unprecedented transformation to avert the devastating impacts of climate change. To devise various possibilities for achieving a sustainable yet cost-efficient system, in the thesis at hand, we solve large optimisation problems that coordinate the siting of generation, storage and transmission capacities. In doing so, it is critical to capture the weather-dependent variability of wind and solar power as well as transmission bottlenecks. In addition to modelling at high spatial and temporal resolution, this requires a detailed representation of the electricity grid. However, since the resulting computational challenges limit what can be investigated, compromises on model accuracy must be made, and methods from informatics become increasingly relevant to formulate models efficiently and to compute many scenarios.
The first part of the thesis is concerned with justifying such trade-offs between model detail and solving times. The main research question is how to circumvent some of the challenging non-convexities introduced by transmission network representations in joint capacity expansion models while still capturing the core grid physics. We first examine tractable linear approximations of power flow and transmission losses. Subsequently, we develop an efficient reformulation of the discrete transmission expansion planning (TEP) problem based on a cycle decomposition of the network graph, which conveniently also accommodates grid synchronisation options. Because discrete investment decisions aggravate the problem's complexity, we also cover simplifying heuristics that make use of sequential linear programming (SLP) and retrospective discretisation techniques.
In the second half, we investigate other trade-offs, namely between least-cost and near-optimal solutions. We systematically explore broad ranges of technologically diverse system configurations that are viable without compromising the system's overall cost-effectiveness. For example, we present solutions that avoid installing onshore wind turbines, bypass new overhead transmission lines, or feature a more regionally balanced distribution of generation capacities. Such alternative designs may be more widely socially accepted, and, thus, knowing about these degrees of freedom is highly policy-relevant. The method we employ to span the space of near-optimal solutions is related to modelling-to-generate-alternatives, a variant of multi-objective optimisation. The robustness of our results is further strengthened by considering technology cost uncertainties. To efficiently sweep the cost parameter space, we leverage multi-fidelity surrogate modelling techniques using sparse polynomial chaos expansion in combination with low-discrepancy sampling and extensive parallelisation on high-performance computing infrastructure.
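The near-optimal exploration idea can be sketched on a two-variable toy LP: first solve for least cost, then minimise an alternative objective (here, use of a hypothetical "technology y") subject to total cost staying within a slack of the optimum, an epsilon-constraint in the spirit of modelling-to-generate-alternatives. All numbers are made up for illustration:

```python
from scipy.optimize import linprog

# Stage 1 -- least-cost plan: minimise x + y  s.t.  x + 2y >= 2, x, y >= 0.
c = [1.0, 1.0]
A_ub = [[-1.0, -2.0]]          # -(x + 2y) <= -2  encodes  x + 2y >= 2
b_ub = [-2.0]
res1 = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])

# Stage 2 -- near-optimal alternative: minimise y, keeping total cost within
# 10% of the optimum found in stage 1.
eps = 0.1
A_ub2 = [[-1.0, -2.0], [1.0, 1.0]]
b_ub2 = [-2.0, (1.0 + eps) * res1.fun]
res2 = linprog([0.0, 1.0], A_ub=A_ub2, b_ub=b_ub2,
               bounds=[(0, None), (0, None)])
```

Here the least-cost plan uses only y (cost 1.0), while allowing 10% extra cost shrinks the required y to 0.9, quantifying the available degree of freedom.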
Mathematical programming heuristics for nonstationary stochastic inventory control
This work focuses on the computation of near-optimal inventory policies for a
wide range of problems in the field of nonstationary stochastic inventory control.
These problems are modelled and solved by leveraging novel mathematical programming
models built upon the application of stochastic programming bounding
techniques: Jensen's lower bound and Edmundson-Madanski upper bound.
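These two classical bounds sandwich the expected cost of a convex function of demand: Jensen's inequality evaluates the function at the mean, while the Edmundson-Madanski bound evaluates the chord over the support at the mean. A numeric demonstration on a toy convex cost (all parameters hypothetical, not from the thesis):

```python
import numpy as np

# Convex per-period cost as a function of demand d for a fixed order-up-to
# level q: holding cost h per unit left over, penalty p per unit short.
q, h, p = 50.0, 1.0, 5.0
f = lambda d: h * max(q - d, 0.0) + p * max(d - q, 0.0)

d_vals = np.array([20.0, 40.0, 60.0, 80.0])    # discrete demand outcomes
probs = np.array([0.1, 0.4, 0.3, 0.2])

mean_d = float(d_vals @ probs)
exact = float(probs @ [f(d) for d in d_vals])   # E[f(d)]
jensen = f(mean_d)                              # Jensen: f(E[d]) <= E[f(d)]
a, b = d_vals.min(), d_vals.max()
# Edmundson-Madanski: chord of f over [a, b] evaluated at the mean demand.
madanski = ((b - mean_d) * f(a) + (mean_d - a) * f(b)) / (b - a)
```

On this instance the bounds bracket the exact expectation as 10 <= 52 <= 94; the mathematical programming models above exploit exactly this sandwich.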
The single-item single-stock location inventory problem under the classical
assumption of independent demand is a long-standing problem in the literature
of stochastic inventory control. The first contribution presented here is the
development of the first mathematical programming-based model for computing
near-optimal inventory policy parameters for this problem; the model is then
paired with a binary search procedure to tackle large-scale problems.
The second contribution is to relax the independence assumption and investigate
the case in which demand in different periods is correlated. More specifically,
this work introduces the first stochastic programming model that captures Bookbinder
and Tan's static-dynamic uncertainty control policy under nonstationary
correlated demand; in addition, it discusses a mathematical programming heuristic
that computes near-optimal policy parameters under normally distributed
correlated demand, as well as under a collection of time-series-based
demand processes.
Finally, the third contribution is to consider a multi-item stochastic inventory
system subject to joint replenishment costs. This work presents the first mathematical
programming heuristic for determining near-optimal inventory policy
parameters for this system. This model comes with the advantage of tackling
nonstationary demand, a variant which has not been previously explored in the
literature.
Unlike other existing approaches in the literature, these mathematical programming
models can be easily implemented and solved using off-the-shelf
mathematical programming packages, such as IBM ILOG optimisation studio
and XPRESS Optimizer, and do not require tedious computer coding.
Extensive computational studies demonstrate that these new models are competitive
in terms of cost performance: in the case of independent demand, they
provide the best optimality gap in the literature; in the case of correlated demand,
they yield tight optimality gaps; and in the case of the nonstationary joint replenishment
problem, they are competitive with state-of-the-art approaches in the literature
and come with the advantage of being able to tackle nonstationary problems.
Database query optimisation based on measures of regret
The query optimiser in a database management system (DBMS) is responsible for
finding a good order in which to execute the operators in a given query. However, in
practice the query optimiser does not usually guarantee to find the best plan. This is
often due to the non-availability of precise statistical data or inaccurate assumptions
made by the optimiser. In this thesis we propose a robust approach to logical query
optimisation that takes into account the unreliability in database statistics during
the optimisation process. In particular, we study the ordering problem for selection
operators and for join operators, where selectivities are modelled as intervals rather
than exact values. As a measure of optimality, we use a concept from decision theory
called minmax regret optimisation (MRO).
When using interval selectivities, the decision problem for selection operator ordering
turns out to be NP-hard. After investigating properties of the problem and
identifying special cases which can be solved in polynomial time, we develop a novel
heuristic for solving the general selection ordering problem in polynomial time. Experimental
evaluation of the heuristic using synthetic data, the Star Schema Benchmark
and real-world data sets shows that it outperforms other heuristics (which take
an optimistic, pessimistic or midpoint approach) and also produces plans whose regret
is on average very close to optimal.
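For a small instance, the minmax regret of a selection ordering can be computed by brute force. The sketch below uses hypothetical per-tuple costs and interval selectivities, and enumerates only the interval endpoints as scenarios, a simplification for illustration (the thesis's heuristic avoids this exhaustive enumeration):

```python
from itertools import permutations, product

# Hypothetical instance: per-tuple costs and interval selectivities for three
# selection operators applied to one relation.
costs = [1.0, 1.0, 1.0]
intervals = [(0.1, 0.9), (0.3, 0.5), (0.2, 0.8)]

def plan_cost(order, sel):
    """Cost of applying selections in the given order: each operator is paid
    on the fraction of tuples surviving the operators before it."""
    total, surviving = 0.0, 1.0
    for i in order:
        total += surviving * costs[i]
        surviving *= sel[i]
    return total

orders = list(permutations(range(len(costs))))
# Scenarios: endpoint combinations of the selectivity intervals (an
# illustrative simplification; worst cases need not sit at endpoints).
scenarios = list(product(*intervals))

def regret(order):
    return max(plan_cost(order, s) - min(plan_cost(o, s) for o in orders)
               for s in scenarios)

best = min(orders, key=regret)   # minmax regret ordering for this instance
```

Brute force is exponential in the number of operators, which is why the NP-hardness result above motivates the polynomial-time heuristic.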
The general join ordering problem is known to be NP-hard, even for exact selectivities.
So, for interval selectivities, we restrict our investigation to sets of join
operators which form a chain and to plans that correspond to left-deep join trees.
We investigate properties of the problem and use these, along with ideas from the
selection ordering heuristic and other algorithms in the literature, to develop a
polynomial-time heuristic tailored for the join ordering problem. Experimental evaluation
of the heuristic shows that, once again, it performs better than the optimistic,
pessimistic and midpoint heuristics. In addition, the results show that the heuristic
produces plans whose regret is on average even closer to optimal than for
selection ordering.
A fully adaptive multilevel stochastic collocation strategy for solving elliptic PDEs with random data
We propose and analyse a fully adaptive strategy for solving elliptic PDEs
with random data in this work. A hierarchical sequence of adaptive mesh
refinements for the spatial approximation is combined with adaptive anisotropic
sparse Smolyak grids in the stochastic space in such a way as to minimize the
computational cost. The novel aspect of our strategy is that the hierarchy of
spatial approximations is sample dependent so that the computational effort at
each collocation point can be optimised individually. We outline a rigorous
analysis for the convergence and computational complexity of the adaptive
multilevel algorithm and we provide optimal choices for error tolerances at
each level. Two numerical examples demonstrate the reliability of the error
control and the significant decrease in the complexity that arises when
compared to single level algorithms and multilevel algorithms that employ
adaptivity solely in the spatial discretisation or in the collocation
procedure.
A Survey of Contextual Optimization Methods for Decision Making under Uncertainty
Recently there has been a surge of interest in the operations research (OR) and
machine learning (ML) communities in combining prediction algorithms and
optimization techniques to solve decision-making problems in the face of
uncertainty. This gave rise to the field of contextual optimization, under
which data-driven procedures are developed to prescribe actions to the
decision-maker that make the best use of the most recently updated information.
A large variety of models and methods have been presented in both OR and ML
literature under a variety of names, including data-driven optimization,
prescriptive optimization, predictive stochastic programming, policy
optimization, (smart) predict/estimate-then-optimize, decision-focused
learning, (task-based) end-to-end learning/forecasting/optimization, etc.
Focusing on single and two-stage stochastic programming problems, this review
article identifies three main frameworks for learning policies from data and
discusses their strengths and limitations. We present the existing models and
methods under a uniform notation and terminology and classify them according to
the three main frameworks identified. Our objective with this survey is to both
strengthen the general understanding of this active field of research and
stimulate further theoretical and algorithmic advancements in integrating ML
and stochastic programming.
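The simplest of the frameworks surveyed, predict-then-optimise, can be sketched in a few lines: fit a model mapping context to costs, then choose the action with the smallest predicted cost. The linear setup and all numbers below are hypothetical illustrations, not from the survey:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical contextual setup: a context z in R^2 drives the cost of each of
# three candidate actions through an unknown linear map W_true.
W_true = np.array([[1.0, -1.0], [0.5, 0.5], [-1.0, 1.0]])
Z = rng.normal(size=(200, 2))                       # observed contexts
C = Z @ W_true.T + 0.1 * rng.normal(size=(200, 3))  # noisy observed costs

# Predict-then-optimise: fit a least-squares cost model, then pick the action
# with the smallest predicted cost for a new context.
W_hat = np.linalg.lstsq(Z, C, rcond=None)[0].T      # fitted (3, 2) weights

def decide(z):
    return int(np.argmin(W_hat @ z))

action = decide(np.array([2.0, 0.0]))   # true costs here are about [2, 1, -2]
```

Decision-focused (end-to-end) approaches discussed in the survey instead train the predictor against the downstream decision loss rather than the prediction error.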