DeepACO: Neural-enhanced Ant Systems for Combinatorial Optimization
Ant Colony Optimization (ACO) is a meta-heuristic algorithm that has been
successfully applied to various Combinatorial Optimization Problems (COPs).
Traditionally, customizing ACO for a specific problem requires the expert
design of knowledge-driven heuristics. In this paper, we propose DeepACO, a
generic framework that leverages deep reinforcement learning to automate
heuristic designs. DeepACO serves to strengthen the heuristic measures of
existing ACO algorithms and dispense with laborious manual design in future ACO
applications. As a neural-enhanced meta-heuristic, DeepACO consistently
outperforms its ACO counterparts on eight COPs using a single neural model and
a single set of hyperparameters. As a Neural Combinatorial Optimization method,
DeepACO performs better than or on par with problem-specific methods on
canonical routing problems. Our code is publicly available at
https://github.com/henry-yeh/DeepACO. (Accepted at NeurIPS 2023.)
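To make the idea concrete, here is a minimal Ant System for the TSP. The heuristic matrix `eta` below is the classical hand-crafted choice (inverse distance); DeepACO's proposal is to replace it with a neural network's output. The function name, hyperparameter values, and overall structure are illustrative assumptions, not DeepACO's actual implementation.

```python
# Minimal Ant System for the TSP. `eta` is the hand-crafted heuristic measure
# that DeepACO would instead learn with a neural model. Illustrative sketch.
import numpy as np

def ant_system_tsp(dist, n_ants=10, n_iters=30, alpha=1.0, beta=2.0,
                   rho=0.1, seed=0):
    """Return the best tour and its length found by a basic Ant System."""
    rng = np.random.default_rng(seed)
    n = len(dist)
    eta = 1.0 / (dist + np.eye(n))      # heuristic measure (1/distance)
    np.fill_diagonal(eta, 0.0)
    tau = np.ones((n, n))               # pheromone trails
    best_tour, best_len = None, np.inf
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.integers(n)]
            while len(tour) < n:
                i = tour[-1]
                w = (tau[i] ** alpha) * (eta[i] ** beta)
                w[tour] = 0.0           # mask already-visited cities
                tour.append(rng.choice(n, p=w / w.sum()))
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= (1.0 - rho)              # pheromone evaporation
        for tour, length in tours:      # deposit, weighted by tour quality
            for k in range(n):
                tau[tour[k], tour[(k + 1) % n]] += 1.0 / length
    return best_tour, best_len
```

Swapping the fixed `eta` for a learned, instance-conditioned matrix is the single change that turns this classical loop into the neural-enhanced variant the abstract describes.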
On the role of metaheuristic optimization in bioinformatics
Metaheuristic algorithms are employed to solve complex and large-scale optimization problems in many different fields, from transportation and smart cities to finance. This paper discusses how metaheuristic algorithms are being applied to solve optimization problems in the area of bioinformatics. While the text provides references to many optimization problems in the area, it focuses on those that have attracted the most interest from the optimization community. Among the problems analyzed, the paper discusses in more detail molecular docking, protein structure prediction, phylogenetic inference, and several string problems. In addition, references to other relevant optimization problems are given, including those related to medical imaging and gene selection for classification. From this analysis, the paper derives insights on research opportunities for the Operations Research and Computer Science communities in the field of bioinformatics.
Reinforcement learning in large state action spaces
Reinforcement learning (RL) is a promising framework for training intelligent agents that learn to optimize long-term utility by directly interacting with the environment. Creating RL methods that scale to large state-action spaces is a critical problem for real-world deployment of RL systems. However, several challenges limit the applicability of RL to large-scale settings. These include difficulties with exploration, low sample efficiency, computational intractability, task constraints such as decentralization, and a lack of guarantees about important properties such as performance, generalization, and robustness in potentially unseen scenarios.
This thesis is motivated by bridging the aforementioned gap. We propose several principled algorithms and frameworks for studying and addressing the above challenges in RL. The proposed methods cover a wide range of RL settings: single- and multi-agent systems (MAS) with all the variations in the latter, prediction and control, model-based and model-free methods, value-based and policy-based methods. In this work we present the first results on several different problems, e.g. tensorization of the Bellman equation, which allows exponential sample-efficiency gains (Chapter 4); provable suboptimality arising from structural constraints in MAS (Chapter 3); combinatorial generalization results in cooperative MAS (Chapter 5); generalization results on observation shifts (Chapter 7); and learning deterministic policies in a probabilistic RL framework (Chapter 6). Our algorithms exhibit provably enhanced performance and sample efficiency along with better scalability. Additionally, we shed light on generalization aspects of the agents under different frameworks. These properties have been driven by the use of several advanced tools (e.g. statistical machine learning, state abstraction, variational inference, tensor theory).
In summary, the contributions in this thesis significantly advance progress towards making RL agents ready for large-scale, real-world applications.
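The basic RL loop the abstract describes, an agent improving long-term utility purely by interacting with its environment, can be illustrated with tabular Q-learning on a toy chain MDP. The environment, function name, and hyperparameters below are illustrative assumptions, not material from the thesis.

```python
# Tabular Q-learning on a 5-state chain: the agent starts at state 0 and
# receives reward 1 only upon reaching the final state. Illustrative sketch
# of the RL interaction loop, not a method from the thesis.
import numpy as np

def q_learning_chain(n_states=5, episodes=300, alpha=0.5, gamma=0.9,
                     epsilon=0.2, seed=0):
    """Learn a walk-right policy along a chain via temporal-difference updates."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2))                # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            a = rng.integers(2) if rng.random() < epsilon else int(Q[s].argmax())
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # TD update
            s = s2
    return Q
```

After training, the greedy policy moves right from every state, with Q-values approaching the discounted returns gamma^k of the remaining path.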
Sparse inverse covariance estimation in Gaussian graphical models
One of the fundamental tasks in science is to find explainable relationships between
observed phenomena. Recent work has addressed this problem by attempting to learn
the structure of graphical models - especially Gaussian models - by the imposition of
sparsity constraints.
The graphical lasso is a popular method for learning the structure of a Gaussian
model. It uses regularisation to impose sparsity. In real-world problems, there may be
latent variables that confound the relationships between the observed variables. Ignoring
these latents, and imposing sparsity in the space of the visibles, may lead to the
pruning of important structural relationships. We address this problem by introducing
an expectation maximisation (EM) method for learning a Gaussian model that is
sparse in the joint space of visible and latent variables. By extending this to a conditional
mixture, we introduce multiple structures, and allow side information to be used
to predict which structure is most appropriate for each data point. Finally, we handle
non-Gaussian data by extending each sparse latent Gaussian to a Gaussian copula. We
train these models on a financial data set; we find the structures to be interpretable, and
the new models to perform better than their existing competitors.
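For reference, here is a minimal run of the standard (non-latent) graphical lasso that the passage builds on, using scikit-learn; the latent-variable EM extension and the conditional mixture are the thesis's contributions and are not sketched here. The ground-truth graph and regularisation strength are illustrative assumptions.

```python
# Standard graphical lasso: estimate a sparse precision matrix from samples
# drawn from a Gaussian whose true precision is a chain graph. Illustrative
# sketch of the baseline method; not the thesis's latent-variable EM model.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# Ground-truth sparse precision: a chain graph on 5 variables.
P = np.eye(5) + 0.4 * (np.eye(5, k=1) + np.eye(5, k=-1))
X = rng.multivariate_normal(np.zeros(5), np.linalg.inv(P), size=2000)

model = GraphicalLasso(alpha=0.05).fit(X)
K = model.precision_     # L1-penalised estimate of P; chain edges survive,
                         # absent edges are shrunk towards (or to) zero
```

The latent-variable problem the thesis addresses arises when some rows of the underlying system are never observed: imposing sparsity only among the visible variables can then prune edges that a sparse joint (visible + latent) model would keep.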
A potential problem with the mixture model is that it does not require the structure
to persist in time, whereas this may be expected in practice. So we construct an input-output
HMM with sparse Gaussian emissions. But the main result is that, provided the
side information is rich enough, the temporal component of the model provides little
benefit, and reduces efficiency considerably.
The G-Wishart distribution may be used as the basis for a Bayesian approach to
learning a sparse Gaussian. However, sampling from this distribution often limits the
efficiency of inference in these models. We make a small change to the state-of-the-art
block Gibbs sampler to improve its efficiency. We then introduce a Hamiltonian
Monte Carlo sampler that is much more efficient than block Gibbs, especially in high
dimensions. We use these samplers to compare a Bayesian approach to learning a
sparse Gaussian with the (non-Bayesian) graphical lasso. We find that, even when
limited to the same time budget, the Bayesian method can perform better.
In summary, this thesis introduces practically useful advances in structure learning
for Gaussian graphical models and their extensions. The contributions include the addition
of latent variables, a non-Gaussian extension, (temporal) conditional mixtures,
and methods for efficient inference in a Bayesian formulation.
Generalised Bayesian matrix factorisation models
Factor analysis and related models for probabilistic matrix factorisation are of central importance to the unsupervised analysis of data, with a colourful history more than a century long. Probabilistic models for matrix factorisation allow us to explore the underlying structure in data, and have relevance in a vast number of application areas including collaborative filtering, source separation, missing data imputation, gene expression analysis, information retrieval, computational finance and computer vision, amongst others. This thesis develops generalisations of matrix factorisation models that advance our understanding and enhance the applicability of this important class of models.
The generalisation of models for matrix factorisation focuses on three concerns: widening the applicability of latent variable models to the diverse types of data that are currently available; considering alternative structural forms in the underlying representations that are inferred; and including higher-order data structures in the matrix factorisation framework. These three issues reflect the reality of modern data analysis, and we develop new models that allow for a principled exploration and use of data in these settings. We place emphasis on Bayesian approaches to learning and the advantages that come with the Bayesian methodology. Our point of departure is a generalisation of latent variable models to members of the exponential family of distributions. This generalisation allows for the analysis of data that may be real-valued, binary, counts, non-negative, or a heterogeneous set of these data types. The model unifies various existing models and constructs for unsupervised settings, providing the complementary framework to generalised linear models in regression.
Moving to structural considerations, we develop Bayesian methods for learning sparse latent representations. We define ideas of weakly and strongly sparse vectors and investigate the classes of prior distributions that give rise to these forms of sparsity, namely the scale-mixture of Gaussians and the spike-and-slab distribution. Based on these sparsity-favouring priors, we develop and compare methods for sparse matrix factorisation and present the first comparison of these sparse learning approaches. As a second structural consideration, we develop models with the ability to generate correlated binary vectors. Moment-matching is used to allow binary data with specified correlation to be generated, based on dichotomisation of the Gaussian distribution. We then develop a novel and simple method for binary PCA based on Gaussian dichotomisation. The third generalisation considers the extension of matrix factorisation models to multi-dimensional arrays of data that are increasingly prevalent. We develop the first Bayesian model for non-negative tensor factorisation and explore the relationship between this model and the previously described models for matrix factorisation.
Supported by a Commonwealth Scholarship awarded by the Commonwealth Scholarship and Fellowship Programme (CSFP) [Award number ZACS-2207-363].
Supported by an award from the National Research Foundation, South Africa (NRF) [Award number SFH2007072200001].
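The moment-matching construction for correlated binary vectors has a closed form in the simplest case: for mean-0.5 binaries obtained by dichotomising a Gaussian at zero, a target binary correlation r corresponds to a latent Gaussian correlation rho = sin(pi * r / 2). The sketch below assumes this zero-threshold case; function names and the target value are illustrative.

```python
# Correlated binary pairs via dichotomisation of a Gaussian, moment-matched
# so the empirical binary correlation hits a target r. Assumes mean-0.5
# binaries (threshold at zero), where the matching is exact in closed form.
import numpy as np

def correlated_binary(r, n_samples, seed=0):
    """Sample 0/1 pairs with mean 0.5 and correlation approximately r."""
    rho = np.sin(np.pi * r / 2.0)       # moment-matched latent correlation
    cov = np.array([[1.0, rho], [rho, 1.0]])
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(2), cov, size=n_samples)
    return (z > 0).astype(int)          # dichotomise at the Gaussian mean

x = correlated_binary(0.4, 200_000)     # empirical correlation close to 0.4
```

Non-0.5 means require solving for rho numerically through the bivariate normal orthant probability; the general construction described in the abstract handles that case.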
Type-2 fuzzy logic system applications for power systems
PhD Thesis
In the move towards ubiquitous information & communications technology, an
opportunity for further optimisation of the power system as a whole has arisen.
Nonetheless, the fast growth of intermittent generation, concurrent with market
deregulation, is driving a need for timely algorithms that can derive value from these
new data sources. Type-2 fuzzy logic systems can offer approximate solutions to
these computationally hard tasks by expressing non-linear relationships in a more
flexible fashion. This thesis explores how type-2 fuzzy logic systems can provide
solutions to two of these challenging power system problems: short-term load
forecasting and voltage control in distribution networks. On the one hand, time-series
forecasting is a key input for economic secure power systems as there are many tasks
that require a precise determination of the future short-term load (e.g. unit
commitment or security assessment among others), but also when dealing with
electricity as commodity. As a consequence, short-term load forecasting becomes
essential for energy stakeholders and any inaccuracy can be directly translated into
their financial performance. All this is reflected in the current power systems
literature, where a significant number of papers cover the subject. Extending the
existing literature, this work focuses on how such forecasting systems should be
implemented end to end, to bring their predictive performance to light. Following this research direction,
this thesis introduces a novel framework to automatically design type-2 fuzzy logic
systems. On the other hand, the low-carbon economy is pushing the grid status even
closer to its operational limits. Distribution networks are becoming active systems with
power flows and voltages defined not only by load, but also by generation. As
a consequence, even if it is not yet absolutely clear how power systems will evolve in
the long term, all plausible future scenarios call for real-time algorithms that can
provide near optimal solutions to this challenging mixed-integer non-linear problem.
Aligned with research and industry efforts, this thesis introduces a scalable
implementation to tackle this task in a divide-and-conquer fashion.
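The interval type-2 fuzzy sets underlying both applications can be sketched briefly: the membership grade of an input is not a single number but an interval [lower, upper], here bounded by two Gaussians sharing a centre but with different widths (the footprint of uncertainty). The midpoint defuzzification below is a deliberate simplification of Karnik-Mendel type reduction, and all parameter values are illustrative assumptions.

```python
# A minimal interval type-2 fuzzy set: membership of x is an interval given
# by two Gaussian membership functions with the same centre and different
# widths. Midpoint defuzzification here is a simplification of the
# Karnik-Mendel type-reduction used in practice. Illustrative sketch only.
import numpy as np

def it2_membership(x, centre, sigma_lo, sigma_hi):
    """Interval membership [lower, upper] for a Gaussian interval type-2 set."""
    lower = np.exp(-0.5 * ((x - centre) / sigma_lo) ** 2)
    upper = np.exp(-0.5 * ((x - centre) / sigma_hi) ** 2)
    return lower, upper       # lower <= upper whenever sigma_lo <= sigma_hi

# Example: a hypothetical fuzzy set "load is high" centred at 80% of peak.
lo, hi = it2_membership(70.0, centre=80.0, sigma_lo=5.0, sigma_hi=10.0)
grade = 0.5 * (lo + hi)       # crisp grade via midpoint defuzzification
```

The width of the interval [lower, upper] is what lets a type-2 system express uncertainty about the membership function itself, the flexibility the thesis exploits for load forecasting and voltage control.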