
    DeepACO: Neural-enhanced Ant Systems for Combinatorial Optimization

    Ant Colony Optimization (ACO) is a meta-heuristic algorithm that has been successfully applied to various Combinatorial Optimization Problems (COPs). Traditionally, customizing ACO for a specific problem requires the expert design of knowledge-driven heuristics. In this paper, we propose DeepACO, a generic framework that leverages deep reinforcement learning to automate heuristic design. DeepACO serves to strengthen the heuristic measures of existing ACO algorithms and to dispense with laborious manual design in future ACO applications. As a neural-enhanced meta-heuristic, DeepACO consistently outperforms its ACO counterparts on eight COPs using a single neural model and a single set of hyperparameters. As a Neural Combinatorial Optimization method, DeepACO performs better than or on par with problem-specific methods on canonical routing problems. Our code is publicly available at https://github.com/henry-yeh/DeepACO. Comment: Accepted at NeurIPS 2023.
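
    For intuition, the sketch below shows a plain Ant System for the TSP in which the heuristic matrix is an injectable parameter. This is a minimal illustration, not the authors' implementation; all function and variable names are our own. Classical ACO would pass heuristic = 1 / (dist + eps), whereas in DeepACO that matrix would instead come from a trained neural model.

```python
import numpy as np

def ant_system_tsp(dist, heuristic, n_ants=20, n_iters=100,
                   alpha=1.0, beta=2.0, rho=0.1, seed=0):
    """Plain Ant System for the TSP. `heuristic` is the knowledge-driven
    measure that DeepACO proposes to learn instead of hand-designing."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    tau = np.ones((n, n))                          # pheromone trails
    best_tour, best_len = None, np.inf
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [int(rng.integers(n))]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i, cand = tour[-1], np.array(sorted(unvisited))
                # Transition weights combine pheromone and heuristic.
                w = tau[i, cand] ** alpha * heuristic[i, cand] ** beta
                nxt = int(rng.choice(cand, p=w / w.sum()))
                tour.append(nxt)
                unvisited.remove(nxt)
            tours.append(tour)
        tau *= 1.0 - rho                           # evaporation
        for tour in tours:
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
            for k in range(n):                     # pheromone deposit
                tau[tour[k], tour[(k + 1) % n]] += 1.0 / length
    return best_tour, best_len
```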

    On the role of metaheuristic optimization in bioinformatics

    Metaheuristic algorithms are employed to solve complex and large-scale optimization problems in many different fields, from transportation and smart cities to finance. This paper discusses how metaheuristic algorithms are being applied to solve different optimization problems in the area of bioinformatics. While the text provides references to many optimization problems in the area, it focuses on those that have attracted the most interest from the optimization community. Among the problems analyzed, the paper discusses in more detail molecular docking, protein structure prediction, phylogenetic inference, and several string problems. In addition, references to other relevant optimization problems are also given, including those related to medical imaging or gene selection for classification. From this analysis, the paper derives insights on research opportunities for the Operations Research and Computer Science communities in the field of bioinformatics.
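
    To give a flavour of how a metaheuristic attacks one of the string problems mentioned above, here is a hill-climbing sketch for the Closest String problem (finding a string that minimises the maximum Hamming distance to a set of equal-length sequences). The problem choice and all names are our own illustration, not taken from the paper.

```python
import random

def closest_string(strings, alphabet="ACGT", iters=20000, seed=0):
    """Hill climbing for the Closest String problem: minimise the
    maximum Hamming distance to every string in the input set."""
    rng = random.Random(seed)
    L = len(strings[0])

    def score(s):
        return max(sum(a != b for a, b in zip(s, t)) for t in strings)

    current = list(strings[0])    # start from one of the inputs
    best = score(current)
    for _ in range(iters):
        pos = rng.randrange(L)
        old = current[pos]
        current[pos] = rng.choice(alphabet)   # random single-site mutation
        new = score(current)
        if new <= best:
            best = new            # accept improving or sideways moves
        else:
            current[pos] = old    # revert worsening moves
    return "".join(current), best
```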

    Reinforcement learning in large state action spaces

    Reinforcement learning (RL) is a promising framework for training intelligent agents which learn to optimize long-term utility by directly interacting with the environment. Creating RL methods which scale to large state-action spaces is a critical problem for ensuring real-world deployment of RL systems. However, several challenges limit the applicability of RL to large-scale settings. These include difficulties with exploration, low sample efficiency, computational intractability, task constraints like decentralization, and a lack of guarantees about important properties like performance, generalization, and robustness in potentially unseen scenarios. This thesis is motivated by the goal of bridging this gap between RL methods and large-scale, real-world deployment. We propose several principled algorithms and frameworks for studying and addressing the above challenges in RL. The proposed methods cover a wide range of RL settings (single-agent and multi-agent systems (MAS) with all the variations in the latter, prediction and control, model-based and model-free methods, value-based and policy-based methods). In this work we present the first results on several different problems, e.g. tensorization of the Bellman equation, which allows exponential sample-efficiency gains (Chapter 4); provable suboptimality arising from structural constraints in MAS (Chapter 3); combinatorial generalization results in cooperative MAS (Chapter 5); generalization results on observation shifts (Chapter 7); and learning deterministic policies in a probabilistic RL framework (Chapter 6). Our algorithms exhibit provably enhanced performance and sample efficiency along with better scalability. Additionally, we shed light on generalization aspects of the agents under different frameworks. These properties have been driven by the use of several advanced tools (e.g. statistical machine learning, state abstraction, variational inference, tensor theory). In summary, the contributions in this thesis significantly advance progress towards making RL agents ready for large-scale, real-world applications.
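
    For context on the tensorization result: the object being restructured is the standard Bellman optimality equation, whose textbook form (the thesis' tensorized rewriting is not reproduced here) is

```latex
Q^{*}(s, a) \;=\; R(s, a) \;+\; \gamma \sum_{s'} P(s' \mid s, a)\, \max_{a'} Q^{*}(s', a')
```

    Since the transition kernel P(s' | s, a) is naturally a third-order tensor, rewriting this backup in tensor form is, roughly speaking, the structure such a result can exploit; the precise formulation is given in Chapter 4 of the thesis.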

    Sparse inverse covariance estimation in Gaussian graphical models

    One of the fundamental tasks in science is to find explainable relationships between observed phenomena. Recent work has addressed this problem by attempting to learn the structure of graphical models - especially Gaussian models - by the imposition of sparsity constraints. The graphical lasso is a popular method for learning the structure of a Gaussian model; it uses an l1 penalty to impose sparsity. In real-world problems, there may be latent variables that confound the relationships between the observed variables. Ignoring these latents, and imposing sparsity in the space of the visible variables, may lead to the pruning of important structural relationships. We address this problem by introducing an expectation maximisation (EM) method for learning a Gaussian model that is sparse in the joint space of visible and latent variables. By extending this to a conditional mixture, we introduce multiple structures, and allow side information to be used to predict which structure is most appropriate for each data point. Finally, we handle non-Gaussian data by extending each sparse latent Gaussian to a Gaussian copula. We train these models on a financial data set; we find the structures to be interpretable, and the new models to perform better than their existing competitors. A potential problem with the mixture model is that it does not require the structure to persist in time, whereas this may be expected in practice. So we construct an input-output HMM with sparse Gaussian emissions. The main result, however, is that, provided the side information is rich enough, the temporal component of the model provides little benefit, and reduces efficiency considerably. The G-Wishart distribution may be used as the basis for a Bayesian approach to learning a sparse Gaussian. However, sampling from this distribution often limits the efficiency of inference in these models. We make a small change to the state-of-the-art block Gibbs sampler to improve its efficiency. We then introduce a Hamiltonian Monte Carlo sampler that is much more efficient than block Gibbs, especially in high dimensions. We use these samplers to compare a Bayesian approach to learning a sparse Gaussian with the (non-Bayesian) graphical lasso. We find that, even when limited to the same time budget, the Bayesian method can perform better. In summary, this thesis introduces practically useful advances in structure learning for Gaussian graphical models and their extensions. The contributions include the addition of latent variables, a non-Gaussian extension, (temporal) conditional mixtures, and methods for efficient inference in a Bayesian formulation.
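
    For readers unfamiliar with the baseline, the (non-Bayesian) graphical lasso discussed above is readily available in scikit-learn. The snippet below is a generic usage sketch on synthetic data, not the thesis' financial experiments; the data and penalty value are arbitrary.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))       # toy data: 200 samples, 5 variables

# l1-penalised maximum-likelihood estimate of the precision matrix;
# a larger alpha yields a sparser estimate (fewer edges in the graph).
model = GraphicalLasso(alpha=0.2).fit(X)

# Zeros in the precision matrix correspond to absent edges, i.e. pairs
# of variables that are conditionally independent given all the others.
edges = np.abs(model.precision_) > 1e-6
print(edges)
```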

    Type-2 fuzzy logic system applications for power systems

    PhD Thesis. In the move towards ubiquitous information & communications technology, an opportunity for further optimisation of the power system as a whole has arisen. Nonetheless, the fast growth of intermittent generation, concurrent with market deregulation, is driving a need for timely algorithms that can derive value from these new data sources. Type-2 fuzzy logic systems can offer approximate solutions to these computationally hard tasks by expressing non-linear relationships in a more flexible fashion. This thesis explores how type-2 fuzzy logic systems can provide solutions to two of these challenging power system problems: short-term load forecasting and voltage control in distribution networks. On one hand, time-series forecasting is a key input for economic, secure power systems: many tasks require a precise determination of the future short-term load (e.g. unit commitment or security assessment, among others), as does dealing with electricity as a commodity. As a consequence, short-term load forecasting becomes essential for energy stakeholders, and any inaccuracy translates directly into their financial performance. All this is reflected in current power systems literature, where a significant number of papers cover the subject. Extending the existing literature, this work focuses on how these systems should be implemented from beginning to end so as to bring to light their predictive performance. Following this research direction, this thesis introduces a novel framework to automatically design type-2 fuzzy logic systems. On the other hand, the low-carbon economy is pushing the grid ever closer to its operational limits. Distribution networks are becoming active systems, with power flows and voltages defined not only by load but also by generation. As a consequence, even if it is not yet absolutely clear how power systems will evolve in the long term, all plausible future scenarios call for real-time algorithms that can provide near-optimal solutions to this challenging mixed-integer non-linear problem. Aligned with research and industry efforts, this thesis introduces a scalable implementation that tackles this task in a divide-and-conquer fashion.
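
    To make the type-2 ingredient concrete: in an interval type-2 fuzzy set, each input receives an interval of membership grades bounded by lower and upper membership functions, rather than the single grade of a type-1 set. The sketch below is a minimal illustration using a Gaussian set with uncertain width; all names and parameter values are assumptions for illustration, not the thesis' design.

```python
import numpy as np

def gauss(x, c, sigma):
    """Type-1 Gaussian membership function."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def it2_membership(x, c, sigma_lower, sigma_upper):
    """Interval type-2 Gaussian set with an uncertain width: the grade of x
    is the interval [lower(x), upper(x)] bounding the footprint of
    uncertainty, rather than a single number."""
    return gauss(x, c, sigma_lower), gauss(x, c, sigma_upper)

# Example: grade of a normalised load value of 0.7 in a "high load" set.
lo, hi = it2_membership(0.7, c=1.0, sigma_lower=0.2, sigma_upper=0.35)
print(f"membership interval: [{lo:.3f}, {hi:.3f}]")
```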