154 research outputs found

    Learning to compare nodes in branch and bound with graph neural networks

    Full text link
    In computer science, solving NP-hard problems in a reasonable time is of great importance, such as in supply chain optimization, scheduling, routing, multiple biological sequence alignment, inference in probabilistic graphical models, and even some problems in cryptography. In practice, we model many of them as mixed integer linear optimization problems, which we solve using the branch-and-bound framework. An algorithm of this style divides a search space to explore it recursively (branch) and obtains optimality bounds by solving linear relaxations over such sub-spaces (bound). To specify an algorithm, one must set several parameters, such as how to explore search spaces, how to divide a search space once it has been explored, or how to tighten the linear relaxations. These policies can significantly influence solving performance. This work focuses on a novel method for deriving a search policy, that is, a rule for selecting the next sub-space to explore given a current partitioning, using deep machine learning. First, we collect data summarizing, over a collection of given problems, which sub-spaces contain the optimum and which do not. By representing these sub-spaces as bipartite graphs encoding their characteristics, we train a graph neural network by supervised learning to estimate the probability that a sub-space contains the optimal solution. This design is particularly useful because the model can adapt to problems of different sizes without modification. We show that our approach beats competing ones, consisting of simpler machine learning models trained on solver statistics, as well as the default policy of SCIP, a state-of-the-art open-source solver, on three NP-hard benchmarks: generalized independent set, fixed-charge multicommodity network flow, and maximum satisfiability problems.
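    The following is a minimal sketch of the idea described above, not the thesis's implementation: a branch-and-bound node is encoded as a variable/constraint bipartite graph, one round of message passing produces variable embeddings, and a pooled readout yields the probability that the node contains the optimum. Feature sizes, layer widths, and the edge layout are illustrative assumptions.

```python
# A sketch, not the thesis's architecture: score one branch-and-bound node given
# its variable/constraint bipartite graph. Dimensions are illustrative.
import torch
import torch.nn as nn

class BipartiteScorer(nn.Module):
    def __init__(self, var_feats=5, cons_feats=4, hidden=32):
        super().__init__()
        self.embed_var = nn.Linear(var_feats, hidden)
        self.embed_cons = nn.Linear(cons_feats, hidden)
        self.msg = nn.Linear(2 * hidden, hidden)        # constraint -> variable messages
        self.readout = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 1))

    def forward(self, var_x, cons_x, edges):
        # var_x: (n_vars, var_feats); cons_x: (n_cons, cons_feats)
        # edges: (n_edges, 2) long tensor of (constraint index, variable index) pairs
        h_v = torch.relu(self.embed_var(var_x))
        h_c = torch.relu(self.embed_cons(cons_x))
        m = torch.relu(self.msg(torch.cat([h_c[edges[:, 0]], h_v[edges[:, 1]]], dim=-1)))
        h_v = h_v.index_add(0, edges[:, 1], m)           # aggregate messages per variable
        return torch.sigmoid(self.readout(h_v.mean(dim=0)))  # P(node contains the optimum)

# Training would minimize binary cross-entropy of this output against the
# collected "contains the optimum" labels, e.g. with torch.optim.Adam.
```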

    Decision-Focused Learning: Foundations, State of the Art, Benchmark and Future Opportunities

    Full text link
    Decision-focused learning (DFL) is an emerging paradigm in machine learning which trains a model to optimize decisions, integrating prediction and optimization in an end-to-end system. This paradigm holds the promise to revolutionize decision-making in many real-world applications which operate under uncertainty, where the estimation of unknown parameters within these decision models often becomes a substantial roadblock. This paper presents a comprehensive review of DFL. It provides an in-depth analysis of the various techniques devised to integrate machine learning and optimization models, introduces a taxonomy of DFL methods distinguished by their unique characteristics, and conducts an extensive empirical evaluation of these methods, proposing suitable benchmark datasets and tasks for DFL. Finally, the study provides valuable insights into current and potential future avenues in DFL research. Comment: Experimental Survey and Benchmarking.
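    As a concrete illustration of the predict-then-optimize loop, the sketch below computes the decision regret of predicted item values against a hypothetical top-k selection oracle, together with the SPO+ surrogate of Elmachtoub and Grigas, a widely used surrogate in this literature, written here for a maximization oracle. The oracle, dimensions, and k are illustrative assumptions, not the survey's benchmark code.

```python
# A sketch of decision-focused training signals for a hypothetical top-k selection
# problem; the oracle and sizes are illustrative, not the survey's benchmarks.
import torch

def solve(values, k=3):
    # decision oracle: pick the k items with the largest (predicted) value
    z = torch.zeros_like(values)
    z[torch.topk(values, k).indices] = 1.0
    return z

def regret(v_hat, v, k=3):
    # value lost by optimizing against predictions instead of the true values
    return v @ solve(v, k) - v @ solve(v_hat, k)

def spo_plus(v_hat, v, k=3):
    # SPO+ surrogate (Elmachtoub & Grigas), written for a maximization oracle:
    # convex in v_hat, upper-bounds the regret, subgradient 2 * (w_spo - w_star)
    w_star = solve(v, k)
    w_spo = solve(2 * v_hat - v, k)
    return (2 * v_hat - v) @ w_spo - 2 * (v_hat @ w_star) + v @ w_star

v_true = torch.tensor([5.0, 1.0, 4.0, 3.0, 2.0])
v_pred = torch.tensor([2.0, 5.0, 4.0, 1.0, 3.0], requires_grad=True)
print(regret(v_pred, v_true), spo_plus(v_pred, v_true))
```

    Two-stage learning would fit v_pred to v_true directly; DFL instead backpropagates through a surrogate such as spo_plus so that training targets the quality of the resulting decisions.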

    A rule-based method for scalable and traceable evaluation of system architectures

    Get PDF
    Despite the development of a variety of decision-aid tools for assessing the value of a conceptual design, humans continue to play a dominant role in this process. Researchers have identified two major challenges to automation, namely the subjectivity of value and the existence of multiple and conflicting customer needs. A third challenge, however, is arising as the amount of data (e.g., expert judgment, requirements, and engineering models) required to assess value increases. This brings two difficulties. First, it becomes harder to modify existing knowledge or add new knowledge to the knowledge base. Second, it becomes harder to trace the results provided by the tool back to the design variables and model parameters. Current tools lack the scalability and traceability required to tackle these knowledge-intensive design evaluation problems. This work proposes a traceable and scalable rule-based architecture evaluation tool called VASSAR that is especially tailored to knowledge-intensive problems that can be formulated as configuration design problems, which is demonstrated using the conceptual design task for a laptop. The methodology has three main steps. First, facts containing the capabilities and performance of different architectures are computed using rules containing physical and logical models. Second, capabilities are compared with requirements to assess the satisfaction of each requirement. Third, requirement satisfaction is aggregated to yield a manageable number of metrics. An explanation facility keeps track of the value chain throughout this process. This paper describes the methodology in detail and discusses, in particular, different implementations of preference functions as logical rules. A full-scale example around the design of Earth-observing satellites is presented.
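    A minimal sketch of the three steps described above, using a hypothetical laptop example rather than the VASSAR implementation: rules derive capability facts from design variables, capabilities are checked against requirements, and requirement satisfaction is aggregated into a weighted score while per-requirement results remain traceable. All names, thresholds, and weights are illustrative assumptions.

```python
# A sketch of rule-based architecture evaluation (hypothetical laptop example).
def derive_facts(design):
    # step 1: physical/logical "rules" turn design variables into capability facts
    return {
        "battery_life_h": design["battery_wh"] / design["power_draw_w"],
        "mass_kg": design["chassis_kg"] + design["battery_wh"] * 0.005,
    }

REQUIREMENTS = {  # requirement -> (capability, threshold, direction, weight)
    "all_day_battery": ("battery_life_h", 8.0, "min", 0.6),
    "portable":        ("mass_kg",        1.5, "max", 0.4),
}

def satisfaction(facts):
    # step 2: compare each capability against its requirement (pass/fail here;
    # graded preference functions could be substituted)
    sat = {}
    for name, (cap, thr, direction, _) in REQUIREMENTS.items():
        value = facts[cap]
        sat[name] = float(value >= thr if direction == "min" else value <= thr)
    return sat

def score(sat):
    # step 3: aggregate requirement satisfaction into one weighted metric, keeping
    # the per-requirement results so the outcome stays traceable
    return sum(REQUIREMENTS[r][3] * s for r, s in sat.items()), sat

facts = derive_facts({"battery_wh": 60, "power_draw_w": 7, "chassis_kg": 1.1})
print(score(satisfaction(facts)))
```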

    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Get PDF
    Particle swarm optimization (PSO) is a heuristic global optimization method, proposed originally by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we review advances in PSO, including its modifications (such as quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmony search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we survey applications of PSO in the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
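    For reference, the sketch below implements the canonical global-best PSO update the survey builds on (inertia term plus cognitive and social attraction, fully connected topology). The objective, bounds, and coefficients are illustrative.

```python
# A minimal global-best PSO on a toy objective; parameters are illustrative.
import numpy as np

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))      # positions
    v = np.zeros_like(x)                            # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()        # global best (fully connected topology)
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

print(pso(lambda p: np.sum(p ** 2)))  # sphere function; expect a point near the origin
```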

    Universal Smart Grid Agent for Distributed Power Generation Management

    Get PDF
    "Somewhere, there is always wind blowing or the sun shining." This maxim could lead the global shift from fossil to renewable energy sources, suggesting that there is enough energy available to be turned into electricity. But the already impressive numbers that are available today, along with the European Union's 20-20-20 goal – to power 20% of the EU energy consumption from renewables until 2020 –, might mislead us over the problem that the go-to renewables readily available rely on a primary energy source mankind cannot control: the weather. At the same time, the notion of the smart grid introduces a vast array of new data coming from sensors in the power grid, at wind farms, power plants, transformers, and consumers. The new wealth of information might seem overwhelming, but can help to manage the different actors in the power grid. This book proposes to view the problem of power generation and distribution in the face of increased volatility as a problem of information distribution and processing. It enhances the power grid by turning its nodes into agents that forecast their local power balance from historical data, using artificial neural networks and the multi-part evolutionary training algorithm described in this book. They pro-actively communicate power demand and supply, adhering to a set of behavioral rules this book defines, and finally solve the 0-1 knapsack problem of choosing offers in such a way that not only solves the disequilibrium, but also minimizes line loss, by elegant modeling in the Boolean domain. The book shows that the Divide-et-Impera approach of a distributed grid control can lead to an efficient, reliable integration of volatile renewable energy sources into the power grid

    Operational Research: Methods and Applications

    Get PDF
    Throughout its history, Operational Research has evolved to include a variety of methods, models and algorithms that have been applied to a diverse and wide range of contexts. This encyclopedic article consists of two main sections: methods and applications. The first aims to summarise the up-to-date knowledge and provide an overview of the state-of-the-art methods and key developments in the various subdomains of the field. The second offers a wide-ranging list of areas where Operational Research has been applied. The article is meant to be read in a nonlinear fashion. It should be used as a point of reference or first-port-of-call for a diverse pool of readers: academics, researchers, students, and practitioners. The entries within the methods and applications sections are presented in alphabetical order. The authors dedicate this paper to the 2023 Turkey/Syria earthquake victims. We sincerely hope that advances in OR will play a role towards minimising the pain and suffering caused by this and future catastrophes

    Compact Dynamic Optimisation Algorithm

    Get PDF
    In recent years, the field of evolutionary dynamic optimisation has seen a significant increase in scientific developments and contributions, a result of its relevance to solving academic and real-world problems. Several techniques, such as hyper-mutation, hyper-learning, hyper-selection, change detection, and many more, have been developed specifically for solving dynamic optimisation problems. However, the complex structure of algorithms employing these techniques makes them unsuitable for real-world, real-time dynamic optimisation on embedded systems with limited memory. The work presented in this thesis focuses on a compact approach as an alternative to population-based optimisation algorithms, suitable for solving real-time dynamic optimisation problems. Specifically, a novel compact dynamic optimisation algorithm suitable for embedded systems with limited memory is presented. Three novel dynamic approaches that augment and enhance the evolving properties of the compact genetic algorithm in dynamic environments are introduced: (1) a change detection scheme that measures the degree of dynamic change, (2) a mutation scheme whereby the mutation rate is directly linked to the detected degree of change, and (3) a change trend scheme that monitors the change pattern exhibited by the system. The novel compact dynamic optimisation algorithm was applied to two differing dynamic optimisation problems: tuning a controller for a physical target system in a dynamic environment, and solving a dynamic optimisation problem produced by an artificial dynamic environment generator. The algorithm was also compared to existing dynamic optimisation techniques. Through a series of experiments, it was shown that maintaining diversity at the population level is more efficient than diversity at the individual level. Among the five variants of the novel compact dynamic optimisation algorithm, the third variant showed the best performance in terms of response to dynamic changes and solution quality. Furthermore, it was demonstrated that information transfer based on dynamic change patterns can effectively minimise the exploration/exploitation dilemma in a dynamic environment.
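    The sketch below shows the standard compact genetic algorithm the thesis builds on: a single probability vector stands in for the population, which is what makes the approach fit embedded systems with limited memory. The dynamic extensions (change detection, adaptive mutation, change-trend monitoring) are omitted, and parameters are illustrative.

```python
# Basic compact genetic algorithm (cGA) on a toy OneMax problem.
import numpy as np

def compact_ga(fitness, n_bits=20, pop_size=50, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                      # probability vector ~ virtual population
    for _ in range(iters):
        a = (rng.random(n_bits) < p).astype(int)  # sample two competing individuals
        b = (rng.random(n_bits) < p).astype(int)
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        # move the probability vector towards the winner where the two disagree
        p += (winner - loser) / pop_size
        p = np.clip(p, 1 / n_bits, 1 - 1 / n_bits)
    return (p > 0.5).astype(int)

print(compact_ga(lambda x: x.sum()))              # OneMax: expect mostly ones
```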

    2014 Annual Research Symposium Abstract Book

    Get PDF
    2014 annual volume of abstracts for science research projects conducted by students at Trinity College

    Resource Management in Distributed Camera Systems

    Get PDF
    The aim of this work is to investigate different methods to solve the problem of allocating the correct amount of resources (network bandwidth and storage space) to video camera systems. Here we explore the intersection between two research areas: automatic control and game theory. Camera systems are a good example of the emergence of the Internet of Things (IoT) and its impact on our daily lives and the environment. We aim to improve today's systems, shifting from over-provisioning resources to dynamically allocating resources where they are needed most. We optimize the storage and bandwidth allocation of camera systems to limit the impact on the environment and to provide the best visual quality attainable within the resource limitations. This thesis is written as a collection of papers. It begins by introducing the problem with today's camera systems, and continues with background information about resource allocation, automatic control, and game theory. The third chapter describes the models of the considered systems, their limitations, and challenges. It then provides more background on the automatic control and game theory techniques used in the proposed solutions. Finally, the proposed solutions are presented in five papers. Paper I proposes an approach to estimate the amount of data needed by surveillance cameras given camera and scenario parameters. This model is used for calculating the quasi Worst-Case Transmission Times of videos over a network. Papers II and III apply control concepts to camera network storage and bandwidth assignment. They provide simple yet elegant solutions to the allocation of these resources in distributed camera systems. Paper IV combines pricing theory with control techniques to force the video quality of camera systems to converge to a common value based solely on the compression parameter of the provided videos. Paper V uses the VCG auction mechanism to solve the storage space allocation problem in competitive camera systems. It allows for better system-wide visual quality than a simple split allocation given the limited system knowledge, trust, and resource constraints.
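    As a rough illustration of the control viewpoint (not the thesis's controllers), the sketch below splits link capacity among cameras in proportion to their measured demand and nudges each camera's compression parameter with an integral-style update so its bitrate converges toward its allocation. Names, units, and the gain are assumptions.

```python
# Toy bandwidth allocation and compression adjustment for a camera system.
def allocate(demands_mbps, capacity_mbps):
    # proportional split of the available bandwidth among cameras
    total = sum(demands_mbps)
    return [capacity_mbps * d / total for d in demands_mbps]

def adjust_compression(q, measured_mbps, allocated_mbps, gain=0.05):
    # integral-style update of the compression parameter q (higher q = more compression):
    # compress more when the camera exceeds its allocation, less when it is below
    return max(0.0, min(1.0, q + gain * (measured_mbps - allocated_mbps)))

shares = allocate([4.0, 2.0, 6.0], capacity_mbps=9.0)
print(shares, adjust_compression(0.5, measured_mbps=4.0, allocated_mbps=shares[0]))
```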