
    Nash Equilibria, collusion in games and the coevolutionary particle swarm algorithm

    In recent work, we presented a deterministic algorithm to investigate collusion between players in a game where the players' payoff functions are subject to a variational inequality describing the equilibrium of a transportation system. In investigating the potential for collusion between players, the diagonalization algorithm returned a local optimum. In this paper, we apply a coevolutionary particle swarm optimization (PSO) algorithm developed in earlier research in an attempt to return the global maximum. A numerical experiment is used to verify the performance of the algorithm in overcoming local optima.
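    For readers unfamiliar with the underlying heuristic, below is a minimal sketch of a single-swarm PSO maximizer; the payoff function, bounds, and swarm parameters are illustrative assumptions, and the paper's coevolutionary variant (roughly, one interacting sub-swarm per player) is not reproduced here.

    ```python
    import numpy as np

    def pso_maximize(payoff, dim, n_particles=30, iters=200,
                     w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
        """Plain single-swarm PSO maximizer; the paper's coevolutionary
        variant would instead evolve interacting sub-swarms, one per player."""
        lo, hi = bounds
        x = np.random.uniform(lo, hi, (n_particles, dim))  # particle positions
        v = np.zeros_like(x)                               # particle velocities
        pbest = x.copy()                                   # personal bests
        pbest_val = np.array([payoff(p) for p in x])
        gbest = pbest[pbest_val.argmax()].copy()           # global best
        for _ in range(iters):
            r1 = np.random.rand(n_particles, dim)
            r2 = np.random.rand(n_particles, dim)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            vals = np.array([payoff(p) for p in x])
            better = vals > pbest_val
            pbest[better], pbest_val[better] = x[better], vals[better]
            gbest = pbest[pbest_val.argmax()].copy()
        return gbest, pbest_val.max()

    # Toy usage: a concave payoff whose global maximum (0 at the origin) is known.
    best_x, best_val = pso_maximize(lambda p: -float(np.sum(p ** 2)), dim=4)
    ```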

    A Brief Review on Mathematical Tools Applicable to Quantum Computing for Modelling and Optimization Problems in Engineering

    Since its emergence, quantum computing has enabled a wide spectrum of new possibilities and advantages, including its efficiency in accelerating computational processes exponentially. This has directed much research towards completely novel ways of solving a wide variety of engineering problems, especially through describing quantum versions of many mathematical tools such as Fourier and Laplace transforms, differential equations, systems of linear equations, and optimization techniques, among others. Exploration and development in this direction will revolutionize the world of engineering. In this manuscript, we review the state of the art of these emerging techniques from the perspective of quantum computer development and performance optimization, with a focus on the most common mathematical tools that support engineering applications. The review also identifies the challenges and limitations related to the exploitation of quantum computing and outlines the main opportunities for future contributions. It aims to offer a valuable reference for researchers in fields of engineering that are likely to turn to quantum computing for solutions. DOI: 10.28991/ESJ-2023-07-01-020

    Co-evolutionary Hybrid Bi-level Optimization

    Multi-level optimization stems from the need to tackle complex problems involving multiple decision makers. Two-level optimization, referred to as ``bi-level optimization'', occurs when two decision makers each control only part of the decision variables but impact each other (e.g., objective value, feasibility). Bi-level problems are sequential by nature and can be represented as nested optimization problems in which one problem (the ``upper level'') is constrained by another one (the ``lower level''). The nested structure is a real obstacle that can be highly time consuming when the lower level is $\mathcal{NP}$-hard. Consequently, classical nested optimization should be avoided. Some surrogate-based approaches have been proposed to approximate the lower-level objective value function (or variables) in order to reduce the number of times the lower level is globally optimized. Unfortunately, such a methodology is not applicable to large-scale and combinatorial bi-level problems. After a deep study of theoretical properties and a survey of existing applications that are bi-level by nature, problems which can benefit from a bi-level reformulation are investigated.

    A first contribution of this work is a novel bi-level clustering approach. Extending the well-known ``uncapacitated k-median problem'', it is shown that clustering can easily be modeled as a two-level optimization problem using decomposition techniques. The resulting two-level problem is then turned into a bi-level problem, offering the possibility to combine distance metrics in a hierarchical manner. The novel bi-level clustering problem has a very interesting property that enables us to tackle it with classical nested approaches: its lower-level problem can be solved in polynomial time. In cooperation with the Luxembourg Centre for Systems Biomedicine (LCSB), this new clustering model has been applied to real datasets such as disease maps (e.g., Parkinson's, Alzheimer's). Using a novel hybrid and parallel genetic algorithm as the optimization approach, the results obtained after a campaign of experiments produce new knowledge compared to classical clustering techniques that combine distance metrics in the usual manner.

    The previous bi-level clustering model has the advantage that its lower level can be solved in polynomial time, although the global problem is by definition $\mathcal{NP}$-hard. Subsequent investigations therefore tackled more general bi-level problems in which the lower-level problem does not present any such advantageous properties. Since the lower-level problem can be very expensive to solve, the focus turned to surrogate-based approaches and hyper-parameter optimization techniques, with the aim of approximating the lower-level problem and reducing the number of global lower-level optimizations. By adapting the well-known Bayesian optimization algorithm to solve general bi-level problems, the number of expensive lower-level optimizations has been dramatically reduced while still obtaining very accurate solutions. The resulting solutions and the number of spared lower-level optimizations have been compared to the results of the bi-level evolutionary algorithm based on quadratic approximations (BLEAQ) after a campaign of experiments on official bi-level benchmarks. Although both approaches are very accurate, the bi-level Bayesian version required fewer lower-level objective function calls.

    Surrogate-based approaches are restricted to small-scale and continuous bi-level problems, although many real applications are combinatorial by nature. As for continuous problems, a study has been performed to apply machine learning strategies. Instead of approximating the lower-level solution value, new approximation algorithms for the discrete/combinatorial case have been designed. Using the principle employed in GP hyper-heuristics, heuristics are trained to tackle efficiently the $\mathcal{NP}$-hard lower level of bi-level problems. This automatic generation of heuristics makes it possible to break the nested structure into two separate phases: \emph{training lower-level heuristics} and \emph{solving the upper-level problem with the new heuristics}. On this occasion, a second modeling contribution is introduced through a novel large-scale and mixed-integer bi-level problem dealing with pricing in the cloud, i.e., the Bi-level Cloud Pricing Optimization Problem (BCPOP). After a series of experiments that consisted of training heuristics on various lower-level instances of the BCPOP and using them to tackle the bi-level problem itself, the obtained results are compared to the ``cooperative coevolutionary algorithm for bi-level optimization'' (COBRA). Although training heuristics makes it possible to \emph{break the nested structure}, a two-phase optimization is still required. Therefore, the emphasis has been put on training heuristics while optimizing the upper-level problem using competitive co-evolution. Instead of adopting the classical decomposition scheme used by COBRA, which suffers from the strong epistatic links between lower-level and upper-level variables, co-evolving the solution and the means to reach it copes with these epistatic link issues. The ``CARBON'' algorithm developed in this thesis is a competitive and hybrid co-evolutionary algorithm designed for this purpose. To validate the potential of CARBON, numerical experiments have been designed and the results compared to state-of-the-art algorithms. These results demonstrate that ``CARBON'' makes it possible to address nested optimization efficiently.
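    The cost of the nested structure described above can be made concrete with a small sketch: every upper-level evaluation triggers a full lower-level solve. The quadratic toy objectives and the grid search below are illustrative assumptions, not the thesis's models or algorithms.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def lower_level(x):
        """Follower's problem, solved to optimality on every call:
        min_y (y - x)^2, so the rational reaction is y(x) = x."""
        res = minimize_scalar(lambda y: (y - x) ** 2,
                              bounds=(-10, 10), method="bounded")
        return res.x

    def upper_objective(x):
        y = lower_level(x)  # each upper evaluation nests a full lower-level solve
        return (x - 1) ** 2 + (y - 2) ** 2

    # Naive nested scheme: exhaustive search at the upper level. With an NP-hard
    # lower level, each of these 2001 inner solves would be expensive, which is
    # exactly the cost that surrogates or trained heuristics aim to remove.
    xs = np.linspace(-10, 10, 2001)
    x_star = min(xs, key=upper_objective)
    print(f"upper x* = {x_star:.3f}, lower reaction y(x*) = {lower_level(x_star):.3f}")
    ```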

    Automated Telescience: Active Machine Learning Of Remote Dynamical Systems

    Automated science is an emerging field of research and technology that aims to extend the role of computers in science from a tool that stores and analyzes data to one that generates hypotheses and designs experiments. Despite the tremendous discoveries and advancements brought forth by the scientific method, it is a process that is fundamentally driven by human insight and ingenuity. Automated science aims to develop algorithms, protocols and design philosophies that are capable of automating the scientific process. This work presents advances in the field of automated science; its specific contributions fall into three categories: coevolutionary search methods and applications, inferring the underlying structure of dynamical systems, and remote-controlled automated science. First, a collection of coevolutionary search methods and applications is presented. These approaches include: a method to reduce the computational overhead of evolutionary algorithms via trainer selection strategies in a rank predictor framework, an approach to optimal experiment design for nonparametric models using Shannon information, and an application of coevolutionary algorithms to infer kinematic poses from RGBD images. Second, three algorithms are presented that infer the underlying structure of dynamical systems: a method to infer discrete-continuous hybrid dynamical systems from unlabeled data, an approach to discovering ordinary differential equations of arbitrary order, and a principle to uncover the existence and dynamics of hidden state variables that correspond to physical quantities in nonlinear differential equations. All of these algorithms are able to uncover structure in an unsupervised manner without any prior domain knowledge. Third, a remote-controlled, distributed system is demonstrated that autonomously generates scientific models by perturbing and observing a system in an intelligent fashion. By automating the components of physical experimentation, scientific modeling and experimental design, models of luminescent chemical reactions and multi-compartmental pharmacokinetic systems were discovered without any human intervention, which illustrates how a set of distributed machines can contribute scientific knowledge while scaling beyond geographic constraints.
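    To give a flavor of the second category (discovering differential equations from data), the sketch below fits finite-difference derivative estimates against a small library of candidate terms. This regression-style stand-in is an illustrative assumption, not the thesis's method, and the toy system is invented for the example.

    ```python
    import numpy as np

    # Toy trajectory from a known system, dx/dt = -2x, treated as observed data.
    t = np.linspace(0.0, 2.0, 400)
    x = 3.0 * np.exp(-2.0 * t)                      # closed-form solution, x(0) = 3

    dxdt = np.gradient(x, t)                        # finite-difference derivative estimate
    library = np.column_stack([x, x ** 2, x ** 3])  # candidate right-hand-side terms
    coef, *_ = np.linalg.lstsq(library, dxdt, rcond=None)
    # Recovers approximately {x: -2, x^2: 0, x^3: 0}, up to finite-difference error.
    print(dict(zip(["x", "x^2", "x^3"], np.round(coef, 3))))
    ```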

    Analysis, design and optimization of offshore power system network

    Ph.D. (Doctor of Philosophy)

    An Investigation of Factors Influencing Algorithm Selection for High Dimensional Continuous Optimisation Problems

    The problem of algorithm selection is of great importance to the optimisation community, with a number of publications present in the body of knowledge. This importance stems from the consequences of the No-Free-Lunch Theorem, which states that there cannot exist a single algorithm capable of solving all possible problems. Despite this importance, however, the algorithm selection problem has as yet failed to gain widespread attention. In particular, little to no work in this area has been carried out with a focus on large-scale optimisation, a field quickly gaining momentum in line with advancements in and the influence of big data processing. As such, it is not yet clear what factors, if any, influence the selection of algorithms for very high-dimensional problems (> 1000 dimensions); it is entirely possible that algorithms that do not work well in lower dimensions may in fact work well in much higher-dimensional spaces, and vice versa. This work therefore aims to begin addressing this knowledge gap by investigating some of these influencing factors for some common metaheuristic variants. To this end, typical parameters native to several metaheuristic algorithms are first tuned using the state-of-the-art automatic parameter tuner SMAC. Tuning produces separate parameter configurations of each metaheuristic for each of a set of continuous benchmark functions; specifically, for every algorithm-function pairing, configurations are found for each dimensionality of the function on a geometrically increasing scale (from 2 to 1500 dimensions). This tuning is therefore highly computationally expensive, necessitating the use of SMAC. Using these sets of parameter configurations, a vast amount of performance data relating to the large-scale optimisation of our benchmark suite by each metaheuristic was subsequently generated. From the generated data and its analysis, several behaviours exhibited by the metaheuristics when applied to large-scale optimisation have been identified and discussed. Further, this thesis provides a concise review of the relevant literature for other researchers looking to progress in this area, in addition to the large volume of data produced on the large-scale optimisation of our benchmark suite by the applied set of common metaheuristics. All work presented in this thesis was funded by EPSRC grant EP/J017515/1 through the DAASE project.
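    As a rough illustration of the per-dimensionality tuning setup described above, the sketch below tunes the step size of a toy metaheuristic with a random-search tuner standing in for SMAC (whose real API is not shown). The benchmark function, the (1+1)-ES, and all parameter ranges are illustrative assumptions.

    ```python
    import random

    def sphere(xs):
        """Stand-in continuous benchmark function."""
        return sum(v * v for v in xs)

    def one_plus_one_es(sigma, dim, evals=500, seed=0):
        """A simple (1+1) evolution strategy whose mutation step size
        sigma plays the role of the parameter being tuned."""
        rng = random.Random(seed)
        x = [rng.uniform(-5, 5) for _ in range(dim)]
        fx = sphere(x)
        for _ in range(evals):
            y = [v + rng.gauss(0, sigma) for v in x]
            fy = sphere(y)
            if fy < fx:
                x, fx = y, fy
        return fx

    # Random-search tuner standing in for SMAC: one tuned configuration per
    # (algorithm, function, dimensionality) triple, echoing the thesis's setup.
    for dim in [2, 10, 100, 1500]:
        sigmas = [10 ** random.uniform(-3, 1) for _ in range(15)]
        best_sigma = min(sigmas, key=lambda s: one_plus_one_es(s, dim))
        print(f"dim={dim}: tuned sigma = {best_sigma:.4f}")
    ```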