    Cooperative Control for Multiple Autonomous Vehicles Using Descriptor Functions

    The paper presents a novel methodology for the control and management of a swarm of autonomous vehicles. The vehicles, or agents, may have different skills and be employed for different missions. The methodology is based on the definition of descriptor functions that model the capabilities of each single agent and of each task or mission. The swarm motion is controlled by minimizing a suitable norm of the error between the agents' descriptor functions and the descriptor function that models the entire mission. The validity of the proposed technique is tested via numerical simulation, using different task assignment scenarios.
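    The core idea, minimizing a norm of the error between the swarm's aggregate descriptor and the mission descriptor, can be made concrete with a minimal sketch. All specifics below (a 1-D workspace, Gaussian descriptor functions, a squared-L2 norm, numerical-gradient descent on agent positions) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of descriptor-function-based swarm control (illustrative only).
# Assumptions (not from the paper): 1-D workspace, Gaussian descriptor functions,
# squared-L2 error norm, numerical-gradient descent on agent positions.
import numpy as np

GRID = np.linspace(0.0, 10.0, 200)          # discretised workspace

def descriptor(center, width=0.5):
    """Gaussian descriptor function centred at an agent or task location."""
    return np.exp(-((GRID - center) ** 2) / (2.0 * width ** 2))

def swarm_descriptor(positions):
    """Aggregate descriptor of the whole swarm (sum of agent descriptors)."""
    return sum(descriptor(p) for p in positions)

def cost(positions, task):
    """Squared L2 norm of the error between swarm and task descriptors."""
    err = task - swarm_descriptor(positions)
    return np.trapz(err ** 2, GRID)

# Mission descriptor: two regions the mission requires to be covered.
task = descriptor(2.0, 0.8) + descriptor(7.0, 0.8)

positions = np.array([4.0, 5.0, 6.0])       # initial agent positions
step, eps = 0.5, 1e-4
for _ in range(200):                         # gradient descent on the error norm
    grad = np.array([(cost(positions + eps * np.eye(len(positions))[i], task)
                      - cost(positions, task)) / eps
                     for i in range(len(positions))])
    positions -= step * grad

print("final agent positions:", np.round(positions, 2))
```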

    An overview of population-based algorithms for multi-objective optimisation

    In this work we present an overview of the most prominent population-based algorithms and the methodologies used to extend them to multiple-objective problems. Although not exact in the mathematical sense, population-based multi-objective optimisation techniques have long been recognised as immensely valuable and versatile for real-world applications. These techniques are usually employed when exact optimisation methods are not easily applicable or when, due to sheer complexity, such methods could be prohibitively costly. Another advantage is that, since a population of decision vectors is considered in each generation, these algorithms are implicitly parallelisable and can generate an approximation of the entire Pareto front at each iteration. A critique of their capabilities is also provided.
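    As a small illustration of one step such algorithms perform each generation, the following sketch extracts the non-dominated (Pareto) subset of a population of objective vectors; the minimisation convention and the example data are assumptions made for the sketch, not material from the survey.

```python
# Illustrative sketch: extracting the non-dominated (Pareto) front from a
# population of objective vectors. Assumes minimisation of every objective;
# names and data are hypothetical.
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population: List[Sequence[float]]) -> List[Sequence[float]]:
    """Return the objective vectors not dominated by any other member."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Example: two objectives to minimise (e.g. cost and weight).
pop = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(pareto_front(pop))   # (3.0, 4.0) is dominated by (2.0, 3.0)
```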

    Parallel surrogate-assisted global optimization with expensive functions – a survey

    Surrogate-assisted global optimization is gaining popularity. Similarly, modern advances in computing power increasingly rely on parallelization rather than faster processors. This paper examines some of the methods used to take advantage of parallelization in surrogate-based global optimization. A key issue focused on in this review is how different algorithms balance exploration and exploitation. Most of the papers surveyed are adaptive samplers that employ Gaussian Process or Kriging surrogates. These allow sophisticated approaches for balancing exploration and exploitation, and even allow algorithms to be developed with a calculable rate of convergence as a function of the number of parallel processors. In addition to optimization based on adaptive sampling, surrogate-assisted parallel evolutionary algorithms are also surveyed. Beyond a review of the present state of the art, the paper also argues that methods that provide easy parallelization, such as multiple parallel runs, or methods that rely on a population of designs for diversity, deserve more attention.
    Funding: United States. Dept. of Energy (National Nuclear Security Administration, Advanced Simulation and Computing Program, Cooperative Agreement under the Predictive Academic Alliance Program, DE-NA0002378).
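    To make the exploration/exploitation trade-off concrete, the following is a minimal sketch of batch adaptive sampling with a Gaussian Process surrogate, using a constant-liar heuristic and a lower-confidence-bound criterion; the test function, kernel, batch size, and acquisition rule are assumptions chosen for illustration and do not come from the survey.

```python
# Illustrative sketch of batch (parallel) adaptive sampling with a GP surrogate,
# using the "constant liar" heuristic to pick q points per iteration.
# The test function, kernel and batch size are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_f(x):                     # stand-in for an expensive simulation
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(6, 1))      # initial design
y = expensive_f(X).ravel()

q = 4                                   # number of parallel evaluations per batch
for _ in range(5):                      # outer optimization iterations
    batch, Xf, yf = [], X.copy(), y.copy()
    for _ in range(q):                  # constant liar: pretend each pick returned y.min()
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6).fit(Xf, yf)
        cand = np.linspace(0, 5, 200).reshape(-1, 1)
        mu, sigma = gp.predict(cand, return_std=True)
        x_new = cand[np.argmin(mu - 2.0 * sigma)]   # lower confidence bound
        batch.append(x_new)
        Xf = np.vstack([Xf, x_new])
        yf = np.append(yf, yf.min())    # the "lie" until the real value arrives
    # evaluate the whole batch in parallel (done serially here for simplicity)
    X = np.vstack([X, batch])
    y = np.append(y, [expensive_f(b[0]) for b in batch])

print("best value found:", y.min())
```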

    Biochemical parameter estimation vs. benchmark functions: A comparative study of optimization performance and representation design

    Computational Intelligence methods, which include Evolutionary Computation and Swarm Intelligence, can efficiently and effectively identify optimal solutions to complex optimization problems by exploiting the cooperative and competitive interplay among their individuals. The exploration and exploitation capabilities of these meta-heuristics are typically assessed by considering well-known suites of benchmark functions specifically designed for numerical global optimization. However, their performance can change drastically on real-world optimization problems. In this paper, we investigate this issue by considering the Parameter Estimation (PE) of biochemical systems, a common computational problem in the field of Systems Biology. To evaluate the effectiveness of various meta-heuristics in solving the PE problem, we compare their performance on a set of benchmark functions and on a set of synthetic biochemical models characterized by search spaces with an increasing number of dimensions. Our results show that some state-of-the-art optimization methods, able to largely outperform the other meta-heuristics on benchmark functions, achieve considerably poorer performance when applied to the PE problem. We also show that a limiting factor of these optimization methods concerns the representation of the solutions: indeed, by means of a simple semantic transformation, it is possible to turn these algorithms into competitive alternatives. We corroborate this finding by performing the PE of a model of metabolic pathways in red blood cells. Overall, this work argues that classic benchmark functions cannot be fully representative of all the features that make real-world optimization problems hard to solve; this is the case, in particular, for the PE of biochemical systems. We also show that optimization problems must be carefully analyzed in order to select an appropriate representation and actually obtain the performance promised by benchmark results.
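    The abstract does not name the semantic transformation; one common representation change in parameter estimation of biochemical models is a logarithmic reparameterization of the rate constants, sketched below purely as a hypothetical example.

```python
# Hypothetical illustration of a representation change for parameter estimation:
# the optimizer searches in log10 space so that kinetic constants spanning many
# orders of magnitude are explored uniformly. This is one common choice, not
# necessarily the transformation used in the paper.
import numpy as np

def simulate_model(k):
    """Stand-in for a biochemical simulator returning a model output for rates k."""
    return k[0] / (k[1] + 1.0)          # placeholder dynamics, not a real model

target = 0.05                            # hypothetical measured observable

def fitness_direct(k):
    """Fitness over raw rate constants (hard to search when scales differ wildly)."""
    return (simulate_model(k) - target) ** 2

def fitness_log(z):
    """Same fitness, but the search variables are log10 of the rate constants."""
    return fitness_direct(10.0 ** np.asarray(z))

# A meta-heuristic would now evolve z in, e.g., [-6, 3] per dimension instead of
# k in [1e-6, 1e3]; the mapping 10**z makes every decade equally easy to reach.
print(fitness_log([-2.0, 1.0]))
```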

    Cooperative Particle Swarm Optimization for Combinatorial Problems

    A particularly successful line of research for numerical optimization is the well-known computational paradigm of particle swarm optimization (PSO). In the PSO framework, candidate solutions are represented as particles that have a position and a velocity in a multidimensional search space. The direct representation of a candidate solution as a point that flies through hyperspace (i.e., Rn) seems to strongly predispose the PSO toward continuous optimization. However, while some attempts have been made towards developing PSO algorithms for combinatorial problems, these techniques usually encode candidate solutions as permutations instead of points in the search space and rely on additional local search algorithms. In this dissertation, I present extensions to PSO that, by incorporating a cooperative strategy, allow the PSO to solve combinatorial problems. The central hypothesis is that by allowing a set of particles, rather than one single particle, to represent a candidate solution, combinatorial problems can be solved by collectively constructing solutions. The cooperative strategy partitions the problem into components, where each component is optimized by an individual particle. Particles move in continuous space and communicate through a feedback mechanism that guides them in assessing their individual contribution to the overall solution. Three new PSO-based algorithms are proposed. Shared-space CCPSO and multi-space CCPSO provide two new cooperative strategies to split the combinatorial problem, and both models are tested on proven NP-hard problems. Multimodal CCPSO extends these combinatorial PSO algorithms to efficiently sample the search space in problems with multiple global optima. Shared-space CCPSO was evaluated on an abductive problem-solving task: the construction of parsimonious sets of independent hypotheses in diagnostic problems with direct causal links between disorders and manifestations. Multi-space CCPSO was used to solve a protein structure prediction subproblem, side-chain packing. Both models are evaluated against provably optimal solutions, and results show that both proposed PSO algorithms are able to find optimal or near-optimal solutions. The exploratory ability of multimodal CCPSO is assessed by evaluating both the quality and diversity of the solutions obtained in a protein sequence design problem, a highly multimodal problem. These results provide evidence that the extended PSO algorithms are capable of dealing with combinatorial problems without having to hybridize the PSO with other local search techniques or sacrifice the concept of particles moving through a continuous search space.
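    A generic sketch of the cooperative idea follows: each particle optimizes one component of the solution in continuous space and is evaluated only jointly, through a context built from the other particles' bests. The problem, decoding, and update rules below are simplified assumptions and do not reproduce the shared-space or multi-space mechanisms of the dissertation.

```python
# Simplified, hypothetical cooperative PSO-style sketch for a combinatorial task:
# one particle per solution component; each particle moves in [0, 1) and is
# decoded to a discrete choice; fitness feedback comes from the joint solution.
import random

N_COMPONENTS = 5                       # one particle per component of the solution
CHOICES = [0, 1, 2, 3]                 # discrete values each component may take

def decode(x):
    """Map a particle's continuous position to a discrete choice."""
    return CHOICES[min(int(x * len(CHOICES)), len(CHOICES) - 1)]

def joint_cost(solution):
    """Hypothetical combinatorial objective: penalize equal adjacent components."""
    return sum(1 for a, b in zip(solution, solution[1:]) if a == b)

pos = [random.random() for _ in range(N_COMPONENTS)]   # positions
vel = [0.0] * N_COMPONENTS                              # velocities
best_pos = pos[:]                                       # per-component bests
best_cost = joint_cost([decode(p) for p in pos])

for _ in range(200):
    for i in range(N_COMPONENTS):
        # Velocity update: attraction to the particle's best plus random exploration.
        r1, r2 = random.random(), random.random()
        vel[i] = 0.7 * vel[i] + 1.5 * r1 * (best_pos[i] - pos[i]) + 0.1 * (r2 - 0.5)
        pos[i] = min(max(pos[i] + vel[i], 0.0), 0.999)
        # Feedback: evaluate particle i's move in the context of the others' bests.
        trial = [decode(best_pos[j]) if j != i else decode(pos[i]) for j in range(N_COMPONENTS)]
        cost = joint_cost(trial)
        if cost <= best_cost:          # particle i keeps its contribution if it helps
            best_cost, best_pos[i] = cost, pos[i]

print("best solution:", [decode(p) for p in best_pos], "cost:", best_cost)
```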

    A survey on metaheuristics for stochastic combinatorial optimization

    Metaheuristics are general algorithmic frameworks, often nature-inspired, designed to solve complex optimization problems, and they have been a growing research area for a few decades. In recent years, metaheuristics have been emerging as successful alternatives to more classical approaches also for solving optimization problems that include uncertain, stochastic, and dynamic information in their mathematical formulation. In this paper metaheuristics such as Ant Colony Optimization, Evolutionary Computation, Simulated Annealing, Tabu Search and others are introduced, and their applications to the class of Stochastic Combinatorial Optimization Problems (SCOPs) are thoroughly reviewed. Issues common to all metaheuristics, open problems, and possible directions of research are proposed and discussed. In this survey, the reader familiar with metaheuristics will also find pointers to classical algorithmic approaches to optimization under uncertainty, together with useful information for starting to work in this problem domain, while the reader new to metaheuristics should find a good tutorial on those metaheuristics that are currently being applied to optimization under uncertainty, and motivations for interest in this field.
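    As a small, hypothetical illustration of how a metaheuristic can cope with a stochastic objective, the sketch below runs simulated annealing on a binary selection problem whose cost is estimated by a sample average; the problem, noise model, and cooling schedule are assumptions made only for the example.

```python
# Illustrative sketch: simulated annealing on a stochastic combinatorial problem,
# with each candidate's expected cost estimated by a sample average.
# The objective, noise model and schedule are hypothetical.
import math
import random

N_ITEMS, N_SAMPLES = 12, 30

def noisy_cost(x):
    """One stochastic realization of the cost of selection vector x."""
    base = sum((i + 1) * xi for i, xi in enumerate(x)) - 3.0 * sum(x)
    return base + random.gauss(0.0, 1.0)        # uncertain component

def estimated_cost(x):
    """Sample-average approximation of the expected cost."""
    return sum(noisy_cost(x) for _ in range(N_SAMPLES)) / N_SAMPLES

current = [random.randint(0, 1) for _ in range(N_ITEMS)]
current_cost = estimated_cost(current)
temperature = 5.0

for step in range(500):
    neighbour = current[:]
    neighbour[random.randrange(N_ITEMS)] ^= 1   # flip one item in or out
    neighbour_cost = estimated_cost(neighbour)
    delta = neighbour_cost - current_cost
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        current, current_cost = neighbour, neighbour_cost
    temperature *= 0.99                         # geometric cooling

print("selection:", current, "estimated cost:", round(current_cost, 2))
```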