6 research outputs found

    Distributed Particle Swarm Optimization using Optimal Computing Budget Allocation for Multi-Robot Learning

    Particle Swarm Optimization (PSO) is a population-based metaheuristic that can be applied to optimize controllers for multiple robots using only local information. To cope with noise in the robotic performance evaluations, different re-evaluation strategies have been proposed in the past. In this article, we apply a statistical technique called Optimal Computing Budget Allocation (OCBA) to improve the performance of distributed PSO in the presence of noise. In particular, we compare a distributed PSO OCBA algorithm suitable for resource-constrained mobile robots with a centralized version that uses global information for the allocation. We show that the distributed PSO OCBA outperforms a previous distributed noise-resistant PSO variant, and that the performance of the distributed PSO OCBA approaches that of the centralized one as the communication radius is increased. We also explore different parametrizations of the PSO OCBA algorithm, and show that the choice of parameter values differs from previous guidelines proposed for stand-alone OCBA.
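The classical stand-alone OCBA rule that this line of work builds on can be sketched as follows. This is a minimal illustration of the asymptotic allocation formula, not the authors' distributed implementation; the function name and test values are ours.

```python
import numpy as np

def ocba_allocation(means, stds, budget):
    """Sketch of the classical OCBA rule: split `budget` evaluations
    across designs so the probability of correctly selecting the best
    (here: lowest-mean) design is asymptotically maximized."""
    means = np.asarray(means, dtype=float)
    stds = np.asarray(stds, dtype=float)
    b = int(np.argmin(means))              # current best design
    delta = means - means[b]               # optimality gaps
    non_best = np.arange(len(means)) != b
    ratios = np.ones_like(means)
    # N_i proportional to (sigma_i / delta_i)^2 for non-best designs
    ratios[non_best] = (stds[non_best] / delta[non_best]) ** 2
    # N_b = sigma_b * sqrt(sum over non-best of (N_i / sigma_i)^2)
    ratios[b] = stds[b] * np.sqrt(np.sum((ratios[non_best] / stds[non_best]) ** 2))
    return np.round(ratios / ratios.sum() * budget).astype(int)

# designs that look similar (1.0 vs 1.2) receive most of the budget;
# the clearly worse design (2.0) receives almost none
print(ocba_allocation([1.0, 1.2, 2.0], [0.3, 0.3, 0.3], 100))
```

Note how the rule concentrates evaluations on designs that are hard to distinguish from the current best, which is exactly the behavior that re-evaluation strategies for noisy robotic fitness functions try to exploit.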

    Optimal computing budget allocation for small computing budgets

    In this paper, we develop an optimal computing budget allocation (OCBA) algorithm for selecting a subset of designs under the restriction of an extremely small computing budget. Such an algorithm is useful in population-based Evolutionary Algorithms (EA) and other applications that seek an elite subset of designs.
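The subset-selection setting can be illustrated with a simple sequential scheme: after a tiny initial allocation, spend each remaining evaluation on the design whose estimated mean sits closest to the selection boundary, where one extra sample is most informative. This is a hypothetical sketch of the problem setting, not the paper's OCBA-m allocation rule; all names and the toy noise model are ours.

```python
import random
import statistics

def select_elite_subset(evaluate, k, m, n0=2, budget=30, seed=0):
    """Pick the m lowest-cost designs out of k using only `budget`
    noisy evaluations: n0 samples each up front, then spend the rest
    near the m-th / (m+1)-th boundary of the ranked means."""
    rng = random.Random(seed)
    samples = {i: [evaluate(i, rng) for _ in range(n0)] for i in range(k)}
    for _ in range(budget - n0 * k):
        ranked = sorted(range(k), key=lambda i: statistics.fmean(samples[i]))
        boundary = 0.5 * (statistics.fmean(samples[ranked[m - 1]])
                          + statistics.fmean(samples[ranked[m]]))
        # sample the design whose mean is hardest to place w.r.t. the boundary
        target = min(range(k),
                     key=lambda i: abs(statistics.fmean(samples[i]) - boundary))
        samples[target].append(evaluate(target, rng))
    ranked = sorted(range(k), key=lambda i: statistics.fmean(samples[i]))
    return set(ranked[:m])

# toy problem: design i has true cost i, observed with Gaussian noise
subset = select_elite_subset(lambda i, rng: i + rng.gauss(0, 0.5), k=5, m=2)
print(subset)
```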

    Optimal Budget-Constrained Sample Allocation for Selection Decisions with Multiple Uncertain Attributes

    A decision-maker, when faced with a limited and fixed budget to collect data in support of a multiple attribute selection decision, must decide how many samples to observe from each alternative and attribute. This allocation decision is of particular importance when the information gained leads to uncertain estimates of the attribute values as with sample data collected from observations such as measurements, experimental evaluations, or simulation runs. For example, when the U.S. Department of Homeland Security must decide upon a radiation detection system to acquire, a number of performance attributes are of interest and must be measured in order to characterize each of the considered systems. We identified and evaluated several approaches to incorporate the uncertainty in the attribute value estimates into a normative model for a multiple attribute selection decision. Assuming an additive multiple attribute value model, we demonstrated the idea of propagating the attribute value uncertainty and describing the decision values for each alternative as probability distributions. These distributions were used to select an alternative. With the goal of maximizing the probability of correct selection we developed and evaluated, under several different sets of assumptions, procedures to allocate the fixed experimental budget across the multiple attributes and alternatives. Through a series of simulation studies, we compared the performance of these allocation procedures to the simple, but common, allocation procedure that distributed the sample budget equally across the alternatives and attributes. We found the allocation procedures that were developed based on the inclusion of decision-maker knowledge, such as knowledge of the decision model, outperformed those that neglected such information. 
Beginning with general knowledge of the attribute values provided by Bayesian prior distributions, and updating this knowledge with each observed sample, the sequential allocation procedure performed particularly well. These observations demonstrate that managing projects focused on a selection decision so that the decision modeling and the experimental planning are done jointly, rather than in isolation, can improve the overall selection results.
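The idea of propagating attribute-value uncertainty through an additive value model can be sketched with a small Monte Carlo example: each alternative's decision value becomes a distribution rather than a point, and the probability of each alternative being best falls out directly. The weights, means, and standard errors below are illustrative assumptions, not data from the work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 alternatives, 2 attributes, additive value model
# V_j = sum_a w_a * mu_{j,a}; each sample mean carries uncertainty.
weights = np.array([0.6, 0.4])
means = np.array([[0.70, 0.50],    # estimated attribute values per alternative
                  [0.65, 0.60],
                  [0.55, 0.55]])
sems = np.array([[0.05, 0.04],     # standard errors of those estimates
                 [0.05, 0.04],
                 [0.05, 0.04]])

# Propagate the attribute uncertainty by Monte Carlo: draw plausible
# attribute values, score each alternative, count who wins.
draws = rng.normal(means, sems, size=(10_000, 3, 2))
values = draws @ weights                        # shape (10000, 3)
p_best = np.bincount(values.argmax(axis=1), minlength=3) / len(values)
print(p_best)   # estimated P(alternative j has the highest value)
```

A budget-allocation procedure can then target the samples that most sharpen these selection probabilities, rather than splitting the budget equally across alternatives and attributes.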

    Distributed Multi-Robot Learning using Particle Swarm Optimization

    This thesis studies the automatic design and optimization of high-performing, robust controllers for mobile robots using exclusively on-board resources. Due to the often large parameter space and noisy performance metrics, this constitutes an expensive optimization problem. Population-based learning techniques have proven to be effective in dealing with noise and are thus promising tools for this problem. We focus this research on the Particle Swarm Optimization (PSO) algorithm, which, in addition to dealing with noise, allows a distributed implementation, speeding up the optimization process and adding robustness to failure of individual agents. In this thesis, we systematically analyze the different variables that affect the learning process for a multi-robot obstacle avoidance benchmark. These variables include algorithmic parameters, controller architecture, and learning and testing environments. The analysis is performed on experimental setups of increasing evaluation time and complexity: numerical benchmark functions, high-fidelity simulations, and experiments with real robots. Based on this analysis, we apply the distributed PSO framework to learn a more complex, collaborative task: flocking. This attempt to learn a collaborative task in a distributed manner on a large parameter space is, to our knowledge, the first of its kind. In addition, we address the problem of noisy performance evaluations encountered in these robotic tasks and present a new distributed PSO algorithm for dealing with noise, suitable for resource-constrained mobile robots due to its low memory and local communication requirements.
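The standard PSO velocity and position updates that this distributed framework builds on can be sketched in a few lines. This is a minimal centralized version with common default parameters, not the thesis' distributed, noise-resistant variant or its tuned settings.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=200, seed=0,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Standard PSO sketch: each particle is pulled toward its own best
    position (cognitive term) and the swarm's best (social term)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)            # personal-best values
    for _ in range(iters):
        g = pbest[pbest_f.argmin()]                   # global best position
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f                         # update personal bests
        pbest[better], pbest_f[better] = x[better], fx[better]
    return pbest[pbest_f.argmin()], float(pbest_f.min())

best_x, best_f = pso_minimize(lambda z: float(np.sum(z * z)), dim=3)
print(best_f)   # close to 0 for the sphere function
```

In a distributed multi-robot setting, the global best `g` is replaced by a neighborhood best obtained over local communication, which is what makes the communication radius a key variable in the experiments above.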

    Population Statistics for Particle Swarm Optimization on Problems Subject to Noise

    Particle Swarm Optimization (PSO) is a metaheuristic where a swarm of particles explores the search space of an optimization problem to find good solutions. However, if the problem is subject to noise, the quality of the resulting solutions significantly deteriorates. The literature has attributed such a deterioration to particles suffering from inaccurate memories and from the incorrect selection of their neighborhood best solutions. For both cases, the incorporation of noise mitigation mechanisms has improved the quality of the results, but the analyses beyond such improvements often fall short of empirical evidence supporting their claims in terms other than the quality of the results. Furthermore, there is not even evidence showing the extent to which inaccurate memories and incorrect selection affect the particles in the swarm. Therefore, the performance of PSO on noisy optimization problems remains largely unexplored. The overall goal of this thesis is to study the effect of noise on PSO beyond the known deterioration of its results in order to develop more efficient noise mitigation mechanisms. Based on the allocation of function evaluations by the noise mitigation mechanisms, we distinguish three groups of PSO algorithms as: single-evaluation, which sacrifice the accuracy of the objective values over performing more iterations; resampling-based, which sacrifice performing more iterations over better estimating the objective values; and hybrids, which merge methods from the previous two. With an empirical approach, we study and analyze the performance of existing and new PSO algorithms from each group on 20 large-scale benchmark functions subject to different levels of multiplicative Gaussian noise. Throughout the search process, we compute a set of 16 population statistics that measure different characteristics of the swarms and provide useful information that we utilize to design better PSO algorithms. 
Our study identifies and defines deception, blindness and disorientation as three conditions from which particles suffer in noisy optimization problems. The population statistics for different PSO algorithms reveal that particles often suffer from large proportions of deception, blindness and disorientation, and show that reducing these three conditions would lead to better results. The sensitivity of PSO to noisy optimization problems is confirmed and highlights the importance of noise mitigation mechanisms. The population statistics for single-evaluation PSO algorithms show that the commonly used evaporation mechanism produces too much disorientation, leading to divergent behaviour and to the worst results within the group. Two better algorithms are designed: the first utilizes probabilistic updates to reduce disorientation, and the second computes a centroid solution as the neighborhood best solution to reduce deception. The population statistics for resampling-based PSO algorithms show that basic resampling still leads to large proportions of deception and blindness, and its results are the worst within the group. Two better algorithms are designed to reduce deception and blindness. The first provides better estimates of the personal best solutions, and the second provides even better estimates of a few solutions from which the neighborhood best solutions are selected. However, an existing PSO algorithm is the best within the group as it strives to asymptotically minimize deception by sequentially reducing both blindness and disorientation. The population statistics for hybrid PSO algorithms show that they provide the best results thanks to a combined reduction of deception, blindness and disorientation. Amongst the hybrids, we find a promising algorithm whose simplicity, flexibility and quality of results question the importance of overly complex methods designed to minimize deception.
Overall, our research presents a thorough study to design, evaluate and tune PSO algorithms to address optimization problems subject to noise.
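The multiplicative Gaussian noise model and the basic resampling approach described above can be sketched briefly: a single observation of the objective is corrupted by a factor drawn around 1, and resampling trades iterations for averaging several observations into a better estimate. The sphere objective and parameter values are illustrative assumptions.

```python
import random
import statistics

def noisy_sphere(x, rng, noise=0.3):
    """Multiplicative Gaussian noise: the observed objective is
    f(x) * (1 + N(0, noise)) rather than the true value f(x)."""
    true = sum(v * v for v in x)
    return true * (1.0 + rng.gauss(0.0, noise))

def resampled_estimate(x, rng, n=25):
    """Basic resampling: average n noisy evaluations of the same point,
    spending budget on accuracy instead of on more iterations."""
    return statistics.fmean(noisy_sphere(x, rng) for _ in range(n))

rng = random.Random(1)
x = [1.0, 2.0]                   # true objective value = 5.0
single = noisy_sphere(x, rng)    # one noisy observation
averaged = resampled_estimate(x, rng)
print(single, averaged)
```

Averaging n samples shrinks the standard deviation of the estimate by a factor of sqrt(n), which is why resampling-based PSO variants suffer less blindness and deception at the cost of fewer iterations.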