    Application of a new multi-agent Hybrid Co-evolution based Particle Swarm Optimisation methodology in ship design

    In this paper, a multiple objective 'Hybrid Co-evolution based Particle Swarm Optimisation' methodology (HCPSO) is proposed. This methodology is able to handle multiple objective optimisation problems in the area of ship design, where the simultaneous optimisation of several conflicting objectives is considered. The proposed method is a hybrid technique that merges the features of co-evolution and Nash equilibrium with an ε-disturbance technique to eliminate stagnation. The method also offers a way to identify an efficient set of Pareto (conflicting) designs and to select a preferred solution amongst these designs. The combination of the co-evolution approach and Nash-optima contributes to HCPSO by providing faster search and evolution characteristics. The design search is performed within a multi-agent design framework to facilitate distributed synchronous cooperation. The most widely used test functions from the literature on multiple objective optimisation are utilised to test HCPSO. In addition, a real case study, the internal subdivision problem of a ROPAX vessel, is provided to exemplify the applicability of the developed method.
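
    The abstract describes a multi-objective particle swarm search that maintains a set of Pareto designs. As a rough illustration of that core idea only, the Python sketch below runs a plain multi-objective PSO with an external non-dominated archive on a standard bi-objective test function; the co-evolution, Nash-equilibrium and ε-disturbance components of HCPSO are omitted, and every name and parameter value here is an assumption rather than something taken from the paper.

        import random

        def evaluate(x):
            # Schaffer's bi-objective test problem: minimise both f1 and f2.
            return (x * x, (x - 2.0) ** 2)

        def dominates(a, b):
            # a dominates b if it is no worse in every objective and better in at least one.
            return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

        def mopso(n_particles=30, iters=100, lo=-5.0, hi=5.0, w=0.5, c1=1.5, c2=1.5):
            xs = [random.uniform(lo, hi) for _ in range(n_particles)]
            vs = [0.0] * n_particles
            pbest = [(x, evaluate(x)) for x in xs]      # personal bests: (position, objectives)
            archive = []                                # external archive of non-dominated designs

            def update_archive(x, f):
                nonlocal archive
                if any(dominates(fa, f) for _, fa in archive):
                    return
                archive = [(xa, fa) for xa, fa in archive if not dominates(f, fa)]
                archive.append((x, f))

            for x, f in pbest:
                update_archive(x, f)

            for _ in range(iters):
                for i in range(n_particles):
                    gx, _ = random.choice(archive)      # leader drawn from the Pareto archive
                    vs[i] = (w * vs[i]
                             + c1 * random.random() * (pbest[i][0] - xs[i])
                             + c2 * random.random() * (gx - xs[i]))
                    xs[i] = min(hi, max(lo, xs[i] + vs[i]))
                    f = evaluate(xs[i])
                    if dominates(f, pbest[i][1]):       # keep only strictly better personal bests
                        pbest[i] = (xs[i], f)
                    update_archive(xs[i], f)
            return archive

        if __name__ == "__main__":
            front = sorted(f for _, f in mopso())
            print(len(front), "non-dominated points, e.g.", front[:3])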

    Preliminary space mission design under uncertainty

    This paper proposes a way to model uncertainties and to introduce them explicitly in the design process of a preliminary space mission. Traditionally, a system margin approach is used in order to take them into account. In this paper, Evidence Theory is proposed to crystallise the inherent uncertainties. The design process is then formulated as an Optimisation Under Uncertainty (OUU). Three techniques are proposed to solve the OUU problem: (a) an evolutionary multi-objective approach, (b) a step technique consisting of maximising the belief for different levels of performance, and (c) a clustering method that first identifies feasible regions. The three methods are applied to the BepiColombo mission and their effectiveness at solving the OUU problem is compared.
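
    To make the Evidence Theory ingredient concrete, the short sketch below computes the belief and plausibility that a toy performance requirement is met, given interval-valued focal elements with basic probability assignments. The intervals, masses, threshold and the monotone performance model are all invented for illustration; this is not the paper's mission model or its OUU solver.

        # Interval-valued focal elements for one uncertain design parameter, each with a
        # basic probability assignment (bpa); the numbers are purely illustrative.
        focal_elements = [((0.00, 0.05), 0.5),
                          ((0.03, 0.10), 0.3),
                          ((0.08, 0.15), 0.2)]

        def requirement_met(margin, threshold=0.9):
            # Hypothetical, monotone performance model: the goal is met while the
            # delivered performance 1 - margin stays above the threshold.
            return 1.0 - margin >= threshold

        def belief_and_plausibility(elements, requirement):
            # Belief: mass of intervals whose every point meets the requirement
            # (endpoint checks suffice because the toy model is monotone).
            bel = sum(m for (lo, hi), m in elements if requirement(lo) and requirement(hi))
            # Plausibility: mass of intervals where at least some point meets it.
            pl = sum(m for (lo, hi), m in elements if requirement(lo) or requirement(hi))
            return bel, pl

        bel, pl = belief_and_plausibility(focal_elements, requirement_met)
        print(f"Belief = {bel:.2f}, Plausibility = {pl:.2f}")   # 0.80 and 1.00 here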

    Global Trajectory Optimisation: Can We Prune the Solution Space When Considering Deep Space Manoeuvres? [Final Report]

    This document contains a report on the work done under the ESA/Ariadna study 06/4101 on the global optimization of space trajectories with multiple gravity assists (GA) and deep space manoeuvres (DSM). The study was performed by a joint team of scientists from the University of Reading and the University of Glasgow.

    Stochastic Optimization in Econometric Models – A Comparison of GA, SA and RSG

    This paper shows that, in the case of an econometric model with high sensitivity to data, using stochastic optimization algorithms is better than using classical gradient techniques. In addition, we showed that the Repetitive Stochastic Guesstimation (RSG) algorithm, invented by Charemza, is closer to Simulated Annealing (SA) than to Genetic Algorithms (GAs), so we produced hybrids between RSG and SA to study their joint behavior. The evaluation of all algorithms involved was performed on a short form of the Romanian macro model, derived from Dobrescu (1996). The subject of optimization was the model's solution, as a function of the initial values (in the first stage) and of the objective functions (in the second stage). We proved that a priori information helps "elitist" algorithms (like RSG and SA) obtain the best results; on the other hand, when one has equal belief concerning the choice among different objective functions, GA gives a straight answer. Analyzing the average related bias of the model's solution proved the efficiency of the stochastic optimization methods presented. Keywords: underground economy, Laffer curve, informal activity, fiscal policy, transition, macroeconomic model, stochastic optimization, evolutionary algorithms, Repetitive Stochastic Guesstimation.
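
    As a minimal illustration of the simulated-annealing side of the comparison, the sketch below anneals a two-parameter toy criterion. The loss function, targets, cooling schedule and step size are placeholders; this is not the Romanian macro model, the RSG algorithm, or the hybrids studied in the paper.

        import math
        import random

        def model_loss(params, targets=(1.0, 0.5)):
            # Toy stand-in for the estimation criterion: squared deviation of a tiny
            # two-equation "model solution" from target values.
            a, b = params
            return (a * b - targets[0]) ** 2 + (a - b - targets[1]) ** 2

        def simulated_annealing(loss, x0, t0=1.0, cooling=0.95, steps=2000, scale=0.1):
            x, fx = list(x0), loss(x0)
            best, fbest = list(x), fx
            t = t0
            for _ in range(steps):
                cand = [xi + random.gauss(0.0, scale) for xi in x]   # local random guess
                fc = loss(cand)
                # Always accept improvements; accept worse moves with Boltzmann probability.
                if fc < fx or random.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
                    x, fx = cand, fc
                    if fx < fbest:
                        best, fbest = list(x), fx
                t *= cooling
            return best, fbest

        print(simulated_annealing(model_loss, [0.0, 0.0]))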

    Robust Mission Design Through Evidence Theory and Multi-Agent Collaborative Search

    In this paper, the preliminary design of a space mission is approached by introducing uncertainties on the design parameters and formulating the resulting reliable design problem as a multiobjective optimization problem. Uncertainties are modelled through evidence theory, and the belief, or credibility, in the successful achievement of mission goals is maximised along with the reliability of constraint satisfaction. The multiobjective optimisation problem is solved through a novel algorithm based on the collaboration of a population of agents in search of the set of highly reliable solutions. Two typical problems in mission analysis are used to illustrate the proposed methodology.
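
    The sketch below illustrates only the collaboration pattern hinted at here: a population of agents making local moves and pooling their non-dominated results in a shared archive, with two toy functions standing in for belief and constraint reliability. It is not the paper's Multi-Agent Collaborative Search algorithm, and every function and parameter is an assumption.

        import random

        def objectives(x):
            # Toy stand-ins for the two maximised quantities (belief in meeting the mission
            # goals, reliability of constraint satisfaction) as functions of one design variable.
            return (1.0 / (1.0 + (x - 1.0) ** 2), 1.0 / (1.0 + (x + 1.0) ** 2))

        def dominates(a, b):
            return all(ai >= bi for ai, bi in zip(a, b)) and any(ai > bi for ai, bi in zip(a, b))

        def collaborative_search(n_agents=10, iters=200, step=0.2, lo=-3.0, hi=3.0):
            agents = [random.uniform(lo, hi) for _ in range(n_agents)]
            archive = []                          # non-dominated solutions shared by all agents

            def share(x, f):
                nonlocal archive
                if any(dominates(fa, f) for _, fa in archive):
                    return
                archive = [(xa, fa) for xa, fa in archive if not dominates(f, fa)]
                archive.append((x, f))

            for _ in range(iters):
                for i, x in enumerate(agents):
                    cand = min(hi, max(lo, x + random.gauss(0.0, step)))   # local move
                    # Keep dominating moves; occasionally keep a worse one to explore.
                    if dominates(objectives(cand), objectives(x)) or random.random() < 0.1:
                        agents[i] = cand
                    share(agents[i], objectives(agents[i]))
            return archive

        print(len(collaborative_search()), "non-dominated agent solutions found")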

    A Survey on Compiler Autotuning using Machine Learning

    Since the mid-1990s, researchers have been trying to use machine learning-based approaches to solve a number of different compiler optimization problems. These techniques primarily enhance the quality of the obtained results and, more importantly, make it feasible to tackle two main compiler optimization problems: optimization selection (choosing which optimizations to apply) and phase ordering (choosing the order in which to apply them). The compiler optimization space continues to grow due to the advancement of applications, the increasing number of compiler optimizations, and new target architectures. Generic optimization passes in compilers cannot fully leverage newly introduced optimizations and, therefore, cannot keep up with the pace of increasing options. This survey summarizes and classifies the recent advances in using machine learning for compiler optimization, particularly on the two major problems of (1) selecting the best optimizations and (2) the phase ordering of optimizations. The survey highlights the approaches taken so far, the results obtained, the fine-grained classification among different approaches and, finally, the influential papers of the field.
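
    As a toy illustration of what "learning to choose optimizations" can look like, the sketch below predicts a pass order for a new function with a one-nearest-neighbour lookup over hand-made static features. The feature set, labels and training points are fabricated for the example and are not taken from the survey or from any real compiler.

        # Synthetic training data: static code features -> which of two pass orderings
        # performed best. Features (illustrative): [loop depth, trip-count estimate,
        # memory ops per instruction]; labels are made-up pass orders.
        training = [
            ([1, 16,   0.1], "default-O2"),
            ([1, 8,    0.2], "default-O2"),
            ([2, 256,  0.5], "vectorise-then-unroll"),
            ([3, 1024, 0.6], "vectorise-then-unroll"),
        ]

        def predict_pass_order(features):
            # One-nearest-neighbour over the raw feature vectors: the simplest possible
            # "learned" heuristic for picking a phase order for an unseen function.
            def sq_dist(a, b):
                return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
            return min(training, key=lambda t: sq_dist(t[0], features))[1]

        print(predict_pass_order([3, 512, 0.7]))   # -> "vectorise-then-unroll"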

    A multi-objective performance optimisation framework for video coding

    Digital video technologies have become an essential part of the way visual information is created, consumed and communicated. However, due to the unprecedented growth of digital video technologies, competition for bandwidth resources has become fierce. This has highlighted a critical need for optimising the performance of video encoders. This is a dual optimisation problem, wherein the objective is to reduce buffer and memory requirements while maintaining the quality of the encoded video. Additionally, the analysis of existing video compression techniques showed that the operation of video encoders requires the optimisation of numerous decision parameters to achieve the best trade-offs between the factors that affect visual quality, given the resource limitations arising from operational constraints such as memory and complexity. The research in this thesis focused on optimising the performance of the H.264/AVC video encoder, a process that involved finding solutions for multiple conflicting objectives. As part of this research, an automated tool was developed for optimising video compression to achieve an optimal trade-off between bit rate and visual quality, given maximum allowed memory and computational complexity constraints, within a diverse range of scene environments. Moreover, the evaluation of this optimisation framework has highlighted the effectiveness of the developed solution.
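
    A minimal sketch of the kind of trade-off selection such a framework performs: filter candidate encoder configurations by memory and complexity budgets, then keep the rate-quality Pareto front. The configuration labels, bit rates, PSNR values and budgets are invented for illustration and do not come from the thesis or from H.264/AVC itself.

        # Illustrative candidate encoder configurations:
        # (label, bit rate in kbps, PSNR in dB, memory in MB, relative complexity).
        configs = [
            ("fast-search",   1400, 35.8, 40, 0.6),
            ("full-search",   1500, 37.0, 95, 1.0),
            ("medium-search", 1600, 36.7, 60, 0.8),
            ("low-memory",    2000, 35.5, 25, 0.5),
        ]

        def rate_quality_front(candidates, max_memory_mb=70, max_complexity=0.9):
            # Discard configurations that break the memory/complexity budgets, then keep
            # those not beaten on both bit rate (lower is better) and PSNR (higher is better).
            feasible = [c for c in candidates if c[3] <= max_memory_mb and c[4] <= max_complexity]
            return [c for c in feasible
                    if not any(o[1] <= c[1] and o[2] >= c[2] and (o[1] < c[1] or o[2] > c[2])
                               for o in feasible)]

        for label, rate, psnr, _, _ in rate_quality_front(configs):
            print(f"{label}: {rate} kbps at {psnr} dB")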

    Particle Swarm Optimization

    Particle swarm optimization (PSO) is a population-based stochastic optimization technique inspired by the social behavior of bird flocking or fish schooling. PSO shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GA). The system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GA, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. This book represents the contributions of top researchers in this field and will serve as a valuable tool for professionals in this interdisciplinary field.
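
    For readers new to the technique, the sketch below is a bare-bones single-objective PSO with the usual inertia, cognitive and social terms, minimising a sphere function. The coefficients and bounds are conventional textbook choices rather than anything prescribed by this book.

        import random

        def sphere(x):
            # Simple benchmark objective to minimise.
            return sum(xi * xi for xi in x)

        def pso(f, dim=2, n_particles=20, iters=200, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5):
            pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
            vel = [[0.0] * dim for _ in range(n_particles)]
            pbest = [p[:] for p in pos]                     # each particle's best position
            pbest_val = [f(p) for p in pos]
            g = pbest_val.index(min(pbest_val))
            gbest, gbest_val = pbest[g][:], pbest_val[g]    # best position seen by the swarm

            for _ in range(iters):
                for i in range(n_particles):
                    for d in range(dim):
                        # Velocity update: inertia plus pulls towards personal and global bests.
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                     + c2 * random.random() * (gbest[d] - pos[i][d]))
                        pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                    val = f(pos[i])
                    if val < pbest_val[i]:
                        pbest[i], pbest_val[i] = pos[i][:], val
                        if val < gbest_val:
                            gbest, gbest_val = pos[i][:], val
            return gbest, gbest_val

        print(pso(sphere))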