8,319 research outputs found

    On the evolutionary optimisation of many conflicting objectives

    Get PDF
    This inquiry explores the effectiveness of a class of modern evolutionary algorithms, represented by Non-dominated Sorting Genetic Algorithm (NSGA) components, for solving optimisation tasks with many conflicting objectives. Optimiser behaviour is assessed over a grid of mutation and recombination operator configurations. Performance maps are obtained for the dual aims of proximity to, and distribution across, the optimal trade-off surface. The performance sweet-spots for both variation operators are observed to contract as the number of objectives increases. Classical settings for recombination are shown to be suitable for small numbers of objectives but to give very poor performance for larger numbers of objectives, even when large population sizes are used. Explanations for this behaviour are offered via the concepts of dominance resistance and active diversity promotion.
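    A minimal NumPy sketch of the two kinds of performance measure the abstract refers to: proximity to a known trade-off surface (a generational-distance measure) and distribution across it (a spacing measure). The reference front, population, and exact metric definitions below are illustrative assumptions, not the paper's own measures.

    ```python
    import numpy as np

    def generational_distance(front, reference):
        """Mean Euclidean distance from each obtained point to its nearest
        point on a sampled reference trade-off surface (proximity)."""
        dists = np.linalg.norm(front[:, None, :] - reference[None, :, :], axis=2)
        return dists.min(axis=1).mean()

    def spacing(front):
        """Standard deviation of nearest-neighbour distances within the
        obtained front (distribution uniformity; lower is more even)."""
        dists = np.linalg.norm(front[:, None, :] - front[None, :, :], axis=2)
        np.fill_diagonal(dists, np.inf)
        return dists.min(axis=1).std()

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n_obj = 3                                    # number of conflicting objectives
        # Reference surface: points on the unit simplex (a stand-in Pareto front).
        ref = rng.dirichlet(np.ones(n_obj), size=500)
        # "Obtained" population: reference points perturbed away from the surface.
        pop = ref[rng.choice(500, size=100, replace=False)] + 0.05 * rng.random((100, n_obj))
        print("proximity (GD):", generational_distance(pop, ref))
        print("distribution (spacing):", spacing(pop))
    ```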

    Optimization Under Uncertainty Using the Generalized Inverse Distribution Function

    Full text link
    A framework for robust optimization under uncertainty based on the generalized inverse distribution function (GIDF), also called the quantile function, is proposed here. Compared to more classical approaches that rely on statistical moments as deterministic attributes defining the objectives of the optimization process, the inverse cumulative distribution function allows all the information available in the probabilistic domain to be used. Furthermore, a quantile-based approach leads naturally to a multi-objective methodology, which allows an a posteriori selection of the candidate design based on risk/opportunity criteria defined by the designer. Finally, the error in the estimation of the objectives due to the resolution of the GIDF is proven to be quantifiable.
    Comment: 20 pages, 25 figures
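    A small sketch of the quantile-function idea, assuming a scalar performance measure sampled under input uncertainty: the empirical quantile function is evaluated at two designer-chosen levels and the two values are treated as separate objectives (opportunity vs. risk). The performance model, uncertainty distribution, and quantile levels are placeholders, not those of the paper.

    ```python
    import numpy as np

    def quantile_objectives(design, levels=(0.05, 0.95), n_samples=2000, rng=None):
        """Sample a performance measure under input uncertainty and return its
        empirical quantiles at the requested levels as separate objectives.
        The quadratic performance model below is a placeholder."""
        rng = np.random.default_rng() if rng is None else rng
        xi = rng.normal(0.0, 0.1, size=n_samples)            # uncertain operating condition
        performance = (design - 1.0) ** 2 + 2.0 * design * xi + xi ** 2
        return [np.quantile(performance, q) for q in levels]

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        for d in (0.5, 1.0, 1.5):
            lo, hi = quantile_objectives(d, rng=rng)
            # lo ~ "opportunity" (favourable tail), hi ~ "risk" (unfavourable tail):
            # both are minimized simultaneously in a multi-objective search.
            print(f"design={d:.1f}  q05={lo:.3f}  q95={hi:.3f}")
    ```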

    Simultaneous Optimal Uncertainty Apportionment and Robust Design Optimization of Systems Governed by Ordinary Differential Equations

    Get PDF
    The inclusion of uncertainty in design is of paramount practical importance because all real-life systems are affected by it. Designs that ignore uncertainty often lead to poor robustness, suboptimal performance, and higher build costs. The treatment of small geometric uncertainty in the context of manufacturing tolerances is a well-studied topic. Traditional sequential design methodologies have recently been replaced by concurrent optimal design methodologies in which optimal system parameters are determined simultaneously with optimally allocated tolerances; this reduces manufacturing costs while increasing performance. However, state-of-the-art approaches remain limited in that they can only treat geometry-related uncertainties that are restricted to be small in magnitude. This work proposes a novel framework to perform robust design optimization concurrently with optimal uncertainty apportionment for dynamical systems governed by ordinary differential equations. The proposed framework considerably expands the capabilities of contemporary methods by enabling the treatment of both geometric and non-geometric uncertainties in a unified manner. Additionally, uncertainties are allowed to be large in magnitude, and the governing constitutive relations may be highly nonlinear. In the proposed framework, uncertainties are modeled using Generalized Polynomial Chaos and quantified using a least-squares collocation method. The computational efficiency of this approach allows statistical moments of the uncertain system to be included explicitly in the optimization-based design process. The framework formulates design problems as constrained multi-objective optimization problems, thus enabling the characterization of a Pareto-optimal trade-off curve that is offset from the traditional deterministic optimal trade-off curve. The Pareto offset is shown to result from the additional statistical-moment information in the objective and constraint relations that accounts for the system uncertainties. The Pareto trade-off curve from the new framework therefore characterizes the entire family of systems within the probability space; consequently, designers are able to produce robust, optimally performing systems at an optimal manufacturing cost. A kinematic tolerance analysis case study is presented first to illustrate how the proposed methodology can be applied to treat geometric tolerances. A nonlinear vehicle suspension design problem, subject to parametric uncertainty, then illustrates the capability of the new framework to produce an optimal design at an optimal manufacturing cost, accounting for the entire family of systems within the associated probability space. This case study highlights the general nature of the new framework, which is capable of optimally allocating uncertainties of multiple types and with large magnitudes in a single calculation.
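    A compact sketch of the moment-estimation step described above: a one-dimensional polynomial chaos expansion in probabilists' Hermite polynomials is fitted to random collocation samples by least squares, and the mean and variance follow from the expansion coefficients. The response function, polynomial degree, and sample count are assumptions for illustration only.

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermevander
    from math import factorial

    def gpc_moments(response, degree=6, n_colloc=200, seed=0):
        """Fit a 1-D Hermite (gPC) expansion of response(xi), xi ~ N(0, 1),
        via least-squares collocation and return the implied mean and variance."""
        rng = np.random.default_rng(seed)
        xi = rng.standard_normal(n_colloc)            # collocation points
        Psi = hermevander(xi, degree)                 # He_0 .. He_degree evaluated at xi
        coeffs, *_ = np.linalg.lstsq(Psi, response(xi), rcond=None)
        mean = coeffs[0]                              # only He_0 has nonzero expectation
        var = sum(coeffs[k] ** 2 * factorial(k)       # E[He_k^2] = k! for standard-normal input
                  for k in range(1, degree + 1))
        return mean, var

    if __name__ == "__main__":
        # Nonlinear response with an uncertain parameter xi ~ N(0, 1).
        response = lambda xi: np.exp(0.3 * xi) + 0.1 * xi ** 3
        mean, var = gpc_moments(response)
        print(f"gPC mean ~ {mean:.4f}, variance ~ {var:.4f}")
    ```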

    Fidelity Between Unitary Operators and the Generation of Gates Robust Against Off-Resonance Perturbations

    Full text link
    We perform a functional expansion of the fidelity between two unitary matrices in order to find the necessary conditions for the robust implementation of a target gate. Comparison of these conditions with those obtained from the Magnus expansion and the Dyson series shows that they are equivalent to first order. By exploiting techniques from robust design optimization, we account for issues of experimental feasibility by introducing an additional criterion into the search for control pulses. This search is accomplished by exploring the competition between the multiple objectives in the implementation of the NOT gate by means of evolutionary multi-objective optimization.
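    A brief sketch of a standard fidelity measure between two unitaries, |Tr(U†V)|²/d², evaluated for a NOT (X) gate implemented by a constant pulse with an off-resonance detuning term. The pulse model and detuning values are illustrative assumptions, not the paper's control parameterization.

    ```python
    import numpy as np
    from scipy.linalg import expm

    SX = np.array([[0, 1], [1, 0]], dtype=complex)
    SZ = np.array([[1, 0], [0, -1]], dtype=complex)

    def gate_fidelity(U, V):
        """Fidelity |Tr(U^dagger V)|^2 / d^2 between two unitaries (1 = identical up to phase)."""
        d = U.shape[0]
        return abs(np.trace(U.conj().T @ V)) ** 2 / d ** 2

    def not_gate_with_detuning(delta):
        """Nominal pi rotation about x with an off-resonance term delta * sigma_z."""
        H = 0.5 * (SX + delta * SZ)          # dimensionless drive Hamiltonian
        return expm(-1j * np.pi * H)         # evolution over the nominal pulse duration

    if __name__ == "__main__":
        target = SX                          # ideal NOT gate (up to global phase)
        for delta in (0.0, 0.05, 0.1, 0.2):
            F = gate_fidelity(target, not_gate_with_detuning(delta))
            print(f"detuning {delta:.2f} -> fidelity {F:.4f}")
    ```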

    Symbolic macromodeling of parameterized S-parameter frequency responses

    Get PDF
    This paper presents an evolutionary algorithm for the symbolic macromodeling of parameterized frequency responses. The method does not require an a priori specification of the multivariate functional form or complexity of the model. Numerical results are shown to illustrate the performance of the technique.
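    As a loose illustration only (not the paper's algorithm), the snippet below scores how well a candidate closed-form macromodel reproduces tabulated S-parameter magnitudes over a grid of frequency and design-parameter values; such an error measure could serve as the fitness driving an evolutionary search. The reference response and candidate expression are invented placeholders.

    ```python
    import numpy as np

    def model_error(candidate, freqs, params, reference):
        """RMS error of a candidate symbolic model s = f(freq, param) against
        reference |S21| samples over a frequency x parameter grid."""
        F, P = np.meshgrid(freqs, params, indexing="ij")
        return float(np.sqrt(np.mean((candidate(F, P) - reference) ** 2)))

    if __name__ == "__main__":
        freqs = np.linspace(1e9, 10e9, 50)             # Hz
        params = np.linspace(0.1e-3, 1.0e-3, 10)       # e.g. a line width, in metres
        F, P = np.meshgrid(freqs, params, indexing="ij")
        # Placeholder "measured" |S21| surface standing in for full-wave simulation data.
        reference = 1.0 / (1.0 + (F * P * 1e-5) ** 2)
        # A candidate expression an evolutionary search might propose.
        candidate = lambda f, p: 1.0 / (1.0 + 0.95e-10 * (f * p) ** 2)
        print("fitness (RMS error):", model_error(candidate, freqs, params, reference))
    ```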

    PasMoQAP: A Parallel Asynchronous Memetic Algorithm for solving the Multi-Objective Quadratic Assignment Problem

    Full text link
    Multi-Objective Optimization Problems (MOPs) have attracted growing attention in recent decades. Multi-Objective Evolutionary Algorithms (MOEAs) have been used extensively to address MOPs because they are able to approximate a set of non-dominated, high-quality solutions. The Multi-Objective Quadratic Assignment Problem (mQAP) is one such MOP: it generalizes the classical QAP, which has been extensively studied and used in several real-life applications, by taking as input several flows between the facilities, each generating a cost function, all of which must be optimized simultaneously. In this study, we propose PasMoQAP, a parallel asynchronous memetic algorithm for solving the Multi-Objective Quadratic Assignment Problem. PasMoQAP is based on an island model that structures the population into sub-populations. The memetic algorithm on each island evolves a reduced population of solutions, and the islands cooperate asynchronously by sending selected solutions to the neighboring islands. The experimental results show that our approach significantly outperforms all the island-based variants of the multi-objective evolutionary algorithm NSGA-II. We show that PasMoQAP is a suitable alternative for solving the Multi-Objective Quadratic Assignment Problem.
    Comment: 8 pages, 3 figures, 2 tables. Accepted at the Conference on Evolutionary Computation 2017 (CEC 2017)
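    A short sketch of how the mQAP's multiple objectives arise: each flow matrix, combined with a single distance matrix and a candidate facility-to-location assignment, yields one cost to be minimized, so an assignment is scored by a vector of costs. Matrix sizes and values here are random placeholders.

    ```python
    import numpy as np

    def mqap_costs(perm, distance, flows):
        """Return one QAP cost per flow matrix for the assignment perm,
        where facility i is placed at location perm[i]."""
        D = distance[np.ix_(perm, perm)]             # distances between assigned locations
        return [float(np.sum(F * D)) for F in flows]

    if __name__ == "__main__":
        rng = np.random.default_rng(42)
        n = 8                                        # facilities / locations
        distance = rng.integers(1, 20, size=(n, n))
        distance = (distance + distance.T) // 2      # symmetric distances
        np.fill_diagonal(distance, 0)
        flows = [rng.integers(0, 10, size=(n, n)) for _ in range(3)]   # three flow matrices
        perm = rng.permutation(n)                    # one candidate assignment
        print("assignment:", perm, "objective vector:", mqap_costs(perm, distance, flows))
    ```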

    Racing Multi-Objective Selection Probabilities

    Get PDF
    In the context of Noisy Multi-Objective Optimization, dealing with uncertainties requires the decision maker to define preferences about how to handle them, through statistics (e.g., mean or median) used to evaluate the quality of the solutions and to define the corresponding Pareto set. Approximating these statistics requires repeated sampling of the population, drastically increasing the overall computational cost. To tackle this issue, this paper proposes to estimate directly the probability of each individual being selected, using Hoeffding races to dynamically assign the estimation budget during the selection step. The proposed racing approach is validated against static-budget approaches with NSGA-II on noisy versions of the ZDT benchmark functions.
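    A compact sketch of a Hoeffding race over noisy candidates, assuming quality values bounded in [0, 1]: confidence intervals shrink as samples accumulate, and candidates whose upper bound falls below another's lower bound are eliminated early, saving evaluations. The noisy quality model and budget are illustrative; the paper applies the idea to per-individual selection probabilities inside NSGA-II rather than to raw fitness.

    ```python
    import numpy as np

    def hoeffding_radius(n, delta=0.05, value_range=1.0):
        """Half-width of a (1 - delta) Hoeffding confidence interval after n samples."""
        return value_range * np.sqrt(np.log(2.0 / delta) / (2.0 * n))

    def hoeffding_race(noisy_eval, n_candidates, budget=2000, delta=0.05, rng=None):
        """Race candidates (higher mean is better), dropping any whose upper
        confidence bound falls below the best lower bound."""
        rng = np.random.default_rng() if rng is None else rng
        sums = np.zeros(n_candidates)
        counts = np.zeros(n_candidates, dtype=int)
        alive = list(range(n_candidates))
        while budget > 0 and len(alive) > 1:
            for c in alive:                          # one more sample for each survivor
                sums[c] += noisy_eval(c, rng)
                counts[c] += 1
                budget -= 1
            means = sums[alive] / counts[alive]
            radii = hoeffding_radius(counts[alive], delta)
            best_lower = np.max(means - radii)
            alive = [c for c, m, r in zip(alive, means, radii) if m + r >= best_lower]
        return alive, sums / np.maximum(counts, 1)

    if __name__ == "__main__":
        true_quality = np.array([0.3, 0.5, 0.55, 0.8])
        noisy = lambda c, rng: np.clip(true_quality[c] + rng.normal(0, 0.1), 0.0, 1.0)
        survivors, estimates = hoeffding_race(noisy, len(true_quality),
                                              rng=np.random.default_rng(3))
        print("survivors:", survivors, "estimated qualities:", np.round(estimates, 3))
    ```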

    Characterizing Uncertain Outcomes with the Restricted HT Transformation

    Get PDF
    Restrictions on the hyperbolic trigonometric (HT) transformation are imposed to guarantee that a probability density function is obtained from the maximum likelihood estimation. The performance of the restricted HT transformation on data generated from normal, beta, gamma, logistic, log-normal, Pareto, Weibull, order-statistic, and bimodal populations is investigated via sampling experiments. The results suggest that the restricted HT transformation is sufficiently flexible to compete with the actual population distributions in most cases. Application of the restricted HT transformation is illustrated by characterizing uncertain net income per acre for community-supported agriculture farms in the northeastern United States.
    Keywords: farm management, hyperbolic trigonometric transformation, uncertainty; JEL codes: C2, Q1
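    As a very loose illustration (the restricted HT family itself is not reproduced here), the snippet below fits a flexible four-parameter transformed-normal density by maximum likelihood and summarizes the fitted distribution, the same basic workflow the abstract describes for characterizing an uncertain outcome. The Johnson S_U family from scipy, which is also built on a hyperbolic-sine transformation, stands in for the paper's restricted HT transformation, and the income data are simulated placeholders.

    ```python
    import numpy as np
    from scipy import stats

    # Simulated per-acre net income: skewed outcome data (placeholder, not farm data).
    rng = np.random.default_rng(7)
    income = 400.0 + 150.0 * rng.standard_normal(300) + 200.0 * rng.gamma(1.0, 1.0, 300)

    # Maximum-likelihood fit of a Johnson S_U density (a sinh-based transformed normal).
    a, b, loc, scale = stats.johnsonsu.fit(income)
    fitted = stats.johnsonsu(a, b, loc=loc, scale=scale)

    # Characterize the uncertain outcome with a few summary quantities.
    print("mean, std :", fitted.mean(), fitted.std())
    print("P(income < 300):", fitted.cdf(300.0))
    print("5th / 95th percentiles:", fitted.ppf([0.05, 0.95]))
    ```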