6 research outputs found

    New sampling strategies when searching for robust solutions

    Many real-world optimisation problems involve uncertainties, and in such situations it is often desirable to identify robust solutions that perform well over the possible future scenarios. In this paper, we focus on input uncertainty, such as in manufacturing, where the actual manufactured product may differ from the specified design but should still function well. Estimating a solution's expected fitness in such a case is challenging, especially if the fitness function is expensive to evaluate, and its analytic form is unknown. One option is to average over a number of scenarios, but this is computationally expensive. The archive sample approximation method reduces the required number of fitness evaluations by re-using previous evaluations stored in an archive. The main challenge in the application of this method lies in determining the locations of additional samples drawn in each generation to enrich the information in the archive and reduce the estimation error. In this paper, we use the Wasserstein distance metric to approximate the possible benefit of a potential sample location on the estimation error, and propose new sampling strategies based on this metric. Contrary to previous studies, we consider a sample's contribution for the entire population, rather than inspecting each individual separately. This also allows us to dynamically adjust the number of samples to be collected in each generation. An empirical comparison with several previously proposed archive-based sample approximation methods demonstrates the superiority of our approaches.
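
    To make the sampling idea concrete, here is a minimal sketch, assuming a one-dimensional decision variable and access to a reference sample from the disturbance distribution; the function names and toy data are hypothetical illustrations, not the paper's actual implementation. It scores a candidate sample location by how much adding it to the archive would reduce the first-order Wasserstein distance to the reference sample.

```python
import numpy as np

def wasserstein_1d(a, b):
    """Approximate first-order Wasserstein distance between two 1-D
    empirical samples via a common quantile grid (sizes may differ)."""
    q = np.linspace(0.0, 1.0, max(len(a), len(b)))
    return np.mean(np.abs(np.quantile(a, q) - np.quantile(b, q)))

def sample_benefit(archive, candidate, reference):
    """Estimated benefit of adding `candidate` to the archive: the
    reduction in Wasserstein distance to the reference sample."""
    before = wasserstein_1d(archive, reference)
    after = wasserstein_1d(np.append(archive, candidate), reference)
    return before - after

# Toy usage with hypothetical data: pick the candidate location whose
# addition best improves the archive's match to the disturbance law.
rng = np.random.default_rng(0)
archive = rng.normal(0.0, 1.0, size=20)          # past evaluations
reference = rng.normal(0.0, 0.5, size=200)       # disturbance sample
candidates = np.linspace(-2.0, 2.0, 9)
best = max(candidates, key=lambda c: sample_benefit(archive, c, reference))
print(f"most beneficial new sample location: {best:.2f}")
```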

    Data science for engineering design: State of the art and future directions

    Engineering design (ED) is the process of solving technical problems within requirements and constraints to create new artifacts. Data science (DS) is the interdisciplinary field that uses computational systems to extract knowledge from structured and unstructured data. The synergies between these two fields have a long history, and throughout the past decades ED has increasingly benefited from integration with DS. We present a literature review at the intersection of ED and DS, identifying the tools, algorithms, and data sources that show the most potential for contributing to ED, and a set of challenges that future data scientists and designers should tackle to maximize the potential of DS in supporting effective and efficient design. A rigorous scoping-review approach, supported by Natural Language Processing techniques, is used to survey research across two disciplines with fuzzy boundaries. The paper identifies challenges related to the two fields of research and to their interfaces. The main gaps in the literature revolve around the adaptation of computational techniques to the peculiar context of design, the identification of data sources to boost design research, and a proper featurization of these data. The challenges are classified by their impact on ED phases and the applicability of DS methods, giving a map for future research across the fields. The scoping review shows that, to take full advantage of DS tools, collaboration between design practitioners and researchers must increase in order to open new data-driven opportunities.

    Evolutionary multi-objective worst-case robust optimisation

    Many real-world problems are subject to uncertainty, and often solutions should not only be good, but also robust against environmental disturbances or deviations from the decision variables. While most papers dealing with robustness aim at finding solutions with a high expected performance given a distribution of the uncertainty, we examine the trade-off between the allowed deviation from the decision variables (the tolerance level) and the worst-case performance given that allowed deviation. In this work, we suggest two multi-objective evolutionary algorithms to compute the available trade-offs between the tolerance level and the worst-case quality of the solutions; the tolerance level, i.e., the permitted variation of the decision variables or parameters, serves as our measure of robustness. Both algorithms are two-level nested algorithms. The first algorithm is point-based, in the sense that the lower level computes a worst-case point for each upper-level solution; the second is envelope-based, in the sense that the lower level computes a whole trade-off curve between worst-case fitness and tolerance level for each upper-level solution. Our problem can be considered a special case of bi-level optimisation, which is computationally expensive because each upper-level solution is evaluated by calling a lower-level optimiser. We propose and compare several strategies to improve the efficiency of both algorithms, and additionally suggest surrogate-assisted variants to accelerate them.
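
    As a rough illustration of the nested structure, the sketch below evaluates the worst-case fitness of a single upper-level solution over a range of tolerance levels, which is essentially the trade-off curve the envelope-based lower level produces. The objective, the grid-search lower level, and all names are hypothetical stand-ins, not the algorithms proposed in the paper.

```python
import numpy as np

def f(x):
    """Toy objective to be minimised (hypothetical)."""
    return (x - 1.0) ** 2 + 0.3 * np.sin(8.0 * x)

def worst_case(x, tol, n=201):
    """Lower level: worst (largest) objective value over the tolerance
    interval [x - tol, x + tol]; a grid search stands in for the
    lower-level optimiser."""
    return max(f(x + d) for d in np.linspace(-tol, tol, n))

# Trade-off curve between tolerance level and worst-case fitness for
# one upper-level solution: a larger tolerance can only worsen (or
# preserve) the worst case.
x = 0.9
for tol in (0.0, 0.1, 0.2, 0.4):
    print(f"tolerance {tol:.1f} -> worst-case fitness {worst_case(x, tol):.3f}")
```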

    Efficient information collection in stochastic optimisation

    This thesis focuses on a class of information collection problems in stochastic optimisation. Algorithms in this area often need to measure the performance of several potential solutions and use the collected information in their search for high-performing solutions, but have only a limited measurement budget. A simple approach that allocates simulation time equally over all potential solutions may waste time collecting additional data on alternatives that could quickly be identified as non-promising. Instead, algorithms should adapt their measurement strategy, iteratively examining the statistical evidence collected so far and focusing computational effort on the most promising alternatives. This thesis develops new efficient methods of collecting information for use in stochastic optimisation problems. First, we investigate an efficient measurement strategy for the solution selection procedure of two-stage linear stochastic programs, in which finite computational resources must be allocated among numerous potential solutions to estimate their performance and identify the best solution. We propose a two-stage sampling approach that exploits a Wasserstein-based screening rule and an optimal computing budget allocation technique to improve the efficiency of obtaining a high-quality solution. Numerical results show our method provides good trade-offs between computational effort and solution performance. Then, we address the information collection problems encountered in the search for robust solutions. Specifically, we use an evolutionary strategy to solve a class of simulation optimisation problems with computationally expensive black-box functions. We implement an archive sample approximation method to reduce the required number of evaluations. The main challenge in the application of this method is determining the locations of additional samples drawn in each generation to enrich the information in the archive and minimise the approximation error. We propose novel sampling strategies that use the Wasserstein metric to estimate the possible benefit of a potential sample location on the approximation error. An empirical comparison with several previously proposed archive-based sample approximation methods demonstrates the superiority of our approaches. In the final part of this thesis, we propose an adaptive sampling strategy for the rollout algorithm to solve the clinical trial scheduling and resource allocation problem under uncertainty. The proposed sampling strategy exploits the variance reduction technique of common random numbers and the empirical Bernstein inequality in a statistical racing procedure, balancing the exploration and exploitation of the rollout algorithm. Moreover, we present an augmented approach that uses a heuristic-based grouping rule to enhance simulation efficiency by breaking the overall action-selection problem into smaller group-level selection problems. The numerical results show that the proposed method provides competitive results within a reasonable amount of computational time.
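
    The racing idea from the final part can be sketched as follows, assuming rewards bounded within a known range; the empirical Bernstein radius takes the familiar sqrt(2*Var*ln(3/delta)/n) + 3*R*ln(3/delta)/n form, and the `simulate` function and its reward range are hypothetical simplifications rather than the thesis's actual procedure. Common random numbers are realised by feeding every surviving action the same seed in each round.

```python
import numpy as np

def bernstein_radius(samples, value_range, delta):
    """Empirical Bernstein confidence radius for bounded rewards."""
    n = len(samples)
    log_term = np.log(3.0 / delta)
    return (np.sqrt(2.0 * np.var(samples, ddof=1) * log_term / n)
            + 3.0 * value_range * log_term / n)

def race(simulate, n_actions, rounds=60, delta=0.05, value_range=1.0):
    """Statistical racing: drop any action whose upper confidence bound
    falls below the best lower bound among the survivors."""
    rng = np.random.default_rng(0)
    alive = set(range(n_actions))
    obs = {i: [] for i in alive}
    for _ in range(rounds):
        seed = int(rng.integers(1 << 30))      # common random number
        for i in alive:
            obs[i].append(simulate(i, seed))
        if len(obs[next(iter(alive))]) < 2:
            continue                           # variance needs two samples
        radius = {i: bernstein_radius(np.array(obs[i]), value_range, delta)
                  for i in alive}
        lower = {i: np.mean(obs[i]) - radius[i] for i in alive}
        upper = {i: np.mean(obs[i]) + radius[i] for i in alive}
        best_lower = max(lower.values())
        alive = {i for i in alive if upper[i] >= best_lower}
        if len(alive) == 1:
            break
    return alive

def simulate(action, seed):
    """Toy simulator: the shared seed induces correlated noise across
    actions, which is exactly what common random numbers exploit."""
    return 0.1 * action + np.random.default_rng(seed).normal(0.0, 0.2)

print(race(simulate, n_actions=5))
```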

    Fast and Robust Design of CMOS VCO for Optimal Performance

    The exponentially growing design complexity that comes with technological advancement calls for extensive automation of analog and mixed-signal integrated circuit design. In the automation process, performance optimization under different environmental constraints is of prime importance. Analog integrated circuit design requires addressing multiple competing performance objectives, with the ability to find global solutions in a constrained environment. Integrated circuit (IC) performance is significantly affected by device, interconnect, and package parasitics, so including circuit parasitics in the design phase alongside performance optimization has become a necessity for faster prototyping. Besides this, fabrication process variations have a predominant effect on circuit performance, which is directly linked to the acceptability of manufactured IC chips; this necessitates a manufacturing-process-tolerant design. The central theme of this thesis is the development of analog IC design methods that exploit the computational intelligence of evolutionary optimization techniques, integrate circuit parasitics into the design optimization process in a meaningful way, and develop process-fluctuation-tolerant optimal designs. Evolutionary multi-objective optimization techniques such as the Non-dominated Sorting Genetic Algorithm-II and the Infeasibility Driven Evolutionary Algorithm are used in this thesis to develop parasitic-aware design techniques for analog ICs. Realistic physical and process constraints are integrated into the proposed design technique. A fast design methodology based on an efficient optimization technique is developed, and an extensive worst-case process variation analysis is performed. This work also presents a novel process-corner-variation-aware analog IC design methodology that would effectively increase the yield of chips in the acceptable performance window. The performance of all the presented techniques is demonstrated through application to CMOS ring oscillators and to current-starved and differential voltage-controlled oscillators, designed in the Cadence Virtuoso Analog Design Environment.
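
    Non-dominated sorting is the core mechanic of the NSGA-II algorithm mentioned above; the fragment below extracts the first Pareto front from a toy population of candidate sizings scored on two competing, hypothetical VCO objectives (power and phase noise, both to be minimised). It is a generic illustration of the dominance test, not the thesis's design flow.

```python
import numpy as np

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimisation)."""
    return bool(np.all(a <= b) and np.any(a < b))

def first_pareto_front(points):
    """Indices of the non-dominated points: the first front of NSGA-II's
    non-dominated sorting."""
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

# Hypothetical (power_mW, phase_noise_dBc_Hz) pairs for candidate VCO
# sizings; lower is better in both objectives.
population = np.array([[1.2, -95.0], [0.8, -90.0], [1.5, -97.0], [0.8, -92.0]])
print(first_pareto_front(population))   # -> [0, 2, 3]
```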

    Evolution strategies for robust optimization

    Real-world (black-box) optimization problems often involve various types of uncertainty and noise emerging in different parts of the optimization problem. When this is not accounted for, optimization may fail or may yield solutions that are optimal in the classical strict notion of optimality but fail in practice. Robust optimization is the practice of optimization that actively accounts for uncertainties and/or noise. Evolutionary Algorithms form a class of optimization algorithms that use the principle of evolution to find good solutions to optimization problems. Because uncertainty and noise are indispensable parts of nature, this class of optimization algorithms seems a logical choice for robust optimization scenarios. This thesis provides a clear definition of the term robust optimization, a comparison of approaches, and practical guidelines on how Evolution Strategies, a subclass of Evolutionary Algorithms for real-parameter optimization problems, should be adapted for such scenarios.
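
    A common way Evolution Strategies are adapted for robustness is explicit averaging: re-evaluating each candidate under several sampled disturbances and selecting on the averaged fitness. The minimal (mu, lambda)-ES below illustrates that idea under assumed settings (toy sphere objective, Gaussian input noise, deterministic step-size decay); it is a sketch of the general technique, not the specific strategies examined in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    """Toy objective to minimise (hypothetical sphere function)."""
    return float(np.sum(x ** 2))

def robust_fitness(x, noise_sigma=0.1, m=10):
    """Effective fitness via explicit averaging over m sampled input
    disturbances."""
    return np.mean([f(x + rng.normal(0.0, noise_sigma, x.shape))
                    for _ in range(m)])

def mu_lambda_es(dim=2, mu=5, lam=20, sigma=0.3, generations=50):
    """Minimal (mu, lambda) evolution strategy with a global step size."""
    parents = rng.normal(0.0, 1.0, (mu, dim))
    for _ in range(generations):
        offspring = np.array([parents[rng.integers(mu)]
                              + rng.normal(0.0, sigma, dim)
                              for _ in range(lam)])
        fitness = np.array([robust_fitness(x) for x in offspring])
        parents = offspring[np.argsort(fitness)[:mu]]   # comma selection
        sigma *= 0.98                                   # simple decay
    return parents[0]

print(mu_lambda_es())
```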