
    Application of Permutation Genetic Algorithm for Sequential Model Building–Model Validation Design of Experiments

    The work presented in this paper is motivated by a complex multivariate engineering problem associated with engine mapping experiments, which require efficient Design of Experiments (DoE) strategies to minimise expensive testing. The paper describes the development and evaluation of a Permutation Genetic Algorithm (PermGA) to support an exploration-based sequential DoE strategy for complex real-life engineering problems. A known PermGA was implemented to generate uniform OLH DoEs and substantially extended to support the generation of Model Building–Model Validation (MB-MV) sequences, by generating optimal infill sets of test points as OLH DoEs that preserve good space-filling and projection properties for the merged MB + MV test plan. The algorithm was further extended to address issues with non-orthogonal design spaces, a common problem in engineering applications. The effectiveness of the PermGA algorithm for the MB-MV OLH DoE sequence was evaluated on a theoretical benchmark problem based on the Six-Hump-Camel-Back (SHCB) function, as well as on the Gasoline Direct Injection (GDI) engine steady-state mapping problem that motivated this research. The case studies show that the algorithm is effective at delivering quasi-orthogonal, space-filling DoEs with good properties even after several MB-MV iterations, while the improvement in model adequacy and accuracy can be monitored by the engineering analyst. The practical importance of this work, demonstrated through the engine case study, is also that a significant reduction in the effort and cost of testing can be achieved. The research work presented in this paper was funded by the UK Technology Strategy Board (TSB) through the Carbon Reduction through Engine Optimization (CREO) project.
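
    The abstract above centres on a permutation-encoded GA that searches for space-filling Latin hypercube designs. As a rough illustration only (not the PermGA, MB-MV infill logic, or space-filling criterion used in the paper), the sketch below evolves a Latin hypercube in which every column is a permutation of the level indices, scoring candidates by the maximin inter-point distance and mutating via within-column swaps that preserve the Latin hypercube property; all parameter values are arbitrary assumptions.

```python
# Minimal permutation-GA sketch for a space-filling Latin hypercube design.
# Not the paper's PermGA: criterion, operators, and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def random_design(n_points, n_factors):
    # One permutation of 0..n_points-1 per factor -> an n_points x n_factors Latin hypercube.
    return np.column_stack([rng.permutation(n_points) for _ in range(n_factors)])

def maximin(design):
    # Smallest pairwise Euclidean distance between design points (to be maximised).
    diff = design[:, None, :] - design[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))
    np.fill_diagonal(dist, np.inf)
    return dist.min()

def swap_mutation(design, p=0.2):
    # Swap two levels within one randomly chosen column; the column stays a permutation.
    child = design.copy()
    if rng.random() < p:
        col = rng.integers(child.shape[1])
        i, j = rng.choice(child.shape[0], size=2, replace=False)
        child[[i, j], col] = child[[j, i], col]
    return child

def perm_ga(n_points=20, n_factors=2, pop_size=30, generations=200):
    pop = [random_design(n_points, n_factors) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=maximin, reverse=True)        # elitist ranking by space filling
        survivors = pop[: pop_size // 2]
        children = [swap_mutation(s) for s in survivors]
        pop = survivors + children
    return max(pop, key=maximin)

best = perm_ga()
print("maximin distance of best candidate design:", round(maximin(best), 3))
```

    Extending such a sketch toward the MB-MV idea described above would mean holding the already-tested model-building points fixed and letting the GA optimise only the infill (model-validation) permutations against the merged design; that is the part the paper addresses and this example deliberately omits.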

    What makes a problem hard for a genetic algorithm? Some anomalous results and their explanation

    What makes a problem easy or hard for a genetic algorithm (GA)? This question has become increasingly important as people have tried to apply the GA to ever more diverse types of problems. Much previous work on this question has studied the relationship between GA performance and the structure of a given fitness function when it is expressed as a Walsh polynomial. The work of Bethke, Goldberg, and others has produced certain theoretical results about this relationship. In this article we review these theoretical results, and then discuss a number of seemingly anomalous experimental results reported by Tanese concerning the performance of the GA on a subclass of Walsh polynomials, some members of which were expected to be easy for the GA to optimize. Tanese found that the GA was poor at optimizing all functions in this subclass, that a partitioning of a single large population into a number of smaller independent populations seemed to improve performance, and that hill climbing outperformed both the original and partitioned forms of the GA on these functions. These results seemed to contradict several commonly held expectations about GAs.
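
    As a concrete reminder of what the Walsh-polynomial fitness functions referred to above look like, the sketch below evaluates f(x) = sum_j w_j * psi_j(x) over length-L bit strings, where psi_j(x) is +1 or -1 according to the parity of the bitwise AND of x and j. The term indices, weights, and string length are arbitrary illustrative choices and do not reproduce the Tanese functions discussed in the article.

```python
# Hedged sketch of a Walsh-polynomial fitness function over bit strings.
# Terms and weights are randomly generated for illustration only.
import random

L = 16  # bit-string length (an assumption for this example)

def walsh(j: int, x: int) -> int:
    # psi_j(x) = +1 if x and j share an even number of set bits, else -1.
    return 1 if bin(x & j).count("1") % 2 == 0 else -1

def make_walsh_polynomial(n_terms: int, seed: int = 0):
    rng = random.Random(seed)
    # Pick random partition indices j and weights w_j for the polynomial.
    terms = [(rng.randrange(1, 2 ** L), rng.uniform(-1.0, 1.0)) for _ in range(n_terms)]
    def f(x: int) -> float:
        return sum(w * walsh(j, x) for j, w in terms)
    return f

f = make_walsh_polynomial(n_terms=8)
x = random.getrandbits(L)
print(f"f({x:0{L}b}) = {f(x):+.3f}")
```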

    Caracol, Belize, and Changing Perceptions of Ancient Maya Society
