
    Investigating Normalization Bounds for Hypervolume-Based Infill Criterion for Expensive Multiobjective Optimization

    When solving expensive multi-objective optimization problems, there may be stringent limits on the number of allowed function evaluations. Surrogate models are commonly used for such problems, where calls to surrogates are made in lieu of calls to the true objective functions. The surrogates can also be used to identify infill points for evaluation, i.e., solutions that maximize certain performance criteria. One such infill criterion is the maximization of predicted hypervolume, which is the focus of this study. In particular, we are interested in investigating whether a better estimate of the normalization bounds could help improve the performance of the surrogate-assisted optimization algorithm. Towards this end, we propose a strategy to identify a better ideal point than the one that exists in the current archive. Numerical experiments are conducted on a range of problems to test the efficacy of the proposed method. The approach outperforms conventional forms of normalization in some cases, while providing comparable results for others. We provide critical insights into the search behavior and relate them to the underlying properties of the test problems.
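
    To illustrate the role the normalization bounds play in a hypervolume-based infill criterion, the minimal sketch below normalizes an archive of objective vectors with estimated ideal and nadir points and scores a candidate by its predicted hypervolume gain in two objectives. The function names, the two-objective sweep, and the example data are illustrative assumptions, not the paper's implementation of the improved ideal-point strategy.

```python
import numpy as np

def hv_2d(points, ref=(1.1, 1.1)):
    """Hypervolume dominated by a 2-D minimization front w.r.t. a
    reference point, computed by a simple sort-and-sweep."""
    pts = np.asarray(points, dtype=float)
    pts = pts[np.all(pts < ref, axis=1)]      # keep points inside the ref box
    if len(pts) == 0:
        return 0.0
    pts = pts[np.argsort(pts[:, 0])]          # sweep along the first objective
    hv, best_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < best_f2:                      # non-dominated so far
            hv += (ref[0] - f1) * (best_f2 - f2)
            best_f2 = f2
    return hv

def normalize(F, ideal, nadir):
    """Scale objectives using the supplied normalization bounds."""
    return (np.asarray(F, float) - ideal) / (np.asarray(nadir) - np.asarray(ideal))

def hv_improvement(archive_F, candidate_f, ideal, nadir):
    """Predicted HV gain of adding one candidate, in normalized space."""
    A = normalize(archive_F, ideal, nadir)
    c = normalize(candidate_f, ideal, nadir)
    return hv_2d(np.vstack([A, c])) - hv_2d(A)

# Example: the infill score of a candidate depends on the bounds used.
archive = np.array([[0.2, 0.9], [0.5, 0.5], [0.9, 0.2]])
candidate = np.array([0.35, 0.65])
print(hv_improvement(archive, candidate,
                     ideal=archive.min(axis=0), nadir=archive.max(axis=0)))
```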

    Understanding hypervolume behavior theoretically for benchmarking in evolutionary multi/many-objective optimization

    Hypervolume (HV) is one of the most commonly used metrics for evaluating the Pareto front (PF) approximations generated by multiobjective evolutionary algorithms. Even so, HV results from a complex interplay between the PF shape, the number of objectives, and user-specified reference points which, if not well understood, may lead to misinformed inferences about benchmarking performance. To understand this behavior, some previous studies have investigated such interactions empirically. In this letter, a new and unconventional approach is taken to gain further insights into HV behavior. The key idea is to develop theoretical formulas for certain linear (equilateral simplex) and quadratic (orthant) PFs in two specific orientations: 1) regular and 2) inverted. These PFs represent a large number of problems in the existing DTLZ and WFG suites commonly used for benchmarking. Numerical experiments are presented to demonstrate the utility of the proposed work in benchmarking and in understanding the contributions of different regions of the PFs, such as corners and edges, as well as in explaining the contrast between HV behavior for regular versus inverted PFs. This letter provides a foundation and a computationally fast means to undertake parametric studies of various aspects of HV.
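
    For intuition about what such closed-form expressions look like, the following is a standard derivation for the regular linear (simplex) case, with a symmetric reference point chosen as an assumption; it is a sketch in the spirit of the letter, not a formula quoted from it.

```latex
% Hypervolume of the linear front F = { f >= 0 : f_1 + ... + f_m = c }
% for m minimization objectives, with reference point r(1,...,1), r >= c.
%
% A point z in [0, r]^m is dominated by some f on the front iff
% z_1 + ... + z_m >= c, so the non-dominated region is the corner simplex
% { z in [0, r]^m : sum_i z_i < c } of volume c^m / m!. Hence
\[
  \mathrm{HV}\bigl(\text{linear front},\, r\mathbf{1}\bigr)
  \;=\; r^{m} \;-\; \frac{c^{m}}{m!},
  \qquad r \ge c .
\]
% Example: m = 2, c = 1, r = 1.1 gives HV = 1.21 - 0.5 = 0.71.
```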

    Closed-loop automatic gradient design for liquid chromatography using Bayesian optimization

    Contemporary complex samples require sophisticated methods for full analysis. This work describes the development of a Bayesian optimization algorithm for automated and unsupervised development of gradient programs. The algorithm was tailored to LC using a Gaussian process model with a novel covariance kernel. To facilitate unsupervised learning, the algorithm was designed to interface directly with the chromatographic system. Single-objective and multi-objective Bayesian optimization strategies were investigated for the separation of two complex (n > 18 and n > 80) dye mixtures. Both approaches found satisfactory optima in under 35 measurements. The multi-objective strategy was found to be powerful and flexible in terms of exploring the Pareto front. The performance difference between the single-objective and multi-objective strategies was further investigated using a retention modeling example. One additional advantage of the multi-objective approach is that it allows a trade-off to be made between multiple objectives without prior knowledge. In general, the Bayesian optimization strategy was found to be particularly suitable for, but not limited to, cases where retention modeling is not possible, although its scalability may be limited in terms of the number of parameters that can be optimized simultaneously.
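
    As a rough sketch of the single-objective loop described above, the code below runs Bayesian optimization over two gradient parameters with a Gaussian process surrogate and an expected-improvement acquisition. The paper's custom covariance kernel and its direct instrument interface are not reproduced; a standard Matérn kernel and a synthetic separation_score placeholder stand in for them, and all names and settings here are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def separation_score(gradient_params):
    """Placeholder for running the chromatograph with a candidate gradient
    program and scoring the chromatogram (e.g., a resolution-based criterion).
    Purely synthetic here."""
    x = np.asarray(gradient_params)
    return float(np.sin(3 * x[0]) * np.cos(2 * x[1]) + 0.1 * x.sum())

def expected_improvement(gp, X_cand, y_best, xi=0.01):
    """Standard EI acquisition for maximization."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y_best - xi) / sigma
    return (mu - y_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# Initial design: a few random gradient programs, parameters scaled to [0, 1].
X = rng.uniform(size=(5, 2))
y = np.array([separation_score(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(30):                        # small measurement budget
    gp.fit(X, y)
    X_cand = rng.uniform(size=(512, 2))    # random candidate gradients
    ei = expected_improvement(gp, X_cand, y.max())
    x_next = X_cand[np.argmax(ei)]
    y_next = separation_score(x_next)      # one real "measurement"
    X = np.vstack([X, x_next])
    y = np.append(y, y_next)

print("best score:", y.max(), "at", X[np.argmax(y)])
```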