
    Bayesian optimization in adverse scenarios

    Optimization problems with expensive-to-evaluate objective functions are ubiquitous in scientific and industrial settings. Bayesian optimization has gained widespread acclaim for optimizing expensive (and often black-box) functions due to its theoretical performance guarantees and empirical sample efficiency in a variety of settings. Nevertheless, many practical scenarios remain where prevailing Bayesian optimization techniques fall short. We consider four such scenarios. First, we formalize the optimization problem where the goal is to identify robust designs with respect to multiple objective functions that are subject to input noise. Such robust design problems frequently arise, for example, in manufacturing settings where fabrication can only be performed with limited precision. We propose a method that identifies a set of optimal robust designs, where each design provides probabilistic guarantees jointly on multiple objectives. Second, we consider sample-efficient high-dimensional multi-objective optimization. This line of research is motivated by the challenging task of designing optical displays for augmented reality to optimize visual quality and efficiency, where the designs are specified by high-dimensional parameterizations governing complex geometries. Our proposed trust-region based algorithm yields order-of-magnitude improvements in sample complexity on this problem. Third, we consider multi-objective optimization of expensive functions with variable-cost, decoupled, and/or multi-fidelity evaluations and propose a Bayes-optimal, non-myopic acquisition function, which significantly improves sample efficiency in scenarios with incomplete information. We apply this to hardware-aware neural architecture search, where the objectives, on-device latency and model accuracy, can often be evaluated independently. Fourth, we consider the setting where the search space consists of discrete (and potentially continuous) parameters.
We propose a theoretically grounded technique that uses a probabilistic reparameterization to transform the discrete or mixed inner optimization problem into a continuous one, leading to more effective Bayesian optimization policies. Together, this thesis provides a playbook for Bayesian optimization in several practical adverse scenarios.
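To make the core loop concrete: the standard Bayesian optimization recipe underlying all of these scenarios fits a Gaussian-process surrogate to the points evaluated so far and picks the next point by maximizing an acquisition function such as expected improvement. The sketch below is a generic, minimal illustration of that loop, not the thesis's methods; the RBF kernel, length scale, test objective, and candidate grid are all arbitrary choices made for the example.

```python
# Minimal Bayesian optimization: GP surrogate + expected improvement (minimization).
import numpy as np
from scipy.stats import norm

def rbf_kernel(a, b, length=0.2):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    # Standard GP regression equations via a Cholesky factorization.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)          # prior variance is 1 for this kernel
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI for minimization: E[max(best - f(x), 0)] under the GP posterior.
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def objective(x):                               # stand-in for an expensive black box
    return np.sin(3 * x) + x ** 2

rng = np.random.default_rng(0)
x_obs = rng.uniform(-1, 1, 3)                   # small initial design
y_obs = objective(x_obs)
candidates = np.linspace(-1, 1, 201)            # acquisition optimized on a grid
for _ in range(10):
    mu, sigma = gp_posterior(x_obs, y_obs, candidates)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y_obs.min()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))
print(round(float(y_obs.min()), 3))
```

The discrete/mixed setting discussed in the abstract is precisely where the inner `argmax` over `candidates` becomes hard: with categorical parameters there is no continuous grid to search, which is the gap the probabilistic reparameterization addresses.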

    OPTIMAL COMPUTING BUDGET ALLOCATION FOR STOCHASTIC SIMULATION OPTIMIZATION

    Ph.D., Doctor of Philosophy

    Multi-surrogate Assisted Efficient Global Optimization for Discrete Problems

    Decades of progress in simulation-based surrogate-assisted optimization and unprecedented growth in computational power have enabled researchers and practitioners to optimize previously intractable complex engineering problems. This paper investigates the potential benefit of concurrently using multiple simulation-based surrogate models to solve complex discrete optimization problems. To this end, the so-called Self-Adaptive Multi-surrogate Assisted Efficient Global Optimization algorithm (SAMA-DiEGO), which features a two-stage online model management strategy, is proposed and benchmarked on fifteen binary-encoded combinatorial and fifteen ordinal problems against several state-of-the-art non-surrogate and single-surrogate-assisted optimization algorithms. Our findings indicate that SAMA-DiEGO can rapidly converge to better solutions on a majority of the test problems, which shows the feasibility and advantage of using multiple surrogate models in optimizing discrete problems.
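The core idea of online model management with multiple surrogates can be sketched simply: at each iteration, fit several cheap surrogates to the evaluated points, keep whichever has the lowest leave-one-out error, and let it propose the next candidate. The surrogates (a Hamming-distance k-NN and a linear least-squares model), the OneMax test problem, and the candidate-pool search below are illustrative stand-ins, not the models or the model-selection strategy actually used by SAMA-DiEGO.

```python
# Toy multi-surrogate assisted optimization on a binary-encoded problem.
import numpy as np

def onemax(bits):                        # toy binary objective (maximize)
    return bits.sum()

def knn_predict(X, y, Q, k=3):
    # Surrogate 1: k-nearest-neighbor regression under Hamming distance.
    d = np.abs(Q[:, None, :] - X[None, :, :]).sum(-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return y[idx].mean(axis=1)

def linear_predict(X, y, Q):
    # Surrogate 2: linear least-squares model with an intercept.
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return np.c_[Q, np.ones(len(Q))] @ w

def loo_error(predict, X, y):
    # Leave-one-out squared error: the online model-selection criterion here.
    errs = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        errs.append((predict(X[mask], y[mask], X[i:i + 1])[0] - y[i]) ** 2)
    return np.mean(errs)

rng = np.random.default_rng(1)
n = 10
X = rng.integers(0, 2, (8, n))           # initial design
y = np.array([onemax(x) for x in X], float)
pool = rng.integers(0, 2, (200, n))      # candidate pool to search over
for _ in range(15):
    surrogate = min((knn_predict, linear_predict),
                    key=lambda p: loo_error(p, X, y))
    best = pool[np.argmax(surrogate(X, y, pool))]   # surrogate proposes next point
    X = np.vstack([X, best])
    y = np.append(y, onemax(best))
print(int(y.max()))
```

Because OneMax is linear, the leave-one-out criterion quickly favors the linear surrogate here; on a rugged landscape the k-NN model would tend to win instead, which is the point of keeping several models online.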

    MULTI-FIDELITY OPTIMIZATION WITH GAUSSIAN REGRESSION ON ORDINAL TRANSFORMATION SPACE

    Master's, Master of Engineering

    Multi-Fidelity Methods for Optimization: A Survey

    Real-world black-box optimization often involves time-consuming or costly experiments and simulations. Multi-fidelity optimization (MFO) stands out as a cost-effective strategy that balances high-fidelity accuracy with computational efficiency through a hierarchical fidelity approach. This survey presents a systematic exploration of MFO, underpinned by a novel text mining framework based on a pre-trained language model. We delve deep into the foundational principles and methodologies of MFO, focusing on three core components -- multi-fidelity surrogate models, fidelity management strategies, and optimization techniques. Additionally, this survey highlights the diverse applications of MFO across several key domains, including machine learning, engineering design optimization, and scientific discovery, showcasing the adaptability and effectiveness of MFO in tackling complex computational challenges. Furthermore, we envision several emerging challenges and prospects in the MFO landscape, spanning scalability, the composition of lower fidelities, and the integration of human-in-the-loop approaches at the algorithmic level. We also address critical issues related to benchmarking and the advancement of open science within the MFO community. Overall, this survey aims to catalyze further research and foster collaborations in MFO, setting the stage for future innovations and breakthroughs in the field. (Comment: 47 pages, 9 figures)
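The cost/accuracy trade-off at the heart of MFO can be illustrated with a two-fidelity sketch: a cheap, biased low-fidelity model screens many candidates, and only the most promising survivors receive the expensive high-fidelity evaluation. The fidelity models, their costs, and the budget split below are invented for the example; real fidelity management strategies (the survey's second core component) adapt these choices online rather than fixing them up front.

```python
# Two-fidelity screening under a fixed evaluation budget.
import numpy as np

def high_fidelity(x):                    # "expensive" ground truth, cost 10 units
    return (x - 0.3) ** 2

def low_fidelity(x):                     # cheap, biased approximation, cost 1 unit
    return (x - 0.3) ** 2 + 0.05 * np.sin(20 * x)

budget = 100
candidates = np.linspace(0, 1, 80)
spent = len(candidates) * 1              # screen every candidate at low fidelity
scores = low_fidelity(candidates)
n_promote = (budget - spent) // 10       # remaining budget buys high-fidelity runs
survivors = candidates[np.argsort(scores)[:n_promote]]
best = min(survivors, key=high_fidelity) # spend the rest at high fidelity
print(round(float(best), 3))
```

With the whole budget spent at high fidelity, only 10 candidates could have been evaluated; the low-fidelity screen covers 80, at the price of a bias that the final high-fidelity pass must correct.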

    Generative-Discriminative Complementary Learning

    The majority of state-of-the-art deep learning methods are discriminative approaches, which model the conditional distribution of labels given input features. The success of such approaches heavily depends on high-quality labeled instances, which are not easy to obtain, especially as the number of candidate classes increases. In this paper, we study the complementary learning problem. Unlike ordinary labels, complementary labels are easy to obtain because an annotator only needs to provide a yes/no answer to a randomly chosen candidate class for each instance. We propose a generative-discriminative complementary learning method that estimates the ordinary labels by modeling both the conditional (discriminative) and instance (generative) distributions. Our method, which we call Complementary Conditional GAN (CCGAN), improves the accuracy of predicting ordinary labels and can generate high-quality instances in spite of weak supervision. In addition to the extensive empirical studies, we also theoretically show that our model can retrieve the true conditional distribution from the complementarily-labeled data.
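A small demonstration of the discriminative half of this idea: under a uniform complementary-label assumption, p(ybar = k | x) = (1/(K-1)) * sum over j != k of p(y = j | x), so an ordinary softmax classifier can be trained by maximizing the likelihood of the complementary labels alone, with no ordinary label ever observed. The sketch below is a generic linear softmax model on synthetic Gaussian data, not the CCGAN architecture from the paper.

```python
# Learning from complementary labels ("this instance is NOT class k").
import numpy as np

rng = np.random.default_rng(0)
K, d, n = 3, 2, 600
means = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
y = rng.integers(0, K, n)                     # true labels (never shown to learner)
X = means[y] + rng.normal(size=(n, d))
ybar = (y + rng.integers(1, K, n)) % K        # uniform complementary label

Xb = np.c_[X, np.ones(n)]                     # linear softmax model with bias
W = np.zeros((d + 1, K))
rows = np.arange(n)
for _ in range(500):
    logits = Xb @ W
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    # Complementary likelihood: p(ybar|x) = (1 - p_ybar) / (K - 1), so we
    # ascend the gradient of sum_i log(1 - p_{i, ybar_i}).
    r = p[rows, ybar] / np.maximum(1.0 - p[rows, ybar], 1e-2)
    grad = p * r[:, None]                     # d/dlogits of log(1 - p_ybar)
    grad[rows, ybar] -= r
    W += 0.1 * Xb.T @ grad / n                # gradient ascent step

# Accuracy measured against the hidden ordinary labels.
acc = (np.argmax(Xb @ W, 1) == y).mean()
print(round(float(acc), 3))
```

The gradient follows from the softmax derivative dp_c/dz_j = p_c(delta_cj - p_j); pushing down the probability of the complementary class redistributes mass onto the remaining classes, which is why ordinary-label accuracy can be recovered from "not this class" answers alone.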