128 research outputs found

    Sampling high-dimensional design spaces for analysis and optimization


    Constrained multi-objective optimization of process design parameters in settings with scarce data: an application to adhesive bonding

    Adhesive joints are increasingly used in industry for a wide variety of applications because of their favorable characteristics, such as a high strength-to-weight ratio, design flexibility, limited stress concentrations, planar force transfer, good damage tolerance, and fatigue resistance. Finding the optimal process parameters for an adhesive bonding process is challenging: the optimization is inherently multi-objective (aiming to maximize break strength while minimizing cost) and constrained (the process should not result in any visual damage to the materials, and stress tests should not result in adhesion-related failures). Real-life physical experiments in the lab are expensive to perform; traditional evolutionary approaches (such as genetic algorithms) are therefore ill-suited to the problem, due to the prohibitive number of experiments required for evaluation. In this research, we successfully applied specific machine learning techniques (Gaussian Process Regression and Logistic Regression) to emulate the objective and constraint functions based on a limited amount of experimental data. The techniques are embedded in a Bayesian optimization algorithm, which succeeds in detecting Pareto-optimal process settings in a highly efficient way (i.e., requiring a limited number of extra experiments).
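
    A minimal sketch of the kind of constrained Bayesian optimization step described above, simplified to a single objective (break strength) and written with scikit-learn; the feasibility-weighted expected improvement criterion, the Matérn kernel, and all function and variable names are illustrative assumptions rather than the authors' exact formulation. It assumes NumPy arrays with at least one feasible and one infeasible observation.

        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern
        from sklearn.linear_model import LogisticRegression

        def propose_next(X, y_strength, y_feasible, candidates):
            """Suggest the next process setting to test from a candidate pool.

            X           -- (n, d) process parameters already tested
            y_strength  -- (n,) measured break strength (objective to maximize)
            y_feasible  -- (n,) 1 if the test passed the visual/adhesion checks, else 0
            candidates  -- (m, d) untested settings to rank
            """
            # Emulate the objective with a GP and the constraint with logistic regression.
            gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
            gp.fit(X, y_strength)
            clf = LogisticRegression().fit(X, y_feasible)

            mu, sigma = gp.predict(candidates, return_std=True)
            best = y_strength[y_feasible == 1].max()   # best feasible strength so far

            # Expected improvement over the current best feasible observation.
            z = (mu - best) / np.maximum(sigma, 1e-9)
            ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

            # Down-weight candidates that are unlikely to satisfy the constraints.
            p_feasible = clf.predict_proba(candidates)[:, 1]
            return candidates[np.argmax(ei * p_feasible)]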

    Batch Bayesian active learning for feasible region identification by local penalization

    Identifying all designs that satisfy a set of constraints is an important part of the engineering design process. With physics-based simulation codes, evaluating the constraints becomes considerably expensive. Active learning provides an elegant approach to efficiently characterize the feasible region, i.e., the set of feasible designs. Although active learning strategies have been proposed for this task, most of them add just one sample per iteration, as opposed to selecting multiple samples per iteration, also known as batch active learning. While the one-at-a-time approach is efficient with respect to the amount of information gained per iteration, it neglects available computational resources. We propose a batch Bayesian active learning technique for feasible region identification by assuming that the constraint function is Lipschitz continuous. In addition, we extend current state-of-the-art batch methods to also handle feasible region identification. Experiments show that the proposed method outperforms the extended batch methods.
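
    A sketch of how a Lipschitz-style local penalizer can turn a one-at-a-time feasibility acquisition into a batch rule, in the spirit of the abstract above; the straddle-style acquisition, the hard exclusion radius, and the assumption of a known Lipschitz constant are illustrative simplifications, not the paper's actual method.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern

        def select_batch(X, y, candidates, threshold=0.0, batch_size=4, lipschitz=10.0):
            """Pick a batch of points expected to refine the feasible-region boundary.

            Feasibility is defined as g(x) <= threshold; y holds noisy observations of g.
            """
            gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
            mu, sigma = gp.predict(candidates, return_std=True)

            # Straddle-style acquisition: large where the GP is uncertain about the
            # sign of g(x) - threshold, i.e. near the feasible-region boundary.
            acq = 1.96 * sigma - np.abs(mu - threshold)

            batch = []
            for _ in range(batch_size):
                idx = int(np.argmax(acq))
                batch.append(candidates[idx])
                # Local penalization: the Lipschitz constant bounds how close the
                # boundary can be to the chosen point, so exclude nearby candidates
                # from the rest of the batch.
                radius = (np.abs(mu[idx] - threshold) + 1.96 * sigma[idx]) / lipschitz
                dist = np.linalg.norm(candidates - candidates[idx], axis=1)
                acq[dist <= radius] = -np.inf
            return np.array(batch)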

    Contextual Optimizer through Neighborhood Estimation for prescriptive analysis

    We address the challenges posed by heteroscedastic noise in contextual decision-making. We propose a consistent Shrinking Neighborhood Estimation (SNE) technique that successfully estimates contextual performance under unpredictable variances. Furthermore, we propose a Rate-Efficient Sampling rule designed to enhance the performance of the SNE. The effectiveness of the combined solution, "Contextual Optimizer through Neighborhood Estimation" (CONE), is validated through theorems and numerical benchmarking. The methodologies have been further deployed to address a staffing challenge in a hospital call center, exemplifying their substantial impact and practical utility in real-world scenarios.
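
    One plausible reading of the shrinking-neighborhood idea, as a sketch only: estimate performance at a query context by averaging nearby observations within a radius that shrinks as data accumulate. The radius formula, the shrinkage rate, and the names used here are assumptions for illustration, not the paper's SNE or its Rate-Efficient Sampling rule.

        import numpy as np

        def shrinking_neighborhood_estimate(contexts, rewards, query, c=1.0, alpha=0.25):
            """Estimate mean performance at `query` from noisy, heteroscedastic observations.

            contexts -- (n, d) observed context vectors
            rewards  -- (n,) noisy performance observations
            query    -- (d,) context at which performance is needed
            c, alpha -- neighborhood scale and shrinkage rate (illustrative defaults)
            """
            n = len(rewards)
            radius = c * n ** (-alpha)               # neighborhood shrinks as data grow
            dist = np.linalg.norm(contexts - query, axis=1)
            mask = dist <= radius
            if not mask.any():                       # fall back to the nearest observation
                mask = dist == dist.min()
            return rewards[mask].mean()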

    Sensitivity Prewarping for Local Surrogate Modeling

    In the continual effort to improve product quality and decrease operations costs, computational modeling is increasingly being deployed to determine the feasibility of product designs or configurations. Surrogate modeling of these computer experiments via local models, which induce sparsity by only considering short-range interactions, can tackle huge analyses of complicated input-output relationships. However, narrowing the focus to a local scale means that global trends must be re-learned over and over again. In this article, we propose a framework for incorporating information from a global sensitivity analysis into the surrogate model as an input rotation and rescaling preprocessing step. We discuss the relationship between several sensitivity analysis methods based on kernel regression before describing how they give rise to a transformation of the input variables. Specifically, we perform an input warping such that the "warped simulator" is equally sensitive to all input directions, freeing local models to focus on local dynamics. Numerical experiments on observational data and benchmark test functions, including a high-dimensional computer simulator from the automotive industry, provide empirical validation.
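
    A minimal sketch of one common way to build such a rotation-and-rescaling prewarp (averaged outer products of gradient estimates, eigendecomposed and whitened); the paper derives its sensitivities from kernel regression rather than gradients, so the construction and the names below are assumptions for illustration.

        import numpy as np

        def sensitivity_prewarp(grads, X):
            """Rotate and rescale inputs so the estimated simulator is roughly equally
            sensitive in every warped direction.

            grads -- (n, d) gradient estimates of the simulator at the design points
            X     -- (n, d) original inputs to be warped before fitting local surrogates
            """
            # Average outer product of gradients as a global sensitivity summary.
            C = grads.T @ grads / len(grads)
            eigval, eigvec = np.linalg.eigh(C)
            eigval = np.maximum(eigval, 1e-12)       # guard against insensitive directions

            # Rotation onto the eigenvectors plus per-direction rescaling: the "warped
            # simulator" then has (approximately) unit sensitivity in each direction.
            return (X @ eigvec) / np.sqrt(eigval)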

    Design and analysis of computer experiments for stochastic systems

    Ph.D. thesis (Doctor of Philosophy)