9 research outputs found

    Mixed-Variable Global Sensitivity Analysis For Knowledge Discovery And Efficient Combinatorial Materials Design

    Global Sensitivity Analysis (GSA) is the study of how a model's inputs influence its outputs. In engineering design, GSA has been widely used to understand both the individual and collective contributions of design variables to the design objectives. So far, global sensitivity studies have largely been limited to design spaces with only quantitative (numerical) design variables. However, many engineering systems also contain qualitative (categorical) design variables, if not exclusively, in addition to quantitative ones. In this paper, we integrate the Latent Variable Gaussian Process (LVGP) with Sobol' analysis to develop the first metamodel-based mixed-variable GSA method. Through numerical case studies, we validate and demonstrate the effectiveness of the proposed method for mixed-variable problems. Furthermore, while the proposed GSA method is general enough to benefit various engineering design applications, we integrate it with multi-objective Bayesian optimization (BO) to create a sensitivity-aware design framework that accelerates Pareto front exploration for metal-organic framework (MOF) materials with many-level combinatorial design spaces. Although MOFs are constructed only from qualitative variables that are notoriously difficult to design, our method uses sensitivity analysis to navigate the optimization through the large, many-level combinatorial design space, greatly expediting the exploration of novel MOF candidates. Comment: 35 pages, 10 figures, 2 tables
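    As a sketch of the Sobol' machinery this method builds on, the snippet below estimates first-order indices for a toy mixed-variable function (one numeric input, one three-level categorical input) by direct Monte Carlo. The toy model, its level offsets, and the sample size are illustrative assumptions; the actual method evaluates an LVGP surrogate of the expensive model rather than the true function.

    ```python
    import numpy as np

    def toy_model(x_num, x_cat):
        # Illustrative mixed-variable response: a numeric term plus a
        # level-dependent offset for the categorical input (assumed values).
        offsets = np.array([0.0, 1.5, 3.0])
        return np.sin(2.0 * np.pi * x_num) + offsets[x_cat]

    def sobol_first_order(model, n=50_000, seed=0):
        # Saltelli-style Monte Carlo estimate of the first-order Sobol' indices
        # S_i = Var(E[Y | X_i]) / Var(Y), from two independent sample sets A, B.
        rng = np.random.default_rng(seed)
        a_num, b_num = rng.random(n), rng.random(n)
        a_cat, b_cat = rng.integers(0, 3, n), rng.integers(0, 3, n)
        y_a = model(a_num, a_cat)
        y_b = model(b_num, b_cat)
        var_y = np.concatenate([y_a, y_b]).var()
        # Swapping one column of A in from B isolates that input's main effect.
        s_num = (y_b * (model(b_num, a_cat) - y_a)).mean() / var_y
        s_cat = (y_b * (model(a_num, b_cat) - y_a)).mean() / var_y
        return s_num, s_cat

    s_num, s_cat = sobol_first_order(toy_model)
    # For this additive toy model, analytically S_num = 0.25 and S_cat = 0.75.
    ```

    Because the numeric and categorical terms are additive and independent here, the two first-order indices sum to roughly one; interaction effects would show up as a shortfall.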

    Rapid Design of Top-Performing Metal-Organic Frameworks with Qualitative Representations of Building Blocks

    Data-driven materials design often encounters challenges where systems require or possess qualitative (categorical) information. Metal-organic frameworks (MOFs) are an example of such material systems. The representation of MOFs through different building blocks makes it challenging for designers to incorporate qualitative information into design optimization. Furthermore, the large number of potential building blocks leads to a combinatorial challenge, with millions of possible MOFs that could be explored through time-consuming physics-based approaches. In this work, we integrated the Latent Variable Gaussian Process (LVGP) and Multi-Objective Batch-Bayesian Optimization (MOBBO) to identify top-performing MOFs adaptively, autonomously, and efficiently, without any human intervention. Our approach provides three main advantages: (i) no specific physical descriptors are required, and only the building blocks that construct the MOFs are used in global optimization through qualitative representations; (ii) the method is application- and property-independent; and (iii) the latent variable approach provides an interpretable model of the qualitative building blocks with physical justification. To demonstrate the effectiveness of our method, we considered a design space with more than 47,000 MOF candidates. By searching only ~1% of the design space, LVGP-MOBBO identified all MOFs on the Pareto front and more than 97% of the 50 top-performing designs for the CO2 working capacity and CO2/N2 selectivity properties. Finally, we compared our approach with the Random Forest algorithm and demonstrated its efficiency, interpretability, and robustness. Comment: 35 pages total (29 for the main manuscript, 6 for the supplementary information); 13 figures total (9 in the main manuscript, 4 in the supplementary information); 1 table in the supplementary information
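    The Pareto front this study optimizes toward can be illustrated with a plain non-dominated filter over two maximized objectives (e.g. working capacity and selectivity). The candidate values below are made up for illustration, and the actual framework selects evaluation batches adaptively with LVGP-MOBBO rather than exhaustively scoring every design.

    ```python
    import numpy as np

    def pareto_front(objectives):
        # Return indices of non-dominated points when every objective is
        # maximized (e.g. CO2 working capacity and CO2/N2 selectivity).
        obj = np.asarray(objectives, dtype=float)
        keep = []
        for i in range(obj.shape[0]):
            # Point i is dominated if some other point is >= in all
            # objectives and strictly > in at least one.
            dominated = np.all(obj >= obj[i], axis=1) & np.any(obj > obj[i], axis=1)
            if not dominated.any():
                keep.append(i)
        return keep

    # Hypothetical (capacity, selectivity) values for five candidate MOFs.
    points = [(1.0, 1.0), (2.0, 0.5), (0.5, 2.0), (1.5, 1.5), (0.2, 0.2)]
    front = pareto_front(points)  # -> [1, 2, 3]
    ```

    The quadratic-time filter is fine at this scale; for tens of thousands of candidates a sort-based non-dominated sorting routine would be the usual choice.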

    A Latent Variable Approach for Non-Hierarchical Multi-Fidelity Adaptive Sampling

    Multi-fidelity (MF) methods are gaining popularity for enhancing surrogate modeling and design optimization by incorporating data from various low-fidelity (LF) models. While most existing MF methods assume a fixed dataset, adaptive sampling methods that dynamically allocate resources among fidelity models can achieve higher efficiency in exploring and exploiting the design space. However, most existing MF methods either rely on a hierarchical assumption about the fidelity levels, or fail to capture the correlations between fidelity levels and to use them to quantify the value of future samples and guide the adaptive sampling. To address this hurdle, we propose a framework hinged on a latent embedding of the different fidelity models and the associated pre-posterior analysis to explicitly exploit their correlations for adaptive sampling. In this framework, each infill sampling iteration includes two steps: we first identify the location of interest with the greatest potential improvement using the high-fidelity (HF) model, then we search across all fidelity levels for the next sample that maximizes the improvement per unit cost at the location identified in the first step. This is made possible by a single Latent Variable Gaussian Process (LVGP) model that maps the different fidelity models into an interpretable latent space, capturing their correlations without assuming hierarchical fidelity levels. The LVGP enables us to assess, via pre-posterior analysis, how LF sampling candidates will affect the HF response, and to determine the next sample with the best benefit-to-cost ratio. Through test cases, we demonstrate that the proposed method outperforms benchmark methods in both MF global fitting (GF) and Bayesian Optimization (BO) problems, in convergence rate and robustness. Moreover, the method offers the flexibility to switch between GF and BO by simply changing the acquisition function.
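    The two-step infill loop can be sketched with placeholder quantities: `hf_acquisition` stands in for the HF acquisition values at candidate locations, and `expected_reduction[f, x]` for the pre-posterior estimate of how much sampling fidelity `f` at location `x` would improve the HF prediction. All names and numbers below are assumptions, since the real quantities come from the fitted LVGP posterior.

    ```python
    import numpy as np

    def select_next_sample(hf_acquisition, expected_reduction, costs):
        # Step 1: location with the greatest potential improvement under the
        # high-fidelity model (argmax of the HF acquisition values).
        loc = int(np.argmax(hf_acquisition))
        # Step 2: among all fidelity levels, maximize the expected improvement
        # at that location per unit sampling cost (benefit-to-cost ratio).
        ratios = expected_reduction[:, loc] / costs
        fidelity = int(np.argmax(ratios))
        return loc, fidelity

    # Hypothetical values: 4 candidate locations, 3 fidelity levels.
    hf_acq = np.array([0.2, 0.9, 0.5, 0.1])
    reduction = np.array([[0.1, 0.3, 0.2, 0.1],   # cheap LF model
                          [0.2, 0.5, 0.3, 0.2],   # mid-fidelity model
                          [0.4, 0.9, 0.6, 0.3]])  # expensive HF model
    costs = np.array([1.0, 3.0, 10.0])
    loc, fid = select_next_sample(hf_acq, reduction, costs)  # -> (1, 0)
    ```

    Note how the cheap LF model wins here despite reducing HF uncertainty the least in absolute terms: its benefit-to-cost ratio at the chosen location is highest.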

    Algorithms for Self-Optimising Chemical Platforms

    The appreciable interest in machine learning has stimulated the development of self-optimising chemical platforms. The power of harnessing computer-aided design, coupled with the desire for improved process sustainability and economics, has led to self-optimising systems being applied to the optimisation of reaction screening and chemical synthesis. The algorithms used in these systems have largely been limited to a select few, with little focus paid to the development of optimisation algorithms specifically for chemical systems. The expanding digitisation of the process development pipeline necessitates the further development of algorithms to tackle the diverse array of chemistries and systems. Improvements and expansion to the available algorithmic portfolio will enable the wider adoption of automated optimisation systems, with novel algorithms required to meet previously unmet domain-specific demands and to improve upon classical designed-experiment procedures, which may suffer reduced optimisation efficiency. The work in this thesis develops novel approaches targeting areas currently lacking or underdeveloped in automated chemical system optimisation. This includes the development and application of hybrid approaches aimed at improving the robustness of optimisation and increasing the user's understanding of the optimum region, as well as extending multi-objective algorithms to the mixed-variable domain, enabling the wider application of efficient optimisation and data acquisition methodologies.