
    Fast Kinetic Monte Carlo Simulations: Implementation, Application, and Analysis.

    This work presents a multi-component kinetic Monte Carlo (KMC) model and its applications to three example systems: Ga droplet epitaxy, nanowires grown by the vapor-liquid-solid (VLS) method, and sintering of porous granular material. The first two systems are examples of liquid-mediated growth, and we detail how the liquid phase is modeled. A caching technique is proposed to eliminate redundant calculations, leading to performance gains. Underlying the cache is a hash table, indexed by neighborhood patterns of an atom configuration. We present numerical evidence that such neighborhood patterns are redundant within and between configurations, justifying the caching procedure. A simulated annealing search for optimal, system-specific hash functions is performed. Simulation results and analysis of droplet epitaxy are then described. We detail the calibration of model parameters, exhibiting good agreement with homoepitaxial thin-film experiments. Droplet epitaxy simulations capture a variety of nanostructures seen in experiments, ranging from compact dots to nanorings. The correct trends with respect to growth conditions are also captured, resulting in a phase diagram consistent with experiment. Core-shell structures are also simulated, and we present simulations suggesting two mechanisms behind their formation: nucleation at the vapor-liquid interface and an instability at the vapor-solid interface. An analytical model is developed that isolates the relevant processes behind these phenomena as seen throughout the simulations and in experiments. In the VLS nanowire simulations, we describe how the catalytic role of the liquid phase is incorporated into the model and perform an energy-parameter study. We exhibit the role of the catalyzed reaction rate and its contribution to growth features such as tapering. The mobility along the liquid-solid interface is also studied, and we show how it affects nanowire growth direction and kinking. In the sintering simulations, we present the KMC model in contrast with previous simulation work. A similar parameter study is performed by examining the effect of parameters on coarsening statistics. Grain statistics are measured as a function of time and capture power-law behavior for the grain radius. Critical behavior with respect to certain parameters is also presented.
    PhD, Applied and Interdisciplinary Mathematics, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/99949/1/kgre_1.pd
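
    The caching idea described above lends itself to a compact illustration. The sketch below caches Arrhenius hop rates keyed by an atom's neighborhood occupancy pattern, so identical patterns within or between configurations are computed only once; the energetics, constants, and function names are illustrative assumptions, not the thesis model.

```python
# Minimal sketch of rate caching keyed by local neighborhood patterns in a
# lattice KMC model. Energetics and names are illustrative assumptions.
import math
from functools import lru_cache

K_B_T = 0.05           # thermal energy (arbitrary units, assumed)
BOND_ENERGY = 0.3      # nearest-neighbor bond energy (assumed)
ATTEMPT_FREQ = 1.0e13  # attempt-frequency prefactor (assumed)

@lru_cache(maxsize=None)
def hop_rate(pattern):
    """Arrhenius hop rate for an atom whose occupied-neighbor pattern is
    encoded as a bitmask; identical patterns reuse the cached result."""
    n_bonds = bin(pattern).count("1")
    barrier = n_bonds * BOND_ENERGY
    return ATTEMPT_FREQ * math.exp(-barrier / K_B_T)

def neighborhood_pattern(lattice, site, neighbors):
    """Encode the occupancy of a site's neighbors as an integer hash key."""
    key = 0
    for bit, nbr in enumerate(neighbors[site]):
        if lattice[nbr]:
            key |= 1 << bit
    return key
```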

    A Knowledge Gradient Policy for Sequencing Experiments to Identify the Structure of RNA Molecules Using a Sparse Additive Belief Model

    We present a sparse knowledge gradient (SpKG) algorithm for adaptively selecting targeted regions within a large RNA molecule to identify which regions are most amenable to interactions with other molecules. Experimentally, such regions can be inferred from fluorescence measurements obtained by binding a complementary probe with fluorescence markers to the targeted regions. We use a biophysical model which shows that the fluorescence ratio on the log scale has a sparse linear relationship with the coefficients describing the accessibility of each nucleotide, since not all sites are accessible (due to the folding of the molecule). The SpKG algorithm uniquely combines the Bayesian ranking and selection problem with the frequentist ℓ1-regularized regression approach Lasso. We use this algorithm to identify the sparsity pattern of the linear model and to sequentially decide the best regions to test before the experimental budget is exhausted. We also develop two other new algorithms: a batch SpKG algorithm, which generates suggestions sequentially to run parallel experiments, and batch SpKG with a procedure we call length mutagenesis, which dynamically adds new alternatives, in the form of new probe types, created by inserting, deleting, or mutating nucleotides within existing probes. In simulation, we demonstrate these algorithms on the Group I intron (a mid-size RNA molecule), showing that they efficiently learn the correct sparsity pattern, identify the most accessible region, and outperform several other policies.
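
    The sparsity-identification step can be illustrated with a small synthetic example. The sketch below fits a Lasso regression of simulated log fluorescence ratios on probe-coverage indicators to recover which sites are accessible; the data, dimensions, and regularization strength are assumptions for illustration, and the Bayesian ranking-and-selection machinery of SpKG is not reproduced here.

```python
# Minimal sketch of the Lasso step used to recover a sparse accessibility
# pattern from log fluorescence ratios; the data are synthetic, and the
# knowledge-gradient experiment-selection logic of SpKG is omitted.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_sites, n_probes = 50, 30

# Design matrix: each row marks which nucleotides a candidate probe covers.
X = rng.integers(0, 2, size=(n_probes, n_sites)).astype(float)

# True coefficients are sparse: only a few sites are accessible.
beta_true = np.zeros(n_sites)
beta_true[[3, 17, 42]] = [1.2, 0.8, 1.5]
y = X @ beta_true + 0.05 * rng.standard_normal(n_probes)  # log fluorescence ratio

model = Lasso(alpha=0.05).fit(X, y)
support = np.flatnonzero(np.abs(model.coef_) > 1e-3)
print("estimated accessible sites:", support)
```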

    A Rigorous Uncertainty-Aware Quantification Framework Is Essential for Reproducible and Replicable Machine Learning Workflows

    The ability to replicate predictions by machine learning (ML) or artificial intelligence (AI) models, and to replicate the results of scientific workflows that incorporate such ML/AI predictions, is driven by numerous factors. An uncertainty-aware metric that can quantitatively assess the reproducibility of quantities of interest (QoI) would contribute to the trustworthiness of results obtained from scientific workflows involving ML/AI models. In this article, we discuss how uncertainty quantification (UQ) in a Bayesian paradigm can provide a general and rigorous framework for quantifying reproducibility in complex scientific workflows. Such a framework has the potential to fill a critical gap that currently exists in ML/AI for scientific workflows, as it will enable researchers to determine the impact of ML/AI model prediction variability on the predictive outcomes of ML/AI-powered workflows. We expect that the envisioned framework will contribute to the design of more reproducible and trustworthy workflows for diverse scientific applications and, ultimately, accelerate scientific discoveries.
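
    As a purely illustrative sketch of what such an uncertainty-aware metric could look like (the article argues for a framework rather than prescribing one implementation), the example below compares posterior-predictive samples of a QoI from two workflow runs using a histogram overlap coefficient; the metric choice and the synthetic samples are assumptions.

```python
# Hypothetical sketch of an uncertainty-aware reproducibility check: compare
# samples of a quantity of interest (QoI) from two runs of a workflow by their
# distributional overlap. The metric is an illustrative assumption, not the
# framework proposed in the article.
import numpy as np

def overlap_coefficient(samples_a, samples_b, n_bins=50):
    """Histogram-based overlap between two sampled QoI distributions (1 = identical)."""
    lo = min(samples_a.min(), samples_b.min())
    hi = max(samples_a.max(), samples_b.max())
    pa, _ = np.histogram(samples_a, bins=n_bins, range=(lo, hi), density=True)
    pb, _ = np.histogram(samples_b, bins=n_bins, range=(lo, hi), density=True)
    width = (hi - lo) / n_bins
    return np.sum(np.minimum(pa, pb)) * width

rng = np.random.default_rng(1)
run_1 = rng.normal(0.0, 1.0, size=5000)   # QoI samples from the original run
run_2 = rng.normal(0.2, 1.1, size=5000)   # QoI samples from a replication run
print(f"reproducibility (overlap): {overlap_coefficient(run_1, run_2):.2f}")
```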

    Identifying Bayesian Optimal Experiments for Uncertain Biochemical Pathway Models

    Pharmacodynamic (PD) models are mathematical models of cellular reaction networks that include drug mechanisms of action. These models are useful for predicting therapeutic outcomes of novel drug therapies in silico. However, PD models are known to possess significant uncertainty with respect to constituent parameter data, leading to uncertainty in the model predictions. Furthermore, experimental data to calibrate these models are often limited or unavailable for novel pathways. In this study, we present a Bayesian optimal experimental design approach for improving PD model prediction accuracy. We then apply our method using simulated experimental data to account for uncertainty in hypothetical laboratory measurements. This leads to a probabilistic prediction of drug performance and a quantitative measure of which prospective laboratory experiment will optimally reduce prediction uncertainty in the PD model. The methods proposed here provide a way forward for uncertainty quantification and guided experimental design for models of novel biological pathways.
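
    A minimal sketch of this kind of design criterion, under assumed placeholders for the PD model, is shown below: candidate measurement times are ranked by the expected posterior variance of a quantity of interest, estimated by nested Monte Carlo over prior samples. The toy exponential-decay model, noise level, and candidate times are illustrative assumptions, not the study's actual model.

```python
# Hedged sketch of Bayesian experimental design by expected posterior variance
# reduction. The "PD model" here is a toy exponential decay with one uncertain
# rate parameter; candidate experiments are measurement times. All quantities
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def pd_model(t, k):
    """Toy pharmacodynamic response: exponential decay with rate k."""
    return np.exp(-k * t)

NOISE_SD = 0.05
prior_k = rng.lognormal(mean=-2.5, sigma=0.5, size=2000)   # prior samples of rate
qoi = pd_model(24.0, prior_k)                               # QoI: response at 24 h

def expected_posterior_var(t, n_outer=200):
    """Average posterior variance of the QoI if we measure at time t."""
    variances = []
    for k_true in rng.choice(prior_k, size=n_outer):
        y = pd_model(t, k_true) + NOISE_SD * rng.standard_normal()
        # Importance weights of prior samples given the simulated datum.
        w = np.exp(-0.5 * ((y - pd_model(t, prior_k)) / NOISE_SD) ** 2)
        w /= w.sum()
        mean = np.sum(w * qoi)
        variances.append(np.sum(w * (qoi - mean) ** 2))
    return np.mean(variances)

candidates = [1.0, 4.0, 8.0, 16.0, 24.0]   # candidate measurement times (h)
best = min(candidates, key=expected_posterior_var)
print("most informative measurement time:", best, "h")
```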

    A Bayesian experimental autonomous researcher for mechanical design

    While additive manufacturing (AM) has facilitated the production of complex structures, it has also highlighted the immense challenge inherent in identifying the optimum AM structure for a given application. Numerical methods are important tools for optimization, but experiment remains the gold standard for studying nonlinear, but critical, mechanical properties such as toughness. To address the vastness of AM design space and the need for experiment, we develop a Bayesian experimental autonomous researcher (BEAR) that combines Bayesian optimization and high-throughput automated experimentation. In addition to rapidly performing experiments, the BEAR leverages iterative experimentation by selecting experiments based on all available results. Using the BEAR, we explore the toughness of a parametric family of structures and observe an almost 60-fold reduction in the number of experiments needed to identify high-performing structures relative to a grid-based search. These results show the value of machine learning in experimental fields where data are sparse.
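
    The select-measure-update loop that a BEAR-style system automates can be sketched as a standard Bayesian optimization loop: fit a Gaussian-process surrogate to all results so far, choose the next design by expected improvement, run the experiment, and repeat. In the sketch below the measured toughness is replaced by a synthetic stand-in function, and all names and settings are illustrative assumptions rather than the published implementation.

```python
# Minimal sketch of a Bayesian-optimization loop over a 1-D design parameter.
# The toughness "measurement" is a synthetic stand-in for the automated experiment.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(3)

def measure_toughness(x):
    """Stand-in for the automated experiment (assumed, for illustration)."""
    return float(np.sin(3 * x) * (1 - x) + 0.02 * rng.standard_normal())

candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)  # parametric design space
X = candidates[rng.choice(len(candidates), 3)]           # initial designs
y = np.array([measure_toughness(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)
for _ in range(15):
    gp.fit(X, y)
    mu, sd = gp.predict(candidates, return_std=True)
    improve = mu - y.max()
    z = improve / np.maximum(sd, 1e-9)
    ei = improve * norm.cdf(z) + sd * norm.pdf(z)         # expected improvement
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, measure_toughness(x_next[0]))

print("best design parameter:", X[np.argmax(y)][0], "toughness:", y.max())
```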