
    Computational characterization and prediction of metal-organic framework properties

    In this introductory review, we give an overview of the computational chemistry methods commonly used in the field of metal-organic frameworks (MOFs) to describe or predict the structures themselves and to characterize their various properties, either at the quantum chemical level or through classical molecular simulation. We discuss methods for the prediction of crystal structures and geometrical properties, the large-scale screening of hypothetical MOFs, and the calculation of thermal and mechanical properties. A separate section deals with the simulation of the adsorption of fluids and fluid mixtures in MOFs.
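    The classical adsorption simulations mentioned above are typically grand canonical Monte Carlo (GCMC). As a minimal sketch (not taken from the review itself), the standard Metropolis acceptance rule for trial insertions and deletions of adsorbate molecules can be written as follows; all names and units are illustrative:

    ```python
    import numpy as np

    def gcmc_acceptance(delta_u, n_atoms, volume, mu, beta, lambda3, insertion=True):
        """Metropolis acceptance probability for a GCMC insertion/deletion move.

        delta_u : potential-energy change of the trial move
        n_atoms : number of adsorbate molecules before the move
        volume  : accessible framework volume
        mu      : chemical potential of the fluid reservoir
        beta    : 1 / (kB * T); energies in units consistent with 1/beta
        lambda3 : cube of the thermal de Broglie wavelength
        """
        if insertion:
            arg = volume / (lambda3 * (n_atoms + 1)) * np.exp(beta * (mu - delta_u))
        else:
            arg = lambda3 * n_atoms / volume * np.exp(-beta * (mu + delta_u))
        return min(1.0, arg)
    ```

    At equilibrium, averaging the number of adsorbed molecules over such moves yields one point on the adsorption isotherm for the imposed chemical potential.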

    Methods in machine learning for probabilistic modelling of environment, with applications in meteorology and geology

    Earth scientists increasingly deal with ‘big data’. Where once we may have struggled to obtain a handful of relevant measurements, we now often have data being collected from multiple sources: on the ground, in the air, and from space. These observations are accumulating at a rate that far outpaces our ability to make sense of them using traditional methods with limited scalability (e.g., mental modelling, or trial-and-error improvement of process-based models). The revolution in machine learning offers a new paradigm for modelling the environment: rather than tweaking every aspect of models developed from the top down, based largely on prior knowledge, we can now set up more abstract machine learning systems that ‘do the tweaking for us’, learning models from the bottom up that are optimal in terms of how well they agree with our rapidly growing observations of reality, while still being guided by our prior beliefs.

    In this thesis, with the help of spatial, temporal, and spatio-temporal examples in meteorology and geology, I present methods for probabilistic modelling of environmental variables using machine learning, and explore the considerations involved in developing and adopting these technologies, as well as the potential benefits they stand to bring, which include improved knowledge acquisition and decision-making. In each application, the common theme is that we would like to learn predictive distributions for the variables of interest that are well calibrated and as sharp as possible (i.e., that provide answers as precise as possible while remaining honest about their uncertainty). Achieving this requires statistical approaches, but the volume and complexity of the available data mean that scalability is an important factor: we can only realise the value of the data if they can be successfully incorporated into our models.

    Engineering and Physical Sciences Research Council (EPSRC)
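    The goal of predictive distributions that are well calibrated yet sharp is commonly scored with the continuous ranked probability score (CRPS), which rewards both properties jointly. A minimal sketch, assuming a Gaussian predictive distribution (the closed form below is standard, not specific to this thesis):

    ```python
    import numpy as np
    from scipy.stats import norm

    def crps_gaussian(y, mu, sigma):
        """Closed-form CRPS of a Gaussian predictive N(mu, sigma^2); lower is better."""
        z = (y - mu) / sigma
        return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                        + 2.0 * norm.pdf(z)
                        - 1.0 / np.sqrt(np.pi))

    # A sharp, well-centred forecast scores better than a vague one.
    print(crps_gaussian(1.2, mu=1.0, sigma=0.5))   # sharper
    print(crps_gaussian(1.2, mu=1.0, sigma=2.0))   # vaguer
    ```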

    Optimizing interatomic potentials for phonon properties

    Molecular dynamics (MD) simulations calculate the trajectories of atoms as a function of time. Material properties that depend on the dynamics of atoms can be predicted from these atomic motions, yielding insight into the atomic-level behaviors that ultimately dictate those properties. Such insight is crucial for developing a low-level understanding of material behavior, and MD simulations have been used to successfully predict properties for decades. Thermal transport in solids is largely dictated by collective atomic vibrations known as phonons, which can be understood and probed deeply through analysis of MD trajectories. The heart of an MD simulation is the mathematical representation of the potential energy between atoms, termed the interatomic potential, from which the forces and dynamics are calculated. The use of MD simulations to predict and describe thermal transport in general has not been fully realized owing to the lack of accurate interatomic potentials for a variety of systems, and obtaining accurate interatomic potentials is not a trivial task. Furthermore, it is not known how to create potentials that are guaranteed to accurately predict phonon properties, and this thesis seeks to answer that question. The goal is to create potentials that accurately predict phonon properties, which are therefore termed phonon optimized potentials (POPs).
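    As a toy illustration of optimizing a potential parameter against phonon properties (a stand-in, not the thesis's actual POP procedure), one can fit the force constant of a 1D monatomic chain so that its analytic dispersion matches reference frequencies:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # 1D monatomic chain: omega(k) = 2*sqrt(K/m)*|sin(k*a/2)|
    a, m = 1.0, 1.0                           # lattice constant, atomic mass (arb. units)
    k_pts = np.linspace(0.1, np.pi / a, 20)   # sample points in the Brillouin zone

    def dispersion(K):
        return 2.0 * np.sqrt(K / m) * np.abs(np.sin(k_pts * a / 2.0))

    # Synthetic "reference" frequencies standing in for first-principles data
    omega_ref = dispersion(3.5) + np.random.default_rng(0).normal(0.0, 0.01, k_pts.size)

    # Adjust the force constant so the model dispersion matches the reference
    fit = least_squares(lambda K: dispersion(K[0]) - omega_ref, x0=[1.0])
    print("fitted force constant:", fit.x[0])
    ```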

    Big-Data Science in Porous Materials: Materials Genomics and Machine Learning

    By combining metal nodes with organic linkers we can potentially synthesize millions of possible metal-organic frameworks (MOFs). At present, we have libraries of over ten thousand synthesized materials and millions of in-silico predicted materials. Having so many materials opens many exciting avenues to tailor-make a material that is optimal for a given application; however, from an experimental and computational point of view we simply have too many materials to screen with brute-force techniques. In this review, we show that having so many materials allows us to use big-data methods as a powerful technique to study these materials and to discover complex correlations. The first part of the review introduces the principles of big-data science. We emphasize the importance of data collection, methods to augment small data sets, and how to select appropriate training sets. An important part of this review is the different approaches used to represent these materials in feature space. The review also includes a general overview of the different ML techniques, but since most applications in porous materials use supervised ML, our review focuses on the different approaches to supervised ML. In particular, we review the different methods to optimize the ML process and how to quantify the performance of the different methods. In the second part, we review how these ML approaches have been applied to porous materials, discussing applications in gas storage and separation, the stability of these materials, their electronic properties, and their synthesis. This range illustrates the large variety of problems that can be studied with big-data science. Given the increasing interest of the scientific community in ML, we expect this list to expand rapidly in the coming years.
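    A minimal sketch of the supervised-ML workflow the review surveys, with hypothetical geometric descriptors and a synthetic target standing in for real MOF data:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical descriptors per MOF: pore diameter, surface area, void fraction
    X = rng.uniform(size=(500, 3))
    # Synthetic target (e.g., a stand-in for gas uptake) with a little noise
    y = 2.0 * X[:, 0] + X[:, 1] * X[:, 2] + rng.normal(0.0, 0.05, 500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("held-out R^2:", r2_score(y_te, model.predict(X_te)))
    ```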

    Confronting the Challenge of Modeling Cloud and Precipitation Microphysics

    In the atmosphere, microphysics refers to the microscale processes that affect cloud and precipitation particles and is a key linkage among the various components of Earth's atmospheric water and energy cycles. The representation of microphysical processes in models continues to pose a major challenge, leading to uncertainty in numerical weather forecasts and climate simulations. In this paper, the problem of treating microphysics in models is divided into two parts: (i) how to represent the population of cloud and precipitation particles, given the impossibility of simulating all particles individually within a cloud, and (ii) uncertainties in the microphysical process rates owing to fundamental gaps in knowledge of cloud physics. The recently developed Lagrangian particle-based method is advocated as a way to address several conceptual and practical challenges of representing particle populations with traditional bulk and bin microphysics parameterization schemes. For addressing critical gaps in cloud physics knowledge, sustained investment is needed in observational advances from laboratory experiments, new probe development, and next-generation instruments in space. Greater emphasis on laboratory work, which has apparently declined over the past several decades relative to other areas of cloud physics research, is argued to be an essential ingredient for improving process-level understanding. More systematic use of natural cloud and precipitation observations to constrain microphysics schemes is also advocated. Because it is generally difficult to quantify individual microphysical process rates from these observations directly, this presents an inverse problem that can be viewed from the standpoint of Bayesian statistics. Following this idea, a probabilistic framework is proposed that combines elements of statistical and physical modeling; besides providing rigorous constraints on schemes, it has the added benefit of quantifying uncertainty systematically. Finally, a broader hierarchical approach is proposed to accelerate improvements in microphysics schemes, leveraging the advances described in this paper related to process modeling (using Lagrangian particle-based schemes), laboratory experimentation, cloud and precipitation observations, and statistical methods.
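    The Bayesian constraint of process rates sketched above can be illustrated with a toy inverse problem; the exponential decay model and all values below are hypothetical stand-ins for a real microphysics scheme:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "observations" of a decay governed by rate theta, with known noise
    theta_true, sigma = 0.8, 0.05
    t = np.linspace(0.0, 5.0, 30)
    obs = np.exp(-theta_true * t) + rng.normal(0.0, sigma, t.size)

    def log_post(theta):
        if theta <= 0.0:                       # flat prior on theta > 0
            return -np.inf
        resid = obs - np.exp(-theta * t)
        return -0.5 * np.sum(resid**2) / sigma**2

    # Random-walk Metropolis sampler for the posterior over theta
    samples, theta, lp = [], 1.0, log_post(1.0)
    for _ in range(5000):
        prop = theta + rng.normal(0.0, 0.05)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)

    post = np.array(samples[1000:])            # discard burn-in
    print("posterior mean +/- sd:", post.mean(), post.std())
    ```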

    Integration of advanced methods and models to study drug absorption and related processes : An UNGAP perspective

    This collection of contributions from the European Network on Understanding Gastrointestinal Absorption-related Processes (UNGAP) community assembly aims to provide information on some of the current and newer methods employed to study the behaviour of medicines. It is the product of interactions in the immediate pre-Covid period, when UNGAP members were able to meet, set up workshops, and discuss progress across the disciplines. UNGAP activities are divided into work packages that cover special treatment populations, absorption processes in different regions of the gut, the development of advanced formulations, and the integration of food and pharmaceutical scientists at the food-drug interface. This involves both new and established technical approaches, for which we have attempted to define best practice and highlight areas where further research is needed. Over the last months we have been able to reflect on some of the key innovative approaches we were tasked with mapping, including theoretical, in silico, in vitro, in vivo, ex vivo, preclinical and clinical approaches. This is a snapshot of where UNGAP has travelled and which aspects of innovative technologies are important. It is not a comprehensive review of all methods used to study drug dissolution and absorption, but it provides an ample panorama of current and advanced methods generally and potentially useful in this area. The collection starts from a consideration of advances in a priori approaches: an understanding of the molecular properties of the compound to predict biological characteristics relevant to absorption. The next four sections discuss a major activity in the UNGAP initiative, the pursuit of more representative conditions to study lumenal dissolution of drug formulations, developed independently by academic teams. They are important because they illustrate examples of in vitro simulation systems that have begun to provide a useful understanding of formulation behaviour in the upper GI tract for industry. The Leuven team highlights the importance of the physiology of the digestive tract, describing the relevance of gastric and intestinal fluids to the behaviour of drugs along the tract. This provides the introduction to microdosing as an early tool to study drug disposition. Microdosing in oncology is starting to use gamma-emitting tracers, which provides a link through SPECT to the next section on nuclear medicine. The last two papers link the modelling approaches used by the pharmaceutical industry, from in silico to Pop-PK, to Darwich and Aarons, who discuss pharmacometric modelling, completing the loop from molecule to man.
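    As a deliberately simplified stand-in for the in silico and Pop-PK modelling discussed above, a one-compartment model with first-order absorption and elimination; all parameter values are hypothetical:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    ka, ke, V, dose = 1.2, 0.25, 30.0, 100.0   # 1/h, 1/h, L, mg (illustrative)

    def rhs(t, y):
        a_gut, c = y
        return [-ka * a_gut,                   # first-order loss from the gut
                ka * a_gut / V - ke * c]       # plasma concentration balance

    sol = solve_ivp(rhs, (0.0, 24.0), [dose, 0.0], t_eval=np.linspace(0.0, 24.0, 49))
    c_plasma = sol.y[1]
    print("Cmax = %.2f mg/L at t = %.1f h" % (c_plasma.max(), sol.t[c_plasma.argmax()]))
    ```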

    Evidential Deep Learning: Enhancing Predictive Uncertainty Estimation for Earth System Science Applications

    Robust quantification of predictive uncertainty is critical for understanding the factors that drive weather and climate outcomes. Ensembles provide predictive uncertainty estimates and can be decomposed physically, but both physics-based and machine learning ensembles are computationally expensive. Parametric deep learning can estimate uncertainty with a single model by predicting the parameters of a probability distribution, but it does not account for epistemic uncertainty. Evidential deep learning, a technique that extends parametric deep learning to higher-order distributions, can account for both aleatoric and epistemic uncertainty with one model. This study compares the uncertainty derived from evidential neural networks to that obtained from ensembles. Through applications to classification of winter precipitation type and regression of surface-layer fluxes, we show evidential deep learning models attaining predictive accuracy rivaling standard methods while robustly quantifying both sources of uncertainty. We evaluate the uncertainty in terms of how well the predictions are calibrated and how well the uncertainty correlates with prediction error. Analyses of uncertainty in the context of the inputs reveal sensitivities to underlying meteorological processes, facilitating interpretation of the models. The conceptual simplicity, interpretability, and computational efficiency of evidential neural networks make them highly extensible, offering a promising approach for reliable and practical uncertainty quantification in Earth system science modeling. To encourage broader adoption of evidential deep learning in Earth system science, we have developed a new Python package, MILES-GUESS (https://github.com/ai2es/miles-guess), that enables users to train and evaluate both evidential and ensemble deep learning models.
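    A minimal sketch of the Normal-Inverse-Gamma formulation behind evidential regression, following the published formulation of Amini et al. (2020); this is illustrative and not the MILES-GUESS API:

    ```python
    import numpy as np
    from scipy.special import gammaln

    def nig_uncertainties(gamma, nu, alpha, beta):
        """Split uncertainty for a Normal-Inverse-Gamma evidential head.

        The network predicts (gamma, nu, alpha, beta); gamma is the point estimate.
        Requires alpha > 1.
        """
        aleatoric = beta / (alpha - 1.0)            # expected data noise
        epistemic = beta / (nu * (alpha - 1.0))     # uncertainty in the mean
        return aleatoric, epistemic

    def nig_nll(y, gamma, nu, alpha, beta):
        """Negative log-likelihood of y under the evidential (NIG) predictive."""
        omega = 2.0 * beta * (1.0 + nu)
        return (0.5 * np.log(np.pi / nu)
                - alpha * np.log(omega)
                + (alpha + 0.5) * np.log(nu * (y - gamma) ** 2 + omega)
                + gammaln(alpha) - gammaln(alpha + 0.5))

    print(nig_uncertainties(gamma=0.0, nu=2.0, alpha=3.0, beta=1.0))
    ```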
