
    How to Accelerate R&D and Optimize Experiment Planning with Machine Learning and Data Science

    Accelerating R&D is essential to address some of the challenges humanity currently faces, such as achieving the global sustainability goals. The Edisonian trial-and-error approach still prevalent in R&D labs means it takes up to two decades of fundamental and applied research for new materials to reach the market. Turning this situation around calls for strategies to upgrade R&D and expedite innovation. By conducting smart experiment planning that is data-driven and guided by AI/ML, researchers can search the complex, often constrained, space of possible experiments more efficiently and reach the global optimum much faster than with current approaches. Moreover, with digitized data management, researchers can maximize the utility of their data in both the short and long term with the aid of statistics, ML, and visualization tools. In what follows, we describe a framework and lay out the key technologies to accelerate R&D and optimize experiment planning.
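The data-driven experiment planning described above can be illustrated with a minimal sequential-optimization loop: measure a few settings, then repeatedly pick the next experiment by trading off predicted value against exploration of unvisited regions. Everything below (the objective, the surrogate, the constants) is a hypothetical stand-in for illustration, not the framework the abstract describes.

```python
import math
import random

random.seed(0)

def run_experiment(x):
    # Hypothetical stand-in for a lab measurement with a single optimum.
    return math.exp(-(x - 0.7) ** 2 / 0.02)

# Discrete, constrained design space of candidate experiments.
candidates = [i / 100 for i in range(101)]
observed = {}  # experiment setting -> measured result

def acquisition(x, kappa=0.3):
    # Toy surrogate: predict from the nearest observation, plus an
    # exploration bonus that grows with distance to the known data.
    nearest = min(observed, key=lambda xo: abs(xo - x))
    return observed[nearest] + kappa * abs(nearest - x)

# Seed with a few random experiments, then plan the rest greedily.
for x in random.sample(candidates, 3):
    observed[x] = run_experiment(x)
for _ in range(12):
    x_next = max((x for x in candidates if x not in observed), key=acquisition)
    observed[x_next] = run_experiment(x_next)

best_x = max(observed, key=observed.get)
print(f"best setting: {best_x}, measured value: {observed[best_x]:.3f}")
```

The point of the sketch is the budget: 15 measurements instead of an exhaustive scan of all 101 candidates, which is the kind of saving the abstract attributes to AI/ML-guided planning.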

    Bridging the gap: The role of innovation policy and market creation

    By pairing innovation in the use of existing technologies and in behaviour with new technologies, directed innovation has the potential to radically transform societies and reduce their greenhouse gas (GHG) emissions. Accelerating innovation is therefore a key component of any attempt to close the emissions gap, but it will not happen by itself.

    Determination of nutrient salts by automatic methods both in seawater and brackish water: the phosphate blank

    9 pages, 2 tables, 2 figures. The main difficulty in determining nutrients in seawater by automatic methods is simply solved: the preparation of a suitable blank which corrects for the effect of the refractive-index change on the recorded signal. Two procedures are proposed, one physical (a simple equation to estimate the effect) and the other chemical (removal of the dissolved phosphorus with ferric hydroxide). Support for this work came from CICYT (project MAR88-0245) and the Conselleria de Pesca de la Xunta de Galicia. Peer reviewed.

    Equipping data-driven experiment planning for Self-driving Laboratories with semantic memory: case studies of transfer learning in chemical reaction optimization

    Optimization strategies based on machine learning (ML), such as Bayesian optimization, show promise across the experimental sciences as a superior alternative to traditional design of experiments. Deploying ML optimization tools in R&D operations increases productivity and efficiency, while reducing the time and cost necessary to identify new molecules, materials, and process parameters with desired target properties. Additional benefits can be captured by combining these ML algorithms with automated laboratory equipment via Atinary’s orchestration software platform SDLabs. The synergy of these technologies is referred to as Self-driving Laboratories, which hold the potential to revolutionize scientific experimentation, data collection, and materials discovery. Thus far, however, autonomous experimentation projects have not fully leveraged pre-existing knowledge and databases, often beginning from scratch and sequentially collecting measurements from new experiments. This is in stark contrast to experimentation by humans, where trained experts rely on intuition acquired from experience to select initial parameter settings for a novel experiment. In this work, we introduce Atinary’s transfer learning algorithm SeMOpt, a general-purpose Bayesian optimization framework which uses meta-/few-shot learning to efficiently transfer knowledge from related historical experiments and databases to a novel experimental campaign via a compound acquisition function. We apply SeMOpt to chemical reaction optimization, an important and challenging task in chemistry.
Specifically, we perform two case studies: i) the optimization of five simulated cross-coupling reactions, which demonstrates the ability of our approach to adapt to data with unknown effects, such as the presence of a side reaction, catalyst deactivation, and measurement noise; ii) the optimization of the palladium-catalyzed Buchwald-Hartwig cross-coupling of aryl halides with 4-methylaniline in the presence of potentially inhibitory additives. We find that SeMOpt accelerates the optimization rate by a factor of 10 or more compared to standard single-task ML optimizers (those without transfer learning capabilities to leverage historical experiments or databases). Moreover, these case studies show that SeMOpt outperforms several existing ML Bayesian optimization strategies that leverage historical data. Thus, we believe this work presents a valuable technical contribution for general-purpose optimization and makes the case to replace the traditional trial-and-error experimentation process with Self-driving Labs augmented with semantic memory.
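The compound-acquisition idea in this abstract can be sketched in a few lines: blend a dense database from a related historical reaction with sparse measurements on the new one, and let the weight on the prior decay as target data accumulates. The reaction surfaces, surrogate, and decay schedule below are invented for illustration and are not SeMOpt's actual algorithm.

```python
def source_yield(x):
    # Historical (source) reaction: hypothetical optimum at x = 0.55.
    return max(0.0, 1 - 2 * abs(x - 0.55))

def target_yield(x):
    # Novel (target) reaction: related surface, optimum shifted to 0.62.
    return max(0.0, 1 - 2 * abs(x - 0.62))

candidates = [i / 50 for i in range(51)]
history = {x: source_yield(x) for x in candidates}  # pre-existing database
observed = {}  # measurements on the new reaction

def predict_target(x):
    # Crude surrogate for the new reaction: nearest observed value.
    nearest = min(observed, key=lambda xo: abs(xo - x))
    return observed[nearest]

def compound_acquisition(x):
    # Weight on the historical prior decays as target data accumulates,
    # loosely mimicking a meta-/few-shot transfer scheme.
    w_prior = 1.0 / (1.0 + len(observed))
    prediction = history[x] if not observed else predict_target(x)
    return w_prior * history[x] + (1 - w_prior) * prediction

for _ in range(8):
    x_next = max((x for x in candidates if x not in observed),
                 key=compound_acquisition)
    observed[x_next] = target_yield(x_next)

best_x = max(observed, key=observed.get)
print(f"best condition found: x = {best_x}")
```

With only 8 target measurements the loop starts near the historical optimum and walks to the shifted one, which is the transfer-learning benefit the abstract quantifies as a 10x or greater speed-up over single-task optimizers.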

    Accelerated Exploration of Heterogeneous CO2 Hydrogenation Catalysts by Bayesian Optimized High-throughput and Automated Experimentation.

    Automated high-throughput platforms and Artificial Intelligence (AI) are already accelerating discovery and optimization in various fields of chemistry and chemical engineering. However, despite some promising solutions, few attempts have targeted the full heterogeneous catalyst discovery workflow, with most chemistry laboratories continuing to perform research with a traditional one-at-a-time experiment approach and limited digitization. In this work, we present a closed-loop data-driven approach targeting the optimization of catalyst composition for the direct transformation of carbon dioxide (CO2) into methanol, combining a Bayesian Optimization (BO) algorithm, automated synthesis by incipient wetness impregnation, and high-throughput catalytic performance evaluation in fixed-bed mode. The BO algorithm optimized a four-objective function simultaneously (high CO2 conversion, high methanol selectivity, low methane selectivity, and low metal cost) over a total of 11 parameters (4 supports, 6 metal salts, and one promoter). In 6 weeks, 144 catalysts were synthesized and tested, with limited manual laboratory activity. The results show a significant improvement in the objectives at the end of each iteration. Between the first and fifth catalyst generation, the average CO2 conversion and methanol formation rates were multiplied by 5.7 and 12.6 respectively, while the methane production rate was simultaneously reduced by a factor of 3.2 and the metal cost divided by 6.3. Notably, through the exploration process, the BO algorithm rapidly focused on copper-based catalysts supported on zirconia doped with zinc and/or cerium, with the best catalysts, according to the model, showing an optimized composition of 1.85 wt% Cu, 0.69 wt% Zn, and 0.05 wt% Ce supported on ZrO2. When the objective is changed, i.e. removing the metal cost as a constraint, the BO algorithm suggests compositions centered on indium-based catalysts, highlighting an alternative family of catalysts and testifying to the algorithm's adaptability and the reusability of the data when targeting different objectives. In only 30 days, the BO approach, coupled with automated synthesis and high-throughput testing, was able to replicate the major development stages in heterogeneous catalysis research for CO2 conversion to methanol, made over the last 100 years with a conventional experimental approach. This data-driven approach proves very efficient in exploring and optimizing catalyst composition from a vast multi-parameter space towards multiple performance objectives simultaneously, and could easily be extended to different parameter spaces and objectives, and transposed to other applications.
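The four-objective setup in this abstract can be illustrated with a weighted-sum scalarization, one common way to fold several objectives (some maximized, some minimized) into a single score that an optimizer can rank. The descriptor functions, weights, and grid below are invented for illustration; they are not the paper's model or data.

```python
# Hypothetical catalyst descriptors: (Cu wt%, Zn wt%) on a fixed support.
def evaluate(cu, zn):
    # Toy stand-ins for the four measured objectives.
    conversion = min(1.0, 0.3 * cu + 0.1 * zn)       # CO2 conversion
    sel_meoh = max(0.0, 1 - 0.3 * abs(cu - 2.0))     # methanol selectivity
    sel_ch4 = 0.05 * zn                              # methane selectivity
    cost = 0.5 * cu + 0.2 * zn                       # metal-cost proxy
    return conversion, sel_meoh, sel_ch4, cost

def scalarize(objectives, weights=(1.0, 1.0, -1.0, -0.2)):
    # Weighted sum: maximize conversion and MeOH selectivity, penalize
    # methane and cost (negative weights encode "lower is better").
    return sum(w * o for w, o in zip(weights, objectives))

# Exhaustive scan of a small composition grid; a BO loop would instead
# propose only a few dozen compositions from a much larger space.
grid = [(cu / 2, zn / 2) for cu in range(1, 9) for zn in range(0, 7)]
best = max(grid, key=lambda p: scalarize(evaluate(*p)))
print("best composition (Cu wt%, Zn wt%):", best)
```

Dropping the cost term (setting its weight to zero) changes which compositions score highest, which is the behaviour the abstract reports when the metal-cost constraint is removed and indium-based candidates emerge.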