
    Novel techniques for kinetic model identification and improvement

    Physics-based kinetic models are regarded as key tools for supporting the design and control of chemical processes and for understanding which degrees of freedom ultimately determine the observed behaviour of chemical systems. These models are formulated as sets of differential and algebraic equations that may involve many state variables and parameters. Nonetheless, translating the available experimental evidence into an appropriate set of model equations is a time- and resource-intensive task that relies heavily on the presence of experienced scientists. Automated reactor platforms are increasingly being applied in research laboratories to generate large amounts of kinetic data with minimal human intervention. However, in most cases, these platforms do not implement software for the online identification of physics-based kinetic models. While automated reactor technologies have significantly improved the efficiency of data collection, the analysis of the data for modelling purposes remains a tedious process that is mainly carried out a posteriori by the scientist. This project focuses on how to systematically solve some relevant problems in kinetic modelling studies that would normally require the intervention of experienced modellers. Specifically, the following challenges are considered: i) the selection of a robust model parametrisation to reduce the chance of numerical failures in the course of the model identification process; ii) the experimental design and parameter estimation problems under structural model uncertainty; iii) the improvement of approximate models in light of the available experimental evidence. The work presented in this Thesis paves the way towards fully automated kinetic modelling platforms through the development of intelligent algorithms for experimental design and model building under system uncertainty. The project aims at the definition of comprehensive and systematic modelling frameworks that make the modelling activity more efficient and less sensitive to human error and bias.
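The parameter estimation task at the heart of such platforms can be conveyed with a minimal sketch. The example below is purely illustrative (the rate constant, data and model are invented, not taken from the Thesis): it fits a first-order rate constant by log-linear least squares.

```python
import math

# Hypothetical first-order kinetics C(t) = C0 * exp(-k t); estimate k
# by least squares on ln(C/C0) = -k t (a line through the origin).
def estimate_rate_constant(times, concs, c0):
    num = sum(t * math.log(c / c0) for t, c in zip(times, concs))
    den = sum(t * t for t in times)
    return -num / den

true_k = 0.35
times = [0.5, 1.0, 2.0, 4.0]
concs = [math.exp(-true_k * t) for t in times]   # noise-free synthetic data
k_hat = estimate_rate_constant(times, concs, 1.0)
print(round(k_hat, 3))  # recovers 0.35 on noise-free data
```

In a real platform the model is a system of differential and algebraic equations and the fit a nonlinear regression, but the structure of the estimation problem is the same.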

    A Multi-Objective Optimal Experimental Design Framework for Enhancing the Efficiency of Online Model-Identification Platforms

    Recent advances in automation and digitization enable the close integration of physical devices with their virtual counterparts, facilitating the real-time modeling and optimization of a multitude of processes in an automatic way. The rich and continuously updated data environment provided by such systems makes it possible for decisions to be made over time to drive a process toward optimal targets. In many manufacturing processes, achieving an overall optimal process requires the simultaneous assessment of multiple objective functions related to process performance and cost. In this work, a multi-objective optimal experimental design framework is proposed to enhance the efficiency of online model-identification platforms. The proposed framework permits flexibility in the choice of trade-off experimental design solutions, which are calculated online, that is, during the execution of experiments. The application of this framework to improve the online identification of kinetic models in flow reactors is illustrated using a case study in which a kinetic model is identified for the esterification of benzoic acid (BA) and ethanol in a microreactor.
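The trade-off logic of such a multi-objective design step can be illustrated with a toy sketch (the candidate experiments and their 2x2 Fisher information matrices below are invented): each candidate is scored by a D-type criterion (determinant, to maximise) and an A-type criterion (trace of the inverse, to minimise), and only the Pareto-efficient candidates are retained as trade-off solutions.

```python
# Toy multi-objective screening of candidate experiments (invented FIMs).
def d_criterion(F):          # D-type: maximise det(F)
    return F[0][0] * F[1][1] - F[0][1] * F[1][0]

def a_criterion(F):          # A-type: minimise trace(inverse(F))
    return (F[0][0] + F[1][1]) / d_criterion(F)

candidates = {
    "exp1": [[5.0, 0.0], [0.0, 1.0]],
    "exp2": [[2.0, 0.0], [0.0, 2.0]],
    "exp3": [[1.0, 0.0], [0.0, 1.0]],
}
scores = {k: (d_criterion(F), a_criterion(F)) for k, F in candidates.items()}

def dominated(a, b):
    # b dominates a: no worse on both criteria and different overall
    return b[0] >= a[0] and b[1] <= a[1] and b != a

pareto = [k for k, s in scores.items()
          if not any(dominated(s, t) for t in scores.values())]
print(sorted(pareto))  # → ['exp1', 'exp2']
```

exp1 has the larger determinant, exp2 the smaller trace of the inverse, so both survive as trade-offs, while exp3 is dominated and discarded.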

    Closed-Loop Model-Based Design of Experiments for Kinetic Model Discrimination and Parameter Estimation: Benzoic Acid Esterification on a Heterogeneous Catalyst

    An autonomous reactor platform was developed to rapidly identify a kinetic model for the esterification of benzoic acid with ethanol over the heterogeneous Amberlyst-15 catalyst. A five-step methodology for kinetic studies was employed to systematically reduce the number of experiments required to identify a practical kinetic model. This included: (i) initial screening using traditional factorial-designed steady-state experiments; (ii) proposing and testing candidate kinetic models; (iii) performing an identifiability analysis to reject models whose parameters cannot be estimated within a given experimental budget; (iv) performing online Model-Based Design of Experiments (MBDoE) for model discrimination to identify the best model from a list of candidates; and (v) performing online MBDoE to improve parameter precision for the chosen model. This methodology, combined with the reactor platform, which conducted all kinetic experiments unattended, reduces the number of experiments and the time required to identify kinetic models, significantly increasing laboratory productivity.
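Step (iv), MBDoE for model discrimination, can be sketched in the spirit of the classic Hunter–Reiner criterion (the two candidate rate expressions and the temperature grid below are invented, not the models used in this study): the next experiment is placed where the rival models disagree most.

```python
# Hypothetical rival rate expressions as functions of temperature T (K).
def model_power_law(T):      # invented candidate 1
    return 0.02 * T ** 1.5

def model_lhhw(T):           # invented candidate 2
    return 3.0 * T / (1.0 + 0.05 * T)

candidate_temperatures = [300, 320, 340, 360]

# Hunter-Reiner-style choice: maximise the squared prediction divergence.
best_T = max(candidate_temperatures,
             key=lambda T: (model_power_law(T) - model_lhhw(T)) ** 2)
print(best_T)  # → 360, where the two models diverge most
```

The experiment at the chosen condition is then run, and the model whose prediction disagrees with the measurement is penalised or rejected.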

    Mechanistic modelling of separating dispersions in pipes using model-based design of experiments techniques

    This work presents a parametric study of a mechanistic model for separating liquid–liquid dispersions in pipes. The model considers drop settling, drop-interface coalescence and drop-drop coalescence, predicting the evolution of four characteristic layers during separation. Parameter estimation, parametric sensitivity analysis (PSA), and model-based design of experiments (MBDoE) techniques are employed to acquire precise parameter estimates and propose optimal experimental conditions, thereby enhancing the accuracy of existing models. Experimental data from the literature using oil-in-water dispersions are used for parameter estimation. PSA reveals regions where the model outputs are highly sensitive to the uncertain parameters, which correspond to favourable sampling locations. Manipulating the mixture velocity, the dispersed phase fraction, and the layer heights at the inlet influences these sensitive regions. Clustering measurements around highly sensitive regions in the pipe enhances the information content they provide. MBDoE demonstrates that each of the A-, D-, and E-optimal experimental design criteria improves the expected parameter precision.
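For a two-parameter model the three alphabetic design criteria mentioned above reduce to closed-form expressions on the 2x2 Fisher information matrix F (the matrix below is invented for illustration):

```python
import math

# Invented 2x2 Fisher information matrix for illustration.
F = [[5.0, 1.0], [1.0, 3.0]]

det = F[0][0] * F[1][1] - F[0][1] * F[1][0]   # D-optimal: maximise det(F)
a_crit = (F[0][0] + F[1][1]) / det            # A-optimal: minimise tr(F^-1)

# Eigenvalues of a symmetric 2x2 matrix: mean +/- sqrt(mean^2 - det).
mean = (F[0][0] + F[1][1]) / 2.0
e_crit = mean - math.sqrt(mean ** 2 - det)    # E-optimal: maximise min eigenvalue

print(round(det, 3), round(a_crit, 3), round(e_crit, 3))
```

D-optimality shrinks the volume of the parameter confidence ellipsoid, A-optimality its average axis length, and E-optimality its longest axis.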

    The Development of Microreactor Technology for the Study of Multistep Catalytic Systems and Rapid Kinetic Modelling

    Microreactor technology was applied to the study of catalytic systems because high rates of heat and mass transport, improved safety and ease of automation make microreactors particularly effective research tools in this area. A multistep flow system for the synthesis of benzylacetone from benzyl alcohol via oxidation, aldol condensation and reduction reactions was developed by utilising three micropacked bed reactors and a gas-liquid membrane separator. This reaction had previously been conducted in a batch cascade; the multistep flow system, however, achieved higher yields with lower catalyst contact times, because separating each reaction into its own reactor allowed greater freedom to tailor the operating conditions for each reaction. The multistep system also allowed the catalysts to be studied in a process-wide environment, leading to the identification of significant catalyst inhibition by by-products and co-products from upstream reactions. An automated closed-loop microreactor platform was developed which utilised Model-Based Design of Experiments (MBDoE) algorithms for rapid kinetic modelling of catalytic reactions. The automated platform was first applied to the homogeneous esterification of benzoic acid with ethanol using a sulfuric acid catalyst, where a campaign of steady-state experiments designed by online MBDoE led to the estimation of kinetic parameters with much higher precision than a factorial campaign of experiments. The same reaction was then conducted with MBDoE-designed transient experiments, which dramatically reduced the experimental time required. The reaction was also studied using a heterogeneous Amberlyst-15 catalyst, and by combining factorial designs, practical identifiability tests and MBDoE for model discrimination and parameter precision, a practical kinetic model was identified in just 3 days. The automated platform was finally applied to the oxidation of 5-hydroxymethylfurfural in a micropacked bed reactor with gas-liquid flow using AuPd/TiO2 catalysts; however, due to poor experimental reproducibility, a kinetic model was not identified.
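The trade-off between catalyst contact time and yield noted above can be sketched for an idealised case (first-order kinetics in a plug-flow reactor; the rate constant and contact times are invented, not measured values from this work): conversion follows X = 1 - exp(-k·tau).

```python
import math

# Idealised plug-flow conversion for a first-order reaction
# (k and the contact times tau are invented for illustration).
def pfr_conversion(k, tau):
    return 1.0 - math.exp(-k * tau)

for tau in (5.0, 10.0, 20.0):          # contact times in seconds
    print(tau, round(pfr_conversion(0.15, tau), 3))
```

Diminishing returns in conversion at long contact times are one reason separating the reactions into individually tuned reactors can raise overall yield.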

    Probabilistic Inference for Model Based Control

    Robotic systems are essential for enhancing productivity, automating tasks, and performing hazardous work. Addressing the unpredictability of physical systems, this thesis advances robotic planning and control under uncertainty, introducing learning-based methods for managing uncertain parameters and adapting to changing environments in real time. Our first contribution is a framework using Bayesian statistics for likelihood-free inference of model parameters. This allows employing complex simulators for designing efficient, robust controllers. The method, integrating the unscented transform with a variant of information-theoretic model predictive control, shows better performance in trajectory evaluation than Monte Carlo sampling, easing the computational load in various control and robotics tasks. Next, we reframe robotic planning and control as a Bayesian inference problem, focusing on the posterior distribution of actions and model parameters. An implicit variational inference algorithm, performing Stein Variational Gradient Descent, estimates distributions over model parameters and control inputs in real time. This Bayesian approach effectively handles complex multi-modal posterior distributions, vital for dynamic and realistic robot navigation. Finally, we tackle diversity in high-dimensional spaces. Our approach mitigates the underestimation of uncertainty in posterior distributions, which leads to locally optimal solutions. Using the theory of rough paths, we develop an algorithm for parallel trajectory optimisation, enhancing solution diversity and avoiding mode collapse. This method extends our variational inference approach to trajectory estimation, employing diversity-enhancing kernels and leveraging the path-signature representation of trajectories. Empirical tests, ranging from 2-D navigation to robotic manipulators in cluttered environments, affirm our method's efficiency and show that it outperforms existing alternatives.
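The Stein Variational Gradient Descent update at the heart of the second contribution can be sketched on a 1-D standard normal target (the particle positions, step size and kernel bandwidth below are invented; the thesis applies the idea to posteriors over controls and model parameters):

```python
import math

# One SVGD step with an RBF kernel k(x, y) = exp(-(x - y)^2 / (2 h)):
# each particle follows a kernel-weighted log-density gradient (attraction
# toward high probability) plus the kernel gradient (mutual repulsion).
def svgd_step(xs, eps=0.1, h=1.0):
    grad_logp = [-x for x in xs]                 # d/dx log N(x; 0, 1) = -x
    new = []
    for xi in xs:
        phi = 0.0
        for xj, g in zip(xs, grad_logp):
            k = math.exp(-(xj - xi) ** 2 / (2 * h))
            phi += k * g - (xj - xi) / h * k     # attraction + repulsion
        new.append(xi + eps * phi / len(xs))
    return new

particles = [3.0, -2.5, 0.5, 4.0]
for _ in range(200):
    particles = svgd_step(particles)
print([round(x, 2) for x in particles])          # spread around 0
```

The repulsion term keeps the particles apart, which is what lets the method represent multi-modal posteriors instead of collapsing to a single point estimate.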

    Large-scale variational inference for Bayesian joint regression modelling of high-dimensional genetic data

    Genetic association studies have become increasingly important in understanding the molecular bases of complex human traits. The specific analysis of intermediate molecular traits, via quantitative trait locus (QTL) studies, has recently received much attention, prompted by the advance of high-throughput technologies for quantifying gene, protein and metabolite levels. Of great interest is the detection of weak trans-regulatory effects between a genetic variant and a distal gene product. In particular, hotspot genetic variants, which remotely control the levels of many molecular outcomes, may initiate decisive functional mechanisms underlying disease endpoints. This thesis proposes a Bayesian hierarchical approach for joint analysis of QTL data on a genome-wide scale. We consider a series of parallel sparse regressions combined in a hierarchical manner to flexibly accommodate high-dimensional responses (molecular levels) and predictors (genetic variants), and we present new methods for large-scale inference. Existing approaches have limitations. Conventional marginal screening does not account for local dependencies and association patterns common to multiple outcomes and genetic variants, whereas joint modelling approaches are restricted to relatively small datasets by computational constraints. Our novel framework allows information-sharing across outcomes and variants, thereby enhancing the detection of weak trans and hotspot effects, and implements tailored variational inference procedures that allow simultaneous analysis of data for an entire QTL study, comprising hundreds of thousands of predictors, and thousands of responses and samples. The present work also describes extensions to leverage spatial and functional information on the genetic variants, for example, using predictor-level covariates such as epigenomic marks. 
Moreover, we augment variational inference with simulated annealing and parallel expectation-maximisation schemes in order to enhance exploration of highly multimodal spaces and allow efficient empirical Bayes estimation. Our methods, publicly available as packages implemented in R and C++, are extensively assessed in realistic simulations. Their advantages are illustrated in several QTL applications, including a large-scale proteomic QTL study on two clinical cohorts that highlights novel candidate biomarkers for metabolic disorders.
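The flavour of the sparse-regression computations can be conveyed by the single-predictor special case (all numbers below are invented; the thesis operates jointly on hundreds of thousands of predictors): a spike-and-slab posterior inclusion probability obtained from the standard single-predictor Bayes factor, with noise variance sigma2, slab variance tau2 and prior inclusion probability pi0.

```python
import math

# Posterior inclusion probability for one predictor under a spike-and-slab
# prior: beta ~ N(0, tau2) with probability pi0, else beta = 0.
def inclusion_probability(x, y, sigma2=1.0, tau2=1.0, pi0=0.1):
    sxx = sum(v * v for v in x)
    sxy = sum(v * w for v, w in zip(x, y))
    log_bf = (0.5 * math.log(sigma2 / (sigma2 + tau2 * sxx))
              + tau2 * sxy ** 2 / (2 * sigma2 * (sigma2 + tau2 * sxx)))
    odds = pi0 / (1 - pi0) * math.exp(log_bf)
    return odds / (1 + odds)

x = [0, 1, 2, 1, 0, 2, 1, 1]                       # invented genotype codes
y_signal = [0.1, 0.9, 2.1, 1.0, -0.2, 1.9, 1.1, 0.8]   # tracks x
y_null = [0.1, -0.2, 0.15, 0.05, -0.1, 0.2, -0.15, 0.1]  # unrelated to x
print(round(inclusion_probability(x, y_signal), 2),
      round(inclusion_probability(x, y_null), 2))   # → 0.87 0.03
```

The hierarchical framework in the thesis shares information across many such outcome-variant pairs, which is what boosts power for weak trans and hotspot effects.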

    Structural identifiability of large systems biology models

    A fundamental principle of systems biology is its perpetual need for new technologies that can solve challenging biological questions. This precept will continue to drive the development of novel analytical tools. The virtuous cycle of biological progress can therefore only exist when experts from different disciplines, including biology, chemistry, computer science, engineering, mathematics, and medicine, collaborate. The general opinion, however, is that one of the challenges facing the systems biology community is the lag in the development of such technologies. The topic of structural identifiability in particular has been of interest to the systems biology community, because researchers in this field often face experimental limitations. These limitations, combined with the fact that systems biology models can contain vast numbers of unknown parameters, necessitate an identifiability analysis. In reality, analysing the structural identifiability of systems biology models, even when they contain only a few states and system parameters, may be challenging. As these models increase in size and complexity, this difficulty is exacerbated, and one becomes limited to only a few methods capable of analysing large ordinary differential equation models. In this thesis I study the use of a computationally efficient algorithm, well suited to the analysis of large models, in the model development process. The three related objectives of this thesis are: 1) to develop an accurate method to assess the structural identifiability of large, possibly nonlinear, ordinary differential equation models; 2) to implement this method in the preliminary design of experiments; and 3) to use the method to address the topic of structural unidentifiability. To improve the method's accuracy, I systematically study the role of individual factors, such as the number of experimentally measured sensors, on the sharpness of results. Based on the findings, I propose measures that can improve numerical accuracy. To address the second objective, I introduce an iterative identifiability algorithm that can determine minimal sets of outputs that need to be measured to ensure a model's local structural identifiability. I also illustrate how one could potentially reduce the computational demand of the algorithm, enabling a user to detect minimal output sets of large ordinary differential equation models within minutes. For the last objective, I investigate the role of initial conditions in a model's structural unidentifiability. I show that the method can detect problematic values for large ordinary differential equation models. I illustrate its role in reinstating the local structural identifiability of a model by identifying problematic initial conditions. I also show that the method can provide theoretical suggestions for the reparameterisation of structurally unidentifiable models. The novelty of this work is that the algorithm allows unknown initial conditions to be parameterised; accordingly, reparameterisations requiring the transformation of states associated with unidentifiable initial conditions can easily be obtained. The computational efficiency of the method allows, in particular, for the reparameterisation of large ordinary differential equation models. To conclude, in this thesis I introduce a method that can be used during the model development process in an array of useful applications. These include: 1) determining minimal output sets; 2) reparameterising structurally unidentifiable models; and 3) detecting problematic initial conditions. Each of these applications can be implemented before any experiments are conducted and can play a potential role in the optimisation of the modelling process.
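The core numerical idea, testing local structural identifiability through the rank of an output-sensitivity matrix, can be sketched on toy two-parameter models (both models and the nominal parameter values below are invented; the thesis algorithm is considerably more sophisticated and scales to large systems):

```python
import math

# Rank of the finite-difference sensitivity matrix dy/dp at a nominal
# point: full rank suggests local structural identifiability.
def sensitivity_rank(model, p, times, eps=1e-6, tol=1e-8):
    cols = []
    for i in range(len(p)):
        pp = list(p)
        pp[i] += eps
        cols.append([(model(pp, t) - model(p, t)) / eps for t in times])
    # rank of an n x 2 matrix via its Gram determinant
    g00 = sum(a * a for a in cols[0])
    g11 = sum(b * b for b in cols[1])
    g01 = sum(a * b for a, b in zip(cols[0], cols[1]))
    return 2 if g00 * g11 - g01 ** 2 > tol else 1

def identifiable(p, t):      # y = a * exp(-b t): both a and b visible
    return p[0] * math.exp(-p[1] * t)

def unidentifiable(p, t):    # y = (a b) * exp(-t): only the product visible
    return p[0] * p[1] * math.exp(-t)

times = [0.5, 1.0, 1.5, 2.0]
print(sensitivity_rank(identifiable, [1.0, 0.8], times),
      sensitivity_rank(unidentifiable, [1.0, 0.8], times))  # → 2 1
```

In the second model the sensitivity columns are proportional, so the rank drops and only the product of the two parameters can be estimated, exactly the kind of deficiency a reparameterisation then removes.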

    Behaviour-driven motion synthesis

    Heightened demand for alternatives to human exposure to strenuous and repetitive labour, as well as to hazardous environments, has led to an increased interest in the real-world deployment of robotic agents. Targeted applications require robots to be adept at rapidly synthesising complex motions across a wide range of tasks and environments. To this end, this thesis proposes leveraging abstractions of the problem at hand to ease and speed up the solving process. We formalise abstractions to hint at relevant robotic behaviour for a family of planning problems, and integrate them tightly into the motion synthesis process to make real-world deployment in complex environments practical. We investigate three principal challenges of this proposition. Firstly, we argue that behavioural samples in the form of trajectories are of particular interest for guiding robotic motion synthesis. We formalise a framework with behavioural semantic annotation that enables the storage and bootstrapping of sets of problem-relevant trajectories. Secondly, in the core of this thesis, we study strategies to exploit behavioural samples in task instantiations that differ significantly from those stored in the framework. We present two novel strategies to efficiently leverage offline-computed behavioural samples: (i) online modulation based on geometry-tuned potential fields, and (ii) experience-guided exploration based on trajectory segmentation and malleability. Thirdly, we demonstrate that behavioural hints can be extracted on the fly to tackle highly constrained, ever-changing complex problems for which there is no prior knowledge. We propose a multi-layer planner that first solves a simplified version of the problem at hand, and then uses the result to inform the search for a solution in the constrained space.
Our contributions on efficient motion synthesis via behaviour guidance augment robots' capabilities to deal with more complex planning problems, and do so more effectively than related approaches in the literature by computing better-quality paths in lower response times. We demonstrate our contributions, in both laboratory experiments and field trials, on a spectrum of planning problems and robotic platforms, ranging from high-dimensional humanoids and robotic arms with a focus on autonomous manipulation in realistic environments, to high-dimensional kinematic motion planning with a focus on autonomous safe navigation in unknown environments. While this thesis was motivated by challenges in motion synthesis, we have explored the applicability of our findings in disparate robotic fields, such as grasp and task planning. We have made some of our contributions open-source, hoping they will be of use to the robotics community at large. Funding: the CDT in Robotics and Autonomous Systems at Heriot-Watt University and The University of Edinburgh; the ORCA Hub EPSRC project (EP/R026173/1); the Scottish Informatics and Computer Science Alliance (SICSA).
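The first strategy, online modulation via potential fields, can be conveyed with a classic attractive-repulsive sketch in 2-D (the goal, obstacle, gains and influence radius rho0 below are invented; the thesis tunes such fields to the problem geometry):

```python
import math

# Toy potential-field step: attract toward a goal, repel from one
# obstacle while inside its influence radius rho0 (all values invented).
def step(pos, goal, obstacle, k_att=1.0, k_rep=0.5, rho0=1.0, lr=0.1):
    fx = k_att * (goal[0] - pos[0])              # attractive force
    fy = k_att * (goal[1] - pos[1])
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if d < rho0:                                 # repulsion only near obstacle
        mag = k_rep * (1.0 / d - 1.0 / rho0) / d ** 2
        fx += mag * dx
        fy += mag * dy
    return (pos[0] + lr * fx, pos[1] + lr * fy)

pos, goal, obstacle = (0.0, 0.0), (3.0, 0.0), (1.5, 0.4)
for _ in range(100):
    pos = step(pos, goal, obstacle)
# pos ends near the goal, having been deflected around the obstacle
```

Behavioural samples enter this picture by shaping where and how strongly the field deflects the nominal trajectory.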