Bond graph based sensitivity and uncertainty analysis modelling for micro-scale multiphysics robust engineering design
Components within micro-scale engineering systems are often at the limits of commercial miniaturization, and this can cause unexpected behavior and variation in performance. As such, modelling and analysis of system robustness play an important role in product development. Here, schematic bond graphs are used as a front end in a sensitivity-analysis-based strategy for modelling robustness in multiphysics micro-scale engineering systems. As an example, the analysis is applied to a behind-the-ear (BTE) hearing aid.
By using bond graphs to model power flow through components in the different physical domains of the hearing aid, a set of differential equations describing the system dynamics is assembled. Based on these equations, sensitivity analysis is used to approximate the nature and the sources of output uncertainty during system operation. These calculations represent a robustness evaluation of the current hearing aid design and offer a means of identifying potential improvements for multiphysics system designs through key parameter identification.
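The sensitivity calculations described above can be illustrated with a minimal sketch: local, normalised sensitivities of a scalar output with respect to model parameters, computed by central finite differences. The mass-spring-damper model, its parameter values, and the choice of output are illustrative stand-ins for one energy domain of a bond graph model, not the actual hearing-aid equations.

```python
def simulate(k, c, m=1.0, x0=1.0, dt=1e-3, t_end=5.0):
    """Forward-Euler integration of m*x'' + c*x' + k*x = 0.
    Returns the peak |x| after t = 1 s as a stand-in output of interest
    (e.g. a transducer displacement); k, c, m are illustrative parameters."""
    x, v, t = x0, 0.0, 0.0
    peak = 0.0
    while t < t_end:
        a = -(c * v + k * x) / m
        x, v, t = x + dt * v, v + dt * a, t + dt
        if t > 1.0:
            peak = max(peak, abs(x))
    return peak

def normalised_sensitivities(f, params, h=1e-4):
    """Central-difference normalised sensitivities S_i = (p_i / f) * df/dp_i."""
    base = f(*params)
    sens = []
    for i, p in enumerate(params):
        up = list(params); up[i] = p * (1 + h)
        dn = list(params); dn[i] = p * (1 - h)
        dfdp = (f(*up) - f(*dn)) / (2 * p * h)
        sens.append(p * dfdp / base)
    return sens

S_k, S_c = normalised_sensitivities(simulate, [4.0, 0.4])
# S_c < 0: more damping lowers the late-time peak amplitude
```

Ranking parameters by |S_i| is what singles out the "key parameters" on which a robustness improvement effort should focus.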
Joining Forces of Bayesian and Frequentist Methodology: A Study for Inference in the Presence of Non-Identifiability
Increasingly complex applications involve large datasets in combination with
non-linear and high dimensional mathematical models. In this context,
statistical inference is a challenging issue that calls for pragmatic
approaches that take advantage of both Bayesian and frequentist methods. The
elegance of Bayesian methodology is founded in the propagation of information
content provided by experimental data and prior assumptions to the posterior
probability distribution of model predictions. However, for complex
applications, experimental data and prior assumptions may constrain the
posterior probability distribution only insufficiently. In these situations,
Bayesian Markov chain Monte Carlo sampling can be infeasible. From a
frequentist point of view, insufficient experimental data and prior
assumptions can be interpreted as non-identifiability. The profile likelihood
approach can detect non-identifiability and resolve it iteratively through
experimental design. It therefore allows one to constrain the posterior
probability distribution more tightly until Markov chain Monte Carlo sampling
can be used safely. Using an application from cell biology, we compare both
methods and show that a successive application of both facilitates a realistic
assessment of uncertainty in model predictions.
Comment: Article to appear in Phil. Trans. Roy. Soc.
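The link between the profile likelihood and non-identifiability can be sketched on a deliberately over-parameterised toy model y = a·b·x, where only the product a·b is identifiable. The data and grids below are hypothetical, not the cell-biology application of the paper. Profiling a by minimising the residual sum of squares over the nuisance parameter b yields an essentially flat profile, which is the signature of structural non-identifiability:

```python
# toy data generated from y ≈ theta*x with theta = a*b ≈ 2
# (hypothetical numbers, not the cell-biology model of the paper)
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

def sse(a, b):
    """Residual sum of squares for the over-parameterised model y = a*b*x."""
    return sum((y - a * b * x) ** 2 for x, y in zip(xs, ys))

def profile(a, b_grid):
    """Profile of parameter a: minimise over the nuisance parameter b."""
    return min(sse(a, b) for b in b_grid)

b_grid = [0.01 * i for i in range(1, 1001)]        # b in (0, 10]
prof = {a: profile(a, b_grid) for a in (0.5, 1.0, 2.0, 4.0)}

# the profile is flat in a: every value of a is compensated by b,
# so a alone is structurally non-identifiable
flatness = max(prof.values()) - min(prof.values())
```

In the paper's workflow, such a flat profile is the trigger for a new experiment (e.g. measuring a or b directly) before attempting MCMC sampling.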
Likelihood based observability analysis and confidence intervals for predictions of dynamic models
Mechanistic dynamic models of biochemical networks such as Ordinary
Differential Equations (ODEs) contain unknown parameters like the reaction rate
constants and the initial concentrations of the compounds. The large number of
parameters as well as their nonlinear impact on the model responses hamper the
determination of confidence regions for parameter estimates. At the same time,
classical approaches for translating parameter uncertainty into confidence
intervals for model predictions are hardly feasible.
In this article it is shown that a so-called prediction profile likelihood
yields reliable confidence intervals for model predictions, despite arbitrarily
complex and high-dimensional shapes of the confidence regions for the estimated
parameters. Prediction confidence intervals of the dynamic states allow a
data-based observability analysis. The approach reduces the problem of
sampling a high-dimensional parameter space to evaluating one-dimensional
prediction spaces. The method remains applicable in the presence of
non-identifiable parameters; the resulting insufficiently specified model
predictions can be interpreted as non-observability. Moreover, a validation
profile likelihood is introduced that should be applied when noisy validation
experiments are to be interpreted.
The properties and applicability of the prediction and validation profile
likelihood approaches are demonstrated by two examples, a small and instructive
ODE model describing two consecutive reactions, and a realistic ODE model for
the MAP kinase signal transduction pathway. The presented general approach
constitutes a concept for observability analysis and for generating reliable
confidence intervals of model predictions, not only, but especially suitable
for mathematical models of biological systems.
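The core idea, evaluating a one-dimensional prediction space instead of sampling the parameter space, can be sketched for a toy one-parameter model y = θx with known noise level. All data, the noise level, and the prediction point are illustrative assumptions, not taken from the article's ODE examples. The prediction profile likelihood at a new point x_new is the minimum of the χ² objective subject to the prediction being fixed, and the 95% prediction confidence interval collects all predictions within a χ²(1 dof) threshold of the optimum:

```python
# toy model y = theta*x with known noise sd sigma (illustrative values)
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
sigma = 0.2
x_new = 5.0                        # prediction point

def chi2(theta):
    return sum(((y - theta * x) / sigma) ** 2 for x, y in zip(xs, ys))

theta_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
chi2_min = chi2(theta_hat)

# prediction profile likelihood: PPL(z) = min chi2 subject to theta*x_new = z;
# for this one-parameter model the constraint pins theta = z / x_new
threshold = chi2_min + 3.84        # 95% threshold, chi-squared with 1 dof
z_grid = [theta_hat * x_new + 0.001 * i for i in range(-1000, 1001)]
ci = [z for z in z_grid if chi2(z / x_new) <= threshold]
lo, hi = min(ci), max(ci)          # prediction confidence interval at x_new
```

For a multi-parameter ODE model the constrained minimisation is no longer a substitution, but the scan over the scalar prediction z stays one-dimensional, which is exactly the dimensionality reduction the abstract describes.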
Division of labor by dual feedback regulators controls JAK2/STAT5 signaling over broad ligand range
Quantitative analysis of time-resolved data in primary erythroid progenitor cells reveals that a dual negative transcriptional feedback mechanism underlies the ability of STAT5 to respond to the broad spectrum of physiologically relevant Epo concentrations.
Modeling the Quorum Sensing Signaling Regulatory Network in Vibrio fischeri
Quorum sensing is a mechanism by which bacteria can sense the levels of signaling molecules and respond by controlling the expression of target genes. The marine bacterium Vibrio fischeri has been extensively studied as a model for the quorum sensing mechanism in Gram-negative bacteria. To systematically investigate the quorum sensing regulatory network in V. fischeri, a conceptual model was first established based on existing knowledge. Next, molecular microbiology and bioinformatics techniques were employed to characterize the system both qualitatively and quantitatively. These techniques included quantification of 3-oxo-C6-HSL concentrations in the cell culture supernatant using a bioluminescent bioreporter strain of E. coli, measurement of the messenger RNA levels of the quorum sensing genes (luxI, luxR, ainS and litR) using reverse transcription-polymerase chain reaction (RT-PCR), and sequence analysis of the promoter regions of quorum sensing related genes. A mathematical model composed of ordinary differential equations was created to characterize the regulatory process. Simulated annealing was used to minimize the weighted discrepancy between the model output and the experimental data, with correlations ranging from 0.85 to 0.99. This study mathematically modeled the comprehensive quorum sensing regulatory system, which encompasses 3-oxo-C6-HSL, the lux operon (luxR and luxICDABEG), C8-HSL, ainS, ainR, luxO, and litR, and can benefit the understanding of dozens of similar quorum sensing regulatory systems.
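The fitting step, minimising the discrepancy between ODE output and data by simulated annealing, can be sketched in miniature. The model below is a single logistic equation standing in for the full quorum sensing network, and all rates, data, and annealing settings are illustrative assumptions rather than values from the study:

```python
import math, random

def simulate(r, K, m0=0.1, dt=0.05, n=100):
    """Euler integration of a logistic ODE dm/dt = r*m*(1 - m/K) — a one-state
    stand-in for the quorum sensing network, with illustrative parameters."""
    m, traj = m0, []
    for _ in range(n):
        m += dt * r * m * (1 - m / K)
        traj.append(m)
    return traj

# pseudo-noisy 'measurements' around the true trajectory (r=1.2, K=4.0)
data = [m * (1 + 0.02 * math.sin(7 * i))
        for i, m in enumerate(simulate(1.2, 4.0))]

def cost(p):
    r, K = p
    return sum((d - m) ** 2 for d, m in zip(data, simulate(r, K)))

def anneal(p0, T=1.0, cooling=0.995, steps=4000, seed=0):
    """Plain simulated annealing: Gaussian proposals, Metropolis acceptance,
    geometric cooling schedule."""
    rng = random.Random(seed)
    cur, cur_c = list(p0), cost(p0)
    best, best_c = list(cur), cur_c
    for _ in range(steps):
        cand = [max(1e-3, v + rng.gauss(0.0, 0.05)) for v in cur]
        c = cost(cand)
        if c < cur_c or rng.random() < math.exp(-(c - cur_c) / T):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = list(cand), c
        T *= cooling
    return best, best_c

(fit_r, fit_K), fit_cost = anneal([0.5, 2.0])
```

The appeal of annealing here, as in the study, is that it needs only cost evaluations and can escape local minima of a nonlinear ODE objective, at the price of many forward simulations.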
Model-based analysis as a tool for intensification of a biocatalytic process in a microreactor
Chiral amines are highly valuable functionalised molecules which play an important role in the pharmaceutical, agrochemical and chemical industries. To produce these interesting compounds, chemical synthesis routes are typically used. However, these chemical methods are operated at high temperatures and pressures, are air- and water-sensitive, and require highly flammable metal-organic reagents or heavy metals. The chemical approach thus requires specialised (and expensive) equipment and has a large environmental impact. To overcome these drawbacks, enzymatic processes have recently received increased attention for the production of chiral amines. However, their low productivity hampers widespread industrial implementation.
In this dissertation, the aim is to make the enzymatic production of chiral amines more productive by using model-based analyses to build process knowledge. First, the kinetic behaviour of the enzyme (i.e. ω-transaminase) is identified using an optimal experimental design approach, allowing a more accurate estimation of the kinetic parameters from the experimental data. Second, a generic methodology is developed to identify mass transfer limitations in microreactors. These mass transfer limitations reduce reactor performance and should therefore be minimised. The generic methodology makes it possible to estimate the productivity losses caused by these mass transfer limitations. Finally, the estimation of kinetic parameters under mass-transfer-limited conditions is investigated. It is shown that accurate kinetic parameter estimates can be obtained under such conditions, but this depends strongly on the experimental design.
The results of this dissertation make it possible to speed up the kinetic characterisation of enzymes and to improve overall productivity by reducing mass transfer limitations.
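The kind of productivity loss attributed to mass transfer limitations can be sketched with a standard external film model. This is an illustrative construction, not the dissertation's generic methodology, and the kinetic constants are hypothetical: at steady state the supply through the film, k_L·a·(S_bulk − S_surf), balances Michaelis-Menten consumption at the surface, and the fractional loss is one minus the ratio of observed to intrinsic rate.

```python
def intrinsic_rate(s, vmax, km):
    """Michaelis-Menten rate at substrate concentration s (illustrative units)."""
    return vmax * s / (km + s)

def observed_rate(s_bulk, vmax, km, kla, tol=1e-12):
    """Steady-state rate with an external film resistance: solve
    kla*(s_bulk - s_surf) = rate(s_surf) for s_surf by bisection."""
    lo, hi = 0.0, s_bulk
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kla * (s_bulk - mid) > intrinsic_rate(mid, vmax, km):
            lo = mid            # supply exceeds consumption: surface conc. higher
        else:
            hi = mid
    return intrinsic_rate(0.5 * (lo + hi), vmax, km)

vmax, km, s_bulk = 1.0, 0.5, 2.0                    # hypothetical constants
r_kin = intrinsic_rate(s_bulk, vmax, km)            # no transfer limitation
r_obs = observed_rate(s_bulk, vmax, km, kla=0.6)    # finite k_L*a
loss = 1.0 - r_obs / r_kin                          # fractional productivity loss
```

Comparing r_obs against r_kin for the measured k_L·a is one simple way to decide whether a microreactor operates in the kinetic or the transfer-limited regime.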
Simplification of mathematical models representing cell cultures
The use of living cells in an industrial process takes advantage of the inherent complexity of living systems to accomplish complex tasks whose underlying mechanisms are sometimes only partially understood. Whether for the production of biomass, the production of molecules of interest, or the breakdown of undesirable molecules, these processes rely on the many reactions that make up cellular metabolism. To describe the evolution of these systems, mathematical models composed of sets of differential equations are used. As knowledge of metabolism has grown, the mathematical models representing it have become more complex. The level of complexity required to explain the phenomena at play in a specific process is difficult to define. Thus, when attempting to model a new process, selecting the model to use can be problematic. One attractive option is to select a model from the literature that is suited to the process at hand. The information contained in the model must then be evaluated against the phenomena observable under the operating conditions. Models from the literature are often overparameterised for use under the operating conditions of the targeted processes, which causes parameter identifiability problems. Moreover, the full set of state variables used in the model is not necessarily measured under normal operating conditions. The objective of this project is to isolate the usable information contained in such models through their methodical simplification. Indeed, simplifying the models allows a better understanding of the dynamics at work in the process. This project defined and evaluated three methods for simplifying mathematical models describing a cell culture process.
The first method is based on applying criteria to the different elements of the model, the second on using an Akaike-type information criterion, and the third considers model-order reduction by removing state variables. The results of these simplification methods are presented using four cell culture models from the literature.
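The Akaike-type route trades goodness of fit against parameter count. A minimal sketch, with illustrative data rather than any of the four cited cell culture models, compares a constant and a linear candidate model using AIC = n·ln(RSS/n) + 2k for Gaussian errors:

```python
import math

# illustrative data: linear trend plus a small non-linear ripple
# (hypothetical, not one of the four cited cell culture models)
xs = [float(i) for i in range(10)]
ys = [0.5 * x + 1.0 + 0.1 * math.sin(3.0 * x) for x in xs]
n = len(xs)

def aic(rss, k):
    """Akaike information criterion for Gaussian errors: n*ln(RSS/n) + 2k."""
    return n * math.log(rss / n) + 2 * k

# candidate 1: constant y = c (one parameter)
c = sum(ys) / n
rss1 = sum((y - c) ** 2 for y in ys)

# candidate 2: linear y = a*x + b (two parameters)
xbar, ybar = sum(xs) / n, sum(ys) / n
a = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
b = ybar - a * xbar
rss2 = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))

# lower AIC wins: extra parameters must pay for themselves
best = "linear" if aic(rss2, 2) < aic(rss1, 1) else "constant"
```

Applied to a cell culture model, the same criterion penalises each retained kinetic parameter, so terms that do not measurably improve the fit under the targeted operating conditions are candidates for removal.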