
    Computer-aided modeling for efficient and innovative product-process engineering

    Model-based computer-aided product-process engineering has attained increased importance in a number of industries, including pharmaceuticals, petrochemicals, fine chemicals, polymers, biotechnology, food, energy and water. This trend is set to continue due to the substantial benefits computer-aided methods provide. The key prerequisite of computer-aided product-process engineering is, however, the availability of models of different types, forms and application modes. The development of the models required for the systems under investigation tends to be a challenging, time-consuming and therefore cost-intensive task involving numerous steps, expert skills and different modelling tools. The objective of this project is to systematize the process of model development and application, thereby increasing the efficiency of the modeller as well as model quality. The main contributions of this thesis are a generic methodology for the process of model development and application, combining in-depth algorithmic work-flows for the different modelling tasks involved, and the development of a computer-aided modelling framework. This framework is structured, is based on the generic modelling methodology, partially automates the involved work-flows by integrating the required tools, and supports and guides the user through the different work-flow steps.
    Supported modelling tasks are the establishment of the modelling objective, the collection of the required system information, model construction including numerical analysis, derivation of the solution strategy and connection to appropriate solvers, model identification/discrimination, as well as model application for simulation and optimization. The computer-aided modelling framework has been implemented in a user-friendly software tool. A variety of case studies from different areas in chemical and biochemical engineering have been solved to illustrate the application of the generic modelling methodology, the computer-aided modelling framework and the developed software tool.
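    The work-flow enumerated above proceeds in fixed stages. A minimal sketch of how such a staged modelling work-flow could be organised in code; the class, method and step names below are illustrative assumptions, not the thesis framework itself:

# Illustrative sketch of a staged modelling work-flow; all names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class ModellingProject:
    objective: str                                      # e.g. "predict reactor temperature profile"
    system_info: dict = field(default_factory=dict)     # collected data and assumptions
    model: Optional[Callable] = None                    # constructed model equations
    solver: Optional[str] = None                        # selected numerical solver
    parameters: dict = field(default_factory=dict)      # identified parameter values

    def construct_model(self, equations: Callable) -> None:
        # Model construction step; a full framework would also run numerical
        # analysis (degrees of freedom, index, scaling) here.
        self.model = equations

    def select_solver(self, solver_name: str) -> None:
        # Derive a solution strategy and connect to an appropriate solver.
        self.solver = solver_name

    def identify(self, data, estimator) -> None:
        # Model identification/discrimination against experimental data.
        self.parameters = estimator(self.model, data)

    def apply(self, scenario):
        # Model application for simulation or optimization.
        return self.model(scenario, **self.parameters)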

    Nonparametric goodness-of-fit testing for parametric covariate models in pharmacometric analyses

    The characterization of covariate effects on model parameters is a crucial step during pharmacokinetic/pharmacodynamic analyses. While covariate selection criteria have been studied extensively, the choice of the functional relationship between covariates and parameters has received much less attention. Often, a simple class of covariate-to-parameter relationships (linear, exponential, etc.) is chosen ad hoc or based on domain knowledge, and statistical evaluation is limited to the comparison of a small number of such classes. Goodness-of-fit testing against a nonparametric alternative provides a more rigorous approach to covariate model evaluation, but no such test has been proposed so far. In this manuscript, we derive and evaluate nonparametric goodness-of-fit tests of parametric covariate models, taken as the null hypothesis, against a kernelized Tikhonov-regularized alternative, transferring concepts from statistical learning to the pharmacological setting. The approach is evaluated in a simulation study on the estimation of the age-dependent maturation effect on the clearance of a monoclonal antibody. Scenarios of varying data sparsity and residual error are considered. The goodness-of-fit test correctly identified misspecified parametric models with high power in the relevant scenarios. The case study provides a proof of concept of the feasibility of the proposed approach, which is envisioned to be beneficial for applications that lack well-founded covariate models.
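    A minimal sketch of the idea behind such a test (not the authors' exact construction): take the fitted parametric covariate model as the null, fit a kernelized Tikhonov-regularized (kernel ridge) alternative to its residuals, and use a permutation test to judge whether the alternative explains significantly more structure. All function names, kernel choices and hyperparameters below are illustrative assumptions.

# Sketch of a nonparametric goodness-of-fit test for a parametric covariate model.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def gof_test(age, clearance, null_model, n_perm=1000, alpha=1.0, gamma=0.5, seed=0):
    """Test a parametric covariate model (null) against a kernel ridge alternative."""
    rng = np.random.default_rng(seed)
    resid = clearance - null_model(age)                  # residuals under the null
    krr = KernelRidge(kernel="rbf", alpha=alpha, gamma=gamma)
    krr.fit(age[:, None], resid)                         # alternative fits leftover structure
    stat = np.mean(krr.predict(age[:, None]) ** 2)       # statistic: explained residual signal

    # Permutation null: shuffling residuals destroys any covariate-residual relation
    perm_stats = np.empty(n_perm)
    for b in range(n_perm):
        krr_b = KernelRidge(kernel="rbf", alpha=alpha, gamma=gamma)
        krr_b.fit(age[:, None], rng.permutation(resid))
        perm_stats[b] = np.mean(krr_b.predict(age[:, None]) ** 2)
    p_value = np.mean(perm_stats >= stat)
    return stat, p_value

# Example null: an exponential maturation model, CL(age) = CL_adult * (1 - exp(-k * age))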

    Regularization and Bayesian Learning in Dynamical Systems: Past, Present and Future

    Regularization and Bayesian methods for system identification have been repopularized in recent years and have proved to be competitive with classical parametric approaches. In this paper we shall make an attempt to illustrate how the use of regularization in system identification has evolved over the years, starting from the early contributions in the Automatic Control as well as the Econometrics and Statistics literature. In particular we shall discuss some fundamental issues, such as compound estimation problems and exchangeability, which play an important role in regularization and Bayesian approaches, as also illustrated in early publications in Statistics. The historical and foundational issues will be given more emphasis (and space), at the expense of the more recent developments, which are only briefly discussed. The main reason for such a choice is that, while the recent literature is readily available and surveys have already been published on the subject, in the author's opinion a clear link with past work had not been completely clarified. Comment: Plenary Presentation at the IFAC SYSID 2015. Submitted to Annual Reviews in Control.
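    As a concrete illustration of the regularization ideas surveyed, here is a minimal sketch of kernel-based regularized FIR impulse-response estimation with an exponentially decaying prior (in the spirit of stable-spline/TC kernels); the kernel form and hyperparameters are illustrative assumptions, not taken from the paper.

# Sketch: Tikhonov-regularized FIR identification with a decay-encoding kernel prior.
import numpy as np

def regularized_fir(u, y, n=50, lam=1.0, beta=0.8):
    """Estimate an impulse response g (length n) from input u and output y,
    regularizing towards smooth, exponentially decaying responses."""
    N = len(y)
    # Regressor matrix: y[t] ~= sum_k g[k] * u[t - k]
    Phi = np.zeros((N, n))
    for k in range(n):
        Phi[k:, k] = u[:N - k]
    # Prior covariance ("TC-like" kernel): K[i, j] = beta ** max(i, j)
    idx = np.arange(n)
    K = beta ** np.maximum.outer(idx, idx)
    # Regularized least squares: g = (Phi^T Phi + lam * K^{-1})^{-1} Phi^T y
    return np.linalg.solve(Phi.T @ Phi + lam * np.linalg.inv(K), Phi.T @ y)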

    A new framework for assessing subject-specific whole brain circulation and perfusion using MRI-based measurements and a multi-scale continuous flow model

    A large variety of severe medical conditions involve alterations in microvascular circulation. Hence, measurement or simulation of circulation and perfusion has considerable clinical value and can be used for diagnostics, evaluation of treatment efficacy, and for surgical planning. However, the accuracy of traditional tracer-kinetic one-compartment models is limited due to scale dependency. As a remedy, we propose a scale-invariant mathematical framework for simulating whole-brain perfusion. The suggested framework is based on a segmentation of anatomical geometry down to imaging voxel resolution. Large vessels in the arterial and venous network are identified from time-of-flight (ToF) and quantitative susceptibility mapping (QSM). Macro-scale flow in the large-vessel network is accurately modelled using the Hagen-Poiseuille equation, whereas capillary flow is treated as two-compartment porous-media flow. Macro-scale flow is coupled with micro-scale flow by a spatially distributing support function in the terminal endings. Perfusion is defined as the transition of fluid from the arterial to the venous compartment. We demonstrate a whole-brain simulation of tracer propagation on a realistic geometric model of the human brain, where the model comprises distinct areas of grey and white matter, as well as large vessels in the arterial and venous vascular network. Our proposed framework is an accurate and viable alternative to traditional compartment models, with high relevance for simulation of brain perfusion and also for restoration of field parameters in clinical brain perfusion applications.
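    A minimal sketch of the macro-scale building block referred to above, the Hagen-Poiseuille relation for flow through a single vessel segment; the example values (vessel radius, length, pressure drop, blood viscosity) are hypothetical, not taken from the paper.

# Sketch: Hagen-Poiseuille volumetric flow through one cylindrical vessel segment.
import math

def poiseuille_flow(delta_p, radius, length, viscosity=3.5e-3):
    """Q = pi * r^4 * dP / (8 * mu * L), all quantities in SI units (m, Pa, Pa*s)."""
    return math.pi * radius ** 4 * delta_p / (8.0 * viscosity * length)

# Hypothetical example: a 2 mm segment of 50 um radius with a 1 kPa pressure drop.
q = poiseuille_flow(delta_p=1.0e3, radius=50e-6, length=2e-3)
print(f"Q = {q:.3e} m^3/s")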

    Carbon transit through degradation networks

    The decay of organic matter in natural ecosystems is controlled by a network of biologically, physically, and chemically driven processes. Decomposing organic matter is often described as a continuum that transforms and degrades over a wide range of rates, but it is difficult to quantify this heterogeneity in models. Most models of carbon degradation consider a network of only a few organic matter states that transform homogeneously at a single rate. These models may fail to capture the range of residence times of carbon in the soil organic matter continuum. Here we assume that organic matter is distributed among a continuous network of states that transform with stochastic, heterogeneous kinetics. We pose and solve an inverse problem in order to identify the rates of carbon exiting the underlying degradation network (exit rates) and apply this approach to plant matter decay throughout North America. This approach provides estimates of carbon retention in the network without knowing the details of underlying state transformations. We find that the exit rates are approximately lognormal, suggesting that carbon flow through a complex degradation network can be described with just a few parameters. These results indicate that the serial and feedback processes in natural degradation networks can be well approximated by a continuum of parallel decay rates. National Science Foundation (U.S.) (Grant EAR-0420592); United States National Aeronautics and Space Administration (Grant NNA08CN84A)
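    A minimal sketch of the consequence stated above: if carbon exits the network through a continuum of parallel first-order pathways with rates k drawn from a lognormal distribution, the fraction retained at time t is the average of exp(-k t) over that distribution. The lognormal parameters below are illustrative only, not fitted values from the paper.

# Sketch: carbon retention under a lognormal continuum of parallel decay rates.
import numpy as np

def carbon_retained(t, mu=-2.0, sigma=1.5, n_samples=100_000, seed=0):
    """R(t) = E[exp(-k * t)] with k ~ LogNormal(mu, sigma), k in 1/yr, t in yr."""
    rng = np.random.default_rng(seed)
    k = rng.lognormal(mean=mu, sigma=sigma, size=n_samples)
    return np.exp(-np.asarray(t)[..., None] * k).mean(axis=-1)

# Retention curve over 50 years with the illustrative parameters
times = np.linspace(0.0, 50.0, 6)
print(dict(zip(times.round(1), carbon_retained(times).round(3))))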

    ROCKETSHIP: a flexible and modular software tool for the planning, processing and analysis of dynamic MRI studies

    Background: Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a promising technique to characterize pathology and evaluate treatment response. However, analysis of DCE-MRI data is complex and benefits from concurrent analysis of multiple kinetic models and parameters. Few software tools are currently available that specifically focus on DCE-MRI analysis with multiple kinetic models. Here, we developed ROCKETSHIP, an open-source, flexible and modular software for DCE-MRI analysis. ROCKETSHIP incorporates analyses with multiple kinetic models, including data-driven nested model analysis. Results: ROCKETSHIP was implemented using the MATLAB programming language. Robustness of the software to provide reliable fits using multiple kinetic models is demonstrated using simulated data. Simulations also demonstrate the utility of the data-driven nested model analysis. Applicability of ROCKETSHIP to both preclinical and clinical studies is shown using DCE-MRI studies of the human brain and a murine tumor model. Conclusion: A DCE-MRI software suite was implemented and tested using simulations. Its applicability to both preclinical and clinical datasets is shown. ROCKETSHIP was designed to be easily accessible for the beginner, but flexible enough for changes or additions to be made by the advanced user as well. The availability of a flexible analysis tool will aid future studies using DCE-MRI.
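    ROCKETSHIP itself is implemented in MATLAB; as an illustration of the kind of kinetic-model fitting such a tool performs, here is a minimal Python sketch of fitting the standard Tofts model to a tissue concentration curve. The arterial input function and parameter values are hypothetical, and this is not ROCKETSHIP's actual code.

# Sketch: standard Tofts model, Ct(t) = Ktrans * int_0^t Cp(tau) * exp(-(Ktrans/ve)*(t - tau)) dtau
import numpy as np
from scipy.optimize import curve_fit

def tofts(t, ktrans, ve, t_aif, c_aif):
    ct = np.zeros_like(t)
    for i, ti in enumerate(t):
        mask = t_aif <= ti
        tau = t_aif[mask]
        integrand = c_aif[mask] * np.exp(-(ktrans / ve) * (ti - tau))
        ct[i] = ktrans * np.trapz(integrand, tau)
    return ct

t = np.linspace(0.0, 5.0, 60)                    # minutes
c_aif = 5.0 * t * np.exp(-2.0 * t)               # hypothetical arterial input function
c_obs = tofts(t, 0.25, 0.30, t, c_aif) + np.random.normal(0.0, 0.005, t.size)

popt, _ = curve_fit(lambda tt, kt, ve: tofts(tt, kt, ve, t, c_aif),
                    t, c_obs, p0=[0.1, 0.2], bounds=(1e-6, [2.0, 1.0]))
print(f"Ktrans = {popt[0]:.3f} /min, ve = {popt[1]:.3f}")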

    Structuring the Unstructured: Unlocking pharmacokinetic data from journals with Natural Language Processing

    The development of a new drug is an increasingly expensive and inefficient process. Many drug candidates are discarded due to pharmacokinetic (PK) complications detected at clinical phases. It is critical to accurately estimate the PK parameters of new drugs before they are tested in humans, since these parameters will determine their efficacy and safety outcomes. Preclinical predictions of PK parameters are largely based on prior knowledge from other compounds, but much of this potentially valuable data is currently locked in the format of scientific papers. With an ever-increasing amount of scientific literature, automated systems are essential to exploit this resource efficiently. Developing text mining systems that can structure PK literature is critical to improving the drug development pipeline. This thesis studied the development and application of text mining resources to accelerate the curation of PK databases. Specifically, the development of novel corpora and suitable natural language processing architectures in the PK domain were addressed. The work presented focused on machine learning approaches that can model the high diversity of PK studies, parameter mentions, numerical measurements, units, and contextual information reported across the literature. Additionally, architectures and training approaches that could efficiently deal with the scarcity of annotated examples were explored. The chapters of this thesis tackle the development of suitable models and corpora to (1) retrieve PK documents, (2) recognise PK parameter mentions, (3) link PK entities to a knowledge base and (4) extract relations between parameter mentions, estimated measurements, units and other contextual information. Finally, the last chapter of this thesis studied the feasibility of the whole extraction pipeline to accelerate tasks in drug development research. The results from this thesis exhibited the potential of text mining approaches to automatically generate PK databases that can aid researchers in the field and ultimately accelerate the drug development pipeline. Additionally, the thesis presented contributions to biomedical natural language processing by developing suitable architectures and corpora for multiple tasks, tackling novel entities and relations within the PK domain.
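    A minimal sketch of the flavour of extraction task the thesis addresses: pairing a PK parameter mention with a nearby numerical value and unit. A real system would rely on trained named-entity recognition and relation-extraction models; the regular expressions and example sentence below are illustrative assumptions only.

# Sketch: rule-based pairing of PK parameter mentions with values and units.
import re

PARAM = r"(clearance|CL|half-life|t1/2|volume of distribution|Vd|AUC|Cmax)"
VALUE = r"(\d+(?:\.\d+)?)"
UNIT = r"(L/h|L/kg|mL/min|ng·h/mL|ng/mL|h|L)"

# Parameter mention followed (within the same clause) by a value and a unit
pattern = re.compile(PARAM + r"[^.;]{0,80}?" + VALUE + r"\s*" + UNIT, re.IGNORECASE)

sentence = ("The mean clearance of the drug was 4.2 L/h and the terminal "
            "half-life was approximately 12 h in healthy volunteers.")

for m in pattern.finditer(sentence):
    param, value, unit = m.groups()
    print({"parameter": param, "value": float(value), "unit": unit})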