
    Semi-blind robust identification and robust control approach to personalized anemia management.

    The homeostatic blood hemoglobin (Hb) content of a healthy individual ranges from 14 to 18 g/dL for a male and 12 to 16 g/dL for a female. This quantity provides an estimate of the red blood cell (RBC) count in circulation at any given moment. RBCs contain hemoglobin, the protein that transports oxygen from the lungs to other tissues in the body; their production is regulated by the kidney through a process known as erythropoiesis, in which erythropoietin is secreted in response to hypoxia. In this regard, the kidneys act not only as a controller but also as a sensor in regulating RBC levels. Patients with chronic kidney disease (CKD) have dysfunctional kidneys that compromise these fundamental functions; consequently, anemia develops. Anemic CKD patients have low Hb levels that must be controlled and properly regulated to the appropriate therapeutic range. Until the discovery of recombinant human erythropoietin (EPO) over three decades ago, treatment of anemia primarily involved repeated blood transfusions, a process known to be associated with several other health-related complications. This discovery resulted in a paradigm shift in anemia management from blood transfusions to dosage therapies. The main objective of anemia management with EPO is to raise a patient's hemoglobin level from a low value to the therapeutic range defined by the National Kidney Foundation-Kidney Disease Outcomes Quality Initiative (NKF-KDOQI), 10-12 g/dL, while avoiding responses beyond 14 g/dL to prevent other complications associated with EPO medication. It is therefore imperative that clinicians balance dosage efficacy and toxicity in anemia management therapies. At most treatment facilities, protocols are developed to conform to NKF-KDOQI recommendations. These protocols are generally based on EPO package inserts and the expected Hb responses of the average patient. 
The inevitable variability within the patient group makes this “one-size-fits-all” dosing scheme non-optimal at best, and potentially dangerous for those groups of patients that do not adhere to the notion of an expected “average” response. A dosing strategy tailored to the individual patient's response to EPO medication could provide a better alternative to current treatment methods. An objective of this work is to develop EPO dosing strategies tailored to individual patients using robust identification techniques and modern feedback control methods. First, a unique model is developed from the Hb responses and EPO doses of the individual patient using semi-blind robust identification techniques. This provides a nominal model and quantitative information on model uncertainty that accounts for patient dynamics not captured in the modeling process, in the framework of generalized interpolation theory. Then, from the derived nominal model and the associated uncertainty information, a robust controller is designed via H∞-synthesis methods to provide new dosing strategies for individual patients. H∞ control theory minimizes the influence of an unknown worst-case disturbance on a system. Finally, a framework is provided to strategize dosing protocols for newly admitted patients.
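The identification step can be illustrated with a much-simplified sketch: fitting a first-order, discrete-time Hb response model to dose/response records by least squares. The model form, parameter values, and data below are invented for illustration; the thesis's actual semi-blind identification and its uncertainty quantification are considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical first-order Hb response model (a simplification, not the
# thesis's model): Hb[k+1] = a*Hb[k] + b*u[k] + c, with u the EPO dose.
a_true, b_true, c_true = 0.9, 0.02, 1.0
n = 60
u = rng.uniform(0, 30, n)           # weekly EPO dose (arbitrary units)
hb = np.empty(n + 1)
hb[0] = 9.0                         # anemic starting hemoglobin, g/dL
for k in range(n):
    hb[k + 1] = a_true * hb[k] + b_true * u[k] + c_true + rng.normal(0, 0.05)

# Least-squares fit of (a, b, c) from the dose/response records
X = np.column_stack([hb[:-1], u, np.ones(n)])
theta, *_ = np.linalg.lstsq(X, hb[1:], rcond=None)
a_hat, b_hat, c_hat = theta

# Residual spread gives a crude bound on unmodeled patient dynamics
resid = hb[1:] - X @ theta
print(a_hat, b_hat, c_hat, resid.std())
```

The residual statistics stand in, very loosely, for the quantitative uncertainty information that the semi-blind framework attaches to the nominal model.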

    A radial basis function method for solving optimal control problems.

    This work presents two direct methods based on radial basis function (RBF) interpolation and arbitrary discretization for solving continuous-time optimal control problems: the RBF Collocation Method and the RBF-Galerkin Method. Both methods take advantage of choosing any global RBF as the interpolant function and any arbitrary points (meshless or on a mesh) as the discretization points. In the first approach, the RBF collocation method, states and controls are parameterized using a global RBF, and constraints are satisfied at arbitrary discrete nodes (collocation points) to convert the continuous-time optimal control problem into a nonlinear programming (NLP) problem. The resulting NLP is quite sparse and can be efficiently solved by well-developed sparse solvers. The second proposed method, called the RBF-Galerkin method, is a hybrid approach combining RBF interpolation with Galerkin error projection: a Galerkin projection is applied to the residuals of the optimal control problem, making them orthogonal to every member of the RBF basis. In addition, an RBF-Galerkin costate mapping theorem is developed, describing an exact equivalency between the Karush–Kuhn–Tucker (KKT) conditions of the NLP resulting from the RBF-Galerkin method and the discretized form of the first-order necessary conditions of the optimal control problem, provided a set of conditions holds. Several examples are provided to verify the feasibility and viability of the RBF method and the RBF-Galerkin approach as means of finding accurate solutions to general optimal control problems. The RBF-Galerkin method is then applied to an important drug dosing application: anemia management in chronic kidney disease. A multiple receding horizon control (MRHC) approach based on the RBF-Galerkin method is developed for individualized dosing of an anemia drug for hemodialysis patients. 
Simulation results are compared with a population-oriented clinical protocol as well as an individual-based control method for anemia management to investigate the efficacy of the proposed method.
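The collocation idea can be sketched on a toy problem: minimize J = ∫₀¹ u² dt subject to ẋ = u, x(0) = 0, x(1) = 1, whose analytic optimum is u ≡ 1, x(t) = t, J = 1. The state is parameterized with Gaussian RBFs at arbitrary (meshless) nodes; because the toy dynamics are ẋ = u, the discretized problem reduces to an equality-constrained quadratic program solved through its KKT system. The node placement, shape parameter ε, and the problem itself are illustrative choices, not from the paper.

```python
import numpy as np

eps = 2.0
centers = np.array([0.0, 0.17, 0.35, 0.5, 0.68, 0.85, 1.0])  # arbitrary nodes

def phi(t):       # Gaussian RBF values, shape (len(t), len(centers))
    return np.exp(-(eps * (t[:, None] - centers[None, :]))**2)

def dphi(t):      # time derivatives of the RBFs
    d = t[:, None] - centers[None, :]
    return -2 * eps**2 * d * np.exp(-(eps * d)**2)

# With x(t) = Φ(t) w and u = x', the cost is quadratic in the weights w:
# J ≈ Σ_k q_k (Φ'(t_k) w)², using trapezoidal quadrature weights q_k.
tq = np.linspace(0, 1, 41)
q = np.full(len(tq), tq[1] - tq[0]); q[[0, -1]] *= 0.5
D = dphi(tq)
H = (D * q[:, None]).T @ D                      # Hessian of the cost

# Boundary conditions as linear constraints A w = b
A = phi(np.array([0.0, 1.0])); b = np.array([0.0, 1.0])

# Solve the equality-constrained QP via its KKT system
K = np.block([[2 * H, A.T], [A, np.zeros((2, 2))]])
rhs = np.concatenate([np.zeros(len(centers)), b])
w = np.linalg.solve(K, rhs)[:len(centers)]

x_mid = (phi(np.array([0.5])) @ w)[0]   # should be close to the analytic 0.5
J = q @ (D @ w)**2                      # should be close to the analytic 1.0
print(round(x_mid, 3), round(J, 3))
```

For a genuine nonlinear problem the same parameterization would feed an NLP solver rather than a linear KKT solve, which is where the sparsity noted in the abstract pays off.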

    Profiled support vector machines for antisense oligonucleotide efficacy prediction

    BACKGROUND: This paper presents the use of Support Vector Machines (SVMs) for prediction and analysis of antisense oligonucleotide (AO) efficacy. The collected database comprises 315 AO molecules with 68 features each, a problem well-suited to SVMs. Feature selection is crucial given the presence of noisy or redundant features and the well-known curse of dimensionality. We propose a two-stage strategy to develop an optimal model: (1) feature selection using correlation analysis, mutual information, and SVM-based recursive feature elimination (SVM-RFE), and (2) AO prediction using standard and profiled SVM formulations. A profiled SVM gives different weights to different parts of the training data to focus the training on the most important regions. RESULTS: In the first stage, the SVM-RFE technique was the most efficient and robust in the presence of a low number of samples and a high-dimensional input space. This method yielded an optimal subset of 14 representative features, all related to energy and sequence motifs. The second stage evaluated the performance of the predictors (overall correlation coefficient between observed and predicted efficacy, r; mean error, ME; and root-mean-square error, RMSE) using 8-fold and minus-one-RNA cross-validation methods. The profiled SVM produced the best results (r = 0.44, ME = 0.022, and RMSE = 0.278) and predicted high-efficacy (>75% inhibition of gene expression) and low-efficacy (<25%) AOs with success rates of 83.3% and 82.9%, respectively, better than previous approaches. A web server for AO prediction is available online. CONCLUSIONS: The SVM approach is well suited to the AO prediction problem and yields a prediction accuracy superior to previous methods. The profiled SVM was found to perform better than the standard SVM, suggesting that it could lead to improvements in other prediction problems as well
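The SVM-RFE stage can be sketched with a self-contained toy version: a linear SVM trained by subgradient descent on the regularized hinge loss, with the feature carrying the smallest |w| eliminated at each round. The dataset, dimensions, and training hyperparameters below are invented stand-ins, not the paper's AO data or setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in dataset: 120 samples, 10 features, of which only the
# first 3 carry signal (all sizes and coefficients are invented).
n, p, informative = 120, 10, 3
X = rng.normal(size=(n, p))
y = np.sign(X[:, :informative] @ np.array([2.0, -1.5, 1.0]) + rng.normal(0, 0.3, n))

def linear_svm(X, y, lam=0.01, epochs=500, lr=0.1):
    """Linear SVM trained by subgradient descent on the hinge loss."""
    w = np.zeros(X.shape[1])
    for t in range(epochs):
        margins = y * (X @ w)
        # subgradient of lam/2*|w|^2 + mean hinge loss over margin violators
        grad = lam * w - (X * y[:, None])[margins < 1].sum(axis=0) / len(y)
        w -= lr / (1 + 0.01 * t) * grad
    return w

# SVM-RFE: repeatedly retrain and drop the feature with the smallest |w|,
# i.e. the one whose removal perturbs the decision function least.
active = list(range(p))
while len(active) > informative:
    w = linear_svm(X[:, active], y)
    active.pop(int(np.argmin(np.abs(w))))

print(sorted(active))   # surviving feature indices
```

The profiled variant described in the abstract would additionally weight individual training samples inside the hinge-loss term; that extension is omitted here.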

    Minding impacting events in a model of stochastic variance

    We introduce a generalisation of the well-known ARCH process, widely used for generating uncorrelated stochastic time series with long-term non-Gaussian distributions and long-lasting correlations in the (instantaneous) standard deviation exhibiting a clustering profile. Specifically, inspired by the fact that in a variety of systems impacting events are hardly forgotten, we split the process into two different regimes: one for regular periods, where the average volatility of the fluctuations within a certain period of time is below a certain threshold, and another when the local standard deviation exceeds it. In the former situation we use standard rules for heteroscedastic processes, whereas in the latter case the system starts recalling past values that surpassed the threshold. Our results show that for appropriate parameter values the model is able to provide fat-tailed probability density functions and strong persistence of the instantaneous variance, characterised by values of the Hurst exponent greater than 0.8, which are ubiquitous features of complex systems. Comment: 18 pages, 5 figures, 1 table. To be published in PLoS on
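A minimal simulation sketch of the two-regime idea: below a volatility threshold the variance follows a standard ARCH(1) rule, while after a threshold-exceeding return the variance also recalls the recent history of such impacting events. All parameter values and the specific memory rule are illustrative, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(7)

a0, a1 = 0.1, 0.5        # standard ARCH(1) parameters (illustrative)
thresh, mem = 1.2, 0.3   # threshold and memory strength (illustrative)
T = 20000
x = np.zeros(T)
shocks = []              # past returns that exceeded the threshold
for t in range(1, T):
    var = a0 + a1 * x[t - 1]**2              # regular heteroscedastic rule
    if abs(x[t - 1]) > thresh and shocks:
        # impacting regime: recent large events feed back into the variance
        var += mem * np.mean(np.square(shocks[-20:]))
    x[t] = np.sqrt(var) * rng.normal()
    if abs(x[t]) > thresh:
        shocks.append(x[t])

# Positive excess kurtosis signals the fat tails the model should produce
kurt = np.mean(x**4) / np.mean(x**2)**2 - 3
print(round(kurt, 2))
```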

    Anemia management in end stage renal disease patients undergoing dialysis: a comprehensive approach through machine learning techniques and mathematical modeling

    Kidney impairment has global consequences for the organism's homeostasis, and a disorder like Chronic Kidney Disease (CKD) may eventually progress to End Stage Renal Disease (ESRD), where a complete renal replacement therapy like dialysis is necessary. Dialysis partially reintegrates the blood filtration process; however, even when it is associated with a pharmacological therapy, this is not sufficient to completely replace the renal endocrine role, and common complications develop, like CKD secondary anemia (CKD-anemia). The availability of exogenous Erythropoiesis Stimulating Agents (ESA, synthetic molecules with a similar structure and the same mechanism of action as human erythropoietin) improved the treatment of CKD-anemia, although the clinical outcomes are still not completely successful. In particular, for ESRD dialysis patients the main difficulties in the selection of an optimal therapy dosing derive from the high intra- and inter-individual response variability and the temporal discrepancy between the short ESA permanence in the blood (hours) and the long Red Blood Cell lifespan (months). The aim of this thesis has been to describe the development of the Anemia Control Model (ACM), a tool designed to support physicians in managing anemia for ESRD patients undergoing dialysis. Five main pillars constitute the foundation of this work: understanding the medical problem; availability of the data needed to derive the models; mathematical and machine learning modeling; development of a product usable at the point of care; and medical device certification and clinical evaluation of the developed product. The understanding of the medical problem is fundamental for two reasons: firstly, because the medical problem must be the driver of the product scope and consequently of its design; secondly, because a good understanding of the medical problem is of fundamental importance to develop optimized models. 
In the case of anemia management, drug dosing is an important task where predictive models could support physicians in improving treatment quality. In particular, considering that hemoglobin is the typical parameter used to measure anemia, our models were tailored to predict the hemoglobin response to the two main drugs normally used to correct anemia, namely ESA and iron. In a mathematical model based on differential equations, like the one presented in this thesis, knowledge of the main physiological processes related to anemia is the basis for properly designing the equations. A machine learning approach can in principle be built with no hypothesis, because it relies on learning from data; nevertheless, knowledge of the domain helps to make better use of the available data. The medical problem is discussed in Chapter 1. The availability of a huge database of very well structured data was fundamental for the development of the models; quality of the data is another important aspect. Chapter 2 gives the reader an overview of the available data. The core of the ACM is the capability to predict, for each patient, future hemoglobin concentrations as a function of the patient's past clinical history and future drug prescriptions. By means of a well-performing and personalized predictive model it is possible to simulate how, for each specific patient, different doses would affect hemoglobin trends. Mathematical and machine learning models both present advantages and limitations. Chapter 3 describes the mathematical model and analyzes its performance, while Chapter 4 is dedicated to the machine learning models. In our case the machine learning approach proved more suitable for our scope, because it performed well on the entire population, was more stable and, once trained, very quick in producing predictions. 
Once the predictive model was obtained, the next step was to wrap it into a service that could be consumed by a third-party system (for example an app or a clinical system) where physicians could benefit from the model's prediction capability. To achieve that, firstly an algorithm for dose selection was developed; secondly, a data structure for communication with the third-party system was defined; finally, the whole package was wrapped in a web service. These arguments are discussed in the first part of Chapter 5. Mistakes in ESA or iron dosing might have serious consequences for patients' health; for this reason the ACM's intended use was limited to providing dose suggestions only, which physicians must evaluate and decide whether to accept or reject. Nevertheless, such a tool could be considered a Medical Device under the European Medical Device Directive (MDD); for this reason, to be on the safe side, it was decided to certify the ACM as a medical device. A novel approach was developed to perform the risk assessment, the main idea being that the ACM might generate risk when a dose suggestion is produced based on a wrong prediction. To assess this risk, the model's error distribution over the test set was used as an estimate of the error distribution of the live system. Finally, a clinical evaluation of the ACM in three pilot clinics was performed before deciding to roll out the tool to more clinics. These arguments are discussed in the second part of Chapter 5.
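The dose-selection step can be sketched as follows, with a toy linear predictor standing in for the ACM's machine-learning model: candidate doses are run through the predictor, and the dose whose predicted Hb lands closest to the middle of the 10-12 g/dL target band is suggested, leaving acceptance or rejection to the physician. The response function, dose grid, and target value are invented for illustration.

```python
import numpy as np

def predict_hb(hb_now, dose):
    """Toy stand-in for the trained predictor: next-period Hb from the
    current Hb and a candidate dose (coefficients are invented)."""
    return 0.8 * hb_now + 0.015 * dose + 1.6

def suggest_dose(hb_now, doses=np.arange(0, 201, 10), target=11.0):
    """Simulate each allowed dose and suggest the one whose predicted Hb
    is closest to the middle of the 10-12 g/dL band."""
    preds = np.array([predict_hb(hb_now, d) for d in doses])
    return int(doses[np.argmin(np.abs(preds - target))])

# A low-Hb patient should receive a higher suggested dose than a high-Hb one
print(suggest_dose(9.0), suggest_dose(12.5))
```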

    ADME Profiling in Drug Discovery and a New Path Paved on Silica

    The drug discovery and development pipeline has relied more and more on in vitro testing and in silico predictions to reduce investments and optimize lead compounds. A comprehensive set of in vitro assays is available to determine key parameters of absorption, distribution, metabolism, and excretion, for example lipophilicity, solubility, and plasma stability. Such test systems aid the evaluation of the pharmacological properties of a compound and serve as surrogates before entering in vivo testing and clinical trials. Nowadays, computer-aided techniques are employed not just in the discovery of new lead compounds but are embedded as part of the entire drug development process, where ADME profiling and big data analyses add a new layer of complexity to those systems. Herein, we give a short overview of the history of the drug development pipeline, presenting state-of-the-art ADME in vitro assays as established in academia and industry. We further introduce the underlying good practices and give an example of the compound development pipeline. In the next step, recent advances in in silico techniques are highlighted, with special emphasis on how pharmacogenomics and in silico PK profiling can enhance drug monitoring and the individualization of drug therapy

    Volatility forecasting

    Volatility has been one of the most active and successful areas of research in time series econometrics and economic forecasting in recent decades. This chapter provides a selective survey of the most important theoretical developments and empirical insights to emerge from this burgeoning literature, with a distinct focus on forecasting applications. Volatility is inherently latent, and Section 1 begins with a brief intuitive account of various key volatility concepts. Section 2 then discusses a series of different economic situations in which volatility plays a crucial role, ranging from the use of volatility forecasts in portfolio allocation to density forecasting in risk management. Sections 3, 4 and 5 present a variety of alternative procedures for univariate volatility modeling and forecasting based on the GARCH, stochastic volatility and realized volatility paradigms, respectively. Section 6 extends the discussion to the multivariate problem of forecasting conditional covariances and correlations, and Section 7 discusses volatility forecast evaluation methods in both univariate and multivariate cases. Section 8 concludes briefly. JEL Classification: C10, C53, G1
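The GARCH workhorse of Section 3 can be illustrated with a short simulation and its one-step-ahead variance forecast, σ²(t+1) = ω + α r(t)² + β σ²(t). The parameter values below are illustrative; in practice ω, α and β would be estimated by maximum likelihood.

```python
import numpy as np

rng = np.random.default_rng(3)

# GARCH(1,1) with known (illustrative) parameters
omega, alpha, beta = 0.05, 0.08, 0.90
T = 5000
r = np.zeros(T)
sig2 = np.zeros(T)
sig2[0] = omega / (1 - alpha - beta)   # start at the unconditional variance
for t in range(1, T):
    sig2[t] = omega + alpha * r[t - 1]**2 + beta * sig2[t - 1]
    r[t] = np.sqrt(sig2[t]) * rng.normal()

# One-step-ahead variance forecast and the long-run (unconditional) level
forecast = omega + alpha * r[-1]**2 + beta * sig2[-1]
long_run = omega / (1 - alpha - beta)
print(round(forecast, 3), round(long_run, 3))
```

With persistence α + β = 0.98, forecasts revert only slowly toward the long-run variance, which is the clustering behaviour the survey's forecasting applications exploit.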
