
    Empirical Likelihood Estimation for Population Pharmacokinetic Study Based on Generalized Linear Model

    Obtaining efficient parameter estimates is a major objective in population pharmacokinetic studies. In this paper, we propose an empirical likelihood-based method to analyze population pharmacokinetic data based on the generalized linear model. A nonparametric version of Wilks' theorem for the limiting distribution of the empirical likelihood ratio is derived. Simulations are conducted to demonstrate the accuracy and efficiency of the empirical likelihood method. An application illustrating our methods and supporting the simulation results is presented. The results suggest that the proposed method is feasible for population pharmacokinetic data.
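
    As a minimal sketch of the idea (not the paper's GLM implementation), the snippet below computes Owen's empirical log-likelihood ratio for a scalar mean, the simplest estimating-equation case; the data and the hypothesised value mu are invented for the demo.

    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import chi2

    def el_log_ratio(x, mu):
        """-2 log empirical likelihood ratio for E[X] = mu (Owen's formulation)."""
        z = x - mu
        if z.min() >= 0 or z.max() <= 0:
            return np.inf                  # mu outside the convex hull of the data
        # The Lagrange multiplier lam solves sum z_i / (1 + lam z_i) = 0,
        # which keeps the weights w_i = 1 / (n (1 + lam z_i)) positive.
        lo = -1.0 / z.max() + 1e-10
        hi = -1.0 / z.min() - 1e-10
        lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
        return 2.0 * np.sum(np.log1p(lam * z))

    rng = np.random.default_rng(0)
    x = rng.gamma(shape=2.0, scale=1.5, size=80)   # synthetic "concentration" data
    stat = el_log_ratio(x, mu=3.0)                 # hypothesised mean value
    # By the nonparametric Wilks theorem the statistic is asymptotically chi^2(1).
    print("EL statistic:", stat, "p-value:", chi2.sf(stat, df=1))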

    Estimation and Inference for a Spline-Enhanced Population Pharmacokinetic Model

    This article is motivated by an application where subjects were dosed three times with the same drug and the drug concentration profiles appeared to be lowest after the third dose. One possible explanation is that the pharmacokinetic (PK) parameters vary over time. Therefore, we consider population PK models with time-varying PK parameters. These time-varying PK parameters are modeled by natural cubic spline functions in the ordinary differential equations. Mean parameters, variance components, and smoothing parameters are jointly estimated by maximizing the double penalized log-likelihood. Mean functions and their derivatives are obtained by numerical solution of the ordinary differential equations. The interpretation of the PK parameters in the model and its flexibility are discussed. The proposed methods are illustrated by application to the data that motivated this article. The model's performance is evaluated through simulation.
    Peer reviewed. Full text: http://deepblue.lib.umich.edu/bitstream/2027.42/65539/1/j.0006-341X.2002.00601.x.pd
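
    The sketch below illustrates the spline-in-ODE idea under assumed, invented values: a one-compartment model whose clearance CL(t) is a natural cubic spline, solved numerically across three bolus doses. It is not the authors' estimation procedure, which additionally fits the spline and variance components by penalized likelihood.

    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.integrate import solve_ivp

    V = 20.0                                      # volume of distribution (L), assumed
    knots = np.array([0.0, 12.0, 24.0, 36.0])     # spline knots (h)
    cl_at_knots = np.array([4.0, 4.5, 5.5, 6.0])  # clearance at the knots (L/h), assumed
    CL = CubicSpline(knots, cl_at_knots, bc_type="natural")  # natural cubic spline CL(t)

    def amount_ode(t, a):
        """dA/dt = -(CL(t)/V) * A for the drug amount A in the central compartment."""
        return -(CL(t) / V) * a

    dose, dose_times = 100.0, [0.0, 12.0, 24.0]   # three identical IV bolus doses
    t_grid, conc, a0 = [], [], 0.0
    for start, stop in zip(dose_times, dose_times[1:] + [36.0]):
        a0 += dose                                # instantaneous bolus at each dose time
        ts = np.linspace(start, stop, 50)
        sol = solve_ivp(amount_ode, (start, stop), [a0], t_eval=ts, rtol=1e-8)
        a0 = sol.y[0, -1]                         # carry the remaining amount forward
        t_grid.append(ts); conc.append(sol.y[0] / V)

    t_grid, conc = np.concatenate(t_grid), np.concatenate(conc)
    print(conc[:5])  # predicted concentrations; troughs fall as CL(t) increases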

    Maximum Entropy Technique and Regularization Functional for Determining the Pharmacokinetic Parameters in DCE-MRI

    This paper addresses determination of the arterial input function (AIF) in dynamic contrast-enhanced MRI (DCE-MRI), an important linear ill-posed inverse problem, using the maximum entropy technique (MET) and regularization functionals. Estimating pharmacokinetic parameters from DCE-MR images requires precise information about the AIF, the concentration of the contrast agent in the left ventricular blood pool measured over time. The main idea is therefore to show how to find a unique solution of a linear system of the form y = Ax + b, the ill-conditioned system that arises after discretization of the integral equations appearing in many tomographic image restoration and reconstruction problems. A new algorithm is described that estimates an appropriate probability distribution function for the AIF, combining the MET and regularization functionals for the contrast agent concentration with a Bayesian estimation approach for two pharmacokinetic parameters. Analysis of simulated and real breast tumor datasets indicates that Bayesian inference, which quantifies the uncertainties of the computed solutions and incorporates specific knowledge of the noise and errors, combined with the regularization functional of the maximum entropy problem, improves convergence behavior and yields more consistent morphological and functional results. Finally, compared with the exponential distribution based on MET and Newton's method, or the Weibull distribution via MET and teaching-learning-based optimization (MET/TLBO) proposed in previous studies, the family of Gamma and Erlang distributions estimated by the new algorithm provides more appropriate and robust AIFs.
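
    As a toy illustration of the inverse-problem setup only (assumed kernel, sizes, and regularization weight; not the paper's algorithm or its MET/Bayesian machinery), the snippet below solves a discretized y = Ax + noise system with an entropy-type regularizer and a positivity constraint.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    n = 60
    t = np.linspace(0, 1, n)
    A = np.exp(-5.0 * np.abs(t[:, None] - t[None, :])) * (t[1] - t[0])  # smoothing kernel
    x_true = np.exp(-((t - 0.3) / 0.1) ** 2)            # a peaked, AIF-like curve
    y = A @ x_true + 0.01 * rng.standard_normal(n)      # noisy data

    alpha, m = 0.05, np.full(n, 0.1)                    # regularization weight, prior model

    def objective(x):
        resid = A @ x - y
        entropy = np.sum(x * np.log(x / m) - x + m)     # relative-entropy penalty
        return 0.5 * np.dot(resid, resid) / 0.01**2 + alpha * entropy

    x0 = np.full(n, 0.1)
    res = minimize(objective, x0, method="L-BFGS-B", bounds=[(1e-8, None)] * n)
    x_hat = res.x                                        # regularized estimate
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))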

    Performance in population models for count data, part II: a new SAEM algorithm.

    Analysis of count data from clinical trials using mixed-effect models has recently become widespread. However, the algorithms available for parameter estimation, including LAPLACE and Gaussian quadrature (GQ), are associated with certain limitations, including bias in parameter estimates and long analysis runtimes. The stochastic approximation expectation maximization (SAEM) algorithm has proven to be a very efficient and powerful tool in the analysis of continuous data. The aim of this study was to implement and investigate the performance of a new SAEM algorithm for count data. The new SAEM algorithm was implemented in MATLAB for estimation of both the parameters and the Fisher information matrix. Monte Carlo simulations followed by re-estimation were performed according to the scenarios used in previous studies (part I) to investigate the properties of alternative algorithms (Plan et al., 2008, Abstr 1372 [http://www.page-meeting.org/?abstract=1372]). A single scenario was used to explore six probability distribution models. For parameter estimation, the relative bias was less than 0.92% for fixed effects and 4.13% for random effects across all models studied, including those accounting for over- or under-dispersion. Empirical and estimated relative standard errors were similar, with the distance between them below 1.7% for all explored scenarios. The longest CPU time was 95 s for parameter estimation and 56 s for SE estimation. The SAEM algorithm was thus extended to the analysis of count data. It provides accurate estimates of both parameters and standard errors, and estimation is significantly faster than with LAPLACE and GQ. The algorithm is implemented in Monolix 3.1 (beta version available in July 2009).
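
    To make the moving parts concrete, here is a deliberately simplified SAEM loop for a Poisson model with a single random intercept, y_ij ~ Poisson(exp(beta + b_i)), b_i ~ N(0, omega^2). It is a toy sketch of the general scheme (Metropolis-Hastings simulation step, stochastic approximation of sufficient statistics, closed-form M-step), not the Monolix implementation, and all settings are invented.

    import numpy as np

    rng = np.random.default_rng(2)
    N, n_obs = 100, 6
    beta_true, omega_true = 1.0, 0.5
    b_true = rng.normal(0.0, omega_true, N)
    y = rng.poisson(np.exp(beta_true + b_true)[:, None], size=(N, n_obs))

    def log_joint(b, y_i, beta, omega):
        """log p(y_i | b) + log p(b) up to constants, for one subject."""
        eta = beta + b
        return np.sum(y_i * eta - np.exp(eta)) - 0.5 * b**2 / omega**2

    beta, omega = 0.0, 1.0            # initial values
    b = rng.normal(0.0, 1.0, N)       # current random-effect samples
    S1, S2 = 0.0, float(N * n_obs)    # SA estimates of sum b_i^2 and sum_i n_i exp(b_i)
    S_y = y.sum()
    K, K_burn = 300, 150

    for k in range(1, K + 1):
        # --- simulation step: one random-walk Metropolis update per subject ---
        prop = b + rng.normal(0.0, 0.5, N)
        log_acc = np.array([log_joint(prop[i], y[i], beta, omega)
                            - log_joint(b[i], y[i], beta, omega) for i in range(N)])
        accept = np.log(rng.uniform(size=N)) < log_acc
        b = np.where(accept, prop, b)
        # --- stochastic approximation of the sufficient statistics ---
        gamma = 1.0 if k <= K_burn else 1.0 / (k - K_burn)
        S1 += gamma * (np.sum(b**2) - S1)
        S2 += gamma * (n_obs * np.sum(np.exp(b)) - S2)
        # --- maximization step (closed form for this toy model) ---
        beta = np.log(S_y / S2)
        omega = np.sqrt(S1 / N)

    print("beta:", round(beta, 3), "omega:", round(omega, 3))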

    Hybrid statistical and mechanistic mathematical model guides mobile health intervention for chronic pain

    Nearly a quarter of visits to the Emergency Department are for conditions that could have been managed via outpatient treatment; improvements that allow patients to quickly recognize and receive appropriate treatment are crucial. The growing popularity of mobile technology creates new opportunities for real-time adaptive medical intervention, and the simultaneous growth of big data sources allows for the preparation of personalized recommendations. Here we focus on the reduction of chronic suffering in the sickle cell disease community. Sickle cell disease is a chronic blood disorder in which pain is the most frequent complication. There is currently no standard algorithm or analytical method for real-time adaptive treatment recommendations for pain, and current state-of-the-art methods have difficulty handling continuous-time decision optimization using big data. Facing these challenges, in this study we aim to develop new mathematical tools for incorporating mobile technology into personalized treatment plans for pain. We present a new hybrid model for the dynamics of subjective pain that consists of a dynamical systems approach using differential equations to predict future pain levels, together with a statistical approach tying system parameters to patient data (both personal characteristics and medication response history). Pilot testing of our approach suggests that it has significant potential to predict pain dynamics given patients' reported pain levels and medication usage. With more abundant data, our hybrid approach should allow physicians to make personalized, data-driven recommendations for treating chronic pain.
    Comment: 13 pages, 15 figures, 5 tables
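
    Purely as an illustration of the hybrid idea (the authors' actual equations are not given in the abstract), the sketch below couples an invented first-order pain ODE with a decaying medication effect to a least-squares fit of its parameters against reported pain scores; every name and number is hypothetical.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    dose_times = np.array([0.0, 8.0, 16.0])        # hypothetical medication times (h)

    def med_effect(t, ke):
        """Summed exponentially decaying effect of all doses taken before time t."""
        dt = t - dose_times
        return np.sum(np.exp(-ke * dt[dt >= 0.0]))

    def pain_ode(t, p, r, baseline, emax, ke):
        # relaxation toward the patient's baseline, reduced by medication on board
        return r * (baseline - p) - emax * med_effect(t, ke) * p

    def predict(params, t_obs, p0):
        r, baseline, emax, ke = params
        sol = solve_ivp(pain_ode, (t_obs[0], t_obs[-1]), [p0],
                        t_eval=t_obs, args=(r, baseline, emax, ke), rtol=1e-6)
        return sol.y[0]

    # synthetic "reported pain" observations on a 0-10 scale
    t_obs = np.linspace(0.0, 24.0, 25)
    true = (0.3, 7.0, 0.15, 0.4)
    rng = np.random.default_rng(3)
    pain_obs = predict(true, t_obs, p0=8.0) + 0.3 * rng.standard_normal(t_obs.size)

    fit = least_squares(lambda th: predict(th, t_obs, 8.0) - pain_obs,
                        x0=[0.2, 6.0, 0.1, 0.3], bounds=([0, 0, 0, 0], [2, 10, 1, 2]))
    print("estimated (r, baseline, emax, ke):", np.round(fit.x, 3))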

    A comparison of different Bayesian design criteria to compute efficient conjoint choice experiments.

    Bayesian design theory applied to nonlinear models is a promising route to cope with the problem of design dependence on the unknown parameters. The traditional Bayesian design criterion, often used in the literature, is derived from the second derivatives of the log-likelihood function. However, other design criteria are possible, for example criteria based on the second derivative of the log posterior density, on the expected posterior covariance matrix, or on the amount of information provided by the experiment. Not much is known in general about how well these criteria perform in constructing efficient designs and which criterion yields robust designs that are efficient for various parameter values. In this study, we apply these Bayesian design criteria to conjoint choice experimental designs and investigate how robust the resulting Bayesian optimal designs are with respect to the other design criteria for which they were not optimized. We also examine the sensitivity of each design criterion to the prior distribution. Finally, we try to find out which design criterion is most appealing in a non-Bayesian framework, where it is accepted that prior information must be used for design but should not be used in the analysis, and which one is most appealing in a Bayesian framework, where the prior distribution is taken into account both for design and for analysis.
    Keywords: Bayesian design criterion; posterior density; expected posterior covariance matrix; conjoint choice design; Laplace approximation; Fisher information
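
    As a concrete, hypothetical example of one such criterion, the snippet below estimates the Bayesian D-error of a multinomial logit conjoint choice design by Monte Carlo averaging over prior draws of the part-worths; the design matrix and prior are invented.

    import numpy as np

    rng = np.random.default_rng(4)
    n_sets, n_alts, n_attr = 8, 3, 4
    # candidate design: effects-coded levels, one (n_alts x n_attr) block per choice set
    X = rng.choice([-1.0, 0.0, 1.0], size=(n_sets, n_alts, n_attr))

    def mnl_information(X, beta):
        """Fisher information of the multinomial logit model for one parameter vector."""
        M = np.zeros((beta.size, beta.size))
        for Xs in X:                              # loop over choice sets
            u = Xs @ beta
            p = np.exp(u - u.max())
            p /= p.sum()
            M += Xs.T @ (np.diag(p) - np.outer(p, p)) @ Xs
        return M

    def bayesian_d_error(X, prior_draws):
        """Monte Carlo estimate of E_beta[ det(M)^(-1/K) ], the DB-error."""
        k = prior_draws.shape[1]
        vals = [np.linalg.det(mnl_information(X, b)) ** (-1.0 / k) for b in prior_draws]
        return float(np.mean(vals))

    prior_draws = rng.multivariate_normal(mean=np.full(n_attr, -0.5),
                                          cov=0.25 * np.eye(n_attr), size=200)
    print("DB-error of this design:", bayesian_d_error(X, prior_draws))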

    Simulation and Parametric Inference of a Mixed Effects Model with Stochastic Differential Equations Using the Fokker-Planck Equation Solution

    This chapter is concerned with estimation methods for multidimensional, nonlinear dynamical models based on stochastic differential equations containing random effects (random parameters). This type of model has proved useful for describing continuous random processes, for distinguishing intra- and inter-individual variability, and for accounting for uncertainty in the dynamic model itself. Pharmacokinetic/pharmacodynamic modeling often involves repeated measurements on a series of experimental units, and random effects are incorporated into the model to describe individual behavior across the population. Unfortunately, estimation in this kind of model can be difficult because, in most cases, the transition density of the diffusion process given the random effects is not available. In this work, we focus on approximating the transition density of such a process in closed form in order to obtain parameter estimates, using the Fokker-Planck equation and the Risken approximation. In addition, the chapter discusses a simulation study using Markov chain Monte Carlo simulation to illustrate the proposed methodology and an application of mixed-effects models with SDEs in epidemiology, using the minimal model describing glucose-insulin kinetics.
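
    The following sketch illustrates the Fokker-Planck route for the simplest case where it can be checked: an assumed 1-D Ornstein-Uhlenbeck SDE whose transition density is approximated by explicit finite differences on the Fokker-Planck equation and compared with the known Gaussian solution. Grid sizes and parameters are invented, and the random-effects layer of the chapter is omitted.

    import numpy as np
    from scipy.stats import norm

    theta, sigma, x0, T = 1.0, 0.5, 1.0, 1.0       # dX = -theta*X dt + sigma dW, start at x0
    x = np.linspace(-4.0, 4.0, 401)
    dx = x[1] - x[0]
    dt = 0.2 * dx**2 / sigma**2                    # small step for explicit-scheme stability
    n_steps = int(T / dt)

    # start from a narrow Gaussian approximating a point mass at x0
    p = norm.pdf(x, loc=x0, scale=4 * dx)
    p /= np.trapz(p, x)

    for _ in range(n_steps):
        drift = theta * x * p                      # flux term: d/dx (theta * x * p)
        d_drift = np.gradient(drift, dx)
        d2p = np.zeros_like(p)
        d2p[1:-1] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dx**2
        p = p + dt * (d_drift + 0.5 * sigma**2 * d2p)
        p[0] = p[-1] = 0.0                         # absorbing far boundaries
        p = np.clip(p, 0.0, None)

    # exact OU transition density for comparison
    mean = x0 * np.exp(-theta * T)
    var = sigma**2 * (1.0 - np.exp(-2.0 * theta * T)) / (2.0 * theta)
    exact = norm.pdf(x, loc=mean, scale=np.sqrt(var))
    print("max abs error:", np.abs(p - exact).max())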