
    On the Interplay between Machine Learning, Population Pharmacokinetics, and Bioequivalence to Introduce Average Slope as a New Measure for Absorption Rate

    The scientific basis for demonstrating bioequivalence between two drug products relies on the comparison of their extent and rate of absorption. For the extent of absorption, the area under the concentration-time (C-t) curve (AUCt) is used without controversy. For the rate of absorption, the maximum observed plasma concentration (Cmax) is still recommended by regulatory authorities, despite numerous concerns. In this study, the concept of average slope (AS) is introduced as a metric to express the absorption rate of drugs. Principal component analysis and random forest models were applied to actual and simulated 2 × 2 crossover bioequivalence studies to show that AS exhibits the appropriate properties for characterizing absorption rate. Several absorption kinetics (slow, typical, fast) and sampling schemes (sparse, typical, dense) were simulated. The two machine learning algorithms, applied to all these scenarios, confirmed the desired properties of AS while revealing the undesired performance of other metrics currently used or proposed in the literature. The estimation of AS does not require any assumptions, models, or transformations and is as simple as that of AUCt. A modified version of AS, termed “weighted AS”, is also introduced to place emphasis on early time points, where the C-t profile reflects the absorption process most clearly.
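
    As an illustration of the claim that AS is as simple to estimate as AUCt, the sketch below computes both from a single C-t profile. The AS formula used here (mean of the successive segment slopes up to Tmax) is an assumed reading adopted for illustration only, and the profile, sampling times, and function name pk_metrics are hypothetical.

    import numpy as np

    def pk_metrics(t, c):
        """Compute extent (AUCt) and rate metrics from one C-t profile."""
        t, c = np.asarray(t, float), np.asarray(c, float)
        auct = float(np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2))  # trapezoidal AUCt
        i_max = int(np.argmax(c))                                # index of Cmax
        cmax, tmax = c[i_max], t[i_max]
        # assumed AS: mean slope of the ascending C-t segments up to Tmax
        slopes = np.diff(c[:i_max + 1]) / np.diff(t[:i_max + 1])
        as_metric = float(slopes.mean()) if slopes.size else float("nan")
        return {"AUCt": auct, "Cmax": cmax, "Tmax": tmax,
                "Cmax/Tmax": cmax / tmax, "AS": as_metric}

    # Hypothetical one-compartment oral profile, sparse sampling
    t = np.array([0.0, 0.25, 0.5, 1, 2, 4, 8, 12, 24])
    c = 10 * (np.exp(-0.1 * t) - np.exp(-1.0 * t))
    print(pk_metrics(t, c))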

    Machine Learning in Bioequivalence: Towards Identifying an Appropriate Measure of Absorption Rate

    In this study, the modern tool of machine learning is used to address an old problem from a new perspective. Traditionally, the scientific basis for determining bioequivalence (BE) is a pharmacokinetic comparison, specifically of the rate and extent of absorption between two products. Even though it is generally agreed that the peak plasma concentration (Cmax) should be used to measure the rate of absorption, several studies have raised concerns. Thus, alternative pharmacokinetic metrics have been proposed to address the shortcomings of Cmax. The aim of this study is to utilize unsupervised (principal component analysis) and supervised (random forest) machine learning algorithms to uncover the relationships among the pharmacokinetic parameters and identify the most suitable metric for absorption rate. One actual and three simulated donepezil bioequivalence datasets were utilized. For the needs of this study, a population pharmacokinetic model of donepezil was also developed and further used to simulate BE datasets with different absorption kinetics. Among the pharmacokinetic metrics explored is the newly proposed Cmax/Tmax ratio, which was found to better reflect the absorption rate, regardless of the kinetic properties of absorption. This is one of the first studies utilizing machine learning in the field of bioequivalence.
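
    A minimal sketch of the two-step analysis described above, assuming a toy table of PK metrics (Cmax, Tmax, Cmax/Tmax, AUCt) and an absorption rate constant ka as the supervised target; all names, values, and the data-generating model are illustrative, not the study's datasets.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 200
    ka = rng.lognormal(mean=0.0, sigma=0.4, size=n)    # "true" absorption rate constants
    cmax = 10 * ka / (ka + 1) + rng.normal(0, 0.2, n)  # toy PK metrics tied to ka
    tmax = 2.0 / ka + rng.normal(0, 0.1, n)
    auct = 50 + rng.normal(0, 5, n)                    # extent, unrelated to ka here
    X = np.column_stack([cmax, tmax, cmax / tmax, auct])

    # Unsupervised view: which metrics co-vary?
    pca = PCA(n_components=2).fit(X)
    print("explained variance:", pca.explained_variance_ratio_.round(2))

    # Supervised view: which metric best predicts the absorption rate?
    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, ka)
    for name, imp in zip(["Cmax", "Tmax", "Cmax/Tmax", "AUCt"], rf.feature_importances_):
        print(f"{name}: {imp:.2f}")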

    On the heterogeneity of drug distribution and elimination processes in the body

    In the present PhD thesis, pharmacokinetics is considered in the light of the functional and structural heterogeneity of mammalian species. The thesis is divided into three sections. The first two sections deal with the heterogeneous features of drug distribution and the atypical pharmacokinetics of certain drugs. The third section deals with bioequivalence studies for drugs exhibiting “statistical” heterogeneity of pharmacokinetic characteristics, i.e., highly variable drugs. The contents of the nine chapters of the thesis are itemized as follows. The first, introductory, chapter describes the basic principles of pharmacokinetics and reviews the concepts of (i) fractals and (ii) kinetics in disordered media. In chapter 2, the concept of the fractal volume of drug distribution (vf) is introduced to provide a more physiological description of the extent of drug distribution in the body. The novel term vf and the classic apparent volume of distribution (Vap) are further used in allometric studies. In the next chapter, the physiological meaning of vf is extended to express drug body clearance; a new expression for clearance is defined, called fractal clearance (CLf) for reasons of uniformity. Allometric equations are developed for CLf and conventional clearance (CL) using a large number of drugs, and the predictive performance of Vap, CL, vf, and CLf is examined. Chapter 4 focuses on the development of quantitative structure-pharmacokinetic relationships for the proposed (vf, CLf) and the classic (Vap, CL) pharmacokinetic parameters. A variety of molecular descriptors for a large number of structurally diverse drugs are estimated and analyzed using multivariate statistics (principal component analysis and projection to latent structures). The aim of chapter 5 is to develop quantitative structure-pharmacokinetic relationships for a specific therapeutic drug category, the cephalosporins; in this analysis, apart from multivariate statistics, multiple regression models are also developed. Chapter 6 focuses on the problem of the initial mixing of drugs. A physiologically based method is developed for the estimation of recirculatory parameters (e.g., cardiac output) using a partial differential diffusion-convection equation, which describes the initial mixing of drugs in the vascular bed. The proposed method is applied to experimental concentration-time data from the literature. In chapter 7, stochastic models are used to describe the atypical kinetics of amiodarone; several stochastic models are developed and evaluated in comparison with a recently published fractal-based method. In chapter 8, the kinetics of mibefradil, which deviates from typical non-linear Michaelis-Menten (MM) kinetics, is examined. The enzymatic reaction is considered to take place in disordered media, and a modified expression of the MM equation is proposed, since the MM “constant” behaves like a time coefficient in such media. Finally, in chapter 9, novel scaled bioequivalence limits are proposed based on the intrasubject variability and the geometric mean ratio (GMR) of the pharmacokinetic parameters of the test and reference formulations.
    The behavior of the novel methods is studied in comparison with other approaches based on scaled limits and the classic unscaled method. In the present study, a new rationale was developed for designing scaled methods that exhibit high statistical power under conditions of true BE while also incorporating criteria for an effective constraint of the acceptable GMR values. The BE limits are scaled using a GMR-dependent multiple of the variability. A constraint criterion is applied, expressed either as a constant multiple of the target ln(1.25) or as a function of the GMR value estimated in the BE study. The new limits, BELscG1 and BELsc1G2, in contrast to the classic invariant BEL, become wider as variability increases, but this widening of the BE limits is less pronounced than for the other scaled BE limits examined in the present study. To improve the behavior of the scaled methods, a series of different values can be considered for the factors k1 and k2 in Eq. 9.7. The resulting bioequivalence limits will exhibit different behavior, since k1 affects the slope of the scaled limits with respect to CV, i.e., the degree of widening with the level of variability, whereas k2 affects the constraint on the acceptable extreme values of GMR. For example, scaled BE limits can be designed considering N = 32 or N = 36 as a typical number of subjects for the bioequivalence assessment of highly variable drugs.
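
    For context on how scaled limits widen with variability, the sketch below implements the widely used reference-scaled form exp(±k·σw) with the EMA constant k = 0.760 and the 69.84-143.19% cap. This is one of the comparator-style approaches mentioned above, not the GMR-dependent limits of Eq. 9.7, which are not reproduced here.

    import math

    def scaled_be_limits(cv_percent, k=0.760, cap=(0.6984, 1.4319)):
        """Acceptance limits that widen as within-subject CV% grows."""
        if cv_percent <= 30.0:                        # standard 80-125% below 30% CV
            return 0.80, 1.25
        sigma_w = math.sqrt(math.log((cv_percent / 100) ** 2 + 1))
        lo, hi = math.exp(-k * sigma_w), math.exp(k * sigma_w)
        return max(lo, cap[0]), min(hi, cap[1])       # widening capped at ~50% CV

    for cv in (25, 35, 45, 60):
        print(cv, [round(x, 4) for x in scaled_be_limits(cv)])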

    Introducing an Artificial Neural Network for Virtually Increasing the Sample Size of Bioequivalence Studies

    Sample size is a key factor in bioequivalence and clinical trials. An appropriately large sample is necessary to gain valuable insights into a designated population. However, large sample sizes lead to increased human exposure, higher costs, and a longer time to completion. In a previous study, we introduced the idea of using variational autoencoders (VAEs), a type of artificial neural network, to synthetically create virtual subjects in clinical studies. In this work, we further elaborate on this idea and extend it to the field of bioequivalence (BE) studies. A computational methodology was developed, combining Monte Carlo simulations of 2 × 2 crossover BE trials with deep learning algorithms, specifically VAEs. Various scenarios were explored, including variability levels, the actual sample size, the VAE-generated sample size, and the difference in performance between the two pharmaceutical products under comparison. All simulations showed that incorporating generative AI algorithms for creating virtual populations in BE trials has many advantages, as less actual human data can be used to achieve similar, and even better, results. Overall, this work shows how the application of generative AI algorithms, like VAEs, in clinical/bioequivalence studies can be a modern tool to significantly reduce human exposure, costs, and trial completion time.
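
    A minimal VAE sketch, assuming tabular per-subject features such as (log AUCt, log Cmax); the architecture, layer sizes, and training settings are illustrative assumptions rather than the study's configuration.

    import torch
    import torch.nn as nn

    class VAE(nn.Module):
        def __init__(self, n_features=2, latent=2, hidden=32):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
            self.mu = nn.Linear(hidden, latent)
            self.logvar = nn.Linear(hidden, latent)
            self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_features))

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
            return self.dec(z), mu, logvar

    def loss_fn(x, x_hat, mu, logvar):
        recon = ((x - x_hat) ** 2).sum()                             # reconstruction error
        kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum()   # KL divergence to N(0, I)
        return recon + kld

    # Train on a small observed sample, then decode z ~ N(0, I) draws
    # to obtain as many virtual subjects as the analysis needs.
    x = torch.randn(24, 2)                     # stand-in for 24 observed subjects
    model = VAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(500):
        x_hat, mu, logvar = model(x)
        opt.zero_grad()
        loss_fn(x, x_hat, mu, logvar).backward()
        opt.step()
    virtual = model.dec(torch.randn(100, 2)).detach()  # 100 virtual subjects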

    Implementation of a Generative AI Algorithm for Virtually Increasing the Sample Size of Clinical Studies

    Determining the appropriate sample size is crucial in clinical studies because small samples may fail to detect true effects. This work introduces the use of Wasserstein Generative Adversarial Networks (WGANs) to create virtual subjects and reduce the need to recruit actual human volunteers. The proposed idea suggests that only a small subset (“sample”) of the true population can be used, along with WGANs, to create a virtual population (the “generated” dataset). To demonstrate the suitability of the WGAN-based approach, a new methodological procedure also had to be established and applied. Monte Carlo simulations of clinical studies were performed to compare the performance of the WGAN-synthesized virtual subjects (i.e., the “generated” dataset) against both the entire population (the so-called “original” dataset) and a subset of it, the “sample”. After training and tuning the WGAN, various scenarios were explored, and the comparative performance of the three datasets was evaluated, as well as the similarity of their results to those of the population data. Across all scenarios tested, the WGANs and their corresponding generated populations consistently outperformed the samples alone, and the generated datasets performed quite similarly to the “original” (i.e., population) data. By introducing virtual patients, WGANs effectively augment the sample size, reducing the risk of type II errors. The proposed WGAN approach has the potential to decrease the costs, time, and ethical concerns associated with human participation in clinical trials.
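
    A hedged sketch of a WGAN training loop (the original weight-clipping variant), assuming two-feature tabular subjects; the network sizes, clip value, learning rates, and the data stand-in are illustrative assumptions.

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # noise -> virtual subject
    C = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # critic (no sigmoid)
    opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
    opt_c = torch.optim.RMSprop(C.parameters(), lr=5e-5)

    real = torch.randn(64, 2)            # stand-in for the observed "sample"
    for step in range(1000):
        for _ in range(5):               # several critic updates per generator update
            fake = G(torch.randn(64, 8)).detach()
            loss_c = C(fake).mean() - C(real).mean()   # Wasserstein critic loss
            opt_c.zero_grad()
            loss_c.backward()
            opt_c.step()
            for p in C.parameters():     # enforce Lipschitz constraint by clipping
                p.data.clamp_(-0.01, 0.01)
        loss_g = -C(G(torch.randn(64, 8))).mean()      # generator maximizes critic score
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()

    virtual = G(torch.randn(500, 8)).detach()  # the "generated" virtual population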

    A physiologically based approach for the estimation of recirculatory parameters.


    Bioequivalence Studies of Highly Variable Drugs: An Old Problem Addressed by Artificial Neural Networks

    The bioequivalence (BE) of highly variable drugs is a complex issue in the pharmaceutical industry, as their variability can significantly affect the required sample size and statistical power. To address this issue, the EMA and FDA propose the use of scaled limits. This study suggests the use of generative artificial intelligence (AI) algorithms, particularly variational autoencoders (VAEs), to virtually increase the sample size and thereby reduce the need for actual human subjects in BE studies of highly variable drugs. The primary aim of this study was to show that VAEs with constant acceptance limits (80–125%) and small sample sizes can achieve high statistical power. Monte Carlo simulations, incorporating two levels of stochasticity (between-subject and within-subject), were used to synthesize the virtual population. Various scenarios focusing on high variability were simulated. The performance of the VAE-generated datasets was compared to the official approaches imposed by the FDA and EMA, using either the constant 80–125% limits or scaled BE limits. To demonstrate the ability of generative AI algorithms to create virtual populations, no scaling was applied to the VAE-generated datasets, only to the actual data of the comparators. Across all scenarios, the VAE-generated datasets demonstrated superior performance compared to the scaled or unscaled BE approaches, even with less than half of the typically required sample size. Overall, this study proposes the use of VAEs as a method to reduce the need to recruit large numbers of subjects in BE studies.
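
    A minimal sketch of the two levels of stochasticity in a simulated 2 × 2 crossover, assuming log-normal between- and within-subject variability; the CV values, sample size, and the z-based confidence interval are illustrative simplifications (a t-quantile would be used in a formal analysis).

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_2x2(n_subjects=24, gmr=1.0, cv_between=0.40, cv_within=0.35):
        """Return per-subject log(test) - log(reference) differences."""
        sd_b = np.sqrt(np.log(cv_between**2 + 1))   # between-subject SD (log scale)
        sd_w = np.sqrt(np.log(cv_within**2 + 1))    # within-subject SD (log scale)
        subject = rng.normal(0.0, sd_b, n_subjects)            # random subject effect
        log_ref = subject + rng.normal(0.0, sd_w, n_subjects)  # reference period
        log_test = subject + np.log(gmr) + rng.normal(0.0, sd_w, n_subjects)
        return log_test - log_ref                   # subject effect cancels out

    d = simulate_2x2()
    se = d.std(ddof=1) / np.sqrt(len(d))
    ci = np.exp(d.mean() + np.array([-1.0, 1.0]) * 1.645 * se)  # approximate 90% CI for GMR
    print("90% CI for GMR:", ci.round(3), "-> BE if within [0.80, 1.25]")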

    3D-Printed Oral Dosage Forms: Mechanical Properties, Computational Approaches and Applications

    The aim of this review is to present the factors influencing the mechanical properties of 3D-printed oral dosage forms. It also explores how specific excipients and printing parameters can be used to maintain the structural integrity of printed drug products while meeting the needs of patients. Three-dimensional (3D) printing is an emerging manufacturing technology that is gaining acceptance in the pharmaceutical industry as a way to move beyond traditional mass production and toward personalized pharmacotherapy. After continuous research over the last thirty years, 3D printing now offers numerous opportunities to personalize oral dosage forms in terms of size, shape, release profile, or dose modification. However, there is still a long way to go before 3D printing is integrated into clinical practice. 3D printing techniques follow a different process than traditional oral dosage form manufacturing methods, and there are currently no specific guidelines for the hardness and friability of 3D-printed solid oral dosage forms. Therefore, new regulatory frameworks for 3D-printed oral dosage forms should be established to ensure that they meet all appropriate quality standards. The evaluation of the mechanical properties of solid dosage forms is an integral part of quality control, as tablets must withstand mechanical stresses during manufacturing, transportation, and distribution, as well as rough handling by the end user. Until now, this has been achieved through extensive pre- and post-processing testing, which is often time-consuming. However, computational methods combined with 3D printing technology can open up a new avenue for the design and construction of 3D tablets, enabling the fabrication of structures with complex microstructures and desired mechanical properties. In this context, the emerging role of computational methods and artificial intelligence techniques is highlighted.