
    Generalized in vitro-in vivo relationship (IVIVR) model based on artificial neural networks

    Background: The aim of this study was to develop a generalized in vitro-in vivo relationship (IVIVR) model based on in vitro dissolution profiles together with the quantitative and qualitative composition of dosage formulations as covariates. Such a model would be of substantial aid in the early stages of development of a pharmaceutical formulation, when no in vivo results are yet available and it is impossible to create a classical in vitro-in vivo correlation (IVIVC)/IVIVR. Methods: Chemoinformatics software was used to compute the molecular descriptors of drug substances (i.e., active pharmaceutical ingredients) and excipients. The data were collected from the literature. Artificial neural networks were used as the modeling tool. The training process was carried out using the 10-fold cross-validation technique. Results: The database contained 93 formulations with 307 inputs initially, later reduced to 28 in the course of sensitivity analysis. The four best models were introduced into the artificial neural network ensemble. Complete in vivo profiles were predicted accurately for 37.6% of the formulations. Conclusion: It has been shown that artificial neural networks can be an effective predictive tool for constructing an IVIVR within an integrated, generalized model for various formulations. Because IVIVC/IVIVR is classically conducted for 2–4 formulations with a single active pharmaceutical ingredient, the approach described here is unique in that it incorporates various active pharmaceutical ingredients and dosage forms into a single model. Thus, a preliminary IVIVC/IVIVR can be obtained without in vivo data, which is impossible using current IVIVC/IVIVR procedures.
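
    A minimal sketch of the workflow this abstract describes: training artificial neural network regressors under 10-fold cross-validation and averaging the best performers into an ensemble. The data, the single-output target, and the network architectures are assumptions for illustration, not the published model.

```python
# Sketch (not the authors' code): ANN regressors trained under 10-fold CV,
# with the best architectures averaged into an ensemble.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# X: dissolution time points + molecular descriptors of API/excipients (assumed)
# y: in vivo response at one time point (assumed single-output for simplicity)
rng = np.random.default_rng(0)
X = rng.random((93, 28))          # 93 formulations, 28 inputs after sensitivity analysis
y = rng.random(93)

candidates, scores = [], []
for hidden in [(8,), (16,), (16, 8), (32,)]:           # hypothetical architectures
    fold_rmse = []
    for train, test in KFold(n_splits=10, shuffle=True, random_state=1).split(X):
        model = MLPRegressor(hidden_layer_sizes=hidden, max_iter=2000, random_state=1)
        model.fit(X[train], y[train])
        fold_rmse.append(mean_squared_error(y[test], model.predict(X[test])) ** 0.5)
    candidates.append(MLPRegressor(hidden_layer_sizes=hidden, max_iter=2000,
                                   random_state=1).fit(X, y))
    scores.append(np.mean(fold_rmse))

# Ensemble = mean prediction of the four best architectures
best = np.argsort(scores)[:4]
ensemble_pred = np.mean([candidates[i].predict(X) for i in best], axis=0)
```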

    Empirical modeling of the fine particle fraction for carrier-based pulmonary delivery formulations

    In vitro study of the deposition of drug particles is commonly used during development of formulations for pulmonary delivery. The assay is demanding, complex, and depends on the properties of the drug and carrier particles (including size, surface characteristics, and shape), the interactions between the drug and carrier particles, and the assay conditions (including flow rate, type of inhaler, and impactor). The aerodynamic properties of an aerosol are measured in vitro using impactors and in most cases are presented as the fine particle fraction, which is the mass percentage of drug particles with an aerodynamic diameter below 5 µm. In the present study, a model in the form of a mathematical equation was developed for prediction of the fine particle fraction. Feature selection was performed using the R-environment package "fscaret". The input vector was reduced from a total of 135 independent variables to 28. During the modeling stage, techniques such as artificial neural networks, genetic programming, rule-based systems, and fuzzy logic systems were used. The 10-fold cross-validation technique was used to assess the generalization ability of the models created. The model obtained had good predictive ability, confirmed by a root-mean-square error and normalized root-mean-square error of 4.9 and 11%, respectively. Moreover, validation of the model using external experimental data was performed, resulting in a root-mean-square error and normalized root-mean-square error of 3.8 and 8.6%, respectively.
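
    A small sketch of the error metrics reported above (RMSE and normalized RMSE). Normalization by the observed range is an assumption; the study may normalize differently. The fine particle fraction values are hypothetical.

```python
# Sketch of the RMSE / normalized RMSE metrics used for model assessment.
import numpy as np

def rmse(observed, predicted):
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return float(np.sqrt(np.mean((observed - predicted) ** 2)))

def nrmse(observed, predicted):
    # Expressed as a percentage of the observed range (assumed convention).
    observed = np.asarray(observed)
    return 100.0 * rmse(observed, predicted) / (observed.max() - observed.min())

# Hypothetical fine particle fraction values (% of emitted dose)
obs = [18.2, 25.4, 31.0, 12.7, 40.3]
pred = [20.1, 24.0, 29.5, 15.0, 38.8]
print(f"RMSE = {rmse(obs, pred):.1f}, NRMSE = {nrmse(obs, pred):.1f}%")
```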

    Heuristic modeling of macromolecule release from PLGA microspheres

    Dissolution of protein macromolecules from poly(lactic-co-glycolic acid) (PLGA) particles is a complex process and still not fully understood. As such, there are difficulties in obtaining a predictive model that could be of fundamental significance in the design, development, and optimization of PLGA-based multiparticulate dosage forms for medical applications and toxicity evaluation. In the present study, two models with comparable goodness of fit were proposed for the prediction of the macromolecule dissolution profile from PLGA micro- and nanoparticles. In both cases, heuristic techniques, such as artificial neural networks (ANNs), feature selection, and genetic programming, were employed. Feature selection provided by the fscaret package and sensitivity analysis performed by ANNs reduced the original input vector from a total of 300 input variables to 21, 17, 16, and 11; to achieve better insight into the generalization error, two cut-off points were proposed for every method. The best ANN results were obtained by a monotone multilayer perceptron (MON-MLP) network with a root-mean-square error (RMSE) of 15.4 and an input vector of 11 inputs. The complicated classical equation derived from a database consisting of 17 inputs yielded a better generalization error (RMSE) of 14.3. The equation was characterized by four parameters, making it amenable to standard nonlinear regression techniques. Heuristic modeling led to an ANN model describing macromolecule release profiles from PLGA microspheres with good predictive efficiency. Moreover, the genetic programming technique resulted in a classical equation with predictability comparable to that of the ANN model.
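
    A sketch of the point made above that a four-parameter release equation is amenable to standard nonlinear regression. The Weibull-type form, the lag-time parameterization, and the release data below are illustrative assumptions, not the published GP-derived equation.

```python
# Sketch: fitting a four-parameter release equation by standard nonlinear
# regression (illustrative Weibull-type form, hypothetical data).
import numpy as np
from scipy.optimize import curve_fit

def release(t, q_max, lag, scale, shape):
    # Cumulative % released vs time: Weibull-type profile with a lag time.
    t_eff = np.clip(t - lag, 0.0, None)
    return q_max * (1.0 - np.exp(-(t_eff / scale) ** shape))

t = np.array([1, 3, 7, 14, 21, 28], dtype=float)      # days (hypothetical)
q = np.array([12, 28, 47, 68, 80, 86], dtype=float)   # % released (hypothetical)

params, _ = curve_fit(release, t, q, p0=[90, 0.5, 10, 1.0], maxfev=10000)
fit_rmse = float(np.sqrt(np.mean((q - release(t, *params)) ** 2)))
print("fitted parameters:", params.round(2), "RMSE:", round(fit_rmse, 2))
```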

    Development of "in vitro-in vivo" correlation/relationship modeling approaches for immediate release formulations using compartmental dynamic dissolution data from "Golem" : a novel apparatus

    Different batches of atorvastatin, represented by two immediate release formulation designs, were studied using a novel dynamic dissolution apparatus simulating the stomach and small intestine. A universal dissolution method was employed that simulated the physiology of the human gastrointestinal tract, including precise chyme transit behavior and biorelevant conditions. The multicompartmental dissolution data allowed direct observation and qualitative discrimination of the differences resulting from the highly pH-dependent dissolution behavior of the tested batches. Further evaluation of the results was performed using IVIVC/IVIVR development. While a satisfactory correlation could not be achieved using a conventional deconvolution-based model, promising results were obtained through a nonconventional approach exploiting the complex compartmental dissolution data.
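
    For context on the conventional convolution/deconvolution-based IVIVC mentioned above, a generic sketch of the convolution step: predicted plasma levels are obtained by convolving the in vitro input rate with a unit impulse response. This is not the Golem-based compartmental approach of the study; the one-compartment impulse response and all numbers are assumptions.

```python
# Generic convolution-based IVIVC sketch (illustrative parameters only).
import numpy as np

dt = 0.25                                   # h, time step
t = np.arange(0, 24, dt)
# Hypothetical cumulative fraction dissolved in vitro (first-order, 90% plateau)
f_diss = 0.9 * (1.0 - np.exp(-0.6 * t))
input_rate = np.gradient(f_diss, dt)        # fraction dissolved per hour

ke = 0.3                                    # 1/h, elimination rate (assumed)
uir = np.exp(-ke * t)                       # one-compartment unit impulse response

# Discrete convolution -> predicted concentration-time profile (arbitrary units)
c_pred = np.convolve(input_rate, uir)[: len(t)] * dt
print("Cmax (a.u.):", round(c_pred.max(), 3), "Tmax (h):", float(t[c_pred.argmax()]))
```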

    Transparent computational intelligence models for pharmaceutical tableting process

    Purpose: The pharmaceutical industry is tightly regulated owing to health concerns. Over the years, the use of computational intelligence (CI) tools has increased in pharmaceutical research and development, manufacturing, and quality control. Quality characteristics of tablets, such as tensile strength, are important indicators of expected tablet performance. Predictive, yet transparent, CI models can be analysed for insights into the formulation and development process. Methods: This work uses data from a galenical tableting study and computational intelligence methods such as decision trees, random forests, fuzzy systems, artificial neural networks, and symbolic regression to establish models for the outcome of tensile strength. Data were divided into training and test folds according to a ten-fold cross-validation scheme, and RMSE was used as the evaluation metric. Tree-based ensembles and symbolic regression methods are presented as transparent models, with extracted rules and a mathematical formula, respectively, explaining the CI models in greater detail. Results: CI models for the tensile strength of tablets based on the formulation design and process parameters have been established. The best models exhibit a normalized RMSE of 7%. Rules from fuzzy systems and random forests are shown to increase the transparency of CI models. A mathematical formula generated by symbolic regression is presented as a transparent model. Conclusions: CI models explain the variation of tensile strength according to formulation and manufacturing process characteristics. CI models can be further analyzed to extract actionable knowledge, making the artificial learning process more transparent and acceptable for use in pharmaceutical quality and safety domains.
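
    A minimal sketch of one workflow named above: a tree ensemble for tensile strength evaluated by ten-fold cross-validation with a normalized RMSE, plus feature importances as one route to transparency. The column names, data, and toy target are hypothetical, not the galenical study data.

```python
# Sketch (not the published models): random forest for tensile strength,
# ten-fold CV, normalized RMSE, and feature importances for interpretation.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
data = pd.DataFrame({
    "filler_fraction": rng.uniform(0.2, 0.8, 120),       # hypothetical inputs
    "compression_force_kN": rng.uniform(5, 25, 120),
    "lubricant_pct": rng.uniform(0.25, 2.0, 120),
})
# Toy target loosely mimicking a force-dependent tensile strength (MPa)
y = 0.1 * data["compression_force_kN"] + 0.5 * data["filler_fraction"] \
    + rng.normal(0, 0.1, 120)

model = RandomForestRegressor(n_estimators=300, random_state=1)
pred = cross_val_predict(model, data, y,
                         cv=KFold(n_splits=10, shuffle=True, random_state=1))
nrmse = 100 * np.sqrt(np.mean((y - pred) ** 2)) / (y.max() - y.min())
print("NRMSE:", round(nrmse, 1), "%")

model.fit(data, y)
print(dict(zip(data.columns, model.feature_importances_.round(2))))
```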

    Data-driven modeling of the bicalutamide dissolution from powder systems

    Low solubility of active pharmaceutical ingredients (APIs) remains an important challenge in the dosage form development process. In this manuscript, empirical models were developed and analyzed in order to predict the dissolution of bicalutamide (BCL) from solid dispersions with various carriers. BCL was chosen as an example of a poorly water-soluble API. Two separate datasets were created: one from literature data and another based on in-house experimental data. Computational experiments were conducted using artificial intelligence and machine learning (AI/ML) tools with a plethora of techniques, including artificial neural networks, decision trees, rule-based systems, and evolutionary computations. The latter, resulting in classical mathematical equations, provided the models with the lowest prediction error. The in-house data turned out to be more homogeneous, and the formulations were more extensively characterized than in the literature-based data; thus, the in-house data resulted in better models than the literature-based data set. Among other covariates, the best model uses the transmittance from the IR spectrum at the 1260 cm−1 wavenumber to predict the BCL dissolution profile. Ab initio modeling-based in silico simulations were conducted to reveal potential BCL-excipient interactions. All crucial variables were selected automatically by the AI/ML tools, resulting in reasonably simple yet predictive models suitable for application in Quality by Design (QbD) approaches. The presented data-driven model development using AI/ML could be useful for various problems in the field of pharmaceutical technology, resulting in both predictive and investigational tools that reveal new knowledge.
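
    A sketch of the evolutionary-computation route reported above as the most accurate, i.e. symbolic regression returning a classical equation. The gplearn package is used here as one open-source option (it is not stated in the abstract), and the features, including an IR transmittance covariate, are hypothetical stand-ins.

```python
# Sketch: genetic-programming symbolic regression producing a readable equation.
# gplearn is an assumed toolkit choice; data and features are hypothetical.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(0.2, 0.9, 80),     # IR transmittance at 1260 cm-1 (assumed scale)
    rng.uniform(1, 50, 80),        # carrier-to-drug ratio (hypothetical)
    rng.uniform(5, 60, 80),        # dissolution time point, min (hypothetical)
])
y = 20 + 60 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 2, 80)   # % BCL dissolved (toy)

est = SymbolicRegressor(population_size=1000, generations=20,
                        function_set=("add", "sub", "mul", "div"),
                        parsimony_coefficient=0.01, random_state=1)
est.fit(X, y)
print(est._program)   # human-readable equation found by genetic programming
```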

    Effect of roll compaction on granule size distribution of microcrystalline cellulose-mannitol mixtures : computational intelligence modeling and parametric analysis

    Dry granulation using roll compaction is a typical unit operation for producing solid dosage forms in the pharmaceutical industry. Dry granulation is commonly used if the powder mixture is sensitive to heat and moisture and has poor flow properties. The output of roll compaction is compacted ribbons that exhibit different properties depending on the adjusted process parameters. These ribbons are then milled into granules and finally compressed into tablets. The properties of the ribbons directly affect the granule size distribution (GSD) and the quality of the final products; thus, it is imperative to study the effect of roll compaction process parameters on GSD. Understanding how the roll compactor process parameters and material properties interact with each other allows accurate control of the process, leading to the implementation of quality by design practices. Computational intelligence (CI) methods have great potential for use within the scope of the quality by design approach. The main objective of this study was to show how computational intelligence techniques can be used to predict the GSD from different roll compaction process conditions and material properties. Techniques such as multiple linear regression, artificial neural networks, random forest, Cubist, and the k-nearest neighbors algorithm, assisted by sevenfold cross-validation, were used to develop generalized models for the prediction of GSD based on roll compaction process settings and material properties. The normalized root-mean-square error and the coefficient of determination (R2) were used for model assessment. The best fit was obtained by the Cubist model (normalized root-mean-square error = 3.22%, R2 = 0.95). Based on the results, it was confirmed that the material properties (true density), followed by the compaction force, have the most significant effect on GSD.
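
    A minimal sketch of the model comparison described above: several regressors evaluated under sevenfold cross-validation with normalized RMSE and R2. Cubist, which gave the best fit in the study, is omitted here because it is distributed as an R package; the predictor columns and data are hypothetical.

```python
# Sketch: comparing regressors for GSD prediction under 7-fold CV (toy data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Assumed inputs: compaction force, roll speed, MCC:mannitol ratio, true density
X = rng.random((100, 4))
y = 0.6 * X[:, 3] + 0.3 * X[:, 0] + rng.normal(0, 0.05, 100)  # toy fraction in one size class

cv = KFold(n_splits=7, shuffle=True, random_state=1)
for name, model in [("MLR", LinearRegression()),
                    ("Random forest", RandomForestRegressor(n_estimators=300, random_state=1)),
                    ("k-NN", KNeighborsRegressor(n_neighbors=5))]:
    pred = cross_val_predict(model, X, y, cv=cv)
    nrmse = 100 * np.sqrt(np.mean((y - pred) ** 2)) / (y.max() - y.min())
    print(f"{name}: NRMSE = {nrmse:.1f}%, R2 = {r2_score(y, pred):.2f}")
```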