
    Freeze-drying modeling and monitoring using a new neuro-evolutive technique

    This paper focuses on the design of a black-box model for the freeze-drying of pharmaceuticals. A new methodology based on a self-adaptive differential evolution scheme is combined with a back-propagation algorithm, used as a local search method, for the simultaneous structural and parametric optimization of the model, which is represented by a neural network. Using the model of the freeze-drying process, both the temperature and the residual ice content in the product vs. time can be determined off-line, given the values of the operating conditions (the temperature of the heating shelf and the pressure in the drying chamber). This makes it possible to determine whether the maximum temperature allowed by the product is exceeded and when sublimation drying is complete, thus providing a valuable tool for recipe design and optimization. In addition, the black-box model can be applied to monitor the freeze-drying process: in this case, the measurement of product temperature is used as an input variable of the neural network in order to provide in-line estimation of the state of the product (temperature and residual amount of ice). Various examples are presented and discussed, pointing out the strengths of the tool.
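    As a rough illustration of the hybrid scheme described above, the sketch below runs a simplified differential evolution loop over the flattened weights of a small feed-forward network on synthetic data. The network size, the DE settings and the data are illustrative assumptions; the paper's self-adaptive DE variant and its back-propagation refinement step are omitted for brevity.

```python
# Minimal sketch (not the authors' code): differential evolution optimizes the
# weights of a small neural network mapping operating conditions (e.g. shelf
# temperature, chamber pressure, time) to product state. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: 3 inputs -> 2 outputs, a stand-in for process data.
X = rng.uniform(-1, 1, size=(200, 3))
Y = np.c_[np.tanh(X @ np.array([0.5, -0.3, 0.8])),
          np.tanh(X @ np.array([-0.2, 0.7, 0.1]))]

N_HID = 8
N_W = 3 * N_HID + N_HID + N_HID * 2 + 2   # all weights and biases, flattened

def unpack(w):
    i = 0
    W1 = w[i:i + 3 * N_HID].reshape(3, N_HID); i += 3 * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * 2].reshape(N_HID, 2); i += N_HID * 2
    return W1, b1, W2, w[i:]

def mse(w):
    W1, b1, W2, b2 = unpack(w)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((pred - Y) ** 2)

# Plain DE/rand/1/bin; the paper instead self-adapts F and CR and refines
# promising individuals with a few back-propagation steps (local search).
POP, F, CR = 30, 0.8, 0.9
pop = rng.normal(0.0, 0.5, size=(POP, N_W))
cost = np.array([mse(p) for p in pop])
for _ in range(300):
    for i in range(POP):
        idx = rng.choice([j for j in range(POP) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        trial = np.where(rng.random(N_W) < CR, a + F * (b - c), pop[i])
        f = mse(trial)
        if f < cost[i]:
            pop[i], cost[i] = trial, f
print("best MSE:", cost.min())
```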

    Modelling granule size distribution produced on a continuous manufacturing line with non-linear autoregressive artificial neural networks

    Master's thesis, Engenharia Farmacêutica, Universidade de Lisboa, Faculdade de Farmácia, 2018.
    Particle size is a critical quality parameter in several pharmaceutical unit operations. An adequate particle size distribution is essential to ensure optimal manufacturability, which, in turn, has an important impact on the safety, efficacy and quality of the end product. Thus, the monitoring and control of particle size via in-process size measurements is crucial to the pharmaceutical industry. Currently, a wide range of techniques is available for the determination of particle size distribution; however, a technique that provides relevant process data in real time is highly preferable, as it offers a better understanding of and control over the process. The pharmaceutical industry follows the "technology-push model", as it depends on scientific and technological advances. Hence, the optimization of product monitoring technologies for drug products has been receiving more attention, as it helps to increase profitability. In recent years, an increasing interest in the use of virtual instruments as an alternative to physical instruments has arisen. A software sensor uses information collected from a process operation to estimate values of some property of interest that is typically difficult to measure experimentally. One of the most significant benefits of this computational approach is the possibility of adapting the measuring system through several optimization solutions.
    The present thesis focuses on the development of a dynamic mathematical model capable of predicting particle size distribution in real time. For this purpose, multivariate data coming from univariate sensors placed at multiple locations of the continuous production line ConsiGma™-25 were used to determine the size distribution (d50) of granules evaluated at a specific site within the line. The ConsiGma™-25 system is a continuous granulation line developed by GEA Pharma. It consists of three modules: a continuous twin-screw granulation module, a six-segmented fluid bed dryer and a product control unit. In the granulation module, granules are produced inside the twin-screw granulator by mixing the powder and the granulation liquid (water) fed into the granulation barrel. Once the granulation operation is finished, the produced granules are pneumatically transferred to the fluid bed dryer module, where they are placed in a specific dryer cell and dried for a pre-defined period of time. The dry granules are then transported to the product control hopper, with its integrated mill, situated in the product control unit. The granules are milled, and the resulting product is gravitationally discharged and can undergo further processing steps, such as blending, tableting and coating.
    The size distribution (d50) of the granules determined in this work was assessed inside dryer cell no. 4, located in the dryer module, and was measured every ten seconds by a focused beam reflectance measurement technique. A non-linear autoregressive network with exogenous inputs (NARX) was developed to achieve accurate predictions of granule size distribution values. The development of the predictive model consisted of the implementation of an optimization strategy in terms of topology, inputs, delays and training methodology.
    A total of sixteen runs, carried out in August, were used in this work. For each run, process data such as pressures, temperatures and air flows, recorded every second by sensors placed along the line, were available, together with the granule size distribution (d50). Owing to the temporal discrepancy between the process data and the d50 values, the data were processed in three main stages: alignment, filtering and organization/fragmentation. For each process variable (input), a delay was assigned based on assumptions supported by residence-time studies of the three modules of the continuous line, and the model inputs were defined with a mathematical model developed to identify the set of variables yielding the lowest root mean squared error of prediction of the property of interest, d50. The fragmented data were divided into two main sets, training and test: the network was trained and validated on the training data, and the test data were then used to evaluate the predictive capability of the optimized model.
    The network was trained against the d50 values obtained from the particle size distribution collected in situ by the focused beam reflectance measurement technique mentioned above. The model was able to predict the d50 value from the beginning to the end of the several drying cycles. The accuracy of the artificial neural network corresponded to a root mean squared error of prediction of 6.9%, demonstrating its capability to produce results close to the experimental data of the cycles/runs included in the test set. The predictive ability of the neural network, however, could not be extended to drying cycles presenting irregular fluctuations. Given the importance of precise monitoring of the size distribution in pharmaceutical operations, a future adjustment of the optimization strategy is of great interest: a higher number of experimental runs/cycles can be used during training to enable the network to identify and predict atypical cases more easily, a more realistic optimization strategy could be applied to all process parameters simultaneously, for example through the implementation of a genetic algorithm, and changes to the network topology can also be considered.
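    As a hedged sketch of the NARX setup described above (not the thesis code), the example below stacks delayed values of the target d50 and of exogenous process signals into a regression problem solved by a small neural network. The synthetic dynamics, the uniform two-step delay and the variable names are illustrative assumptions; the thesis assigns per-variable delays from residence-time studies and optimizes the input set.

```python
# NARX-style predictor on synthetic data: past d50 values plus delayed
# exogenous inputs predict the next d50 value.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

T = 500
u = rng.standard_normal((T, 3))   # stand-ins for process signals (pressures, temperatures, air flows)
d50 = np.zeros(T)
for t in range(2, T):             # toy dynamics for demonstration only
    d50[t] = 0.6 * d50[t-1] - 0.2 * d50[t-2] + 0.3 * u[t-1, 0] + 0.1 * u[t-2, 1]

LAGS = 2                          # uniform delay here; per-variable in the thesis
rows = [np.r_[d50[t-LAGS:t], u[t-LAGS:t].ravel()] for t in range(LAGS, T)]
Xf, yf = np.array(rows), d50[LAGS:]

split = int(0.8 * len(Xf))        # chronological train/test split
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(Xf[:split], yf[:split])
pred = model.predict(Xf[split:])
print("test RMSE:", np.sqrt(np.mean((pred - yf[split:]) ** 2)))
```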

    Using recurrent neural networks to predict the time for an event

    Final project of the Master in Fundamentals of Data Science, Facultat de Matemàtiques, Universitat de Barcelona, 2018. Tutor: Jordi Vitrià i Marca.
    One of the main concerns of the manufacturing industry is the constant threat of unplanned stops. Even if maintenance guidelines are followed for all the components of a line, these downtimes are common and affect productivity. Most of what is done nowadays in manufacturing plants involves classic statistics and, sometimes, online monitoring. In most industries, data related to the process are monitored and saved for regulatory purposes; unfortunately, they are barely used, even though current technologies offer a wide horizon of possibilities. The time to an event is a primary outcome of interest in many fields (e.g., medical research and customer churn), and we think it is also very interesting for predictive maintenance. The time to an event (in this context, the time to failure) is typically positively skewed, subject to censoring, and explained by time-varying variables. Therefore, conventional statistical learning techniques such as linear regression or random forests do not apply; instead, we have to rely on more complex methods. In particular, we focus on the WTTE-RNN framework proposed by Egil Martinsson, which employs recurrent neural networks to predict the parameters of a Weibull distribution. The result is a flexible and powerful model especially suited for time-distributed data that can be organized in batches.
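    The core of the WTTE-RNN framework can be sketched as follows: a recurrent network emits a Weibull scale and shape per time step, and training minimizes the censored Weibull negative log-likelihood. This is a minimal PyTorch sketch after Martinsson's formulation, not his reference implementation; the layer sizes, shapes and synthetic batch are illustrative assumptions.

```python
import torch
import torch.nn as nn

class WTTE(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # per-step (alpha, beta)

    def forward(self, x):
        h, _ = self.rnn(x)
        out = self.head(h)
        alpha = torch.exp(out[..., 0])                      # Weibull scale > 0
        beta = nn.functional.softplus(out[..., 1]) + 1e-6   # Weibull shape > 0
        return alpha, beta

def weibull_nll(alpha, beta, t, observed):
    # Censored Weibull negative log-likelihood:
    #   observed event:  log f(t) = log(beta) - beta*log(alpha)
    #                              + (beta-1)*log(t) - (t/alpha)^beta
    #   right-censored:  log S(t) = -(t/alpha)^beta
    t = t.clamp(min=1e-6)
    hazard = (t / alpha) ** beta
    loglik = observed * (torch.log(beta) - beta * torch.log(alpha)
                         + (beta - 1) * torch.log(t)) - hazard
    return -loglik.mean()

# One synthetic training step to show the plumbing:
x = torch.randn(8, 20, 5)                  # (batch, time steps, sensor features)
t = torch.rand(8, 20) * 10 + 0.1           # time-to-event target per step
u = torch.randint(0, 2, (8, 20)).float()   # 1 = event observed, 0 = censored
model = WTTE(5)
alpha, beta = model(x)
loss = weibull_nll(alpha, beta, t, u)
loss.backward()
print(float(loss))
```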

    Quality-by-design in pharmaceutical development: From current perspectives to practical applications

    Current pharmaceutical research tends to follow a systematic approach in the field of applied research and development. The concept of quality-by-design (QbD) has been the focus of recent progress in the pharmaceutical sciences. It is based on, but not limited to, risk assessment, design of experiments and other computational methods, and process analytical technology. These tools offer a well-organized methodology both to identify and to analyse the hazards that should be handled as critical and are therefore addressed in the control strategy. Once implemented, the QbD approach deepens experts' understanding of the developed analytical technique or manufacturing process. The main activities are oriented towards the identification of the quality target product profile and the critical quality attributes, their risk management, and their analysis through in silico methods. This review aims to offer an overview of the current standpoints and general applications of QbD methods in pharmaceutical development.
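    As a toy illustration of one QbD tool named above, design of experiments, the snippet below builds a 2^3 full factorial screen of hypothetical critical process parameters and estimates main effects on a synthetic quality attribute. The factor names and the response model are invented for illustration only.

```python
# 2^3 full factorial design with main-effects estimation on synthetic data.
import itertools
import numpy as np
import pandas as pd

levels = [-1, 1]
factors = ["granulation_temp", "mixing_speed", "binder_conc"]  # hypothetical CPPs
design = pd.DataFrame(list(itertools.product(levels, repeat=3)), columns=factors)

rng = np.random.default_rng(5)
# Synthetic critical quality attribute (e.g. dissolution): binder_conc dominates.
design["cqa"] = (80 + 5 * design["binder_conc"] + 2 * design["mixing_speed"]
                 + rng.normal(0, 1, len(design)))

for f in factors:  # main effect = mean(high level) - mean(low level)
    effect = (design.loc[design[f] == 1, "cqa"].mean()
              - design.loc[design[f] == -1, "cqa"].mean())
    print(f"{f}: {effect:+.2f}")
```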

    Machine learning in bioprocess development: From promise to practice

    Fostered by novel analytical techniques, digitalization and automation, modern bioprocess development generates large amounts of heterogeneous experimental data containing valuable process information. In this context, data-driven methods such as machine learning (ML) have a high potential to rationally explore large design spaces while exploiting experimental facilities most efficiently. The aim of this review is to demonstrate how ML methods have been applied so far in bioprocess development, especially in strain engineering and selection, bioprocess optimization, scale-up, and the monitoring and control of bioprocesses. For each topic, we highlight successful application cases, discuss current challenges and point out domains that can potentially benefit from technology transfer and further progress in the field of ML.

    Machine learning to empower electrohydrodynamic processing

    Electrohydrodynamic (EHD) processes are promising healthcare fabrication technologies, as evidenced by the number of commercialised and Food and Drug Administration (FDA)-approved products produced by these processes. Their ability to produce nano-sized products both rapidly and precisely provides them with a unique set of qualities that cannot be matched by other fabrication technologies. Consequently, this has stimulated the development of EHD processing to tackle other healthcare challenges. However, as with most technologies, time and resources will be needed to fully realise the potential EHD processes can offer. To address this bottleneck, researchers are adopting machine learning (ML), a subset of artificial intelligence, into their workflow. ML has already made ground-breaking advances in the healthcare sector, and it is anticipated to do the same in the materials domain. At present, the application of ML in fabrication technologies lags behind other sectors. To that end, this review showcases the progress made by ML for EHD workflows, demonstrating how the latter can benefit greatly from the former. In addition, we provide an introduction to the ML pipeline to help encourage its use by other EHD researchers. As discussed, the merger of ML with EHD has the potential to expedite novel discoveries and to automate the EHD workflow.
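    The kind of ML pipeline such a review introduces can be sketched as follows: process parameters in, a product attribute out, with scaling, cross-validation and a held-out test. The feature names (voltage, flow rate, polymer concentration, tip-to-collector distance), the response model and all data here are illustrative assumptions, not the review's data.

```python
# Illustrative supervised-learning pipeline for EHD-style process data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split, cross_val_score

rng = np.random.default_rng(2)
# Hypothetical features: voltage (kV), flow rate (mL/h), polymer conc. (%),
# tip-to-collector distance (cm).
X = rng.uniform([5, 0.1, 1, 5], [25, 5, 20, 25], size=(300, 4))
diameter = 50 + 3 * X[:, 2] - 1.5 * X[:, 0] + rng.normal(0, 5, 300)  # toy target (nm)

X_tr, X_te, y_tr, y_te = train_test_split(X, diameter, random_state=0)
pipe = make_pipeline(StandardScaler(), RandomForestRegressor(random_state=0))
print("CV R^2:", cross_val_score(pipe, X_tr, y_tr, cv=5).mean())
pipe.fit(X_tr, y_tr)
print("held-out R^2:", pipe.score(X_te, y_te))
```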

    A Machine Learning Approach for Predicting Clinical Trial Patient Enrollment in Drug Development Portfolio Demand Planning

    One of the biggest challenges the clinical research industry currently faces is the accurate forecasting of patient enrollment (namely, if and when a clinical trial will achieve full enrollment), as the stochastic behavior of enrollment can significantly contribute to delays in the development of new drugs, increases in the duration and costs of clinical trials, and the over- or under-estimation of clinical supply. This study proposes a machine learning model using a Fully Convolutional Network (FCN) trained on a dataset of 100,000 patient enrollment data points, including patient age, patient gender, patient disease, investigational product, study phase, blinded vs. unblinded status, sponsor CRO selection, enrollment quarter, and enrollment country, to predict patient enrollment characteristics in clinical trials. The model was tested on a dataset of 5,000 data points and yielded a high level of accuracy. This development in patient enrollment prediction will optimize portfolio demand planning and help avoid the costs associated with inaccurate patient enrollment forecasting.
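    A hedged sketch of the general setup described above (not the authors' FCN, whose architecture is not detailed here): categorical trial descriptors are one-hot encoded and a small dense network regresses an enrollment outcome. The feature subset, the synthetic target (weeks to full enrollment) and all values are invented for illustration.

```python
# Toy enrollment-forecasting setup: categorical descriptors -> regressor.
import numpy as np
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "phase": rng.choice(["I", "II", "III"], n),
    "blinded": rng.choice(["yes", "no"], n),
    "country": rng.choice(["US", "DE", "JP", "BR"], n),
    "quarter": rng.choice(["Q1", "Q2", "Q3", "Q4"], n),
})
weeks = 20 + 10 * (df["phase"] == "III") + rng.normal(0, 3, n)  # toy target

pre = make_column_transformer((OneHotEncoder(), ["phase", "blinded", "country", "quarter"]))
model = make_pipeline(pre, MLPRegressor(hidden_layer_sizes=(32, 32),
                                        max_iter=1000, random_state=0))
X_tr, X_te, y_tr, y_te = train_test_split(df, weeks, random_state=0)
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```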

    Artificial Intelligence for In Silico Clinical Trials: A Review

    A clinical trial is an essential step in drug development, but it is often costly and time-consuming. In silico trials are clinical trials conducted digitally, through simulation and modeling, as an alternative to traditional clinical trials. AI-enabled in silico trials can increase the case group size by creating virtual cohorts as controls; they also enable the automation and optimization of trial design and the prediction of trial success rates. This article systematically reviews papers under three main topics: clinical simulation, individualized predictive modeling, and computer-aided trial design. We focus on how machine learning (ML) may be applied in these applications. In particular, we present the machine learning problem formulation and the available data sources for each task. We end by discussing the challenges and opportunities of AI for in silico trials in real-world applications.

    Fast and versatile chromatography process design and operation optimization with the aid of artificial intelligence

    Preparative and process chromatography is a versatile unit operation for the capture, purification, and polishing of a broad variety of molecules, especially very similar and complex compounds such as sugars, isomers, enantiomers, diastereomers, plant extracts, and metal ions such as rare earth elements. Another steadily growing field of application is biochromatography, covering a diversity of complex compounds such as peptides, proteins, mAbs, fragments, VLPs, and even mRNA vaccines. Aside from this molecular diversity, separation mechanisms range from selective affinity ligands and hydrophobic interaction to ion exchange and mixed modes. Biochromatography is utilized at scales from a few kilograms to 100,000 tons annually, with column diameters of about 20 to 250 cm. Hence, a versatile and fast tool is needed for process design as well as for operation optimization and process control. Existing process modeling approaches are hampered by the sophisticated laboratory-scale experimental setups required for model parameter determination and model validation. For broader application in daily project work, the approach has to be faster and demand less effort from non-chromatography experts. Through the extensive advances in the field of artificial intelligence, new methods have emerged to address this need. This paper proposes an artificial neural network-based approach that enables the identification of competitive Langmuir isotherm parameters of arbitrary three-component mixtures on a previously specified column. This is realized by training an ANN with simulated chromatograms varying in their isotherm parameters. In contrast to traditional parameter estimation techniques, the estimation time is reduced to milliseconds, and the need for expert or prior knowledge to obtain feasible estimates is reduced.
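    A drastically simplified sketch of the inverse approach described above: sample competitive-Langmuir parameters, generate a forward response, and train an ANN to recover the parameters from that response. As an assumption for brevity, the "chromatogram" is replaced here by the isotherm evaluated on a concentration grid; the paper instead trains on full simulated chromatograms for a specified column.

```python
# Learn the inverse map response -> competitive Langmuir parameters with an ANN.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
c_grid = np.linspace(0.1, 10, 25)   # concentration grid (g/L)

def competitive_langmuir(a, b, c):
    # q_i = a_i * c_i / (1 + sum_j b_j * c_j), all components at concentration c
    denom = 1 + (b * c[:, None]).sum(axis=1)
    return (a * c[:, None]) / denom[:, None]

n = 5000
params = rng.uniform([1, 1, 1, 0.05, 0.05, 0.05],
                     [10, 10, 10, 0.5, 0.5, 0.5], size=(n, 6))
X = np.array([competitive_langmuir(p[:3], p[3:], c_grid).ravel() for p in params])

ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
ann.fit(X[:4500], params[:4500])    # train on simulated responses
err = np.abs(ann.predict(X[4500:]) - params[4500:]).mean(axis=0)
print("mean abs. parameter error:", err.round(3))
```

    Once trained, inference is a single forward pass, which is why the estimation time drops to milliseconds compared with iterative fitting of the forward model.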