
    Application of machine-learning algorithms for better understanding of tableting properties of lactose co-processed with lipid excipients

    Co-processing (CP) imparts superior properties to excipients and has become a reliable option for facilitating the formulation and manufacturing of a variety of solid dosage forms. Developing directly compressible formulations with high doses of poorly flowing/compressible active pharmaceutical ingredients, such as paracetamol, remains a great challenge for the pharmaceutical industry, owing to a limited understanding of the interplay between formulation properties, the compaction process, and the tablet detachment and ejection stages. The aim of this study was to analyze the influence of compression load, excipient co-processing, and the addition of paracetamol on the resulting tablets' tensile strength and on specific parameters of the tableting process, such as (net) compression work, elastic recovery, detachment and ejection work, and ejection force. Two types of neural networks were used to analyze the data: a classification network (Kohonen network) and regression networks (multilayer perceptron and radial basis function), to build prediction models and to identify the variables that predominantly affect the tableting process and the obtained tablets' tensile strength. It has been demonstrated that sophisticated data-mining methods are necessary to interpret complex phenomena regarding the effect of co-processing on the tableting properties of directly compressible excipients.
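    As a rough, hypothetical illustration of the regression side of such an analysis (not the study's actual models or data), the sketch below fits a small multilayer perceptron to synthetic tableting data, with compression load, a co-processing flag, and paracetamol fraction as inputs and tensile strength as the target.

```python
# Minimal sketch: MLP regression for tablet tensile strength.
# All feature names and data below are hypothetical illustrations,
# not values from the study.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
# Hypothetical inputs: compression load (kN), co-processed excipient (0/1),
# paracetamol mass fraction.
X = np.column_stack([
    rng.uniform(5, 30, n),       # compression load
    rng.integers(0, 2, n),       # co-processed excipient flag
    rng.uniform(0.0, 0.7, n),    # paracetamol fraction
])
# Synthetic target, chosen only to make the example runnable.
y = 0.1 * X[:, 0] + 0.8 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```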

    Quality-by-design in pharmaceutical development: From current perspectives to practical applications

    Current pharmaceutical research tends to follow a systematic approach in applied research and development. The concept of quality-by-design (QbD) has been the focus of recent progress in the pharmaceutical sciences. It is based on, but not limited to, risk assessment, design of experiments, other computational methods, and process analytical technology. These tools offer a well-organized methodology both to identify and to analyse the hazards that should be handled as critical and that are therefore addressed in the control strategy. Once implemented, the QbD approach deepens experts' understanding of the analytical technique or manufacturing process being developed. The main activities are oriented towards identifying the quality target product profile and the critical quality attributes, managing the associated risks, and analysing them through in silico methods. This review aims to offer an overview of the current standpoints and general applications of QbD methods in pharmaceutical development.
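    As a minimal sketch of one of the QbD tools mentioned above, the snippet below enumerates a two-level full factorial design of experiments for three assumed process factors; the factor names and levels are hypothetical, chosen only to show how such a screening design is laid out.

```python
# Minimal sketch: a 2-level full factorial screening design.
# Factor names and levels are hypothetical, illustrating how a DoE
# run table for a QbD study can be generated.
from itertools import product

factors = {
    "compression_force_kN": (10, 25),
    "blend_time_min": (5, 15),
    "lubricant_pct": (0.5, 1.5),
}

# One run per combination of low/high levels: 2**3 = 8 runs.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for i, run in enumerate(runs, start=1):
    print(f"run {i:02d}: {run}")
```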

    Modelling granule size distribution produced on a continuous manufacturing line with non-linear autoregressive artificial neural networks

    Master's thesis (Tese de mestrado), Engenharia Farmacêutica, Universidade de Lisboa, Faculdade de Farmácia, 2018.

    Particle size is a critical quality parameter in several pharmaceutical unit operations. An adequate particle size distribution is essential to ensure optimal manufacturability, which, in turn, has an important impact on the safety, efficacy, and quality of the end product. Monitoring and controlling particle size via in-process size measurements is therefore crucial to the pharmaceutical industry. A wide range of techniques is currently available for determining particle size distribution, but a technique that provides relevant process data in real time is highly preferable, as it offers better understanding of and control over the process. The pharmaceutical industry follows the "technology-push" model, as it depends on scientific and technological advances; hence, the optimization of product-monitoring technologies for drug products has been receiving more attention, as it helps to increase profitability. In recent years, interest has grown in virtual instruments as an alternative to physical instruments. A software sensor uses information collected from a process operation to estimate the value of a property of interest that is typically difficult to measure experimentally, and one of the most significant benefits of this computational approach is the possibility of adapting the measuring system through various optimization strategies.

    This thesis focuses on the development of a dynamic mathematical model capable of predicting particle size distribution in real time. Multivariate data generated every second by univariate sensors placed at multiple locations of the continuous production line ConsiGmaTM-25 were used to predict the size distribution (d50) of granules evaluated at a specific site within the line. The ConsiGmaTM-25, a continuous granulation line developed by GEA Pharma, consists of three modules: a continuous twin-screw granulation module, a six-segmented-cell fluid bed dryer, and a product control unit. In the granulation module, granules are produced inside the twin-screw granulator by mixing the powder with the granulation liquid (water) fed into the granulation barrel. The granules are then pneumatically transferred to the fluid bed dryer module, where they are directed to a specific dryer cell and dried for a pre-defined period of time. The dry granules are subsequently transported to the product control hopper, with an integrated mill, located in the product control unit; the milled product is gravitationally discharged and can undergo further processing steps, such as blending, tableting, and coating. The size distribution (d50) to be predicted in this work was assessed inside dryer cell no. 4, located in the dryer module, and was measured every ten seconds by a focused beam reflectance measurement (FBRM) technique.

    A total of sixteen runs, performed in August, were used in this work. For each run, process parameters such as pressures, temperatures, and air flows, among others, were available alongside the measured granule size distribution (d50). Because of the temporal discrepancy between the process data and the d50 values, the data were pre-processed in three main stages: alignment, filtering, and organization/fragmentation. A non-linear autoregressive network with three exogenous inputs (NARX) was then developed to produce accurate predictions of granule size distribution. Model development involved an optimization strategy covering topology, delays, inputs, run selection, and training methodology. A delay was assigned to each process variable (input) based on assumptions grounded in residence-time studies of the three modules of the continuous line, and the inputs were selected using a mathematical model that identified the variable set yielding the lowest root mean squared error of prediction for d50. The fragmented data were divided into training and test sets: the network was trained and validated against the d50 obtained from the in-situ FBRM measurements, and the test set was then used to evaluate the predictive ability of the optimized model.

    The model was able to predict the d50 value from the beginning to the end of the several drying cycles. The accuracy of the artificial neural network was characterized by a root mean squared error of prediction of 6.9%, demonstrating its ability to produce results close to the experimental data of the cycles/runs included in the test set. The predictive ability of the neural network, however, did not extend to drying cycles with irregular fluctuations. Given the importance of precise monitoring of size distribution in pharmaceutical operations, a future adjustment of the optimization strategy is of great interest. A larger number of experimental runs/cycles during training could enable the network to identify and predict atypical cases more easily; in addition, a more realistic optimization could be performed over all process parameters simultaneously, for example through a genetic algorithm, and changes to the network topology could also be considered.
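    As a rough sketch of the NARX-style modelling approach described above, assuming per-input delays and a chronological train/test split but with invented signal names, lags, and synthetic data (the thesis derives its delays from residence-time studies), the snippet below builds lagged exogenous and autoregressive features, fits a small feed-forward regressor, and reports the root mean squared error of prediction.

```python
# Minimal NARX-style sketch: predict d50(t) from lagged exogenous process
# signals and lagged d50 values. Signal names, lags, and data are
# hypothetical; this is one-step-ahead prediction using measured past d50.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
T = 1200
# Hypothetical exogenous signals (e.g., a temperature, a pressure, an air flow).
u = rng.normal(size=(T, 3)).cumsum(axis=0) * 0.01
# Synthetic d50 series driven by the inputs, only to make this runnable.
d50 = 400 + 50 * u[:, 0] - 30 * u[:, 1] + rng.normal(0, 2, T)

lags_u = [5, 8, 12]   # one assumed delay per exogenous input
lags_y = [1, 2]       # autoregressive d50 lags
start = max(lags_u + lags_y)

rows, targets = [], []
for t in range(start, T):
    feats = [u[t - lag, i] for i, lag in enumerate(lags_u)]
    feats += [d50[t - lag] for lag in lags_y]
    rows.append(feats)
    targets.append(d50[t])
X, y = np.asarray(rows), np.asarray(targets)

split = int(0.8 * len(X))          # chronological train/test split
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmsep = np.sqrt(mean_squared_error(y[split:], pred))
print(f"RMSEP: {rmsep:.2f} (same units as d50)")
```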

    Imparting 3D representations to artificial intelligence for a full assessment of pressure injuries.

    During recent decades, researchers have shown great interest in machine learning techniques as a way to extract meaningful information from the large amounts of data collected each day. In the medical field especially, images play a significant role in the detection of several health issues. Medical image analysis therefore contributes substantially to the diagnostic process and is a natural setting for intelligent systems. Deep learning (DL) has recently captured the interest of researchers, as it has proven effective at detecting underlying features in data and has outperformed classical machine learning methods. The main objective of this dissertation is to demonstrate the effectiveness of deep learning techniques, applied through medical imaging, in tackling one of the important health issues facing our society. Pressure injuries are a dermatology-related health issue associated with increased morbidity and health care costs, and managing them appropriately is increasingly important for all professionals in wound care. Using 2D photographs and 3D meshes of these wounds, collected from collaborating hospitals, our mission is to create intelligent systems for a full, non-intrusive assessment of these wounds. Five main tasks were achieved in this study: a literature review of wound imaging methods using machine learning techniques; the classification and segmentation of the tissue types inside the pressure injury; the segmentation of these wounds; the design of an end-to-end system that measures from 3D meshes all the quantitative information necessary for an efficient assessment of pressure injuries (PIs); and the integration of these assessment imaging techniques into a web-based application.
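    One of the quantitative measurements such an end-to-end 3D system can extract is wound surface area. As a minimal sketch assuming a triangulated mesh (the dissertation's actual measurement pipeline is not reproduced here), the snippet below sums per-triangle areas computed with cross products.

```python
# Minimal sketch: surface area of a triangulated 3D mesh region via
# the cross-product formula. The tetrahedron below is toy data; a real
# pipeline would load a wound mesh segmented from a 3D scan.
import numpy as np

def mesh_surface_area(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Sum of triangle areas: 0.5 * |(B - A) x (C - A)| per face."""
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    cross = np.cross(b - a, c - a)
    return 0.5 * np.linalg.norm(cross, axis=1).sum()

vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
print(f"surface area: {mesh_surface_area(vertices, faces):.4f}")
```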

    The News Delivery Channel Recommendation Based on Granular Neural Network

    With the continuous maturation and expansion of neural network technology, deep neural networks have been widely used as the fundamental building blocks of deep learning in a variety of applications, including speech recognition, machine translation, image processing, and recommendation systems, and many complex real-world problems can be solved with deep learning techniques. Traditional news recommendation systems mostly employ techniques based on collaborative filtering and deep learning, but the performance of these algorithms is constrained by the sparsity of the data and the scalability of the approaches. In this paper, we propose a recommendation model that uses a granular neural network to recommend news to appropriate channels by analyzing the properties of the news. Specifically, the granular neural network is built on a specified underlying neural network; different information granularities are attributed to different types of news material and are propagated between networks in different ways. When the data are processed, a granular output is created, which is compared against the interval values pre-set on the various platforms and used to quantify the effectiveness of the analysis. The results could help the media match the proper news in depth, maximizing public attention to the news and the utilization of media resources.
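    As a rough illustration of the interval-comparison step described above, with invented channel names, intervals, and output values (the paper's actual granular network is more elaborate), the sketch below compares an interval-valued output against pre-set per-channel intervals and routes the item to the channel with the greatest overlap.

```python
# Minimal sketch: route a news item by comparing a granular (interval)
# model output against pre-set per-channel intervals. Channel names,
# intervals, and the example output are hypothetical.
from typing import Dict, Tuple

Interval = Tuple[float, float]

# Hypothetical pre-set intervals, one per delivery channel.
CHANNEL_INTERVALS: Dict[str, Interval] = {
    "push_notification": (0.75, 1.00),
    "homepage_feed":     (0.45, 0.80),
    "newsletter":        (0.10, 0.50),
}

def overlap(a: Interval, b: Interval) -> float:
    """Length of the intersection of two closed intervals (0 if disjoint)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def route(granular_output: Interval) -> str:
    """Pick the channel whose pre-set interval overlaps the output most."""
    return max(CHANNEL_INTERVALS,
               key=lambda ch: overlap(granular_output, CHANNEL_INTERVALS[ch]))

# Example: a granular network emits the interval [0.55, 0.85] for an item.
print(route((0.55, 0.85)))   # -> homepage_feed
```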