50 research outputs found

    Machine learning models for the prediction of pharmaceutical powder properties

    Get PDF
    Error on title page – year of award is 2023. Understanding how particle attributes affect pharmaceutical manufacturing process performance remains a significant challenge for the industry, adding cost and time to the development of robust products and production routes. Tablet formation can be achieved by several techniques; however, direct compression (DC) and granulation are the most widely used in industrial operations. DC is of particular interest as it offers lower-cost manufacturing and a streamlined process with fewer steps compared with other unit operations. However, achieving the full potential benefits of DC for tablet manufacture places strict demands on material flow properties, blend uniformity, compactability, and lubrication, all of which need to be satisfied. DC is increasingly the preferred technique for pharmaceutical companies for oral solid dose manufacture, making the flow prediction of pharmaceutical materials of increasing importance. Bulk properties are influenced by particle attributes, such as particle size and shape, which are defined during crystallization and/or milling processes. Currently, assessing the suitability of raw materials and/or formulated blends for DC requires detailed characterization of the bulk properties. A key goal of digital design and Industry 4.0 concepts is, through digital transformation of existing development steps, to better predict properties whilst minimizing the amount of material and resources required to inform process selection during early-stage development. The work presented in Chapter 4 focuses on developing machine learning (ML) models to predict the powder flow behaviour of routine, widely available pharmaceutical materials. Several datasets comprising powder attributes (particle size, shape, surface area, surface energy, and bulk density) and flow properties (flow function coefficient) have been built for pure compounds, binary mixtures, and multicomponent formulations. Using these datasets, different ML models, including traditional ML (random forest, support vector machines, k-nearest neighbours, gradient boosting, AdaBoost, Naïve Bayes, and logistic regression) classification and regression approaches, have been explored for the prediction of flow properties via the flow function coefficient. The models have been evaluated using multiple sampling methods and validated using external datasets, showing a performance of over 80%, which is sufficiently high for their implementation to improve manufacturing efficiency. Finally, interpretability methods, namely SHAP (SHapley Additive exPlanations), have been used to understand the predictions of the machine learning models by determining how much each variable included in the training dataset contributed to each final prediction. Chapter 5 expands on the work presented in Chapter 4 by demonstrating the applicability of ML models for classifying the viability of pharmaceutical formulations for continuous DC, based on the flow function coefficient of their powder flow. More than 100 formulations were included in this model; the particle size and shape of the active pharmaceutical ingredients (APIs), the flow function coefficient of the APIs, and the concentration of the components of the formulations were used to build the training dataset. The ML models were evaluated using different sampling techniques, such as bootstrap sampling and 10-fold cross-validation, achieving a precision of 90%.
Furthermore, Chapter 6 presents a comparison of two data-driven modelling approaches to predict powder flow: a Random Forest (RF) model and a Convolutional Neural Network (CNN) model. A total of 98 powders covering a wide range of particle sizes and shapes were assessed using static image analysis. The RF model was trained on the tabular data (particle size, aspect ratio, and circularity descriptors), and the CNN model was trained on the composite images; both datasets were extracted from the same characterisation instrument. The data were split into training, testing, and validation sets, and the validation results were used to compare the performance of the two approaches. Both approaches performed similarly, with the RF model and the CNN model each achieving an accuracy of 55%. Finally, other particle and bulk properties, i.e., bulk density, surface area, and surface energy, and their impact on the manufacturability and bioavailability of the drug product are explored in Chapter 7. The bulk density models achieved a high performance of 82%, the surface area models achieved a performance of 80%, and the surface energy models achieved a performance of 60%. The results of the models presented in this chapter pave the way towards unified guidelines for end-to-end continuous manufacturing by linking manufacturability requirements with bioavailability requirements.
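To make the modelling workflow concrete, here is a minimal Python sketch of the kind of pipeline described above: a random-forest classifier trained on powder attributes to predict a flow-function-coefficient (FFc) class, evaluated with 10-fold cross-validation and explained with SHAP. The feature names, the FFc threshold of 10, and the synthetic data are illustrative assumptions, not values from the thesis.

```python
# Hypothetical sketch of the flow-classification workflow described above:
# a random-forest classifier trained on powder attributes to predict an
# FFc-based flow class, explained with SHAP. All names, thresholds, and
# data below are assumptions for illustration only.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "d50_um": rng.uniform(5, 300, 200),           # median particle size
    "aspect_ratio": rng.uniform(0.3, 1.0, 200),   # particle shape descriptor
    "surface_area_m2g": rng.uniform(0.1, 5, 200),
    "bulk_density_gml": rng.uniform(0.2, 0.9, 200),
})
# Label: "good flow" if FFc > 10 (threshold chosen for illustration only).
ffc = 0.05 * X["d50_um"] + 8 * X["aspect_ratio"] + rng.normal(0, 1, 200)
y = (ffc > 10).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(model, X, y, cv=10).mean())  # 10-fold CV, as in the thesis

model.fit(X, y)
explainer = shap.TreeExplainer(model)              # per-feature contributions
shap_values = explainer.shap_values(X)             # one attribution per feature
```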

    Federated Domain Generalization: A Survey

    Full text link
    Machine learning typically relies on the assumption that training and testing distributions are identical and that data is centrally stored for training and testing. However, in real-world scenarios, distributions may differ significantly and data is often distributed across different devices, organizations, or edge nodes. Consequently, it is imperative to develop models that can effectively generalize to unseen distributions where data is distributed across different domains. In response to this challenge, there has been a surge of interest in federated domain generalization (FDG) in recent years. FDG combines the strengths of federated learning (FL) and domain generalization (DG) techniques to enable multiple source domains to collaboratively learn a model capable of directly generalizing to unseen domains while preserving data privacy. However, generalizing the federated model under domain shifts is a technically challenging problem that has received scant attention in the research area so far. This paper presents the first survey of recent advances in this area. Initially, we discuss the development process from traditional machine learning to domain adaptation and domain generalization, leading to FDG, and provide the corresponding formal definition. Then, we categorize recent methodologies into four classes (federated domain alignment, data manipulation, learning strategies, and aggregation optimization) and present suitable algorithms in detail for each category. Next, we introduce commonly used datasets, applications, evaluations, and benchmarks. Finally, we conclude this survey by providing some potential research topics for the future.
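As background for the federated side of FDG, the canonical FedAvg aggregation step can be sketched in a few lines: each source domain trains locally, and the server forms a dataset-size-weighted average of the parameters. This is an illustrative NumPy sketch of the generic mechanism, not an algorithm from the survey.

```python
# Minimal FedAvg aggregation sketch (illustrative; not from the survey).
# Each client (source domain) trains locally; the server averages the
# resulting parameter vectors, weighted by local dataset size.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Dataset-size-weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    agg = np.zeros_like(client_weights[0])
    for w, n in zip(client_weights, client_sizes):
        agg += (n / total) * w
    return agg

# Three hypothetical source domains with different amounts of local data.
weights = [np.array([0.2, 1.0]), np.array([0.4, 0.8]), np.array([0.1, 1.2])]
sizes = [100, 50, 150]
print(fed_avg(weights, sizes))
```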

    Seventh Biennial Report: June 2003 - March 2005

    No full text

    An Energy-Efficient and Reliable Data Transmission Scheme for Transmitter-based Energy Harvesting Networks

    Get PDF
    Energy harvesting technology has been studied to overcome the limited power resources of sensor networks. This paper proposes a new data transmission period control and reliable data transmission algorithm for energy-harvesting-based sensor networks. Although previous studies have proposed communication protocols for energy-harvesting-based sensor networks, further discussion is still needed. The proposed algorithm dynamically controls the data transmission period and the number of data transmissions based on environmental information. Through this, energy consumption is reduced and transmission reliability is improved. The simulation results show that the proposed algorithm is more efficient than the previous energy-harvesting-based communication standard, EnOcean, in terms of transmission success rate and residual energy. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2012R1A1A3012227).
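A toy Python sketch of energy-aware duty-cycle adaptation in the spirit of the proposed algorithm is shown below; the thresholds and the scaling rule are invented for illustration and are not the paper's actual scheme.

```python
# Toy sketch of energy-aware transmission-period control. The scaling rule
# and bounds are illustrative assumptions, not the paper's algorithm.
def next_period(residual_energy, harvest_rate,
                base_period=1.0, min_period=0.5, max_period=10.0):
    """Shorten the period when energy is plentiful, lengthen it when scarce."""
    budget = residual_energy + harvest_rate   # energy expected for next cycle
    period = base_period / max(budget, 1e-6)  # more energy -> shorter period
    return min(max(period, min_period), max_period)

print(next_period(residual_energy=0.2, harvest_rate=0.1))  # scarce -> long period
print(next_period(residual_energy=5.0, harvest_rate=2.0))  # plentiful -> short
```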

    Foundations of Software Science and Computation Structures

    Get PDF
    This open access book constitutes the proceedings of the 23rd International Conference on Foundations of Software Science and Computation Structures, FOSSACS 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The 31 regular papers presented in this volume were carefully reviewed and selected from 98 submissions. The papers cover topics such as categorical models and logics; language theory, automata, and games; modal, spatial, and temporal logics; type theory and proof theory; concurrency theory and process calculi; rewriting theory; semantics of programming languages; program analysis, correctness, transformation, and verification; logics of programming; software specification and refinement; models of concurrent, reactive, stochastic, distributed, hybrid, and mobile systems; emerging models of computation; logical aspects of computational complexity; models of software security; and logical foundations of databases.

    Phenomenological modeling of image irradiance for non-Lambertian surfaces under natural illumination.

    Get PDF
    Various vision tasks are usually confronted by appearance variations due to changes of illumination. For instance, in a recognition system, it has been shown that the variability in human face appearance is owed to changes in lighting conditions rather than the person's identity. Theoretically, due to the arbitrariness of the lighting function, the space of all possible images of a fixed-pose object under all possible illumination conditions is infinite-dimensional. Nonetheless, it has been proven that the set of images of a convex Lambertian surface under distant illumination lies near a low-dimensional linear subspace. This result was also extended to include non-Lambertian objects with non-convex geometry. As such, vision applications concerned with the recovery of illumination, reflectance, or surface geometry from images would benefit from a low-dimensional generative model which captures appearance variations w.r.t. illumination conditions and surface reflectance properties. This enables the formulation of such inverse problems as parameter estimation. Typically, subspace construction boils down to performing a dimensionality reduction scheme, e.g. Principal Component Analysis (PCA), on a large set of (real/synthesized) images of the object(s) of interest with fixed pose but different illumination conditions. However, this approach has two major problems. First, the acquired/rendered image ensemble should be statistically significant vis-a-vis capturing the full behavior of the sources of variation of interest, in particular illumination and reflectance. Second, the curse of dimensionality hinders numerical methods such as Singular Value Decomposition (SVD), which become intractable especially with a large number of large-sized realizations in the image ensemble. One way to bypass the need for a large image ensemble is to construct appearance subspaces using phenomenological models which capture appearance variations through mathematical abstraction of the reflection process. In particular, the harmonic expansion of the image irradiance equation can be used to derive an analytic subspace to represent images under fixed pose but different illumination conditions, where the image irradiance equation has been formulated in a convolution framework. Due to their low-frequency nature, irradiance signals can be represented using low-order basis functions, among which Spherical Harmonics (SH) have been extensively adopted. Typically, an ideal solution to the image irradiance (appearance) modeling problem should be able to incorporate complex illumination, cast shadows, and realistic surface reflectance properties, while moving away from the simplifying assumptions of Lambertian reflectance and single-source distant illumination. By handling arbitrary complex illumination and non-Lambertian reflectance, the appearance model proposed in this dissertation moves the state of the art closer to the ideal solution. This work primarily addresses the geometrical compliance of the hemispherical basis for representing surface reflectance while presenting a compact, yet accurate, representation for arbitrary materials. To maintain the plausibility of the resulting appearance, the proposed basis is constructed in a manner that satisfies the Helmholtz reciprocity property while avoiding high computational complexity.
It is believed that having the illumination and surface reflectance represented in the spherical and hemispherical domains, respectively, while complying with the physical properties of the surface reflectance, would provide better approximation accuracy of image irradiance when compared to the representation in the spherical domain alone. Discounting subsurface scattering and surface emittance, this work proposes a surface reflectance basis, based on hemispherical harmonics (HSH), defined on the Cartesian product of the incoming and outgoing local hemispheres (i.e. w.r.t. surface points). This basis obeys the physical properties of surface reflectance, namely reciprocity and energy conservation. The basis functions are validated using analytical reflectance models as well as scattered reflectance measurements, which might violate the Helmholtz reciprocity property (this can be filtered out by projecting them onto the subspace spanned by the proposed basis, where the reciprocity property is preserved in the least-squares sense). The image formation process of isotropic surfaces under arbitrary distant illumination is also formulated in the frequency space, where the orthogonality relation between the illumination and reflectance bases is encoded in what are termed irradiance harmonics. Such harmonics decouple the effect of illumination and reflectance from the underlying pose and geometry. Further, a bilinear approach to analytically construct the irradiance subspace is proposed in order to tackle the inherent problems of small sample size and the curse of dimensionality. The process of finding the analytic subspace is posed as establishing a relation between its principal components and those of the irradiance harmonics basis functions. It is also shown how to incorporate prior information about natural illumination and real-world surface reflectance characteristics in order to capture the full behavior of complex illumination and non-Lambertian reflectance. The use of the presented theoretical framework to develop practical algorithms for shape recovery is further presented, where the hitherto assumed Lambertian assumption is relaxed. With a single image under unknown general illumination, the underlying geometrical structure can be recovered while accounting explicitly for object reflectance characteristics (e.g. human skin types for facial images and teeth reflectance for human jaw reconstruction) as well as complex illumination conditions. Experiments on synthetic and real images illustrate the robustness of the proposed appearance model vis-a-vis illumination variation. Keywords: computer vision, computer graphics, shading, illumination modeling, reflectance representation, image irradiance, frequency space representations, (hemi)spherical harmonics, analytic bilinear PCA, model-based bilinear PCA, 3D shape reconstruction, statistical shape from shading
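To illustrate the low-frequency representation the abstract builds on, here is a small Python sketch that numerically projects a Lambertian-like clamped-cosine signal onto spherical harmonics up to order two using SciPy. The quadrature grid and test signal are illustrative choices; this is the generic SH projection, not the dissertation's hemispherical basis.

```python
# Sketch: projecting an irradiance-like signal onto a low-order spherical
# harmonic (SH) basis. The clamped-cosine test signal is a stand-in; the
# point is that 9 coefficients (l <= 2) capture most of its energy.
import numpy as np
from scipy.special import sph_harm

# Quadrature grid over the sphere (theta: azimuth, phi: polar, SciPy order).
n_t, n_p = 128, 64
theta = np.linspace(0, 2 * np.pi, n_t, endpoint=False)
phi = np.linspace(0, np.pi, n_p)
T, P = np.meshgrid(theta, phi)
dA = np.sin(P) * (2 * np.pi / n_t) * (np.pi / (n_p - 1))  # area element

signal = np.maximum(np.cos(P), 0.0)  # clamped cosine: Lambertian-like kernel

# Project onto SH up to order l = 2 by numerical integration.
coeffs = {}
for l in range(3):
    for m in range(-l, l + 1):
        Y = sph_harm(m, l, T, P)
        coeffs[(l, m)] = np.sum(signal * np.conj(Y) * dA)
print(coeffs[(0, 0)], coeffs[(1, 0)])  # dominant low-order terms
```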

    Remote Sensing for Precision Nitrogen Management

    Get PDF
    This book focuses on fundamental and applied research on the non-destructive estimation and diagnosis of crop leaf and plant nitrogen status and on in-season nitrogen management strategies based on leaf sensors, proximal canopy sensors, unmanned aerial vehicle remote sensing, manned aerial remote sensing, and satellite remote sensing technologies. Statistical and machine learning methods are used to predict plant-nitrogen-related parameters from sensor data, or from sensor data together with soil, landscape, weather, and/or management information. Different sensing technologies and modelling approaches are compared and evaluated. Strategies are developed to use crop sensing data for in-season nitrogen recommendations to improve nitrogen use efficiency and protect the environment.
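As a hypothetical illustration of the sensor-data-to-nitrogen modelling theme, the following Python sketch regresses leaf nitrogen concentration on a canopy vegetation index (NDVI); the data are synthetic and the linear relation is an assumption for demonstration only.

```python
# Hypothetical sketch of the sensor-to-nitrogen modelling theme: regressing
# leaf nitrogen concentration on a vegetation index (NDVI). The data are
# synthetic; real studies calibrate against lab-measured nitrogen.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
ndvi = rng.uniform(0.2, 0.9, 80).reshape(-1, 1)          # canopy NDVI readings
leaf_n = 1.0 + 3.5 * ndvi.ravel() + rng.normal(0, 0.2, 80)  # % N, synthetic

model = LinearRegression().fit(ndvi, leaf_n)
print(model.coef_, r2_score(leaf_n, model.predict(ndvi)))
```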