468 research outputs found

    Uncertainty and Interpretability Studies in Soft Computing with an Application to Complex Manufacturing Systems

    In systems modelling and control theory, the benefits of applying neural networks have been extensively studied, particularly in manufacturing processes such as the prediction of mechanical properties of heat-treated steels. However, modern industrial processes usually involve large amounts of data and a range of non-linear effects and interactions that can hinder model interpretation. In steel manufacturing, for example, understanding the complex mechanisms by which the heat-treatment process generates the final mechanical properties is vital. This knowledge is not available from numerical models, so an experienced metallurgist estimates the model parameters required to obtain the desired properties. Such human knowledge and perception can be imprecise, leading to cognitive uncertainty such as vagueness and ambiguity when making decisions. In system classification, this may translate into a system deficiency: for example, small changes in the input attributes may result in a sudden and inappropriate change of class assignment. To address this issue, practitioners and researchers have developed systems that are functionally equivalent to fuzzy systems and neural networks. Such systems provide a morphology that mimics the human ability to reason via the qualitative aspects of fuzzy information rather than via its quantitative analysis. Furthermore, these models are able to learn from data sets and to describe the associated interactions and non-linearities in the data. However, like neural networks, a neural fuzzy system may suffer from a loss of interpretability and transparency when making decisions, mainly due to the adaptive approaches applied for its parameter identification. Since the Radial Basis Function Neural Network (RBF-NN) can be treated as a fuzzy inference engine, this thesis presents several methodologies that quantify different types of uncertainty and their influence on the interpretability and transparency of the RBF-NN during its parameter identification. In particular, three kinds of uncertainty source in relation to the RBF-NN are studied, namely entropy, fuzziness and ambiguity. First, a methodology based on Granular Computing (GrC), neutrosophic sets and the RBF-NN is presented. The objective of this methodology is to quantify, via neutrosophic sets, the hesitation produced during the granular compression at the low level of interpretability of the RBF-NN. This study also aims to enhance the distinguishability, and hence the transparency, of the initial fuzzy partition. The effectiveness of the proposed methodology is tested against a real case study for the prediction of the properties of heat-treated steels. Secondly, a new Interval Type-2 Radial Basis Function Neural Network (IT2-RBF-NN) is introduced as a new modelling framework. The IT2-RBF-NN takes advantage of the functional equivalence between type-1 Fuzzy Logic Systems (FLSs) and the RBF-NN to construct an Interval Type-2 Fuzzy Logic System (IT2-FLS) that is able to deal with linguistic uncertainty and perceptions in the RBF-NN rule base. This gives rise to different combinations when optimising the IT2-RBF-NN parameters. Finally, a twofold study for uncertainty assessment at the high level of interpretability of the RBF-NN is provided. On the one hand, the first study proposes a new methodology to quantify the (a) fuzziness and (b) ambiguity at each RU, and during the formation of the rule base, via neutrosophic set theory.
    The aim of this methodology is to calculate the fuzziness associated with each rule, and then the ambiguity related to each normalised consequence of the fuzzy rules, which result from rule overlapping and from one-to-many decision choices, respectively. On the other hand, the second study proposes a new methodology to quantify the entropy and fuzziness that arise from the redundancy phenomenon during parameter identification. To conclude this work, the experimental results obtained by applying the proposed methodologies to the modelling of two well-known benchmark data sets and to the prediction of mechanical properties of heat-treated steels led to the publication of three articles in two peer-reviewed journals and one international conference
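
    The functional equivalence between the RBF-NN and a type-1 fuzzy inference system on which the thesis builds can be illustrated with a minimal sketch (not the thesis code; the centres, widths and consequent weights below are illustrative placeholders): each Gaussian hidden unit is read as a fuzzy rule, its activation as the rule firing strength, and the network output as the normalised, firing-strength-weighted average of the rule consequents.

```python
# Minimal sketch: an RBF network evaluated in its fuzzy-inference reading.
import numpy as np

def rbf_fuzzy_inference(x, centres, widths, consequents):
    """Evaluate an RBF-NN as a type-1 fuzzy inference engine.

    x           : (d,) input vector
    centres     : (m, d) rule/unit centres
    widths      : (m,) Gaussian spreads (one per rule)
    consequents : (m,) rule consequent weights
    """
    # Rule firing strengths = Gaussian receptive-field activations.
    dist2 = np.sum((centres - x) ** 2, axis=1)
    firing = np.exp(-dist2 / (2.0 * widths ** 2))

    # Normalised firing strengths act as fuzzy-rule weights.
    norm_firing = firing / (np.sum(firing) + 1e-12)

    # Defuzzified output: weighted average of the rule consequents.
    return float(np.dot(norm_firing, consequents))

# Toy usage with two rules in a 1-D input space.
centres = np.array([[0.0], [1.0]])
widths = np.array([0.5, 0.5])
consequents = np.array([10.0, 20.0])
print(rbf_fuzzy_inference(np.array([0.25]), centres, widths, consequents))
```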

    Unsupervised tracking of time-evolving data streams and an application to short-term urban traffic flow forecasting

    I am indebted to many people for the help and support I received during my Ph.D. study and research at DIBRIS-University of Genoa. First and foremost, I would like to express my sincere thanks to my supervisors Prof. Dr. Masulli and Prof. Dr. Rovetta for their invaluable guidance, the frequent meetings and discussions, and their encouragement and support along my research path. I thank all the members of DIBRIS for their support and kindness during my four-year Ph.D. I would also like to acknowledge the contribution of the projects Piattaforma per la mobilità Urbana con Gestione delle INformazioni da sorgenti eterogenee (PLUG-IN) and COST Action IC1406 High Performance Modelling and Simulation for Big Data Applications (cHiPSet). Last and most importantly, I wish to thank my family: my wife Shaimaa, who stays with me through the joys and pains; my daughter and son, who give me happiness every day; and my parents, for their constant love and encouragement

    Clustering of nonstationary data streams: a survey of fuzzy partitional methods

    Data streams have arisen as a relevant research topic during the past decade. They are real-time, incremental in nature, temporally ordered, massive, contain outliers, and the objects in a data stream may evolve over time (concept drift). Clustering is often one of the earliest and most important steps in the streaming data analysis workflow. A comprehensive literature is available on data stream clustering; however, less attention has been devoted to the fuzzy clustering approach, even though the nonstationary nature of many data streams makes it especially appealing. This survey discusses relevant data stream clustering algorithms, focusing mainly on fuzzy methods, including their treatment of outliers and of concept drift and shift. Funding: Ministero dell'Istruzione, dell'Università e della Ricerca
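
    As a concrete point of reference for the fuzzy partitional methods the survey covers, the following sketch shows the textbook fuzzy c-means membership and centroid updates applied to one chunk of a stream (an illustrative toy example, not any specific surveyed algorithm); streaming variants typically recompute such memberships per chunk and update the centroids incrementally so the partition can follow concept drift.

```python
# Minimal sketch: fuzzy c-means updates on one chunk of a data stream.
import numpy as np

def fcm_memberships(X, centroids, m=2.0):
    """Fuzzy c-means memberships u_ik for points X against given centroids."""
    # Pairwise squared distances, shape (n_points, n_clusters).
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1) + 1e-12
    # u_ik = 1 / sum_j (d_ik / d_ij)^(2 / (m - 1))
    ratio = (d2[:, :, None] / d2[:, None, :]) ** (1.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

def update_centroids(X, U, m=2.0):
    """Membership-weighted centroid update."""
    W = U ** m
    return (W.T @ X) / W.sum(axis=0)[:, None]

# Toy chunk of a stream containing two groups.
rng = np.random.default_rng(2)
chunk = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
C = np.array([[0.5, 0.5], [2.5, 2.5]])
for _ in range(10):                     # a few alternating updates per chunk
    U = fcm_memberships(chunk, C)
    C = update_centroids(chunk, U)
print(C.round(2))
```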

    Solving trajectory optimization problems in the presence of probabilistic constraints

    The objective of this paper is to present an approximation-based strategy for solving the problem of nonlinear trajectory optimization in the presence of probabilistic constraints. The proposed method defines a smooth and differentiable function to replace the probabilistic constraints with deterministic ones, thereby converting the chance-constrained trajectory optimization model into a parametric nonlinear programming model. In addition, it is proved that the approximation function and the corresponding approximation set converge to those of the original problem. Furthermore, the optimal solution of the approximated model is shown to converge to the optimal solution of the original problem. Numerical results, obtained from a new chance-constrained space vehicle trajectory optimization model and a 3-D unmanned vehicle trajectory smoothing problem, verify the feasibility and effectiveness of the proposed approach. Comparative studies were also carried out to show that the proposed design yields good performance and outperforms the other typical chance-constrained optimization techniques investigated in this paper
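
    The general idea of smoothing a chance constraint into a deterministic, differentiable one can be sketched as follows (an illustrative sample-average construction under assumed notation, not the paper's exact formulation): the indicator of constraint satisfaction is replaced by a sigmoid, and the resulting smooth estimate of the satisfaction probability is required to exceed 1 - eps.

```python
# Minimal sketch: smooth surrogate for P[g(x, xi) <= 0] >= 1 - eps.
import numpy as np

def smooth_chance_constraint(x, g, xi_samples, eps=0.05, tau=20.0):
    """Return the smoothed chance constraint in c(x) >= 0 form.

    g          : callable g(x, xi) -> constraint value (feasible if <= 0)
    xi_samples : samples of the uncertain parameter xi
    eps        : allowed violation probability
    tau        : sharpness of the sigmoid smoothing (larger = closer to 0/1)
    """
    vals = np.array([g(x, xi) for xi in xi_samples])
    # Smooth indicator of feasibility: sigmoid(-tau * g) ~ 1{g <= 0}.
    smooth_ind = 1.0 / (1.0 + np.exp(tau * vals))
    prob_feasible = smooth_ind.mean()
    return prob_feasible - (1.0 - eps)   # >= 0 means the constraint is met

# Toy usage: g(x, xi) = xi - x must hold with 95% probability, xi ~ N(0, 1).
rng = np.random.default_rng(0)
xi = rng.normal(size=2000)
print(smooth_chance_constraint(1.9, lambda x, s: s - x, xi))  # > 0: satisfied
```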

    State of AI-based monitoring in smart manufacturing and introduction to focused section

    Over the past few decades, intelligentization, supported by artificial intelligence (AI) technologies, has become an important trend in industrial manufacturing, accelerating the development of smart manufacturing. In modern industries, standard AI has been endowed with additional attributes, yielding the so-called industrial artificial intelligence (IAI) that has become the technical core of smart manufacturing. AI-powered manufacturing brings remarkable improvements to many aspects of closed-loop production chains, from manufacturing processes to end-product logistics. In particular, IAI incorporating domain knowledge has benefited the area of production monitoring considerably. Advanced AI methods such as deep neural networks, adversarial training, and transfer learning have been widely used to support both diagnostics and predictive maintenance across the entire production process. It is generally believed that IAI provides the critical technologies needed to drive the future evolution of industrial manufacturing. This article offers a comprehensive overview of AI-powered manufacturing and its applications in monitoring. More specifically, it summarizes the key technologies of IAI and discusses their typical application scenarios with respect to three major aspects of production monitoring: fault diagnosis, remaining useful life prediction, and quality inspection. In addition, the existing problems and future research directions of IAI are discussed. This article further introduces the papers in this focused section on AI-based monitoring in smart manufacturing by weaving them into the overview, highlighting how they contribute to and extend the body of literature in this area

    Pharmaceutical development and manufacturing in a Quality by Design perspective: methodologies for design space description

    In the last decade, the pharmaceutical industry has been experiencing a period of drastic change in the way new products and processes are conceived, due to the introduction of the Quality by Design (QbD) initiative put forth by the pharmaceutical regulatory agencies (such as the Food and Drug Administration (FDA) and the European Medicines Agency (EMA)). One of the most important concepts introduced in the QbD framework is the design space (DS) of a pharmaceutical product, defined as “the multidimensional combination and interaction of input variables (e.g. material attributes) and process parameters that have been demonstrated to provide assurance of quality”. The identification of the DS represents a key advantage for pharmaceutical companies, since once the DS has been approved by the regulatory agency, movements within the DS do not constitute a manufacturing change and therefore do not require any further regulatory post-approval. This translates into enhanced flexibility during process operation, with significant advantages in terms of productivity and process economics. Mathematical modeling, both first-principles and data-driven, has proven to be a valuable tool to assist a DS identification exercise. The development of advanced mathematical techniques for the determination and maintenance of a design space, as well as the quantification of the uncertainty associated with its identification, is a research area that has gained increasing attention in recent years. The objective of this Dissertation is to develop novel methodologies to (i) assist the determination of the design space of a new pharmaceutical product, (ii) quantify the assurance of quality for a new pharmaceutical product as advocated by the regulatory agencies, (iii) adapt and maintain a design space during plant operation, and (iv) design optimal experiments for the calibration of first-principles mathematical models to be used for design space identification. With respect to design space determination, a methodology is proposed that combines surrogate-based feasibility analysis and latent-variable modeling for the identification of the design space of a new pharmaceutical product. Projection onto latent structures (PLS) is exploited to obtain a latent representation of the space identified by the model inputs (i.e. raw material properties and process parameters), and surrogate-based feasibility analysis is then used to reconstruct the boundary of the DS on this latent representation, with a significant reduction of the overall computational burden. The final result is a compact representation of the DS that can be easily expressed in terms of the original, physically relevant input variables (process parameters and raw material properties) and can therefore be easily interpreted by industrial practitioners. As regards the quantification of “assurance” of quality, two novel methodologies are proposed to account for the two most common sources of model uncertainty (structural and parametric) in the model-based identification of the DS of a new pharmaceutical product. The first methodology is specifically suited to the quantification of assurance of quality when a PLS model is to be used for DS identification. Two frequentist analytical models are proposed to back-propagate the uncertainty from the quality attributes of the final product to the space identified by the set of raw material properties and process parameters of the manufacturing process.
    It is shown how these models can be used to identify a subset of input combinations (i.e., raw material properties and process parameters) within which the DS is expected to lie with a given degree of confidence. It is also shown how this reduced space of input combinations (called the experiment space) can be used to tailor an experimental campaign for the final assessment of the DS, with a significant reduction of the experimental effort with respect to a non-tailored experimental campaign. The validity of the proposed methodology is tested on granulation and roll compaction processes, involving both simulated and experimental data. The second methodology proposes a joint Bayesian/latent-variable approach, in which the assurance of quality is quantified in terms of the probability that the final product will meet its specifications. In this context, the DS is defined in a probabilistic framework as the set of input combinations that guarantee that the probability of the product meeting its quality specifications is greater than a predefined threshold value. Bayesian multivariate linear regression is coupled with latent-variable modeling in order to obtain a computationally friendly implementation of this probabilistic DS. Specifically, PLS is exploited to reduce the computational burden of discretizing the input domain and to give a compact representation of the DS, while Bayesian multivariate linear regression is used to compute the probability that the product will meet the desired quality at each of the discretization points of the input domain. The ability of the methodology to give a scientifically driven representation of the probabilistic DS is demonstrated with three case studies involving literature experimental data on pharmaceutical unit operations. With respect to the maintenance of a design space, a methodology is proposed to adapt a model-based representation of a design space in real time during plant operation in the presence of process-model mismatch. Based on the availability of a first-principles model (FPM) or semi-empirical model of the manufacturing process, together with measurements from plant sensors, the methodology jointly exploits (i) a dynamic state estimator and (ii) feasibility analysis to perform a risk-based online maintenance of the DS. The state estimator is deployed to obtain an up-to-date FPM by adjusting a small subset of the model parameters in real time. Feasibility analysis and surrogate-based feasibility analysis are used to update the DS in real time by exploiting the up-to-date FPM returned by the state estimator. The effectiveness of the methodology is shown with two simulated case studies, namely the roll compaction of microcrystalline cellulose and penicillin fermentation in a pilot-scale bioreactor. As regards the design of optimal experiments for the calibration of mathematical models for DS identification, a model-based design of experiments (MBDoE) approach is presented for an industrial freeze-drying process. A preliminary analysis is performed to choose the most suitable process model among different model alternatives and to test the structural consistency of the chosen model. A new experiment is then designed on the basis of this model using MBDoE techniques, in order to increase the precision of the estimates of the most influential model parameters. The results of the MBDoE activity are then tested both in silico and on the real equipment
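
    A minimal sketch of the latent-variable flavour of probabilistic design-space screening described above (illustrative only: the toy data, the plain PLS regression, and the Gaussian residual margin below stand in for the dissertation's Bayesian multivariate regression): candidate input combinations are retained when the estimated probability of the quality attribute falling inside its specification band exceeds a threshold.

```python
# Minimal sketch: PLS-based screening of a probabilistic design space.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from scipy.stats import norm

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(80, 4))                          # toy inputs
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.05 * rng.normal(size=80)   # toy quality

pls = PLSRegression(n_components=2).fit(X, y)
resid_sd = np.std(y - pls.predict(X).ravel())      # residual spread

def prob_in_spec(x_new, lo=0.2, hi=0.8):
    """P(quality within [lo, hi]) under a Gaussian predictive assumption."""
    mu = float(pls.predict(x_new.reshape(1, -1)).ravel()[0])
    return norm.cdf(hi, mu, resid_sd) - norm.cdf(lo, mu, resid_sd)

# Keep candidate input combinations whose probability exceeds a threshold.
candidates = rng.uniform(0.0, 1.0, size=(500, 4))
design_space = candidates[[prob_in_spec(c) >= 0.9 for c in candidates]]
print(len(design_space), "of 500 candidates retained")
```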

    Iterative learning control of crystallisation systems

    Under increasing pressure from issues such as reducing time to market, managing lower production costs, and improving operational flexibility, the batch process industries strive towards the production of high-value-added commodities, i.e. specialty chemicals, pharmaceuticals, agricultural, and biotechnology-enabled products. For better design, consistent operation and improved control of batch chemical processes, one cannot ignore the sensing and computational capabilities provided by modern sensors, computers, algorithms, and software. In addition, there is a growing demand for modelling and control tools based on process operating data. This study focuses on developing process-operating-data-based iterative learning control (ILC) strategies for batch processes, more specifically for batch crystallisation systems. The research first explored the existing control strategies, fundamentals, mechanisms, and the various process analytical technology (PAT) tools used in batch crystallisation control. Building on this background, an operating-data-driven ILC approach was developed to improve product quality from batch to batch. The concept of ILC is to exploit the repetitive nature of batch processes to automate recipe updating using process knowledge obtained from previous runs. The methodology presented here is based on a linear time-varying (LTV) perturbation model in an ILC framework, providing convergent batch-to-batch improvement of the process performance indicator. In addition, a novel hierarchical ILC (HILC) scheme is proposed for the systematic design of the supersaturation control (SSC) of a seeded batch cooling crystalliser. This model-free control approach is implemented in a hierarchical structure, with a data-driven supersaturation controller at the upper level and a simple temperature controller at the lower level. The study also revisited the existing direct nucleation control (DNC) approach, carrying out a detailed investigation of the different possible DNC structures and, for the first time, comparing the results with those of a first-principles model-based optimisation. The DNC results in fact outperformed the model-based optimisation approach and established a guideline for selecting the preferred DNC structure. Batch chemical processes are distributed as well as nonlinear in nature, need to be operated over a wide range of operating conditions, and often run near the boundary of the admissible region. As linear lumped model predictive controllers (MPCs) are often subject to severe performance limitations, there is a growing demand for simple data-driven nonlinear control strategies for batch crystallisers that consider the spatio-temporal aspects. In this study, an operating-data-driven polynomial chaos expansion (PCE) based nonlinear surrogate modelling and optimisation strategy is presented for batch crystallisation processes. Model validation and optimisation results confirm the promise of this approach for nonlinear control. The proposed data-based methodologies were evaluated through simulation case studies, laboratory experiments and industrial pilot-plant experiments.
    For the simulation case studies, detailed mathematical models covering reaction kinetics and heat and mass balances were developed for a batch cooling crystallisation system of Paracetamol in water. Based on these models, rigorous simulation programs were developed in MATLAB®, which were then treated as the real batch cooling crystallisation system. The laboratory experimental work was carried out using a lab-scale system of Paracetamol and isopropyl alcohol (IPA). All the experimental work, including the qualitative and quantitative monitoring of the crystallisation experiments and products, demonstrated the comprehensive application of various in situ process analytical technology (PAT) tools, such as focused beam reflectance measurement (FBRM), UV/Vis spectroscopy and particle vision measurement (PVM). The industrial pilot-scale study was carried out at GlaxoSmithKline Bangladesh Limited, Bangladesh, with a system consisting of Paracetamol and the other powdered excipients used to make paracetamol tablets. The methodologies presented in this thesis provide a comprehensive framework for data-based dynamic optimisation and control of crystallisation processes. All the simulation and experimental evaluations of the proposed approaches emphasise the potential of data-driven techniques to provide considerable advances over the current state of the art in crystallisation control
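
    The batch-to-batch recipe-updating idea behind ILC can be illustrated with a minimal sketch (a generic P-type update on a toy static plant, under assumed notation; the thesis itself uses an LTV perturbation model rather than the simple learning gain shown here): the input profile for the next batch is the previous profile corrected by the previous batch's tracking error scaled by a learning gain.

```python
# Minimal sketch: P-type batch-to-batch iterative learning control.
import numpy as np

def ilc_run(plant, u, y_ref, n_batches=10, learning_gain=0.5):
    """Apply the update u_{k+1} = u_k + L * (y_ref - y_k) over several batches."""
    for _ in range(n_batches):
        y = plant(u)                       # run (or simulate) one batch
        e = y_ref - y                      # batch tracking error
        u = u + learning_gain * e          # recipe update for the next batch
    return u, e

# Toy "plant": a static gain with a bias, tracked over a 50-point profile.
y_ref = np.linspace(0.0, 1.0, 50)
u0 = np.zeros(50)
u_final, e_final = ilc_run(lambda u: 0.8 * u + 0.1, u0, y_ref)
print("max |error| after learning:", np.abs(e_final).max())
```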

    A review of optimization techniques in spacecraft flight trajectory design

    For most atmospheric or exo-atmospheric spacecraft flight scenarios, a well-designed trajectory is usually key to stable flight and to improved guidance and control of the vehicle. Although extensive research has been carried out on the design of spacecraft trajectories for different mission profiles, and many effective tools have been successfully developed for optimizing the flight path, it is only in the past five years that there has been growing interest in planning flight trajectories with consideration of multiple mission objectives and various model errors/uncertainties. It is worth noting that in many practical spacecraft guidance, navigation and control systems, multiple performance indices and different types of uncertainties must frequently be considered during the path planning phase. These requirements have driven the development of multi-objective spacecraft trajectory optimization methods as well as stochastic spacecraft trajectory optimization algorithms. This paper aims to broadly review the state-of-the-art developments in numerical multi-objective trajectory optimization algorithms and stochastic trajectory planning techniques for spacecraft flight operations. A brief description of the mathematical formulation of the problem is first introduced. Following that, various optimization methods that can be effective for solving spacecraft trajectory planning problems are reviewed, including gradient-based methods, convexification-based methods, and evolutionary/metaheuristic methods. The multi-objective spacecraft trajectory optimization formulation, together with the different classes of multi-objective optimization algorithms, is then overviewed. Key features, such as the advantages and disadvantages of these recently developed multi-objective techniques, are summarised. Moreover, attention is given to extending the original deterministic problem to a stochastic version, and some robust optimization strategies are outlined to deal with the stochastic trajectory planning formulation. In addition, a special focus is given to recent applications of the optimized trajectories. Finally, some conclusions are drawn and future research on the development of multi-objective and stochastic trajectory optimization techniques is discussed