
    Computational Intelligence Methodologies for Soft Sensors Development in Industrial Processes

    Doctoral thesis in Electrical and Computer Engineering, specialization in Automation and Robotics, presented to the Department of Electrical and Computer Engineering of the Faculty of Sciences and Technology of the University of Coimbra.
    Data-driven soft sensors are inferential models that use on-line available sensors (e.g. temperature, pressure, flow rate, etc.) to predict quality variables which cannot be automatically measured at all, or can only be measured at high cost, sporadically, or with long delays (e.g. laboratory analysis). Soft sensors are built using historical data of the process, usually provided by the supervisory control and data acquisition (SCADA) system or obtained from laboratory annotations/measurements. In soft sensor development there are many issues to deal with. The main ones are the treatment of missing data, outlier detection, selection of input variables, model training, validation, and soft sensor maintenance. This thesis focuses on three of these issues, namely the selection of input variables, model training, and soft sensor maintenance. Novel methodologies are proposed in each of these areas. The selection of input variables is based on the multilayer perceptron (MLP) neural network (the most popular non-linear regression model in soft sensor applications). The second issue, model training, is addressed in the context of multiple operating modes. Examples of multiple operating modes are the diurnal load variation of a power plant, the summer-winter operation of a refinery, etc.
In this thesis, to train a model in the context of multiple operating modes, partial least squares (PLS) regression, a well-known method in the chemometrics literature and one of the most widely used methods in industry, is inserted into the mixture of experts (ME) framework, thus deriving the mixture of partial least squares (Mix-PLS) regression. The Mix-PLS is able to characterize multiple operating modes. The third problem is related to soft sensor maintenance. In soft sensor maintenance, the model is updated using recent samples of the process. The most common way to do so is by recursive learning of the parameters with exponential forgetting, using the incoming samples of the process. In exponentially recursive learning, a forgetting factor is used to give exponentially less weight to older samples. In many applications, small values of the forgetting factor can lead to better predictive performance. However, the forgetting factor is directly related to the “effective” number of samples, and low values of the forgetting factor can bring the same problems faced when modeling static systems, such as overfitting, poor prediction performance, etc. To solve this problem, a new model, based on a mixture of univariate (thus low-dimensional) linear regression models (MULRM), is proposed, allowing the use of small values of the forgetting factor. All the methods proposed in this thesis are evaluated on soft sensor data sets coming from real-world processes. Each of the proposed methods is compared with the corresponding state-of-the-art methods, thus validating the proposed approaches.
FCT - SFRH/BD/63454/200
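The exponential-forgetting update described in this abstract can be sketched as recursive least squares (RLS); the drifting two-parameter system, the forgetting factor of 0.95, and the abrupt mode change below are illustrative assumptions, not examples from the thesis.

```python
import numpy as np

def rls_update(w, P, x, y, lam=0.95):
    """One recursive least squares step with exponential forgetting.

    lam is the forgetting factor: a sample i steps old is weighted by
    lam**i, so the "effective" number of samples is roughly 1 / (1 - lam).
    """
    x = x.reshape(-1, 1)
    err = y - (w.T @ x).item()                 # prediction error on new sample
    gain = (P @ x) / (lam + (x.T @ P @ x).item())
    w = w + gain * err                         # correct parameters
    P = (P - gain @ (x.T @ P)) / lam           # update inverse covariance
    return w, P

# Track a relation whose parameters change abruptly (an "operating mode" switch)
rng = np.random.default_rng(0)
w, P = np.zeros((2, 1)), np.eye(2) * 1e3
for t in range(400):
    a = 2.0 if t < 200 else -1.0               # slope switches at t = 200
    u = rng.uniform(-1.0, 1.0)
    w, P = rls_update(w, P, np.array([u, 1.0]), a * u + 0.5)
print(np.round(w.ravel(), 2))                  # close to [-1.0, 0.5] after the switch
```

With lam = 0.95 the effective window is about 20 samples, so the estimate recovers quickly after the mode switch; smaller lam adapts faster but, as the abstract notes, risks overfitting the most recent samples.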

    Signal and data processing for machine olfaction and chemical sensing: A review

    Signal and data processing are essential elements in electronic noses as well as in most chemical sensing instruments. The multivariate responses obtained by chemical sensor arrays require signal and data processing to carry out the fundamental tasks of odor identification (classification), concentration estimation (regression), and grouping of similar odors (clustering). In the last decade, important advances have shown that proper processing can improve the robustness of the instruments against diverse perturbations, namely environmental variables, background changes, drift, etc. This article reviews the advances made in recent years in signal and data processing for machine olfaction and chemical sensing.

    Modeling Long- and Short-Term Temporal Patterns with Deep Neural Networks

    Multivariate time series forecasting is an important machine learning problem across many domains, including prediction of solar plant energy output, electricity consumption, and traffic congestion. Temporal data arising in these real-world applications often involve a mixture of long-term and short-term patterns, for which traditional approaches such as autoregressive models and Gaussian processes may fail. In this paper, we propose a novel deep learning framework, namely the Long- and Short-term Time-series network (LSTNet), to address this open challenge. LSTNet uses a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) to extract short-term local dependency patterns among variables and to discover long-term patterns in time series trends. Furthermore, we leverage a traditional autoregressive model to tackle the scale insensitivity problem of the neural network model. In our evaluation on real-world data with complex mixtures of repetitive patterns, LSTNet achieved significant performance improvements over several state-of-the-art baseline methods. All the data and experiment codes are available online.
Comment: Accepted by SIGIR 201
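The role of the autoregressive component, keeping the final prediction sensitive to the input scale, can be shown in isolation with a minimal numpy sketch; the neural part is stubbed out, and the sinusoidal series, lag order, and ordinary-least-squares fit are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

def fit_ar(series, p):
    """Fit AR(p) coefficients by ordinary least squares."""
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    coef, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
    return coef

p = 3
t = np.arange(500, dtype=float)
series = 10.0 * np.sin(0.1 * t)             # repetitive pattern with a clear scale

neural_part = 0.0                           # placeholder for the CNN/RNN output
ar_part = series[-p:] @ fit_ar(series, p)   # linear one-step-ahead forecast
prediction = neural_part + ar_part          # LSTNet-style sum of the two parts
```

Because the linear AR part passes the raw signal amplitude straight through, a rescaled input series rescales the forecast accordingly, which is exactly the behavior a purely neural predictor can lose.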

    On-line anomaly detection with advanced independent component analysis of multi-variate residual signals from causal relation networks.

    Anomaly detection in today's industrial environments is an ambitious challenge: detecting, at an early stage, possible faults/problems which may turn into severe production waste, defects, or damage to system components. Data-driven anomaly detection in multi-sensor networks relies on models which are extracted from multi-sensor measurements and which characterize the anomaly-free reference situation. Significant deviations from these models therefore indicate potential anomalies. In this paper, we propose a new approach based on causal relation networks (CRNs), which represent the inner causes and effects between sensor channels (or sensor nodes) in the form of partial sub-relations, and evaluate its functionality and performance on two distinct production phases within a micro-fluidic chip manufacturing scenario. The partial relations are modeled by non-linear (fuzzy) regression models characterizing the (local) degree of influence of the single causes on the effects. An advanced analysis of the multi-variate residual signals obtained from the partial relations in the CRNs is conducted. It employs independent component analysis (ICA) to characterize hidden structures in the fused residuals through independent components (latent variables) obtained through the demixing matrix. A significant change in the energy content of the latent variables, detected through automated control limits, indicates an anomaly. Suppression of possible noise content in the residuals, to decrease the likelihood of false alarms, is achieved by performing the residual analysis solely on the dominant parts of the demixing matrix.
Our approach could detect anomalies which caused bad-quality chips (with the occurrence of malfunctions) with negligible delay, based on the process data recorded by multiple sensors in two production phases: injection molding and bonding, which are carried out independently, with completely different process parameter settings and on different machines (hence, they can be seen as two distinct use cases). Our approach furthermore (i) produced lower false alarm rates than several related and well-known state-of-the-art methods for (unsupervised) anomaly detection, and (ii) required much lower parametrization effort (in fact, none at all). Both aspects are essential for the usability of an anomaly detection approach.
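The residual-monitoring idea, demixing the residuals and watching latent energies against control limits, can be illustrated with a toy numpy sketch; note that plain PCA whitening stands in here for the full ICA demixing step, and the data, window size, and 3-sigma limit are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

train = rng.normal(0.0, 1.0, size=(1000, 4))    # anomaly-free residual signals
mean = train.mean(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(train - mean, rowvar=False))
W = eigvec / np.sqrt(eigval)                    # whitening ("demixing") matrix

def window_energy(window):
    z = (window - mean) @ W                     # latent components
    return (z ** 2).mean(axis=0)                # per-component energy

# Control limits from the training windows: mean + 3 * std of the energies
train_e = np.array([window_energy(train[i:i + 50]) for i in range(0, 1000, 50)])
limit = train_e.mean(axis=0) + 3.0 * train_e.std(axis=0)

faulty = rng.normal(0.0, 1.0, size=(50, 4))
faulty[:, 2] += 4.0                             # drift in one residual channel
print((window_energy(faulty) > limit).any())    # the drift should trip a limit
```

The mean shift in one residual channel inflates the energy of at least one latent component well above its control limit, which is the alarm condition the abstract describes.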

    Augmenting Adaptation with Retrospective Model Correction for Non-Stationary Regression Problems

    Existing adaptive predictive methods often use multiple adaptive mechanisms as part of their coping strategy in non-stationary environments. We address a scenario where selective deployment of these adaptive mechanisms is possible. In this case, deploying each adaptive mechanism results in a different candidate model, and only one of these candidates is chosen to make predictions on the subsequent data. After observing the error of each candidate, it is possible to revert the current model to the one which had the least error. We call this strategy retrospective model correction. In this work we aim to investigate the benefits of such an approach. As a vehicle for the investigation we use an adaptive ensemble method for regression in batch learning mode which employs several adaptive mechanisms to react to changes in the data. Using real-world data from the process industry we show empirically that retrospective model correction is indeed beneficial for the predictive accuracy, especially for the weaker adaptive mechanisms.
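The retrospective correction loop can be sketched in a few lines of plain Python; the constant-shift candidates below are hypothetical stand-ins for real adaptive mechanisms, not the ensemble used in the paper.

```python
# Minimal sketch of retrospective model correction. Each "mechanism" yields
# a candidate model for a batch; after the batch's true errors are observed,
# the current model is reverted to the candidate with the least error.

def batch_error(model, batch):
    return sum((model(x) - y) ** 2 for x, y in batch)

def make_candidates(current):
    return {
        "keep": current,                          # no adaptation
        "shift_up": lambda x: current(x) + 1.0,   # hypothetical mechanism
        "shift_down": lambda x: current(x) - 1.0, # hypothetical mechanism
    }

model = lambda x: 0.0
history = []
for batch in [[(0, 1.0), (0, 1.0)], [(0, 2.0), (0, 2.0)]]:
    candidates = make_candidates(model)
    errors = {name: batch_error(m, batch) for name, m in candidates.items()}
    best = min(errors, key=errors.get)            # retrospective correction
    model = candidates[best]                      # revert to best candidate
    history.append(best)
print(history)                                    # ['shift_up', 'shift_up']
```

The targets drift upward across the two batches, so the correction step keeps selecting the mechanism that tracked the drift, mirroring the selective-deployment scenario the abstract describes.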

    Soft sensor development for real-time process monitoring of multidimensional fractionation in tubular centrifuges

    High centrifugal acceleration and throughput rates of tubular centrifuges enable the solid–liquid size separation and fractionation of nanoparticles on a bench scale. Nowadays, advantageous product properties are defined by precise specifications regarding particle size and material composition. Hence, there is a demand for innovative and efficient downstream processing of complex particle suspensions. With this type of centrifuge working in a semi-continuous mode, on-line observation of the separation quality is needed for optimization purposes. To analyze the composition of fines downstream of the centrifuge, a UV/vis soft sensor is developed to monitor the sorting of polymer and metal oxide nanoparticles by their size and density. By spectroscopic multi-component analysis, a measured UV/vis signal is translated into a model-based prediction of the relative solids volume fraction of the fines. High signal stability and an adaptive but mandatory calibration routine enable the presented setup to accurately predict the product’s composition at variable operating conditions. It is outlined how this software-based UV/vis sensor can be utilized effectively for challenging real-time process analytics in multi-component suspension processing. The setup provides insight into the underlying process dynamics and assists in optimizing the outcome of separation tasks on the nanoscale.
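The core of spectroscopic multi-component analysis is linear unmixing: under Beer-Lambert additivity, a measured spectrum is approximately a linear mix of the pure-component reference spectra, so fractions can be recovered by least squares. The Gaussian band shapes and fractions below are synthetic stand-ins, not measured UV/vis spectra.

```python
import numpy as np

wavelengths = np.linspace(250.0, 600.0, 100)

def band(center, width):
    """Synthetic Gaussian absorption band (illustrative, not measured)."""
    return np.exp(-((wavelengths - center) ** 2) / (2.0 * width ** 2))

# columns: reference spectra of the two components (polymer, metal oxide)
S = np.column_stack([band(300.0, 30.0), band(450.0, 40.0)])

true_frac = np.array([0.7, 0.3])
measured = S @ true_frac                    # noise-free mixed spectrum

est, *_ = np.linalg.lstsq(S, measured, rcond=None)
est = np.clip(est, 0.0, None)
est /= est.sum()                            # normalize to volume fractions
print(np.round(est, 3))                     # prints [0.7 0.3]
```

With noisy spectra, the same least-squares step still applies; the calibration routine the abstract mentions would then re-measure the reference spectra under the current operating conditions.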

    Seizure prediction : ready for a new era

    Acknowledgements: The authors acknowledge colleagues in the international seizure prediction group for valuable discussions. L.K. acknowledges funding support from the National Health and Medical Research Council (APP1130468) and the James S. McDonnell Foundation (220020419) and acknowledges the contribution of Dean R. Freestone at the University of Melbourne, Australia, to the creation of Fig. 3.
    Peer reviewed. Postprint.

    Fukunaga-Koontz feature transformation for statistical structural damage detection and hierarchical neuro-fuzzy damage localisation

    Piotr Omenzetter and Simon Hoell’s work on this paper within the Lloyd’s Register Foundation Centre for Safety and Reliability Engineering at the University of Aberdeen was supported by Lloyd’s Register Foundation. The Foundation helps to protect life and property by supporting engineering-related education, public engagement and the application of research.
    Peer reviewed. Postprint.

    Multiple Adaptive Mechanisms for Data-driven Soft Sensors.

    Recent data-driven soft sensors often use multiple adaptive mechanisms to cope with non-stationary environments. These mechanisms are usually deployed in a prescribed order which does not change. In this work we use real-world data from the process industry to compare deploying adaptive mechanisms in a fixed manner with deploying them in a flexible way, which results in varying adaptation sequences. We demonstrate that flexible deployment of the available adaptive methods, coupled with techniques such as cross-validatory selection and retrospective model correction, can benefit the predictive accuracy over time. As a vehicle for this study, we use a soft sensor for batch processes based on an adaptive ensemble method which employs several adaptive mechanisms to react to changes in the data.

    A Review of Meta-level Learning in the Context of Multi-component, Multi-level Evolving Prediction Systems.

    The exponential growth of the volume, variety and velocity of data is raising the need to investigate automated or semi-automated ways to extract useful patterns from data. Finding the most appropriate mapping of learning methods to a given problem requires deep expert knowledge and extensive computational resources, and it becomes a challenge in the presence of numerous configurations of learning algorithms on massive amounts of data. There is therefore a need for an intelligent recommendation engine that can advise on the best learning algorithm for a dataset. The techniques commonly used by experts are based on a trial-and-error approach: evaluating and comparing a number of possible solutions against each other, using prior experience in a specific domain, etc. The trial-and-error approach combined with the expert’s prior knowledge, though computationally expensive and time-consuming, has often been shown to work for stationary problems, where the processing is usually performed off-line. However, this approach would not normally be feasible for non-stationary problems, where streams of data are continuously arriving. Furthermore, in a non-stationary environment the manual analysis of data and testing of various methods every time the underlying data distribution changes would be very difficult or simply infeasible. In that scenario, and within an on-line predictive system, there are several tasks where Meta-learning can be used to effectively facilitate the best recommendations, including: 1) pre-processing steps, 2) learning algorithms or their combination, 3) adaptivity mechanisms and their parameters, 4) recurring concept extraction, and 5) concept drift detection. However, while conceptually very attractive and promising, Meta-learning leads to several challenges, with the appropriate representation of the problem at a meta-level being one of the key ones.
The goal of this review and our research is, therefore, to investigate Meta-learning in general and the associated challenges in the context of automating the building, deployment and adaptation of multi-level and multi-component predictive systems that evolve over time.