    Selection of sensors by a new methodology coupling a classification technique and entropy criteria

    Operators of complex industrial processes invest heavily in sensors and automation devices to monitor and supervise the process, in order to guarantee product quality and the safety of the plant and its operators. Fault detection is one of the many tasks of process monitoring, and it depends critically on the sensors that measure the significant process variables. Nevertheless, most work on fault detection and diagnosis in the literature emphasizes developing procedures to perform diagnosis given a set of sensors, rather than determining the actual location of sensors for efficient identification of faults. A methodology based on learning and classification techniques and on the quantity of information measured by entropy is proposed to address the problem of sensor location for fault identification. The proposed methodology has been applied to a continuous intensified reactor, the "Open Plate Reactor (OPR)", developed by Alfa Laval and studied at the Laboratory of Chemical Engineering of Toulouse. The different steps of the methodology are explained through its application to the execution of an exothermic reaction.
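    The ranking idea at the heart of such a methodology can be sketched with Shannon entropy: prefer sensors whose readings, once discretised, most reduce uncertainty about the fault class. The following is a minimal illustration on invented data, not the authors' algorithm; the sensor names, discretisation, and fault labels are all hypothetical.

```python
import numpy as np

def shannon_entropy(labels):
    """Shannon entropy (bits) of a discrete label sequence."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(readings, faults, bins=5):
    """Reduction in fault-label entropy after observing a discretised sensor."""
    h_prior = shannon_entropy(faults)
    binned = np.digitize(readings, np.histogram_bin_edges(readings, bins=bins))
    h_post = 0.0
    for b in np.unique(binned):
        mask = binned == b
        h_post += mask.mean() * shannon_entropy(faults[mask])
    return h_prior - h_post

# Hypothetical data: 200 operating snapshots, 3 candidate sensors, 4 fault classes.
rng = np.random.default_rng(0)
faults = rng.integers(0, 4, size=200)
sensors = {
    "T_outlet": faults * 5.0 + rng.normal(0, 1.0, 200),  # informative
    "P_feed":   faults * 0.5 + rng.normal(0, 2.0, 200),  # weakly informative
    "level":    rng.normal(0, 1.0, 200),                 # uninformative
}
ranking = sorted(sensors, key=lambda s: information_gain(sensors[s], faults), reverse=True)
print("Sensor ranking by information gain:", ranking)
```

    On these synthetic data the ranking should recover T_outlet first, which is the behaviour such a methodology exploits when deciding where sensors pay off.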

    Principles of creating a prototype digital twin of the benzene-propylene alkylation process based on a neural network

    Objectives. To identify the principles of creating digital twins of an operating technological unit, using the example of the liquid-phase alkylation of benzene with propylene, and to establish the sequence of stages in forming a digital twin that can be applied to optimize oil and gas chemical production. Methods. The chemical-technological system, consisting of a reactor, mixer, heat exchangers, separator, rectification columns, and pump, is considered as a complex high-level system. Data were acquired to describe the functioning of the isopropylbenzene production unit. The main parameters of the process were calculated by simulation modeling using UniSim® Design software. A neural network model was developed and trained. The influence of various factors of the alkylation reaction process, the separation of reaction products, and the evaluation of the economic factors that make the industrial process attractive to the market was also considered. The adequacy of the calculations was verified by statistical methods. A microcontroller prototype of the process was created. Results. A predictive neural network model, and an algorithm for its creation, was developed for the benzene alkylation process. This model can be loaded onto a microcontroller to allow real-time determination of the economic efficiency of plant operation and automated optimization depending on the following factors: the composition of incoming raw materials; the technological mode of the plant; the temperature mode of the process; and the pressure in the reactor. Conclusions. The model of a complex chemico-technological system for cumene production, created and calibrated on the basis of long-term industrial data and calculated output parameters, enables the parameters of the alkylation process (yield of reaction products, energy costs, notional profit from finished products) to be calculated. During the development of a hardware-software prototype adapted to the operation of the real plant, the principles and stages of creating a digital twin of operating chemical-technology production systems were identified and formulated.
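    In outline, a digital-twin surrogate of this kind is a regression network trained on simulator runs. The sketch below is a generic stand-in, not the published model: scikit-learn's MLPRegressor substitutes for the authors' network, and the variables (feed propylene fraction, temperature, and pressure mapping to a yield figure) are purely illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical training data standing in for UniSim Design simulation runs:
# inputs  = [propylene fraction in feed, reactor temperature (C), pressure (bar)]
# target  = cumene yield (synthetic response, for illustration only)
rng = np.random.default_rng(1)
X = np.column_stack([
    rng.uniform(0.2, 0.5, 500),   # feed propylene fraction
    rng.uniform(150, 250, 500),   # reactor temperature
    rng.uniform(20, 35, 500),     # reactor pressure
])
y = 0.9 * X[:, 0] + 0.002 * X[:, 1] - 0.004 * X[:, 2] + rng.normal(0, 0.01, 500)

surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=1),
)
surrogate.fit(X, y)

# The trained weights could then be exported to a microcontroller for
# real-time yield prediction from live plant measurements.
print(surrogate.predict(np.array([[0.35, 200.0, 28.0]])))
```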

    Mathematical programming for piecewise linear regression analysis

    In data mining, regression analysis is a computational tool that predicts continuous output variables from a number of independent input variables by approximating their complex inner relationship. A large number of methods have been proposed successfully, based on various methodologies, including linear regression, support vector regression, neural networks, and piecewise regression. In terms of piecewise regression, existing methods in the literature are usually restricted to problems of very small scale, due to their inherent non-linear nature. In this work, a more efficient piecewise linear regression method is introduced, based on a novel integer linear programming formulation. The proposed method partitions one input variable into multiple mutually exclusive segments and fits one multivariate linear regression function per segment to minimise the total absolute error. Assuming both the single partition feature and the number of regions are known, a mixed integer linear model is proposed to simultaneously determine the locations of multiple break-points and the regression coefficients for each segment. Furthermore, an efficient heuristic procedure is presented to identify the key partition feature and the final number of break-points. Seven real-world problems covering several application domains have been used to demonstrate the efficiency of the proposed method. It is shown that the proposed piecewise regression method can be solved to global optimality for datasets of thousands of samples, and it consistently achieves higher prediction accuracy than a number of state-of-the-art regression methods. Another advantage of the proposed method is that the learned model can be conveniently expressed as a small number of easily interpretable if-then rules. Overall, this work proposes an efficient rule-based multivariate regression method based on piecewise functions that achieves better prediction performance than state-of-the-art approaches. This novel method can benefit expert systems in various applications by automatically acquiring knowledge from databases to improve the quality of the knowledge base.
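    The core of such a formulation, binary variables assigning samples to segments, big-M linearisation of the absolute error, and a free break-point location, can be shown on a toy problem. This is a simplified sketch (one input feature, two segments, solved with PuLP/CBC), not the paper's full model or heuristic.

```python
import numpy as np
import pulp

# Hypothetical 1-D data with a kink at x = 5: two linear regimes.
rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 60)
y = np.where(x < 5, 2 * x, 10 + 0.5 * (x - 5)) + rng.normal(0, 0.2, 60)

M = 1e3  # big-M constant (safe bound on any single deviation here)
prob = pulp.LpProblem("piecewise_LAD", pulp.LpMinimize)
b = pulp.LpVariable("breakpoint", 0, 10)
a = [pulp.LpVariable(f"slope_{s}") for s in range(2)]
c = [pulp.LpVariable(f"intercept_{s}") for s in range(2)]
e = [pulp.LpVariable(f"err_{i}", lowBound=0) for i in range(len(x))]
z = [pulp.LpVariable(f"z_{i}", cat="Binary") for i in range(len(x))]  # 1 -> left segment

prob += pulp.lpSum(e)  # minimise total absolute error
for i in range(len(x)):
    xi, yi = float(x[i]), float(y[i])
    for s, active in ((0, z[i]), (1, 1 - z[i])):
        pred = a[s] * xi + c[s]
        # the |error| constraints only bind when the sample sits in segment s
        prob += e[i] >= (yi - pred) - M * (1 - active)
        prob += e[i] >= (pred - yi) - M * (1 - active)
    # a sample lies left of the break-point exactly when z[i] == 1
    prob += b + M * (1 - z[i]) >= xi
    prob += b - M * z[i] <= xi

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("estimated break-point:", b.value())
```

    The paper's heuristic for choosing the partition feature and the number of break-points would wrap a search around a model of this shape.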

    Novel analysis and modelling methodologies applied to pultrusion and other processes

    Often a manufacturing process may be a bottleneck or critical to a business. This thesis focuses on the analysis and modelling of such processes, both to understand them better and to support the enhancement of the quality or output capability of the process. The main thrusts of this thesis are: to model inter-process physics, inter-relationships, and complex processes in a manner that enables re-exploitation, re-interpretation, and reuse of this knowledge and its generic elements, e.g. using Object Oriented (OO) and Qualitative Modelling (QM) techniques, which involves the development of superior process models to capture process complexity and reuse any generic elements; to demonstrate advanced modelling and simulation techniques (e.g. Artificial Neural Networks (ANN), Rule-Based Systems (RBS), and statistical modelling) on a number of complex manufacturing case studies; and to gain a better understanding of the physics and process inter-relationships exhibited in a number of complex manufacturing processes (e.g. pultrusion, bioprocessing, and logistics) using analysis and modelling. To these ends, both a novel Object Oriented Qualitative (Problem) Analysis (OOQA) methodology and a novel Artificial Neural Network Process Modelling (ANNPM) methodology were developed and applied to a number of complex manufacturing case studies: thermoset and thermoplastic pultrusion, a bioprocess reactor, and a logistics supply chain. It has been shown that these methodologies and the models developed support the capture of complex process inter-relationships, enable reuse of generic elements, support effective variable selection for ANN models, and perform well as predictors of process properties. In particular, the ANN pultrusion models, using laboratory data from IKV, Aachen and Pera, Melton Mowbray, predicted product properties very well.
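    One step the thesis credits the methodology with, variable selection for ANN models, can be illustrated generically: screen candidate inputs by their correlation with the target before training. This is a stand-in screening step on invented pultrusion-style data, not the OOQA procedure itself.

```python
import numpy as np

def screen_inputs(X, y, names, top_k=3):
    """Rank candidate process variables by |Pearson r| with the target and
    keep the strongest as ANN inputs (a generic screening step, not OOQA)."""
    r = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    order = np.argsort(r)[::-1][:top_k]
    return [names[j] for j in order]

# Hypothetical pultrusion variables: pull speed, die temperature, fibre fraction, humidity.
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 4))
y = 1.5 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 0.1, 100)  # property driven by temp & fibre
print(screen_inputs(X, y, ["pull_speed", "die_temp", "fibre_frac", "humidity"]))
```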

    Neural network applications in polymerization processes

    Neural networks currently play a major role in the modeling, control, and optimization of polymerization processes and in polymer resin development. This paper is a brief tutorial on simple and practical procedures that can help in selecting and training neural networks, and it addresses complex cases where the application of neural networks has been successful in the field of polymerization. Funding: Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP); Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq).

    An application of artificial intelligence and neural networks to in-core fuel management

    This research demonstrates the feasibility of using neural backpropagation networks to perform neutronic calculations in a pressurized water reactor. The LEOPARD (Lifetime Evaluating Operations Pertinent to the Analysis of Reactor Design) code is used to generate data for training four different models to relate the infinite multiplication factor, K-INF, of a fuel assembly at the end of a burnup step to the assembly's local parameters. The RPM (Reload Power Mapping) code is used to generate training and testing data for three different models to relate the relative power distribution of fuel assemblies to the infinite multiplication factor of each assembly. Testing the LEOPARD models has shown that it is not possible to use a general fuel assembly network to relate K-INF to the assembly domain parameters; rather, a different network should be designed for each assembly type. Of the RPM models tested, the patterned network has produced the most accurate predictions of relative power distribution. An expert system is also designed using OPS5 to assist in the determination of core reload patterns. A computer code is written using Microsoft Excel to provide an interface between the operator and the neural network code, to construct an interaction between RPM and the user, and to provide a manual fuel-shuffling capability through a graphical interface.
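    The workhorse here, a single-hidden-layer network trained by backpropagation, is small enough to reproduce in miniature. The sketch below regresses a scalar standing in for K-INF on a few invented assembly parameters; it is not the LEOPARD or RPM setup, and the update is a plain squared-error gradient rule.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical inputs: enrichment, burnup, moderator temperature (normalised).
X = rng.normal(size=(200, 3))
y = (1.05 + 0.1 * X[:, 0] - 0.05 * X[:, 1]).reshape(-1, 1)  # stand-in for K-INF

W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)   # hidden layer
    out = h @ W2 + b2          # linear output
    err = out - y
    # Backpropagate the squared-error gradient through both layers.
    dW2 = h.T @ err / len(X); db2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh / len(X); db1 = dh.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("final MSE:", float((err ** 2).mean()))
```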

    Applying machine learning algorithms in estimating the performance of heterogeneous, multi-component materials as oxygen carriers for chemical-looping processes

    Heterogeneous, multi-component materials such as industrial tailings or by-products, along with naturally occurring materials such as ores, have been intensively investigated as candidate oxygen carriers for chemical-looping processes. However, these materials have highly variable compositions, and this strongly influences their chemical-looping performance. Here, using machine learning techniques, we estimate the performance of heterogeneous, multi-component materials as oxygen carriers for chemical looping. Experimental data for 19 manganese ores chosen as potential chemical-looping oxygen carriers were used to create a training database. This database was used to train several supervised artificial neural network (ANN) models, which were used to predict the reactivity of the oxygen carriers with different fuels and their oxygen transfer capacity, requiring only knowledge of the reactor bed temperature and the elemental composition and mechanical properties of the manganese ores. This novel approach explores ways of handling the training dataset, the learning algorithms, and the topology of the ANN models to achieve enhanced prediction precision. Stacked neural networks with a bootstrap resampling technique have been applied to achieve high precision and robustness on new input data, and confidence intervals were used to assess the precision of these predictions. The current results indicate that the best trained ANNs can produce highly accurate predictions for both the training database and unseen data, with a high coefficient of determination (R2 = 0.94) and low mean absolute error (MAE = 0.057). We envision that the application of these ANNs and other machine learning algorithms will accelerate the development of oxygen-carrying materials for a range of chemical-looping applications and offer a rapid screening tool for new potential oxygen carriers.
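    The stacking-with-bootstrap idea can be sketched generically: train many networks on bootstrap resamples and read a confidence band off the spread of their predictions. scikit-learn MLPs stand in for the paper's networks, and the ore features and reactivity target below are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
# Hypothetical ore features: bed temperature, Mn content, crushing strength.
X = rng.normal(size=(150, 3))
y = 0.5 + 0.3 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.05, 150)  # stand-in reactivity

ensemble = []
for seed in range(20):                     # 20 bootstrap resamples
    idx = rng.integers(0, len(X), len(X))  # sample rows with replacement
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=seed)
    ensemble.append(net.fit(X[idx], y[idx]))

X_new = rng.normal(size=(5, 3))
preds = np.stack([net.predict(X_new) for net in ensemble])
mean = preds.mean(axis=0)
lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)  # 95% bootstrap interval
for m, a, b in zip(mean, lo, hi):
    print(f"prediction {m:.3f}  (95% CI {a:.3f} to {b:.3f})")
```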

    Machine learning and its applications in reliability analysis systems

    In this thesis, we explore some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss possible applications of ML in improving RA performance, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both neural network learning and symbolic learning. In symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning by analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e., failure mode and effect analysis (FMEA); and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches to creating RAs are discussed. According to the results of our survey, we suggest that currently the best design for RAs is to embed model-based RAs, i.e., MORA (as software), in a neural-network-based computer system (as hardware). However, there are still improvements that can be made through the application of Machine Learning. By implanting a 'learning element', MORA becomes the learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude the thesis, we propose an architecture for La MORA.

    Development of a Data-Driven Soft Sensor for Multivariate Chemical Processes Using Concordance Correlation Coefficient Subsets Integrated with Parallel Inverse-Free Extreme Learning Machine

    Nonlinearity, complexity, and technological limitations make measurement troublesome in multivariate chemical processes. To deal with these problems, a soft sensor based on concordance correlation coefficient subsets integrated with a parallel inverse-free extreme learning machine (CCCS-PIFELM) is proposed for multivariate chemical processes. In comparison to the feed-forward single-hidden-layer neural network architecture of a traditional extreme learning machine (ELM), the CCCS-PIFELM approach has two notable features. First, two subsets are obtained through the concordance correlation coefficient (CCC) values between input and output variables, so the impact of input variables on output variables can be assessed. Second, an inverse-free algorithm is used to reduce the computational load. In the evaluation of prediction performance, the Tennessee Eastman (TE) benchmark process is employed as a case study to develop the CCCS-PIFELM approach for predicting product compositions. According to the simulation results, the proposed CCCS-PIFELM approach obtains higher prediction accuracy than traditional approaches.
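    Two of the ingredients can be sketched directly: Lin's concordance correlation coefficient for choosing an input subset, and a basic ELM with random hidden weights and least-squares output weights. The sketch uses an ordinary pseudoinverse rather than the paper's inverse-free update, and all data and dimensions are invented.

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two series."""
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 10))                          # candidate process measurements
y = X[:, 2] + 0.5 * X[:, 7] + rng.normal(0, 0.1, 300)   # stand-in product composition

# Keep the inputs most concordant with the output.
scores = np.array([abs(ccc(X[:, j], y)) for j in range(X.shape[1])])
subset = np.argsort(scores)[::-1][:4]

# Basic ELM: random hidden layer, output weights by least squares (pinv).
Xs = X[:, subset]
W = rng.normal(size=(Xs.shape[1], 50)); b = rng.normal(size=50)
H = np.tanh(Xs @ W + b)
beta = np.linalg.pinv(H) @ y

y_hat = np.tanh(Xs @ W + b) @ beta
print("training RMSE:", float(np.sqrt(((y_hat - y) ** 2).mean())))
```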

    Monitoring the waste to energy plant using the latest AI methods and tools

    Solid wastes, for instance municipal and industrial wastes, present great environmental concerns and challenges all over the world. This has led to the development of innovative waste-to-energy process technologies capable of handling different waste materials in a more sustainable and energy-efficient manner. However, as in many other complex industrial process operations, waste-to-energy plants require sophisticated process monitoring systems in order to realize very high overall plant efficiencies. Conventional data-driven statistical methods, which include principal component analysis, partial least squares, multivariable linear regression, and so forth, are normally applied in process monitoring. Recently, however, the latest artificial intelligence (AI) methods, in particular deep learning algorithms, have demonstrated remarkable performance in several important areas such as machine vision, natural language processing, and pattern recognition. The new AI algorithms have gained increasing attention in industrial process applications, for instance in areas such as predictive product quality control and machine health monitoring. Moreover, the availability of big-data processing tools and cloud computing technologies further supports the use of deep-learning-based algorithms for process monitoring. In this work, a process monitoring scheme based on state-of-the-art artificial intelligence methods and cloud computing platforms is proposed for a waste-to-energy industrial use case. The monitoring scheme supports the use of the latest AI methods, leveraging big-data processing tools and taking advantage of available cloud computing platforms. Deep learning algorithms are able to describe non-linear, dynamic, and high-dimensionality systems better than most conventional data-based process monitoring methods. Moreover, deep-learning-based methods are well suited to big-data analytics, unlike traditional statistical machine learning methods, which are less efficient. Furthermore, the proposed monitoring scheme emphasizes real-time process monitoring in addition to offline data analysis. To achieve this, the scheme proposes the use of big-data analytics software frameworks and tools such as Microsoft Azure Stream Analytics, Apache Storm, Apache Spark, Hadoop, and many others. The availability of open-source as well as proprietary cloud computing platforms, AI tools, and big-data software all supports the realization of the proposed monitoring scheme.
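    One deep-learning monitoring pattern such a scheme could build on is autoencoder-based anomaly detection: train on normal operating data and flag samples whose reconstruction error exceeds a threshold. A minimal Keras sketch follows, with invented sensor data; it is not the plant's actual pipeline.

```python
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(7)
normal = rng.normal(0, 1, size=(1000, 12))  # 12 hypothetical plant sensors, normal operation

# Small autoencoder: compress 12 inputs to 3 latent units, then reconstruct.
model = keras.Sequential([
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(3, activation="relu"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(12),
])
model.compile(optimizer="adam", loss="mse")
model.fit(normal, normal, epochs=20, batch_size=32, verbose=0)

# Alarm threshold = 99th percentile of reconstruction error on normal data.
err = ((model.predict(normal, verbose=0) - normal) ** 2).mean(axis=1)
threshold = np.quantile(err, 0.99)

faulty = normal[:5] + rng.normal(0, 3, size=(5, 12))  # injected disturbance
new_err = ((model.predict(faulty, verbose=0) - faulty) ** 2).mean(axis=1)
print("alarms:", new_err > threshold)
```

    In a streaming deployment, the same reconstruction-error check would run inside the chosen framework (e.g. a Spark or Azure Stream Analytics job) rather than as a batch script.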