
    Process Data Analytics Using Deep Learning Techniques

    In chemical manufacturing plants, numerous types of data are accessible, including process operational data (historical or real-time), process design and product quality data, and economic and environmental data (covering process safety, waste emission, and health impact). Effective knowledge extraction from raw data has always been a very challenging task, especially when the data needed for a given study are huge. Other characteristics of process data, such as noise, dynamics, and highly correlated process parameters, make this even more challenging. In this study, we introduce an attention-based RNN for multi-step-ahead prediction that can have applications in model predictive control, fault diagnosis, etc. This model consists of an RNN that encodes a sequence of input time series data into a new representation (called the context vector) and another RNN that decodes the representation into the output target sequence. An attention model integrated into the encoder-decoder RNN allows the network to focus on the parts of the input sequence that are relevant to predicting the target sequence. The attention model is jointly trained with all other components of the model. By having a deep architecture, the model can learn a very complex dynamic system, and it is robust to noise. To show the effectiveness of the proposed approach, we perform a comparative study on the problem of catalyst activity prediction against conventional machine learning techniques such as Support Vector Regression (SVR)
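
    The encoder-decoder attention mechanism described above can be sketched in a few lines of NumPy. Everything here is hypothetical (random, untrained weights and made-up dimensions); the point is only the mechanics: encoder states are scored against a decoder state, softmaxed into weights, and summed into a context vector.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    # Toy encoder: a simple (untrained) RNN mapping each input step to a hidden state.
    def encode(seq, Wx, Wh):
        h = np.zeros(Wh.shape[0])
        states = []
        for x in seq:
            h = np.tanh(Wx @ x + Wh @ h)
            states.append(h)
        return np.stack(states)          # shape (T, hidden)

    # Dot-product attention: score each encoder state against the decoder state,
    # then form the context vector as the softmax-weighted sum of encoder states.
    def attend(dec_h, enc_states):
        scores = enc_states @ dec_h      # (T,)
        weights = softmax(scores)
        context = weights @ enc_states   # (hidden,)
        return context, weights

    T, d_in, d_h = 8, 3, 4               # hypothetical sequence length and sizes
    seq = rng.normal(size=(T, d_in))
    Wx = rng.normal(size=(d_h, d_in)) * 0.5
    Wh = rng.normal(size=(d_h, d_h)) * 0.5

    enc_states = encode(seq, Wx, Wh)
    dec_h = rng.normal(size=d_h)         # stand-in for one decoder hidden state
    context, weights = attend(dec_h, enc_states)
    print(weights.round(3))              # attention weights over the input steps
    ```

    In the full model, `context` would be fed to the decoder RNN at each output step, and the scoring function would be trained jointly with both RNNs rather than fixed.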

    Estimating soot emission in diesel engines using gated recurrent unit networks

    In this paper, a new data-driven model of diesel engine soot emission formation using gated recurrent unit (GRU) networks is proposed. Unlike traditional time series prediction methods such as the nonlinear autoregressive with exogenous input (NARX) approach, the GRU structure does not require determining the pure time delay between the inputs and the output, and the number of regressors does not have to be chosen beforehand. The gates in a GRU network enable it to capture such dependencies on past input values without any prior knowledge. As a design of experiment, 30 different points in the engine speed - injected fuel quantity plane are determined, and the remaining input channels, i.e., rail pressure, main start of injection, equivalence ratio, and intake oxygen concentration, are excited with chirp signals in the intended regions of operation. Experimental results show that the prediction performance of the GRU-based soot models is quite satisfactory, with 77% training and 57% validation fit accuracies and normalized root mean square error (NRMSE) values of less than 0.038 and 0.069, respectively. The GRU soot models surpass the traditional NARX-based soot models in both steady-state and transient cycles
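
    A single GRU step makes the point about gates concrete: the update and reset gates decide how much past state to keep, so no fixed time delay or regressor count has to be chosen in advance. The sketch below is illustrative only, with untrained random weights and made-up channel counts rather than the paper's trained engine model.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # One GRU cell step: z (update gate) blends old and candidate state,
    # r (reset gate) controls how much history enters the candidate.
    def gru_step(x, h, W, U, b):
        z = sigmoid(W["z"] @ x + U["z"] @ h + b["z"])
        r = sigmoid(W["r"] @ x + U["r"] @ h + b["r"])
        h_tilde = np.tanh(W["h"] @ x + U["h"] @ (r * h) + b["h"])
        return (1 - z) * h + z * h_tilde

    rng = np.random.default_rng(1)
    d_in, d_h = 5, 8   # e.g. 5 input channels (speed, fuel quantity, rail pressure, ...)
    W = {k: rng.normal(scale=0.3, size=(d_h, d_in)) for k in "zrh"}
    U = {k: rng.normal(scale=0.3, size=(d_h, d_h)) for k in "zrh"}
    b = {k: np.zeros(d_h) for k in "zrh"}

    h = np.zeros(d_h)
    for x in rng.normal(size=(30, d_in)):   # 30 synthetic time steps
        h = gru_step(x, h, W, U, b)

    soot = rng.normal(size=d_h) @ h          # linear readout to a soot estimate
    print(h.shape, float(soot))
    ```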

    Spatial-temporal prediction of air quality based on recurrent neural networks

    To predict air quality (e.g., PM2.5 concentrations), many parametric regression models have been developed, while deep learning algorithms are used less often, and few of them take air pollution emissions or spatial information into consideration or predict at an hourly scale. In this paper, we propose a spatial-temporal GRU-based prediction framework incorporating ground pollution monitoring (GPM), factory emissions (FE), and surface meteorology monitoring (SMM) variables to predict hourly PM2.5 concentrations. The dataset for the empirical experiments was built from air quality monitoring in Shenyang, China. Experimental results indicate that our method enables more accurate predictions than all baseline models, and that applying convolutional processing to the GPM and FE variables yields a notable improvement in prediction accuracy
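
    As a toy illustration of the convolution-before-recurrence idea, the sketch below summarizes the spatial GPM and FE fields with a 3x3 mean filter before assembling an hourly feature sequence that a GRU could consume. The data, grid size, and feature layout are all synthetic assumptions, not the Shenyang dataset.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical shapes: 24 hourly steps, a 5x5 grid of monitoring sites.
    hours, grid = 24, (5, 5)
    gpm = rng.normal(size=(hours, *grid))   # ground pollution monitoring (e.g. PM2.5)
    fe  = rng.normal(size=(hours, *grid))   # factory emissions
    smm = rng.normal(size=(hours, 4))       # surface meteorology (wind, temp, ...)

    # 3x3 mean filter as a stand-in for the convolutional processing of the
    # spatial variables: each site is summarized together with its neighbours.
    def conv3x3_mean(field):
        padded = np.pad(field, 1, mode="edge")
        out = np.zeros_like(field)
        for i in range(field.shape[0]):
            for j in range(field.shape[1]):
                out[i, j] = padded[i:i + 3, j:j + 3].mean()
        return out

    # Build one feature vector per hour: convolved spatial summaries + meteorology.
    features = np.array([
        np.concatenate([[conv3x3_mean(gpm[t]).mean(),
                         conv3x3_mean(fe[t]).mean()], smm[t]])
        for t in range(hours)
    ])
    print(features.shape)   # one row per hour, ready to feed a GRU over time
    ```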

    A Comparison Between Recurrent Neural Networks and Classical Machine Learning Approaches in Laser-Induced Breakdown Spectroscopy

    Recurrent Neural Networks are classes of Artificial Neural Networks in which the connections between nodes form a directed or undirected graph, enabling temporal dynamical analysis. In this research, the laser-induced breakdown spectroscopy (LIBS) technique is used for quantitative analysis of aluminum alloys with different Recurrent Neural Network (RNN) architectures. The fundamental harmonic (1064 nm) of a nanosecond Nd:YAG laser pulse is employed to generate the LIBS plasma for predicting the constituent concentrations of the aluminum standard samples. Here, Recurrent Neural Networks based on different cells, such as Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Simple Recurrent Neural Network (Simple RNN), as well as Recurrent Convolutional Networks comprising Conv-SimpleRNN, Conv-LSTM, and Conv-GRU, are utilized for concentration prediction. A comparison is then performed against predictions by the classical machine learning methods of Support Vector Regression (SVR), the Multi-Layer Perceptron (MLP), the Decision Tree algorithm, Gradient Boosting Regression (GBR), Random Forest Regression (RFR), Linear Regression, and the k-Nearest Neighbor (KNN) algorithm. Results showed that the machine learning tools based on Convolutional Recurrent Networks had the best prediction efficiencies for most of the elements among the multivariate methods
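
    A minimal comparison harness of the kind this abstract describes might look as follows. The data are synthetic stand-ins (random features with a sparse linear target, not LIBS spectra), and only two of the listed baselines, ordinary least squares and k-nearest neighbors, are shown; the same RMSE scoring would apply to the RNN variants.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic stand-in for LIBS data: 200 "spectra" (20 channels) whose element
    # concentration is a noisy linear function of a few peak intensities.
    X = rng.normal(size=(200, 20))
    true_w = np.zeros(20)
    true_w[[3, 7, 12]] = [1.5, -0.8, 0.6]
    y = X @ true_w + 0.1 * rng.normal(size=200)
    X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

    def rmse(pred, target):
        return float(np.sqrt(np.mean((pred - target) ** 2)))

    # Baseline 1: ordinary least squares (closed form).
    w = np.linalg.lstsq(X_tr, y_tr, rcond=None)[0]
    ols_rmse = rmse(X_te @ w, y_te)

    # Baseline 2: k-nearest-neighbour regression (k=5).
    def knn_predict(x, k=5):
        d = np.linalg.norm(X_tr - x, axis=1)
        return y_tr[np.argsort(d)[:k]].mean()

    knn_rmse = rmse(np.array([knn_predict(x) for x in X_te]), y_te)
    print(f"OLS RMSE={ols_rmse:.3f}  kNN RMSE={knn_rmse:.3f}")
    ```

    On this linear synthetic task the closed-form fit wins easily; the point of the harness is only that every model is scored on the same held-out split.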

    Carbon Nanotube Gas Sensor Using Neural Networks

    The need to identify the presence and quantify the concentrations of gases and vapors is ubiquitous in NASA missions and societal applications. Sensors for air quality monitoring in crew cabins and on the ISS have been actively under development (Ref. 1). In particular, measuring the concentrations of CO2 and NH3 is important because high concentrations of these gases pose a risk to ISS crew health. Detection of fuel and oxidant leaks in crew vehicles is critical for ensuring mission safety. Accurate gas and vapor concentrations can be measured, but this typically requires bulky and expensive instrumentation. Recently, inexpensive sensors with low power demands have been fabricated for use on the International Space Station (ISS). Carbon Nanotube (CNT) based chemical sensors are one such type. CNT sensors meet the requirements of low cost and ease of fabrication for deployment on the ISS. However, converting the measured signal from the sensors into human-readable indicators of atmospheric air quality and safety is challenging, because it is difficult to develop an analytical model that maps the CNT sensor output signal to gas concentration. Training a neural network on CNT sensor data to predict gas concentration is more effective than developing an analytic approach to calculate the concentration from the same data set. With this in mind, a neural network was created to tackle the challenge of converting the measured signal into CO2 and NH3 concentration values
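
    A toy version of this signal-to-concentration mapping might look as follows. The sensor response curve, network size, and learning rate are all assumptions for illustration; a hand-rolled one-hidden-layer network is trained by gradient descent to invert a synthetic nonlinear response.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic stand-in for sensor data: signal vs normalized gas concentration
    # with a mildly nonlinear response (the real mapping has no easy analytic model).
    conc = np.linspace(0.0, 1.0, 64)[:, None]
    signal = np.tanh(3 * conc) + 0.02 * rng.normal(size=conc.shape)

    # One-hidden-layer network: signal in, concentration estimate out.
    W1 = rng.normal(scale=0.5, size=(8, 1)); b1 = np.zeros((8, 1))
    W2 = rng.normal(scale=0.5, size=(1, 8)); b2 = np.zeros((1, 1))
    x, t = signal.T, conc.T                             # shapes (1, 64)

    def forward(x):
        h = np.tanh(W1 @ x + b1)
        return W2 @ h + b2, h

    lr = 0.05
    loss0 = float(np.mean((forward(x)[0] - t) ** 2))    # error before training
    for _ in range(2000):                               # plain gradient descent
        y, h = forward(x)
        g = 2 * (y - t) / x.shape[1]                    # dLoss/dy
        W2 -= lr * g @ h.T;  b2 -= lr * g.sum(1, keepdims=True)
        gh = (W2.T @ g) * (1 - h ** 2)                  # backprop through tanh
        W1 -= lr * gh @ x.T; b1 -= lr * gh.sum(1, keepdims=True)
    loss1 = float(np.mean((forward(x)[0] - t) ** 2))    # error after training
    print(f"mse before={loss0:.4f} after={loss1:.4f}")
    ```

    The real system would train on measured CNT responses for CO2 and NH3 with held-out validation data rather than fitting a synthetic curve.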

    An Optimal Stacked Ensemble Deep Learning Model for Predicting Time-Series Data Using a Genetic Algorithm—An Application for Aerosol Particle Number Concentrations

    Time-series prediction is an important area that inspires numerous research disciplines for various applications, including air quality databases. Developing a robust and accurate model for time-series data is a challenging task because it involves training different models and optimization. In this paper, we proposed and tested three machine learning techniques: recurrent neural networks (RNN), a heuristic algorithm, and ensemble learning. The RNN included three variants (Long Short-Term Memory, Gated Recurrent Unit, and Bi-directional Recurrent Neural Network) with various configurations, used to develop a predictive model for estimating atmospheric particle number concentrations in the form of a time-series database. A Genetic Algorithm (GA) was then used to find the optimal time-lag in order to enhance the model's performance. The optimized models were used to construct a stacked ensemble model as well as to perform the final prediction. The results demonstrated that the time-lag value can be optimized with the heuristic algorithm, which consequently improved the model's prediction accuracy. Further improvement can be achieved through ensemble learning, which combines several models for better performance and more accurate predictions
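
    The time-lag search can be illustrated with a toy genetic algorithm. The series, the fitness model (a lagged least-squares predictor standing in for the RNN), and the GA settings below are all illustrative assumptions; the series is built so that a lag of about 6 is needed for a good fit.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic series: an AR process driven by lag 6, so a lagged regression
    # model only fits well once its window covers at least 6 past values.
    n = 500
    y = np.zeros(n)
    for t in range(6, n):
        y[t] = 0.9 * y[t - 6] + rng.normal(scale=0.3)

    def fitness(lag):
        """Validation RMSE of a least-squares model on `lag` lagged inputs."""
        X = np.stack([y[i:n - lag + i] for i in range(lag)], axis=1)
        tgt = y[lag:]
        cut = int(0.8 * len(tgt))
        w = np.linalg.lstsq(X[:cut], tgt[:cut], rcond=None)[0]
        return float(np.sqrt(np.mean((X[cut:] @ w - tgt[cut:]) ** 2)))

    # Tiny genetic algorithm over the integer time-lag (1..20):
    # keep the best half of the population, refill with mutated copies.
    pop = rng.integers(1, 21, size=8)
    for _ in range(15):
        order = np.argsort([fitness(l) for l in pop])
        parents = pop[order[:4]]
        children = np.clip(parents + rng.integers(-2, 3, size=4), 1, 20)
        pop = np.concatenate([parents, children])

    best = min(pop, key=fitness)
    print("best lag:", int(best))
    ```

    In the paper's setting the fitness evaluation would be the validation error of a trained RNN variant, and the surviving models would then be stacked into the ensemble.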