    Computationally Inexpensive 1D-CNN for the Prediction of Noisy Data of NOx Emissions From 500 MW Coal-Fired Power Plant

    Coal-fired power plants are used to meet energy requirements in countries with abundant coal reserves and are a key source of NOx emissions. Owing to the serious environmental and health concerns associated with NOx emissions, much work has been carried out to reduce them. Sophisticated artificial intelligence (AI) techniques such as least-squares support vector machine (LSSVM), artificial neural networks (ANN), long short-term memory (LSTM), and gated recurrent unit (GRU) have been employed during the past few decades to develop NOx prediction models. Several studies have investigated deep neural network (DNN) models for accurate NOx emission prediction. However, there is a need for a DNN-based NOx prediction model that is both accurate and computationally inexpensive. Recently, the convolutional neural network (CNN) has been introduced and proven superior in image classification accuracy. To the best of the authors' knowledge, little work has been done on applying CNNs to NOx emissions from coal-fired power plants. Therefore, this study investigated the prediction performance and computational time of a one-dimensional CNN (1D-CNN) on NOx emissions data from a 500 MW coal-fired power plant. Variations of the hyperparameters of LSTM, GRU, and 1D-CNN were investigated, and performance metrics such as RMSE and computational time were recorded to obtain optimal hyperparameters. The optimal hyperparameter values were then used to develop the models, which were subsequently tested on test data. The 1D-CNN NOx emission model improved the training efficiency in terms of RMSE by 70.6% and 60.1% compared to LSTM and GRU, respectively. Furthermore, the testing efficiency for 1D-CNN improved by 10.2% and 15.7% compared to LSTM and GRU, respectively. Moreover, 1D-CNN (26 s) reduced the training time by 83.8% and 50% compared to LSTM (160 s) and GRU (52 s), respectively. The results reveal that 1D-CNN is more accurate, more stable, and less computationally expensive than LSTM and GRU on NOx emission data from the 500 MW power plant.
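
    As a concrete illustration, the sketch below builds a small 1D-CNN regressor of the kind described above. The look-back window, feature count, and layer sizes are illustrative assumptions rather than values reported in the paper.

```python
# Hypothetical 1D-CNN sketch for NOx regression; window length, feature count,
# and layer widths are illustrative assumptions, not values from the paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 30      # assumed look-back window of past samples
N_FEATURES = 10  # assumed number of plant operating variables

def build_1d_cnn(window=WINDOW, n_features=N_FEATURES):
    model = models.Sequential([
        layers.Input(shape=(window, n_features)),
        layers.Conv1D(filters=32, kernel_size=3, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(filters=64, kernel_size=3, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(32, activation="relu"),
        layers.Dense(1),  # predicted NOx concentration (regression output)
    ])
    model.compile(optimizer="adam", loss="mse",
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])
    return model

# Toy usage with random data standing in for plant measurements.
X = np.random.rand(256, WINDOW, N_FEATURES).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model = build_1d_cnn()
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```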

    An ensemble deep learning model for exhaust emissions prediction of heavy oil-fired boiler combustion

    Accurate and reliable prediction of exhaust emissions is crucial for combustion optimization control and environmental protection. This study proposes a novel ensemble deep learning model for exhaust emissions (NOx and CO2) prediction. In this ensemble learning model, a stacked denoising autoencoder is established to extract the deep features of flame images. Four forecasting engines, namely an artificial neural network, an extreme learning machine, a support vector machine and a least squares support vector machine, are employed for preliminary prediction of NOx and CO2 emissions based on the extracted image deep features. These preliminary predictions are then combined by Gaussian process regression in a nonlinear manner to achieve a final prediction of the emissions. The effectiveness of the proposed ensemble deep learning model is evaluated on 4.2 MW heavy oil-fired boiler flame images. Experimental results suggest that the predictions obtained from the four forecasting engines are inconsistent; however, accurate predictions are achieved through the proposed model. The proposed ensemble deep learning model not only provides accurate point predictions but also generates satisfactory confidence intervals.
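
    The stacking scheme described above can be sketched roughly as follows. Flame-image deep features are replaced by random data, the extreme learning machine is omitted for brevity, and kernel ridge regression stands in for the LSSVM; these substitutions and all model settings are assumptions for illustration only.

```python
# Rough stacking sketch: base learners make preliminary predictions that a
# Gaussian process regressor combines nonlinearly. KernelRidge stands in for
# LSSVM; features and targets are random placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.kernel_ridge import KernelRidge
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 16))                                         # stand-in for image deep features
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=400)  # stand-in for NOx

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

base_models = [
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),  # ANN
    SVR(C=10.0),                                                            # SVM
    KernelRidge(alpha=0.1, kernel="rbf"),                                   # LSSVM-like stand-in
]

# First level: preliminary predictions from each forecasting engine
# (in-sample here for brevity; out-of-fold predictions would be safer).
prelim_tr = np.column_stack([m.fit(X_tr, y_tr).predict(X_tr) for m in base_models])
prelim_te = np.column_stack([m.predict(X_te) for m in base_models])

# Second level: GPR combines the preliminary predictions and also returns a
# predictive standard deviation, from which a confidence interval follows.
gpr = GaussianProcessRegressor(normalize_y=True).fit(prelim_tr, y_tr)
mean, std = gpr.predict(prelim_te, return_std=True)
print("first test point: mean %.3f, 95%% interval half-width %.3f" % (mean[0], 1.96 * std[0]))
```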

    Prediction of combustion state through a semi-supervised learning model and flame imaging

    Accurate prediction of the combustion state is crucial for an in-depth understanding of furnace performance and for optimizing operating conditions. Traditional data-driven approaches such as artificial neural networks and support vector machines rely on distinct features that require prior knowledge for feature extraction and suffer from poor generalization to unseen combustion states. Therefore, it is necessary to develop an advanced and accurate prediction model to resolve these limitations. This study presents a novel semi-supervised learning model integrating a denoising autoencoder (DAE), a generative adversarial network (GAN) and a Gaussian process classifier (GPC). The DAE network is established to extract representative features of flame images, and the network is trained through the adversarial learning mechanism of the GAN. The structural similarity (SSIM) metric is introduced as a novel loss function to improve the feature learning ability of the DAE network. The extracted features are then fed into the GPC to predict the seen and unseen combustion states. The effectiveness of the proposed semi-supervised learning model, i.e., DAE-GAN-GPC, was evaluated through 4.2 MW heavy oil-fired boiler furnace flame images captured under different combustion states. An averaged prediction accuracy of 99.83% was achieved for the seen combustion states. The new (unseen) states were predicted accurately through the proposed model by fine-tuning the GPC without retraining the DAE-GAN, and an averaged prediction accuracy of 98.36% was achieved for the unseen states. A comparative study was also carried out with other deep neural networks and classifiers. The results suggest that the proposed model provides better prediction accuracy and robustness than other traditional prediction models.
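
    One element that lends itself to a short sketch is the SSIM reconstruction loss for the denoising autoencoder. The image size, layer widths, and noise level below are illustrative assumptions, and the adversarial (GAN) training stage is omitted.

```python
# Minimal sketch: a convolutional denoising autoencoder trained with an
# SSIM-based reconstruction loss (1 - SSIM). Flame images are replaced by
# random arrays; all sizes are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

IMG = 64  # assumed image height/width

def ssim_loss(y_true, y_pred):
    # tf.image.ssim gives similarity in [0, 1]; minimizing 1 - SSIM pushes
    # reconstructions toward structural similarity with the clean image.
    return 1.0 - tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=1.0))

def build_dae():
    inp = layers.Input(shape=(IMG, IMG, 1))
    x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(8, 3, strides=2, padding="same", activation="relu")(x)    # deep features
    x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss=ssim_loss)
    return model

# Toy usage: noisy inputs, clean targets.
clean = np.random.rand(64, IMG, IMG, 1).astype("float32")
noisy = np.clip(clean + 0.1 * np.random.randn(*clean.shape), 0.0, 1.0).astype("float32")
build_dae().fit(noisy, clean, epochs=1, batch_size=16, verbose=0)
```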

    Dynamic energy system modeling using hybrid physics-based and machine learning encoder–decoder models

    Three model configurations are presented for multi-step time series predictions of the heat absorbed by the water and steam in a thermal power plant. The models predict over horizons of 2, 4, and 6 steps into the future, where each step is a 5-minute increment. The evaluated models are a pure machine learning model, a novel hybrid machine learning and physics-based model, and the hybrid model with an incomplete dataset. The hybrid model deconstructs the machine learning model into individual boiler heat absorption units: economizer, water wall, superheater, and reheater. Each configuration uses a gated recurrent unit (GRU) or a GRU-based encoder–decoder as the deep learning architecture. Mean squared error against target values is used to evaluate the models. The encoder–decoder architecture is over 11% more accurate than the GRU-only models. The hybrid model with the incomplete dataset highlights the importance of the manipulated variables to the system. The hybrid model, compared to the pure machine learning model, is over 10% more accurate on average over 20 iterations of each model. Automatic differentiation is applied to the hybrid model to perform a local sensitivity analysis and identify which of the 72 manipulated variables have the greatest impact on the heat absorbed in the boiler. The models and sensitivity analyses are used in a discussion about optimizing the thermal power plant.
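
    A compact sketch of the GRU-based encoder–decoder used as the deep learning architecture is given below; the look-back window, hidden size, and the choice of four outputs (one per boiler heat absorption unit) are illustrative assumptions.

```python
# Sketch of a GRU encoder-decoder for multi-step prediction of boiler heat
# absorption. The 6-step horizon and 72 manipulated variables follow the
# description above; the other sizes are assumptions.
import numpy as np
from tensorflow.keras import layers, models

N_PAST, N_FUTURE = 24, 6       # look-back steps and forecast horizon (5-min steps)
N_INPUTS, N_OUTPUTS = 72, 4    # manipulated variables; one output per boiler unit

def build_encoder_decoder():
    inp = layers.Input(shape=(N_PAST, N_INPUTS))
    state = layers.GRU(64)(inp)                    # encoder: history -> fixed-length state
    x = layers.RepeatVector(N_FUTURE)(state)       # unroll the state over the horizon
    x = layers.GRU(64, return_sequences=True)(x)   # decoder
    out = layers.TimeDistributed(layers.Dense(N_OUTPUTS))(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

# Toy usage with random stand-ins for plant data.
X = np.random.rand(128, N_PAST, N_INPUTS).astype("float32")
Y = np.random.rand(128, N_FUTURE, N_OUTPUTS).astype("float32")
build_encoder_decoder().fit(X, Y, epochs=1, verbose=0)
```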

    Nature-Inspired Topology Optimization of Recurrent Neural Networks

    Hand-crafting effective and efficient structures for recurrent neural networks (RNNs) is a difficult, expensive, and time-consuming process. To address this challenge, this work presents three nature-inspired (NI) algorithms for neural architecture search (NAS), introducing the subfield of nature-inspired neural architecture search (NI-NAS). These algorithms, based on ant colony optimization (ACO), progress from memory cell structure optimization, to bounded discrete-space architecture optimization, and finally to unbounded continuous-space architecture optimization. These methods were applied to real-world data sets representing challenging engineering problems, such as data from a coal-fired power plant, wind-turbine power generators, and aircraft flight data recorder (FDR) data. Initial work utilized ACO to select optimal connections inside recurrent long short-term memory (LSTM) cell structures. Viewing each LSTM cell as a graph, ants would choose potential input and output connections based on the pheromones previously laid down over those connections, as done in a standard ACO search. However, this approach did not optimize the overall network of the RNN, particularly its synaptic parameters. I addressed this issue by introducing the Ant-based Neural Topology Search (ANTS) algorithm to directly optimize the entire RNN topology. ANTS utilizes a discrete-space superstructure representing a completely connected RNN in which each node is connected to every other node, forming an extremely dense mesh of edges and recurrent edges, and can select from a library of modern RNN memory cells. In this thesis, ACO agents (ants) build RNNs from the superstructure as determined by pheromones laid out on the superstructure's connections. Backpropagation is then used to train the generated RNNs in an asynchronous parallel computing design to accelerate the optimization process. The pheromone update depends on the evaluation of each tested RNN against a population of best-performing RNNs. Several variations of the core algorithm were investigated to test several designed heuristics for ANTS and evaluate their efficacy in forming sparser synaptic connectivity patterns. This was done primarily by formulating different functions to drive the underlying pheromone simulation process, as well as by introducing ant agents with three specialized roles (inspired by real-world ants) to construct the RNN structure; this characterization enables ants to focus on specific structure-building roles. "Communal intelligence" was also incorporated, where the best set of weights found across locally-trained RNN candidates was used for weight initialization, reducing the number of backpropagation epochs required to train each candidate RNN and speeding up the overall search process. However, the superstructure grows by an order of magnitude as more inputs and deeper structures are utilized, which proved to be one limitation of the proposed procedure. This limitation of ANTS motivated the development of the continuous ANTS algorithm (CANTS), which works with a continuous search space rather than a fixed network topology. In this process, ants move within a (temporally arranged) set of continuous, real-valued planes based on the proximity and density of pheromone placements. The motion of the ants over these continuous planes, in a sense, more closely mimics how actual ants move in the real world. Ants traverse a 3-dimensional space from the inputs to the outputs and across time lags.
This continuous search space frees the ant agents from the limitations imposed by ANTS' discrete, massively connected superstructure, making the structural options unbounded when mapping the movements of ants through the 3D continuous space to a neural architecture graph. In addition, CANTS has fewer hyperparameters to tune than ANTS, which had five potential heuristic components, each with its own unique set of hyperparameters, and also required the user to define the maximum recurrent depth, the number of layers, and the number of nodes within each layer; CANTS only requires specifying the number of ants and their pheromone sensing radius. The three applied strategies yielded three important successes. Applying ACO to optimizing LSTM cell structures yielded a 1.34% performance enhancement and more than 55% sparser structures (which is useful for speeding up inference). ANTS outperformed the NAS benchmark, NEAT, and the state-of-the-art NAS algorithm, EXAMM. CANTS showed results competitive with EXAMM and with ANTS while offering sparser structures, providing a promising path forward for optimizing (temporal) neural models with nature-inspired metaheuristics based on the metaphor of ants.
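
    To make the core search mechanic concrete, the toy sketch below shows an ACO-style loop over a dense superstructure: ants sample edges with probability proportional to pheromone, and pheromone is evaporated and then reinforced along the best-performing topology. The graph size, evaporation rate, and fitness function are placeholders, not values from the thesis.

```python
# Toy ACO loop in the spirit of ANTS-style topology search; "fitness" is a
# random placeholder for training and evaluating the sampled RNN.
import numpy as np

rng = np.random.default_rng(0)
N_NODES = 8                                   # nodes in the dense superstructure
pheromone = np.ones((N_NODES, N_NODES))       # pheromone per directed edge
EVAPORATION, DEPOSIT = 0.1, 1.0

def sample_topology(n_edges):
    """An ant samples a set of edges with probability proportional to pheromone."""
    probs = pheromone / pheromone.sum()
    idx = rng.choice(pheromone.size, size=n_edges, replace=False, p=probs.ravel())
    mask = np.zeros(pheromone.size, dtype=bool)
    mask[idx] = True
    return mask.reshape(pheromone.shape)

def fitness(topology):
    """Placeholder for training the sampled RNN and returning validation error."""
    return rng.random() + 0.01 * topology.sum()   # mildly favors sparser graphs here

for generation in range(20):
    ants = [sample_topology(int(rng.integers(5, 20))) for _ in range(5)]
    errors = [fitness(t) for t in ants]
    best = ants[int(np.argmin(errors))]
    pheromone *= (1.0 - EVAPORATION)              # evaporate everywhere
    pheromone[best] += DEPOSIT                    # reinforce the best topology's edges
```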

    An Intelligent Monitoring Interface for a Coal-Fired Power Plant Boiler Trips

    A power plant monitoring system embedded with artificial intelligence can enhance its effectiveness by reducing the time spent on trip analysis and follow-up procedures. Experimental results showed that a multilayer perceptron neural network trained with the Levenberg-Marquardt (LM) algorithm achieved the lowest mean squared error of 0.0223, with a misclassification rate of 7.435% for the prediction of the 10 simulated trips. The proposed method can identify abnormalities in operational parameters at a confidence level of ±6.3%.
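
    For orientation, a minimal classifier of the same flavor can be sketched with scikit-learn. Note that scikit-learn does not provide a Levenberg-Marquardt trainer, so the 'lbfgs' solver is used here as a stand-in, and the data are random placeholders for plant operational parameters.

```python
# Hypothetical MLP trip classifier; 'lbfgs' stands in for Levenberg-Marquardt,
# which scikit-learn does not implement, and the data are random placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 12))        # assumed operational parameters
y = rng.integers(0, 10, size=500)     # labels for 10 simulated trip types

clf = MLPClassifier(hidden_layer_sizes=(20,), solver="lbfgs",
                    max_iter=500, random_state=1).fit(X, y)
print("training misclassification rate:", 1.0 - clf.score(X, y))
```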

    Application of probabilistic deep learning models to simulate thermal power plant processes

    Deep learning has gained traction in thermal engineering due to its applications to process simulations, the deeper insights it can provide, and its ability to circumvent the shortcomings of classic thermodynamic simulation approaches by capturing complex inter-dependencies. This work sets out to apply probabilistic deep learning to power plant operations using historic plant data. The first study presented entails the development of a steady-state mixture density network (MDN) capable of predicting effective heat transfer coefficients (HTC) for the various heat exchanger components inside a utility-scale boiler. Selected directly controllable input features, including the excess air ratio, steam temperatures, flow rates and pressures, are used to predict the HTCs. In the second case study, an encoder-decoder mixture density network (MDN) is developed using recurrent neural networks (RNN) for the prediction of utility-scale air-cooled condenser (ACC) backpressure. The effects of ambient conditions and plant operating parameters, such as extraction flow rate, on ACC performance are investigated. In both case studies, hyperparameter searches are done to determine the best-performing architectures for these models, and comparisons are drawn between the MDN models and standard model architectures. The HTC predictor model achieved 90% accuracy, which equates to an average error of 4.89 W/m²K across all heat exchangers. The resultant time-series ACC model achieved an average error of 3.14 kPa, which translates into a model accuracy of 82%.
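
    The core of such a model is the mixture density head, which outputs mixture weights, means, and standard deviations and is trained with a negative log-likelihood loss. The sketch below shows this for a single scalar target (e.g. an effective heat transfer coefficient); the component count, layer sizes, and input dimension are illustrative assumptions.

```python
# Condensed mixture density network (MDN) sketch for one scalar output; the
# number of mixture components and all sizes are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

K = 3           # assumed number of Gaussian mixture components
N_FEATURES = 8  # assumed number of controllable plant inputs

def mdn_nll(y_true, params):
    # params per sample: K mixture logits, K means, K log-standard-deviations.
    logits, mu, log_sigma = tf.split(params, 3, axis=-1)
    sigma = tf.exp(log_sigma)
    log_pi = tf.nn.log_softmax(logits, axis=-1)
    # Per-component Gaussian log-density of the scalar target.
    log_prob = (-0.5 * tf.square((y_true - mu) / sigma)
                - tf.math.log(sigma)
                - 0.5 * float(np.log(2.0 * np.pi)))
    return -tf.reduce_mean(tf.reduce_logsumexp(log_pi + log_prob, axis=-1))

inp = layers.Input(shape=(N_FEATURES,))
h = layers.Dense(64, activation="relu")(inp)
out = layers.Dense(3 * K)(h)               # [logits | means | log-sigmas]
mdn = models.Model(inp, out)
mdn.compile(optimizer="adam", loss=mdn_nll)

# Toy usage with random stand-ins for plant inputs and heat transfer coefficients.
X = np.random.rand(256, N_FEATURES).astype("float32")
y = np.random.rand(256, 1).astype("float32")
mdn.fit(X, y, epochs=2, verbose=0)
```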