Fault Diagnosis and Failure Prognostics of Lithium-ion Battery based on Least Squares Support Vector Machine and Memory Particle Filter Framework
A novel data-driven approach is developed for fault diagnosis and remaining useful life (RUL) prognostics of lithium-ion batteries using the Least Squares Support Vector Machine (LS-SVM) and the Memory Particle Filter (M-PF). Unlike traditional data-driven models for capacity fault diagnosis and failure prognosis, which require multidimensional physical characteristics, the proposed algorithm uses only two variables: energy efficiency (EE) and working temperature. The aim of this novel framework is to improve the accuracy of incipient and abrupt fault diagnosis and failure prognosis. First, the LS-SVM is used to generate a residual signal based on the capacity fade trends of the Li-ion batteries. Second, an adaptive threshold model is developed based on several factors, including the input, output model error, disturbance, and drift parameter; the adaptive threshold addresses the shortcomings of a fixed threshold. Third, the M-PF is proposed as a new method for failure prognostics to determine the RUL. The M-PF assumes the availability of real-time observations and historical data, so that historical failure data can be used instead of a physical failure model within the particle filter. The feasibility of the framework is validated using Li-ion battery prognostic data obtained from the National Aeronautics and Space Administration (NASA) Ames Prognostics Center of Excellence (PCoE). The experimental results show the following: (1) fewer input data dimensions are required compared to traditional empirical models; (2) the proposed diagnostic approach provides an effective way of diagnosing Li-ion battery faults; (3) the proposed prognostic approach predicts the RUL of Li-ion batteries with small error and high prediction accuracy; and (4) the proposed prognostic approach shows that historical failure data can be used instead of a physical failure model in the particle filter.
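As an illustration of the diagnostic step, the residual-generation idea can be sketched with a minimal LS-SVM regressor fit to a capacity trend; the RBF kernel, the hyperparameters, and the toy capacity-fade series below are illustrative assumptions, not the paper's actual configuration or data:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between two sets of row vectors.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    # LS-SVM regression reduces training to one linear system:
    #   [ 0      1^T         ] [ b     ]   [ 0 ]
    #   [ 1   K + I / gamma  ] [ alpha ] = [ y ]
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy capacity-fade trend (hypothetical data, not the NASA PCoE set):
# the residual signal is the measured capacity minus the LS-SVM prediction.
t = np.linspace(0.0, 1.0, 50)[:, None]
capacity = 1.0 - 0.3 * t[:, 0] ** 1.5
b, alpha = lssvm_fit(t, capacity)
residual = capacity - lssvm_predict(t, b, alpha, t)
```

In a diagnostic setting, this residual would then be compared against the adaptive threshold to flag incipient or abrupt faults.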
A New Least Squares Support Vector Machines Ensemble Model for Aero Engine Performance Parameter Chaotic Prediction
Aiming at the nonlinearity, chaos, and small sample size of aero engine performance parameter data, a new ensemble model, named the least squares support vector machine (LSSVM) ensemble model with phase space reconstruction (PSR) and particle swarm optimization (PSO), is presented. First, to guarantee the diversity of the individual members, different single-kernel LSSVMs are selected as base predictors, and each outputs its primary prediction results independently. Then, all the primary prediction results are integrated into the final prediction by another particular LSSVM, a multiple-kernel LSSVM, which reduces the dependence of modeling accuracy on the kernel function and its parameters. Phase space reconstruction theory is applied to extract the chaotic characteristics of the input data source and reconstruct the data sample, and the particle swarm optimization algorithm is used to obtain the best LSSVM individual members. A case study with real operating data from an aero engine is used to verify the effectiveness of the presented model. The results show that the prediction accuracy of the proposed model improves markedly compared with three other models.
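The phase space reconstruction step can be sketched as a standard time-delay embedding; the delay, embedding dimension, and sine-wave stand-in series below are illustrative choices, not the paper's settings:

```python
import numpy as np

def phase_space_reconstruct(x, dim=3, tau=2):
    # Time-delay embedding: row t is [x[t], x[t+tau], ..., x[t+(dim-1)*tau]];
    # the prediction target is the observation right after each window.
    n = len(x) - (dim - 1) * tau - 1
    X = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    y = x[(dim - 1) * tau + 1 : (dim - 1) * tau + 1 + n]
    return X, y

# Stand-in for a sampled engine performance parameter.
series = np.sin(np.linspace(0.0, 20.0, 200))
X, y = phase_space_reconstruct(series, dim=3, tau=2)
```

Each row of `X` would then be fed to the base LSSVM predictors; in practice `dim` and `tau` are chosen from the data (e.g. via false nearest neighbours and mutual information) rather than fixed a priori.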
The role of artificial intelligence-driven soft sensors in advanced sustainable process industries: a critical review
With the predicted depletion of natural resources and alarming environmental issues, sustainable development has become a popular as well as a much-needed concept in modern process industries. Hence, manufacturers are keen to adopt novel process monitoring techniques to enhance product quality and process efficiency while minimizing possible adverse environmental impacts. Hardware sensors are employed in process industries to aid process monitoring and control, but they are associated with many limitations, such as disturbances to the process flow, measurement delays, frequent need for maintenance, and high capital costs. As a result, soft sensors have become an attractive alternative for predicting quality-related parameters that are ‘hard to measure’ with hardware sensors. Owing to their promising features over their hardware counterparts, they have been employed across different process industries. This article explores the state-of-the-art artificial intelligence (AI)-driven soft sensors designed for process industries and their role in achieving the goal of sustainable development. First, a general introduction is given to soft sensors, their applications in different process industries, and their significance in achieving sustainable development goals. AI-based soft sensing algorithms are then introduced. Next, a discussion is provided on how AI-driven soft sensors contribute toward the different sustainable manufacturing strategies of process industries. This is followed by a critical review of the most recent state-of-the-art AI-based soft sensors reported in the literature, including the use of powerful AI-based algorithms to address the limitations of traditional algorithms that restrict soft sensor performance. Finally, the challenges and limitations associated with current soft sensor design, application, and maintenance are discussed, along with possible future directions for designing more intelligent and smart soft sensing technologies to cater to future industrial needs.
Machine Learning with Metaheuristic Algorithms for Sustainable Water Resources Management
The main aim of this book is to present various implementations of ML methods and metaheuristic algorithms to improve the modelling and prediction of hydrological and water-resources phenomena of vital importance in water resources management.
Machine learning assisted optimization with applications to diesel engine optimization with the particle swarm optimization algorithm
A novel approach to incorporating machine learning into optimization routines is presented. An approach that combines the benefits of ML, optimization, and meta-model searching is developed and tested on a multi-modal test problem, a modified Rastrigin's function. An enhanced Particle Swarm Optimization method was derived from the initial testing. Optimization of a diesel engine was carried out using the modified algorithm, demonstrating an improvement of 83% compared with the unmodified PSO algorithm. Additionally, an approach to enhancing the training of ML models by leveraging virtual sensing as an alternative to standard multi-layer neural networks is presented. Substantial gains were made in the prediction of particulate matter (PM), reducing the MMSE by 50% and improving the correlation R^2 from 0.84 to 0.98. Improvements were made in models of PM, NOx, HC, CO, and fuel consumption using the method, while training times and convergence reliability were simultaneously improved over the traditional approach.
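For reference, a plain (unmodified) global-best PSO of the kind the abstract improves upon can be sketched on the standard Rastrigin benchmark; the swarm size, inertia weight, and acceleration coefficients are common textbook defaults, not the paper's tuned values:

```python
import numpy as np

def rastrigin(x):
    # Multi-modal benchmark; the global minimum is f(0, ..., 0) = 0.
    return 10.0 * x.shape[-1] + (x ** 2 - 10.0 * np.cos(2 * np.pi * x)).sum(-1)

def pso(f, dim=2, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.12, 5.12, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), f(pos)       # personal bests
    gbest = pbest[pbest_val.argmin()].copy()    # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        val = f(pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_f = pso(rastrigin)
```

On multi-modal functions like this one, the baseline is prone to stagnating in a local minimum, which is exactly the weakness that meta-model-assisted variants target.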
Composition Prediction of Debutanizer Column using Neural Network
In the oil refining industry, the debutanizer column is one of the most important unit operations: it is the main column used to produce the main product of the refining process. Online composition prediction of the top and bottom products of the debutanizer column using a neural network will help improve product quality monitoring in the oil refining industry. In this work, a single dynamic neural network model is used to generate online composition predictions of the top and bottom products of the debutanizer column. A neural network is a computing system of many simple, highly interconnected processing elements that process information through their dynamic state response to external inputs. It is a software-based sensing method, known as a “soft sensor”, a helpful technology that uses software techniques to infer the values of important but difficult-to-measure process variables from available process variables obtained from physical sensor observations or laboratory measurements. Neural network and equation-based models for i-butane, i-pentane, n-butane, n-pentane, and propane have been obtained. These results are then compared with a proportional-integral-derivative (PID) controller design to demonstrate the superiority of the neural network approach over this method.
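A minimal soft-sensor sketch in the spirit of this abstract, assuming a synthetic process and a one-hidden-layer network fed with current and lagged measurements (the data, architecture, and training settings are all illustrative, not the thesis's model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical process data: a "composition" y inferred from two
# easy-to-measure variables and their one-step lags (dynamic inputs).
n = 400
u = rng.random((n, 2))
y = 0.6 * u[:, 0] + 0.3 * u[:, 1] ** 2
X = np.hstack([u[1:], u[:-1]])   # current + lagged measurements
t = y[1:]

# One-hidden-layer network trained by full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, 8);      b2 = 0.0
lr = 0.1
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    err = h @ W2 + b2 - t                    # prediction error
    gW2 = h.T @ err / len(t); gb2 = err.mean()
    dh = np.outer(err, W2) * (1.0 - h ** 2)  # back-propagated signal
    gW1 = X.T @ dh / len(t);  gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

pred = np.tanh(X @ W1 + b1) @ W2 + b2
rmse = float(np.sqrt(((pred - t) ** 2).mean()))
```

The lagged-input construction is what makes the model "dynamic": the inferred composition depends on the recent history of the measured variables, not only on their current values.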
An Evaluation of Performance Enhancements to Particle Swarm Optimisation on Real-World Data
Swarm Computation is a relatively new optimisation paradigm. The basic premise is to model the collective behaviour of self-organised natural phenomena such as swarms, flocks and shoals, in order to solve optimisation problems. Particle Swarm Optimisation (PSO) is a type of swarm computation inspired by bird flocks or swarms of bees by modelling their collective social influence as they search for optimal solutions.
In many real-world applications of PSO, the algorithm is used as a data pre-processor for a neural network or similar post processing system, and is often extensively modified to suit the application. The thesis introduces techniques that allow unmodified PSO to be applied successfully to a range of problems, specifically three extensions to the basic PSO algorithm: solving optimisation problems by training a hyperspatial matrix, using a hierarchy of swarms to coordinate optimisation on several data sets simultaneously, and dynamic neighbourhood selection in swarms.
Rather than working directly with candidate solutions to an optimisation problem, the PSO algorithm is adapted to train a matrix of weights, to produce a solution to the problem from the inputs. The search space is abstracted from the problem data.
A single PSO swarm optimises a single data set and has difficulties where the data set comprises disjoint parts (such as time series data for different days). To address this problem, we introduce a hierarchy of swarms, in which each child swarm optimises one section of the data set and its gbest particle is a member of the swarm above it in the hierarchy. The parent swarm(s) coordinate their children and encourage more exploration of the solution space. We show that hierarchical swarms of this type perform better than single-swarm PSO optimisers on the disjoint data sets used.
PSO relies on interaction between particles within a neighbourhood to find good solutions. In many PSO variants, the possible interactions are arbitrary and fixed at initialisation. Our third contribution is dynamic neighbourhood selection: particles can modify their neighbourhood based on the success of the candidate neighbour particle. As PSO is intended to reflect the social interaction of agents, this change significantly increases the ability of the swarm to find optimal solutions. Applied to real-world medical and cosmological data, this modification shows improvements over standard PSO approaches with fixed neighbourhoods.
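One plausible reading of the dynamic neighbourhood idea can be sketched as a local-best PSO that rewires a stagnating particle's neighbourhood; the rewiring rule, test function, and parameters below are illustrative assumptions, not the thesis's exact algorithm:

```python
import numpy as np

def sphere(x):
    return (x ** 2).sum(-1)

def dyn_neighbourhood_pso(f, dim=2, n=20, k=3, iters=150, seed=1):
    # Local-best PSO where a particle that fails to improve replaces its
    # worst neighbour with a random candidate, biasing neighbourhoods
    # toward recently successful particles.
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, (n, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), f(pos)
    hood = np.array([rng.choice(n, k, replace=False) for _ in range(n)])
    for _ in range(iters):
        for i in range(n):
            lbest = pbest[hood[i][pbest_val[hood[i]].argmin()]]
            r1, r2 = rng.random((2, dim))
            vel[i] = (0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (lbest - pos[i]))
            pos[i] = pos[i] + vel[i]
        val = f(pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        for i in np.where(~improved)[0]:
            worst = pbest_val[hood[i]].argmax()
            hood[i][worst] = rng.integers(n)   # rewire the neighbourhood
    return pbest_val.min()

best = dyn_neighbourhood_pso(sphere)
```

The design choice here is that topology adapts to search success rather than staying fixed at initialisation, which is the property the thesis argues improves performance.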
Prediction of dyslipidemia using gene mutations, family history of diseases and anthropometric indicators in children and adolescents: The CASPIAN-III study
Dyslipidemia, the disorder of lipoprotein metabolism resulting in a high lipid profile, is an important modifiable risk factor for coronary heart disease. It is associated with more than four million worldwide deaths per year. Half of the children with dyslipidemia have hyperlipidemia during adulthood, so its prediction and screening are critical. We designed a new dyslipidemia diagnosis system. A sample of 725 subjects (age 14.66 ± 2.61 years; 48% male; dyslipidemia prevalence of 42%) was selected by multistage random cluster sampling in Iran. Single nucleotide polymorphisms (rs1801177, rs708272, rs320, rs328, rs2066718, rs2230808, rs5880, rs5128, rs2893157, rs662799, and Apolipoprotein-E2/E3/E4), together with anthropometric and lifestyle attributes and family history of diseases, were analyzed. A framework for classifying mixed-type data in imbalanced datasets was proposed. It included internal feature mapping and selection, re-sampling, an optimized group method of data handling using convex and stochastic optimizations, a new cost function for imbalanced data, and an internal validation. Its performance was assessed using hold-out and 4-fold cross-validation. Four other classifiers, namely support vector machines, decision trees, multilayer perceptron neural networks, and multiple logistic regression, were also used. The average sensitivity, specificity, precision, and accuracy of the proposed system were 93%, 94%, 94%, and 92%, respectively, in cross-validation. It significantly outperformed the other classifiers and also showed excellent agreement and high correlation with the gold standard. A non-invasive, economical version of the algorithm suitable for low- and middle-income countries was also implemented. It is thus a promising new tool for the prediction of dyslipidemia.
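The re-sampling step for imbalanced data can be illustrated with simple random oversampling of the minority class; this is a generic stand-in for that step, not the authors' optimized re-sampling method:

```python
import numpy as np

def random_oversample(X, y, seed=0):
    # Balance a data set by drawing minority-class rows with
    # replacement until every class matches the majority count.
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = []
    for c, cnt in zip(classes, counts):
        members = np.where(y == c)[0]
        idx.extend(members)
        idx.extend(rng.choice(members, n_max - cnt, replace=True))
    idx = np.asarray(idx)
    return X[idx], y[idx]

# Tiny 8-vs-2 example (purely illustrative).
X = np.arange(20).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)
Xb, yb = random_oversample(X, y)
```

In practice re-sampling is applied only to the training folds of the cross-validation so that the evaluation data keeps the original class ratio.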
Application of soft computing models with input vectors of snow cover area in addition to hydro-climatic data to predict the sediment loads
The accurate estimation of sediment load is important for the management of river ecosystems, the design of water infrastructure, and the planning of reservoir operations. Direct measurement is the most credible method of estimating sediments, but it requires a great deal of time and resources; because of these two constraints, it is usually not possible to measure daily sediments continuously at most gauging sites. Nowadays, data-based sediment prediction models are widely used to bridge data gaps in the estimation of sediment loads. In data-driven sediment prediction models, the selection of the input vectors is critical in determining the best model structure for the accurate estimation of sediment yields. In this study, time series inputs of snow cover area, basin effective rainfall, mean basin average temperature, and mean basin evapotranspiration, in addition to flows, were assessed for the prediction of sediment loads. The input vectors were assessed with artificial neural network (ANN), adaptive neuro-fuzzy inference system with grid partitioning (ANFIS-GP), adaptive neuro-fuzzy inference system with subtractive clustering (ANFIS-SC), adaptive neuro-fuzzy inference system with fuzzy c-means clustering (ANFIS-FCM), multivariate adaptive regression splines (MARS), and sediment rating curve (SRC) models for the Gilgit River, a tributary of the Indus River in Pakistan. The comparison of the different input vectors showed improvements in the prediction of sediments when the snow cover area was used in addition to flows, effective rainfall, temperature, and evapotranspiration. Overall, the ANN model performed better than all the other models. However, for the peak sediment load time series, the sediment loads predicted using the ANN, ANFIS-FCM, and MARS models were found to be closest to the measured sediment loads. The ANFIS-FCM performed best in the estimation of peak sediment yields, with a relative accuracy of 81.31%, compared with 80.17% and 80.16% for the ANN and MARS models, respectively. The multiple linear regression equation developed over all models shows R values of 0.85 and 0.74 during the training and testing periods, respectively.
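The sediment rating curve (SRC) baseline used in such comparisons is conventionally a power law, Qs = a * Q**b, fitted by linear regression in log-log space; the coefficients and synthetic data below are illustrative, not values from the Gilgit River:

```python
import numpy as np

# Synthetic discharge/sediment pairs; the power-law coefficients below
# are illustrative assumptions, not fitted values from the study.
rng = np.random.default_rng(0)
Q = rng.uniform(50.0, 500.0, 100)                    # discharge
Qs = 0.02 * Q ** 1.8 * rng.lognormal(0.0, 0.1, 100)  # noisy sediment load

# Fit Qs = a * Q**b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(Q), np.log(Qs), 1)
a = np.exp(log_a)
predicted = a * Q ** b
```

Because the SRC assumes a single monotonic discharge-sediment relation, it cannot exploit extra inputs such as snow cover area, which is one reason the data-driven models above outperform it.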
A sustainable ultrafiltration of sub-20 nm nanoparticles in water and isopropanol: experiments, theory and machine learning
This research focused on ultrafiltration (UF) of particles down to 2 nm using membranes with larger pore sizes in water and isopropanol (IPA), which has the potential to save up to 90% of energy. The study developed an electrospray (ES) - scanning mobility particle sizer (SMPS) method to measure retention efficiencies quickly and effectively for small particles (ZnS, Au, and PSL) on polytetrafluoroethylene (PTFE), polyvinylidene fluoride (PVDF), and polycarbonate (PCTE) membranes in different liquids. Theoretical models that can quantitatively explain the experimental results for small particles in medium-polarity organic solvents were also developed. Results showed that the highest efficiency was up to ~80% for 10 nm Au nanoparticles challenged against a 100 nm rated PTFE membrane, which demonstrated the feasibility of the proposed sustainable UF. The theoretical models were validated by the experimental results and indicated that higher efficiencies were possible by enhancing the material properties of the membranes, particles, or liquids. Therefore, an optimization of the filtration conditions was performed. A hybrid artificial neural network (ANN) and particle swarm optimization (PSO) model was applied for the first time in this case. The dataset includes all the experimental results and some additional calculated retention efficiencies. The optimization parameters include membrane zeta potential, pore size, particle size, particle zeta potential, and Hamaker constant. The ANN model provided predicted values highly correlated with the target values. The PSO model showed that a filtration efficiency of 99.9% could be achieved by using a 52.2 nm filter with a -20.3 mV zeta potential, 5.5 nm nanoparticles with a 41.4 mV zeta potential, and a combined Hamaker constant