
    An Efficient Model for Data Classification Based on SVM Grid Parameter Optimization and PSO Feature Weight Selection

    The support vector machine (SVM) is a widely used classifier owing to its strong empirical performance compared with other machine learning algorithms. It has been applied mostly in pattern recognition, fault diagnosis, and text categorization. The performance of SVM depends strongly on the proper setting of its parameters, such as the maximum number of iterations and the kernel type, so choosing suitable initial parameters leads to good performance and classification results. This paper introduces a new scheme for optimizing SVM parameters using grid search combined with particle swarm optimization (PSO) feature weighting. The experimental results demonstrate that the new method achieves higher accuracy than the traditional SVM and other SVM optimization methods.
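    As a rough illustration of the approach described above, the sketch below tunes SVM hyper-parameters with a grid search and scales the input features by a weight vector before training. The weight vector here is a fixed placeholder standing in for the PSO-derived weights of the paper, and the dataset, parameter grid and values are illustrative assumptions rather than the authors' setup.

```python
# Hypothetical sketch: SVM grid-search tuning with feature weighting.
# The fixed weight vector below stands in for the paper's PSO weight search;
# a real PSO would iterate over candidate weight vectors.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Feature weights: in the paper these come from PSO; here a placeholder vector.
w = np.array([1.0, 0.5, 1.5, 1.0])
Xw = X * w  # element-wise scaling of each feature column

X_train, X_test, y_train, y_test = train_test_split(Xw, y, random_state=0)

# Grid search over common SVM hyper-parameters (kernel, C, gamma).
param_grid = {
    "kernel": ["rbf", "linear"],
    "C": [0.1, 1, 10, 100],
    "gamma": ["scale", 0.01, 0.1, 1],
}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X_train, y_train)

print("best parameters:", search.best_params_)
print("test accuracy:", search.best_estimator_.score(X_test, y_test))
```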

    Predictive Maintenance of an External Gear Pump using Machine Learning Algorithms

    The importance of Predictive Maintenance is critical for engineering industries such as manufacturing, aerospace and energy. Unexpected failures cause unpredictable downtime, which can be disruptive and incur high costs due to reduced productivity. This forces industries to ensure the reliability of their equipment. In order to increase the reliability of equipment, maintenance actions such as repairs, replacements, equipment updates, and corrective actions are employed. These actions affect flexibility, quality of operation and manufacturing time. It is therefore essential to plan maintenance before failure occurs. Traditional maintenance techniques rely on checks conducted routinely based on the running hours of the machine. The drawback of this approach is that maintenance is sometimes performed before it is required. Therefore, conducting maintenance based on the actual condition of the equipment is the optimal solution. This requires collecting real-time data on the condition of the equipment using sensors (to detect events and send information to a computer processor). Predictive Maintenance uses these techniques and analytics to inform about the current and future state of the equipment. In the last decade, with the introduction of the Internet of Things (IoT), Machine Learning (ML), cloud computing and Big Data Analytics, the manufacturing industry has moved towards implementing Predictive Maintenance, resulting in increased uptime and quality control, optimisation of maintenance routes, improved worker safety and greater productivity. The present thesis describes a novel computational strategy of Predictive Maintenance (fault diagnosis and fault prognosis) with ML and Deep Learning applications for an FG304 series external gear pump, also known as a domino pump. In the absence of a comprehensive set of experimental data, synthetic data generation techniques are implemented for Predictive Maintenance by perturbing the frequency content of time series generated using high-fidelity computational techniques. In addition, various feature extraction methods are considered to extract the most discriminatory information from the data. For fault diagnosis, three ML classification algorithms are employed, namely Multilayer Perceptron (MLP), Support Vector Machine (SVM) and Naive Bayes (NB). For prognosis, ML regression algorithms such as MLP and SVM are utilised. Although significant work has been reported by previous authors, it remains difficult to optimise the choice of hyper-parameters (important parameters whose values control the learning process) for each specific ML algorithm, for instance the type of SVM kernel function, the selection of the MLP activation function, and the optimum number of hidden layers (and neurons). It is widely understood that the reliability of ML algorithms is strongly dependent upon the existence of a sufficiently large quantity of high-quality training data. In the present thesis, due to the unavailability of experimental data, a novel high-fidelity in-silico dataset is generated via a Computational Fluid Dynamics (CFD) model, which is used to train the underlying ML metamodel. In addition, a large number of scenarios are recreated, ranging from healthy to faulty ones (e.g. clogging, radial gap variations, axial gap variations, viscosity variations, speed variations).
Furthermore, the high-fidelity dataset is re-enacted by using degradation functions to predict the remaining useful life (fault prognosis) of an external gear pump. The thesis explores and compares the performance of MLP, SVM and NB algorithms for fault diagnosis, and of MLP and SVM for fault prognosis. In order to enable fast training and reliable testing of the MLP algorithm, some predefined network architectures, such as 2^n neurons per hidden layer, are used to speed up the identification of the precise number of neurons (shown to be useful when the sample data set is sufficiently large). Finally, a series of benchmark tests is presented, leading to the conclusion that, for fault diagnosis, the use of wavelet features with an MLP algorithm provides the best accuracy, and the MLP algorithm also provides the best prediction results for fault prognosis. In addition, benchmark examples are simulated to demonstrate mesh convergence for the CFD model, while quantification analysis and the influence of noise on the training data are examined for the ML algorithms.
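    One element of the strategy described above that lends itself to a short sketch is the generation of synthetic training data by perturbing the frequency content of a time series. The example below is a minimal, hypothetical version of that idea: the baseline signal, the noise level and the function name perturb_frequency_content are assumptions made for illustration, not the thesis implementation.

```python
# Illustrative sketch (not the thesis code): generating synthetic variants of a
# time-series signal by perturbing its frequency content, as one simple way to
# augment a small high-fidelity dataset.
import numpy as np

rng = np.random.default_rng(42)

def perturb_frequency_content(signal, noise_level=0.05):
    """Return a synthetic copy of `signal` whose Fourier amplitudes are
    randomly perturbed by roughly +/- noise_level (relative)."""
    spectrum = np.fft.rfft(signal)
    scale = 1.0 + noise_level * rng.standard_normal(spectrum.shape)
    return np.fft.irfft(spectrum * scale, n=len(signal))

# Example: a baseline "healthy pump" pressure trace (synthetic placeholder).
t = np.linspace(0.0, 1.0, 1024)
baseline = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 120 * t)

# Generate a batch of perturbed realisations for ML training.
augmented = np.stack([perturb_frequency_content(baseline) for _ in range(100)])
print(augmented.shape)  # (100, 1024)
```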

    Uncertainty quantification of a radiative transfer model and a machine learning technique for use as observation operators in the assimilation of microwave observations into a land surface model to improve soil moisture and terrestrial snow

    Soil moisture and terrestrial snow mass are two important hydrological states needed to accurately quantify terrestrial water storage and streamflow. Soil moisture and terrestrial snow mass can be measured using ground-based instrument networks, estimated using advanced land surface models, and retrieved via satellite imagery. However, each method has its own inherent sources of error and uncertainty. This leads to the application of data assimilation to obtain optimal estimates of soil moisture and snow mass. Before conducting data assimilation (DA) experiments, this dissertation explored the use of two different observation operators within a DA framework: an L-band radiative transfer model (RTM) for soil moisture and support vector machine (SVM) regression for terrestrial snow mass. First, L-band brightness temperature (Tb) estimated from the RTM, after calibration against multi-angular SMOS Tbs, showed good performance in both ascending and descending overpasses across North America, except in regions with sub-grid-scale lakes and dense forest. Detailed analysis of RTM-derived L-band Tb in terms of soil hydraulic parameters and vegetation types suggests the need for further improvement of RTM-derived Tb in regions with relatively large porosity, large wilting point, or grassland-type vegetation. Secondly, an SVM regression technique was developed with explicit consideration of the first-order physics of photon scattering as a function of different training target sets, training window lengths, and delineation of snow wetness over snow-covered terrain. The overall results revealed that the prediction accuracy of the SVM was strongly linked with the first-order physics of the electromagnetic responses of different snow conditions. After careful evaluation of the observation operators, C-band backscatter observations over Western Colorado collected by Sentinel-1 were merged into an advanced land surface model using an SVM and a one-dimensional ensemble Kalman filter. In general, updated snow mass estimates using the Sentinel-1 DA framework showed modest improvements in comparison to ground-based measurements of snow water equivalent (SWE) and snow depth. These results motivate further application of the outlined assimilation schemes over larger regions in order to improve the characterization of the terrestrial hydrological cycle.
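    The final assimilation step described above, in which an SVM regression serves as the observation operator inside a one-dimensional ensemble Kalman filter, can be sketched in a few lines. The example below is a toy, self-contained version: the training data, the SWE-to-backscatter relationship and all numerical values are invented for illustration and do not reproduce the dissertation's configuration.

```python
# Hedged sketch of a one-dimensional ensemble Kalman filter update in which a
# trained SVM regression acts as the observation operator (mapping modelled
# SWE to predicted backscatter). Names and numbers are illustrative only.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# --- Train a toy SVM observation operator: SWE (m) -> backscatter (dB) ------
swe_train = rng.uniform(0.0, 1.0, size=200).reshape(-1, 1)
sigma0_train = -15.0 + 8.0 * swe_train.ravel() + rng.normal(0, 0.3, 200)
svm_h = SVR(kernel="rbf", C=10.0).fit(swe_train, sigma0_train)

# --- One stochastic EnKF analysis step (scalar state and observation) -------
ensemble = rng.normal(0.4, 0.1, size=50)           # prior SWE ensemble (m)
obs = -11.0                                        # observed backscatter (dB)
obs_err_var = 0.5                                  # observation error variance

predicted = svm_h.predict(ensemble.reshape(-1, 1)) # H(x) via the SVM
cov_xy = np.cov(ensemble, predicted)[0, 1]         # state/obs cross-covariance
var_y = np.var(predicted, ddof=1) + obs_err_var
gain = cov_xy / var_y                              # Kalman gain (scalar case)

perturbed_obs = obs + rng.normal(0, np.sqrt(obs_err_var), ensemble.size)
analysis = ensemble + gain * (perturbed_obs - predicted)
print("prior mean SWE:", ensemble.mean(), "posterior mean SWE:", analysis.mean())
```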

    Dynamic Data Assimilation

    Data assimilation is the process of fusing data with a model for the purpose of estimating unknown variables. It can be used, for example, to predict the evolution of the atmosphere at a given point and time. This book examines data assimilation methods including Kalman filtering, artificial intelligence, neural networks, machine learning, and cognitive computing.
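    For readers unfamiliar with the methods listed above, the sketch below shows the predict/update cycle of a minimal scalar Kalman filter, the simplest of the assimilation techniques the book covers. The linear model, noise levels and variable names are placeholder assumptions, not material taken from the book.

```python
# Minimal scalar Kalman filter sketch illustrating the predict/update cycle
# that underlies data assimilation (model and values are placeholders).
import numpy as np

rng = np.random.default_rng(1)

x_est, p_est = 0.0, 1.0        # initial state estimate and its variance
f, q = 1.0, 0.01               # state-transition factor and process noise
h, r = 1.0, 0.25               # observation factor and observation noise

truth = 0.0
for _ in range(20):
    truth = f * truth + rng.normal(0, np.sqrt(q))   # evolve the true state
    z = h * truth + rng.normal(0, np.sqrt(r))       # noisy observation

    # Predict step: propagate the estimate and its variance.
    x_pred = f * x_est
    p_pred = f * p_est * f + q

    # Update step: fuse the prediction with the observation.
    k = p_pred * h / (h * p_pred * h + r)           # Kalman gain
    x_est = x_pred + k * (z - h * x_pred)
    p_est = (1 - k * h) * p_pred

print("final estimate:", x_est, "true state:", truth)
```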

    Geospatial Artificial Intelligence (GeoAI) in the Integrated Hydrological and Fluvial Systems Modeling: Review of Current Applications and Trends

    This paper reviews current GeoAI and machine learning applications in hydrological and hydraulic modeling, hydrological optimization problems, water quality modeling, and fluvial geomorphic and morphodynamic mapping. GeoAI effectively harnesses the vast amount of spatial and non-spatial data collected with new automatic technologies. The fast development of GeoAI provides multiple methods and techniques, although it also makes comparisons between different methods challenging. Overall, selecting a particular GeoAI method depends on the application's objective, data availability, and user expertise. GeoAI has shown advantages in non-linear modeling, computational efficiency, integration of multiple data sources, highly accurate prediction capability, and the unraveling of new hydrological patterns and processes. A major drawback of most GeoAI models is the difficulty of adequate model setup and their low physical interpretability, explainability, and generalization. The most recent research on hydrological GeoAI has focused on integrating the principles of physics-based models with GeoAI methods and on progress towards autonomous prediction and forecasting systems.

    Machine Learning and Its Application to Reacting Flows

    This open access book introduces and explains machine learning (ML) algorithms and techniques developed for statistical inference on complex processes or systems, and their application to simulations of chemically reacting turbulent flows. These two fields, ML and turbulent combustion, each have a large body of work and knowledge of their own, and this book brings them together and explains the complexities and challenges involved in applying ML techniques to simulate and study reacting flows. This is important because more than 90% of the world’s total primary energy supply (TPES) comes from combustion technologies, and combustion has non-negligible effects on the environment. Although alternative technologies based on renewable energies are emerging, their share of the TPES is currently less than 5%, and a complete paradigm shift would be needed to replace combustion sources. Whether this is practical or not is entirely a different question, and the answer depends on the respondent. However, a pragmatic analysis suggests that the combustion share of TPES is likely to be more than 70% even by 2070. Hence, it is prudent to take advantage of ML techniques to improve combustion science and technology so that efficient and “greener” combustion systems that are friendlier to the environment can be designed. The book covers the current state of the art in these two topics and outlines the challenges involved, as well as the merits and drawbacks of using ML for turbulent combustion simulations, including avenues that can be explored to overcome the challenges. The required mathematical equations and background are discussed, with ample references for readers who wish to find further detail. This book is unique in that no other book offers similar coverage of topics, ranging from big data analysis and machine learning algorithms to their applications in combustion science and system design for energy generation.

    Evaluating and developing parameter optimization and uncertainty analysis methods for a computationally intensive distributed hydrological model

    This study focuses on developing and evaluating efficient and effective parameter calibration and uncertainty analysis methods for hydrologic modeling. Five single-objective optimization algorithms and six multi-objective optimization algorithms were tested for automatic parameter calibration of the SWAT model. A new multi-objective optimization method (Multi-objective Particle Swarm Optimization & Genetic Algorithms) that combines the strengths of different optimization algorithms was proposed. Based on the evaluation of the performance of the different algorithms on three test cases, the new method consistently performed better than or close to the other algorithms. To reduce the effort of running the computationally intensive SWAT model, a support vector machine (SVM) was used as a surrogate to approximate the behavior of SWAT. It was shown that combining the SVM with Particle Swarm Optimization can reduce the effort of calibrating SWAT parameters. The SVM surrogate was then used to implement parameter uncertainty analysis for SWAT, and the results show that it saved more than 50% of the runs of the computationally intensive SWAT model. The effect of model structure on the uncertainty estimation of streamflow simulation was examined by applying SWAT and Neural Network models. The 95% uncertainty intervals estimated by SWAT include only 20% of the observed data, while those of the Neural Networks include more than 70%. This indicates that model structure is an important source of uncertainty in hydrologic modeling and needs to be evaluated carefully. The effect of different treatments of model-structure uncertainty on hydrologic modeling was explored further by applying four types of Bayesian Neural Networks. By considering the uncertainty associated with model structure, the Bayesian Neural Networks provide a more reasonable quantification of the uncertainty of streamflow simulation. This study stresses the need to improve the understanding and quantification of different uncertainty sources for effective estimation of the uncertainty of hydrologic simulations.
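    The surrogate idea described above can be sketched compactly: run the expensive model at a few sampled parameter sets, fit an SVM regression to those runs, and then search the cheap surrogate instead of the simulator. In the sketch below the expensive_model function is a stand-in for SWAT, the random search stands in for the PSO and genetic operators used in the study, and all names and values are illustrative assumptions.

```python
# Illustrative sketch of the surrogate-assisted calibration idea: fit an SVM
# regression to a small number of expensive-model runs, then search the cheap
# surrogate. "expensive_model" is a placeholder, not SWAT itself.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)

def expensive_model(params):
    """Placeholder for a costly simulation returning a calibration objective
    (e.g., an error metric between simulated and observed streamflow)."""
    return np.sum((params - 0.3) ** 2, axis=-1)

# 1. Run the expensive model at a small number of sampled parameter sets.
train_params = rng.uniform(0.0, 1.0, size=(40, 2))
train_obj = np.array([expensive_model(p) for p in train_params])

# 2. Fit the SVM surrogate: objective <- parameters.
surrogate = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(train_params, train_obj)

# 3. Search the surrogate densely (a simple random search here stands in for
#    the PSO / genetic operators used in the study).
candidates = rng.uniform(0.0, 1.0, size=(5000, 2))
best = candidates[np.argmin(surrogate.predict(candidates))]
print("surrogate-suggested parameters:", best)
```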
