
    Enhanced genetic algorithm-based back propagation neural network to diagnose conditions of multiple-bearing system

    Condition diagnosis of critical systems such as the multiple-bearing system is one of the most important maintenance activities in industry, because faults must be detected early, before the performance of the whole system is affected. Currently, the most significant issues in condition diagnosis are how to improve accuracy and the stability of accuracy, and how to lessen the complexity of the diagnosis so as to reduce processing time. Researchers have developed diagnosis techniques based on metaheuristics, specifically around the Back Propagation Neural Network (BPNN), for single-bearing systems and small numbers of condition classes. However, these are not directly applicable or effective for a multiple-bearing system because the diagnosis accuracy achieved is unsatisfactory. Therefore, this research proposed hybrid techniques to improve the performance of BPNN in terms of accuracy and stability of accuracy: the Adaptive Genetic Algorithm with Back Propagation Neural Network (AGA-BPNN), and multiple BPNN with AGA-BPNN (mBPNN-AGA-BPNN). These techniques were tested and validated on vibration signal data from a multiple-bearing system. Experimental results showed that the proposed techniques outperformed the BPNN in condition diagnosis. However, the large number of features from the multiple-bearing system increased the complexity of AGA-BPNN and mBPNN-AGA-BPNN, and significantly increased the required processing time. Thus, to investigate whether the number of features could be reduced without compromising diagnosis accuracy and stability, Grey Relational Analysis (GRA) was applied to determine the most dominant features and so reduce the complexity of the diagnosis techniques. The experimental results showed that the hybrid of GRA and mBPNN-AGA-BPNN achieved accuracies of 99% for training, 100% for validation and 100% for testing.
Besides that, the accuracy of the proposed hybrid increased by 11.9%, 13.5% and 11.9% in training, validation and testing respectively when compared to the standard BPNN. The hybrid also lessened the complexity, reducing processing time by nearly 55.96%. Furthermore, it improved the stability of the accuracy, with differences between the maximum and minimum accuracy values of 0.2%, 0% and 0% for training, validation and testing respectively. Hence, it can be concluded that the proposed diagnosis techniques improved the accuracy and stability of accuracy with minimum complexity and significantly reduced processing time.
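The GRA feature-ranking step described above can be sketched as follows. This is a minimal, hypothetical illustration of grey relational grades for ranking candidate features against a reference (target) series, not the authors' implementation; the function name, min-max normalisation choice and distinguishing coefficient rho = 0.5 are assumptions:

```python
import numpy as np

def grey_relational_grades(X, y, rho=0.5):
    """Rank the columns of X by their grey relational grade against reference y."""
    def minmax(v):
        rng = v.max() - v.min()
        return (v - v.min()) / rng if rng else np.zeros_like(v)

    ref = minmax(y.astype(float))
    grades = []
    for j in range(X.shape[1]):
        delta = np.abs(ref - minmax(X[:, j].astype(float)))
        dmin, dmax = delta.min(), delta.max()
        if dmax == 0.0:                      # feature identical to the reference
            grades.append(1.0)
            continue
        # Grey relational coefficients, then their mean as the grade.
        xi = (dmin + rho * dmax) / (delta + rho * dmax)
        grades.append(float(xi.mean()))
    return np.array(grades)

# A feature that tracks the target earns a higher grade than an unrelated one.
y = np.array([1.0, 2.0, 3.0, 4.0])
X = np.column_stack([y, np.array([4.0, 1.0, 3.0, 2.0])])
g = grey_relational_grades(X, y)
```

Features can then be kept in descending order of grade, dropping the tail to shrink the classifier's input dimension.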

    Machine Learning in Tribology

    Tribology has been and continues to be one of the most relevant fields, being present in almost all aspects of our lives. The understanding of tribology provides us with solutions for future technical challenges. At the root of all advances made so far are multitudes of precise experiments and an increasing number of advanced computer simulations across different scales and multiple physical disciplines. Based upon this sound and data-rich foundation, advanced data handling, analysis and learning methods can be developed and employed to expand existing knowledge. Therefore, modern machine learning (ML) or artificial intelligence (AI) methods provide opportunities to explore the complex processes in tribological systems and to classify or quantify their behaviour in an efficient or even real-time way. Thus, their potential also goes beyond purely academic aspects into actual industrial applications. To help pave the way, this article collection aimed to present the latest research on ML or AI approaches for solving tribology-related issues, generating true added value beyond just buzzwords. In this sense, this Special Issue can support researchers in identifying initial selections and best-practice solutions for ML in tribology.

    The Application of ANN and ANFIS Prediction Models for Thermal Error Compensation on CNC Machine Tools

    Thermal errors can have significant effects on Computer Numerical Control (CNC) machine tool accuracy. The errors come from thermal deformations of the machine elements caused by heat sources within the machine structure or from ambient temperature change. The effect of temperature can be reduced by error avoidance or numerical compensation. The performance of a thermal error compensation system essentially depends upon the accuracy and robustness of the thermal error model and its input measurements. This thesis first reviews different methods of designing thermal error models, before concentrating on employing Artificial Intelligence (AI) methods to design different thermal prediction models. In this research work, the Adaptive Neuro-Fuzzy Inference System (ANFIS) is used as the backbone for thermal error modelling. The choice of inputs to the thermal model is a non-trivial decision which is ultimately a compromise between the ability to obtain data that sufficiently correlates with the thermal distortion and the cost of implementing the necessary feedback sensors. In this thesis, temperature measurement was supplemented by direct distortion measurement at accessible locations. The location of temperature measurement must also provide a representative measurement of the change in temperature that will affect the machine structure. The number of sensors and their locations are not always intuitive, and the time required to identify the optimal locations is often prohibitive, resulting in compromise and poor results. In this thesis, a new intelligent system for reducing thermal errors of machine tools using thermography data is introduced. Different groups of key temperature points on a machine can be identified from thermal images using a novel schema based on Grey system theory and the Fuzzy C-Means (FCM) clustering method.
This novel method simplifies the modelling process, enhances the accuracy of the system and reduces the overall number of inputs to the model; otherwise a much larger number of thermal sensors would be required to cover the entire structure. An Adaptive Neuro-Fuzzy Inference System with Fuzzy C-Means clustering (ANFIS-FCM) is then employed to design the thermal prediction model. In order to optimise the approach, a parametric study is carried out by changing the number of inputs and the number of Membership Functions (MFs) of the ANFIS-FCM model, and comparing the relative robustness of the designs. The proposed approach has been validated on three different machine tools under different operation conditions. The proposed system has thus been shown to be robust to different internal heat sources and ambient changes, and to be easily extensible to other CNC machine tools. Finally, the proposed method is shown to compare favourably against alternative approaches such as an Artificial Neural Network (ANN) model and different Grey models.
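The Fuzzy C-Means step used to group temperature points can be illustrated with a minimal implementation. This is a sketch under assumptions (Euclidean distance, fuzzifier m = 2, random initial memberships), not the thesis code:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Cluster the rows of X into c fuzzy groups; returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                            # each column sums to 1
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # Distance of every point to every center (eps avoids division by zero).
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        inv = d ** (-p)
        U = inv / inv.sum(axis=0, keepdims=True)  # standard FCM membership update
    return centers, U

# Two obvious 1-D groups, e.g. sensor readings near 0 and near 10 degrees.
pts = np.array([[0.0], [0.1], [0.2], [10.0], [10.1], [10.2]])
centers, U = fuzzy_c_means(pts)
labels = U.argmax(axis=0)                         # hard assignment per point
```

Each group of similar temperature points can then be represented by a single input, which is how clustering reduces the model's sensor count.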

    NASA SBIR abstracts of 1991 phase 1 projects

    The objectives of 301 projects placed under contract by the Small Business Innovation Research (SBIR) program of the National Aeronautics and Space Administration (NASA) are described. These projects were selected competitively from among proposals submitted to NASA in response to the 1991 SBIR Program Solicitation. The basic document consists of edited, non-proprietary abstracts of the winning proposals submitted by small businesses. The abstracts are presented under the 15 technical topics within which Phase 1 proposals were solicited. Each project was assigned a sequential identifying number from 001 to 301, in order of its appearance in the body of the report. Appendixes are included to provide additional information about the SBIR program and to permit cross-reference of the 1991 Phase 1 projects by company name, location by state, principal investigator, NASA Field Center responsible for management of each project, and NASA contract number.

    Self-tune linear adaptive-genetic algorithm for feature selection

    Genetic algorithm (GA) is an established machine learning technique used for heuristic optimisation. However, this natural-selection-based technique is prone to premature convergence, especially convergence to local optima. Stagnant performance is due to low population diversity and fixed genetic operator settings. Therefore, an adaptive algorithm, the Self-Tune Linear Adaptive-GA (STLA-GA), is presented in order to avoid suboptimal solutions in feature selection case studies. STLA-GA performs parameter tuning for the mutation probability rate, population size, maximum generation number and a novel convergence threshold, while simultaneously updating the stopping criteria by adopting an exploration-exploitation cycle. The exploration-exploitation cycle embedded in STLA-GA is a function of the latest classifier performance. Compared to standard feature selection practice, the proposed STLA-GA delivers multi-fold benefits, including overcoming local optimum solutions, yielding higher feature subset reduction rates, removing manual parameter tuning, eliminating premature convergence and preventing the excessive computational cost that arises from unstable parameter-tuning feedback.
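The idea of an exploration-exploitation cycle driven by stagnation can be sketched in a toy GA for feature selection. This is a hypothetical illustration of the principle, not the STLA-GA algorithm itself; the linear adaptation step of 0.05 per stalled generation, the population size and the toy fitness are all assumptions:

```python
import random

def adaptive_ga_feature_select(fitness, n_feats, pop=20, gens=60, seed=1):
    """Elitist GA whose mutation rate rises linearly while progress stalls."""
    rng = random.Random(seed)
    pm_lo, pm_hi = 0.01, 0.30
    pm = pm_lo
    popn = [[rng.random() < 0.5 for _ in range(n_feats)] for _ in range(pop)]
    best, best_f, stall = None, float("-inf"), 0
    for _ in range(gens):
        scored = sorted(popn, key=fitness, reverse=True)
        top_f = fitness(scored[0])
        if top_f > best_f:
            best, best_f, stall = scored[0][:], top_f, 0
        else:
            stall += 1
        # Exploration-exploitation cycle: stagnation raises the mutation
        # probability (explore); any improvement resets it (exploit).
        pm = min(pm_hi, pm_lo + 0.05 * stall)
        elite = scored[: pop // 2]
        popn = [ind[:] for ind in elite]          # keep elites unchanged
        while len(popn) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_feats)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [(not g) if rng.random() < pm else g for g in child]
            popn.append(child)
    return best, best_f

# Toy fitness: 5 "useful" features are rewarded, the rest lightly penalised.
def fitness(mask):
    return sum(mask[:5]) - 0.1 * sum(mask[5:])

best, best_f = adaptive_ga_feature_select(fitness, n_feats=10)
```

In a real pipeline the fitness would be a classifier's validation accuracy on the selected feature subset.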

    Gene expression programming for Efficient Time-series Financial Forecasting

    Stock market prediction is of immense interest to trading companies and buyers due to high profit margins. The majority of successful buying or selling activities occur close to stock price turning trends. This makes the prediction of stock indices and their analysis a crucial factor in determining whether stocks will increase or decrease the next day. Additionally, precise prediction of the measure of increase or decrease of stock prices also plays an important role in buying/selling activities. This research presents two core aspects of stock-market prediction. Firstly, it presents an Adaptive Network-based Fuzzy Inference System (ANFIS) methodology to integrate the capabilities of neural networks with those of fuzzy logic. It then applies genetic programming (GP) and gene expression programming (GEP) to explore and investigate the outcome of the GEP criteria on stock market price prediction. The research presented in this thesis aims at the modelling and prediction of short-to-medium term stock value fluctuations in the market via genetically tuned stock market parameters. The technique uses hierarchically defined GP and GEP techniques to tune algebraic functions representing the fittest equation for stock market activities. It achieves novelty by proposing a fractional adaptive mutation rate Elitism (GEP-FAMR) technique to initiate a balance between the mutation rates of varied-fitness chromosomes, thereby improving prediction accuracy and the fitness improvement rate. The methodology is evaluated against five stock market companies, each with its own trading circumstances during the past 20+ years. The proposed GEP/GP methodologies were evaluated based on variable window/population sizes and the Elitism, Rank and Roulette selection methods.
The Elitism-based approach showed promising results, with a low error rate in the resultant pattern matching and an overall accuracy of 95.96% for short-term 5-day and 95.35% for medium-term 56-day trading periods. The contribution of this research to theory is that it presented a novel evolutionary methodology with modified selection operators for the prediction of stock exchange data via gene expression programming. The methodology dynamically adapts the mutation rate of different fitness groups in each generation to ensure a diversification balance between high- and low-fitness solutions. The GEP-FAMR approach was preferred to Neural and Fuzzy approaches because it can address the well-reported problems of over-fitting, algorithmic black-boxing and data-snooping via GP and GEP algorithms. Saudi Cultural Bureau.
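One possible reading of the fractional adaptive mutation idea is that each chromosome receives a mutation rate that grows with its fitness rank, so weak solutions explore while elites are protected. The sketch below is a hypothetical interpretation, not the thesis' GEP-FAMR code; the `base` and `spread` values are assumptions:

```python
def fractional_mutation_rates(fitnesses, base=0.05, spread=0.15):
    """Per-chromosome mutation rates: fitter -> near base, weaker -> higher."""
    n = len(fitnesses)
    # Rank chromosomes from fittest (rank 0) to weakest (rank n-1).
    order = sorted(range(n), key=lambda i: fitnesses[i], reverse=True)
    rates = [0.0] * n
    for rank, i in enumerate(order):
        rates[i] = base + spread * rank / max(n - 1, 1)
    return rates

rates = fractional_mutation_rates([3.0, 1.0, 2.0])
# The fittest chromosome keeps the base rate; the weakest gets base + spread.
```

The returned rates would then drive per-chromosome mutation inside the GEP generation loop.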

    Flood Forecasting Using Machine Learning Methods

    This book is a printed edition of the Special Issue Flood Forecasting Using Machine Learning Methods that was published in Water.

    Business analytics in industry 4.0: a systematic review

    Recently, the term "Industry 4.0" has emerged to characterize several Information and Communication Technology (ICT) adoptions in production processes (e.g., Internet-of-Things, implementation of digital production support information technologies). Business Analytics is often used within Industry 4.0, thus incorporating its data intelligence (e.g., statistical analysis, predictive modelling, optimization) expert system component. In this paper, we perform a Systematic Literature Review (SLR) on the usage of Business Analytics within the Industry 4.0 concept, covering a selection of 169 papers obtained from six major scientific publication sources from 2010 to March 2020. The selected papers were first classified into three major types, namely Practical Application, Review and Framework Proposal. Then, we analysed in more detail the practical application studies, which were further divided into the three main categories of the Gartner analytical maturity model: Descriptive Analytics, Predictive Analytics and Prescriptive Analytics. In particular, we characterized the distinct analytics studies in terms of the industry application and data context used, impact (in terms of their Technology Readiness Level) and selected data modelling method. Our SLR analysis provides a mapping of how data-based Industry 4.0 expert systems are currently used, disclosing also research gaps and future research opportunities. The work of P. Cortez was supported by FCT - Fundação para a Ciência e Tecnologia within the R&D Units Project Scope: UIDB/00319/2020. We would like to thank the three anonymous reviewers for their helpful suggestions.