456 research outputs found

    Fault diagnosis of rolling element bearing using Naïve Bayes classifier

    The development of machine learning brings new ways of diagnosing faults in rolling element bearings. However, machine learning methods that achieve high accuracy often generalize poorly because they rely heavily on feature engineering. To address this challenge, a Naïve Bayes classifier is applied in this paper. As a member of the family of Bayes classifiers, it offers strong classification performance. The paper gives a detailed account of why and how the method diagnoses bearing faults. Finally, the performance of the Naïve Bayes classifier is evaluated on real-world data. The evaluation indicates that the Naïve Bayes classifier can achieve a high level of accuracy without any feature engineering.
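    As an illustration of the approach, the sketch below applies a Gaussian Naïve Bayes classifier directly to labeled vibration windows, which the abstract suggests is possible without feature engineering. It uses scikit-learn and randomly generated placeholder data standing in for real bearing signals; it is not the authors' code.

    ```python
    # Minimal sketch (not the paper's code): Gaussian Naive Bayes applied to
    # raw vibration windows, assuming labeled segments are already available.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Hypothetical data: rows are fixed-length vibration windows, labels are
    # fault classes (e.g., 0=healthy, 1=inner race, 2=outer race, 3=ball).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 1024))   # placeholder for real bearing signals
    y = rng.integers(0, 4, size=400)   # placeholder fault labels

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    clf = GaussianNB()                 # class-conditional Gaussians, naive independence
    clf.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    ```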

    A novel method for self-adaptive feature extraction using scaling crossover characteristics of signals and combining with LS-SVM for multi-fault diagnosis of gearbox

    Vibration signals of defective gears are usually non-stationary and masked by noise. As a result, feature extraction from gear fault data is an intractable problem, especially for coupled multi-fault systems (two or more fault types occurring simultaneously in a mechanical system). Recently, a crossover characteristic of nonlinear data has been used to diagnose gear faults of different severities; however, that approach lacks self-adaptivity. Consequently, a novel method is proposed for self-adaptive feature extraction using the scaling crossover characteristics of signals, combined with a least squares support vector machine (LS-SVM), for multi-fault diagnosis of a gearbox. Firstly, detrended fluctuation analysis (DFA) is introduced to analyze the fractal properties and multi-scaling behaviors of vibration signals from a multi-fault gearbox. The scale exponents change abruptly as the time scale gradually increases, which can be observed in the scaling-law curve. Secondly, a criterion based on a Quasi-Monte Carlo algorithm is developed to uncover the optimal scaling intervals of the scaling-law curve. Several distinct scaling regions are objectively identified, in each of which a single scale exponent can be estimated. Thirdly, a three-dimensional vector containing three scale exponents, each carrying a definite physical meaning, is used as the feature parameter to describe the underlying dynamic mechanism hidden in the gearbox vibration data. Lastly, these vectors are classified by the LS-SVM. For comparison, a statistical-parameter method is also applied to the same multi-fault vibration data. The results show that the proposed method is sensitive to multi-fault vibration data from gearboxes with similar fault patterns and performs better than the other methods.
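    The DFA step described above can be sketched as follows: integrate the signal, compute the fluctuation function F(n) over a range of window sizes n, and estimate scale exponents as log-log slopes over several scaling intervals. The fixed interval breakpoints below are a placeholder for the paper's Quasi-Monte Carlo selection criterion, and the LS-SVM classification stage is omitted; this is an illustrative reconstruction, not the authors' implementation.

    ```python
    # DFA sketch: scale exponents from the fluctuation function F(n).
    import numpy as np

    def dfa_fluctuation(x, scales):
        y = np.cumsum(x - np.mean(x))          # integrated (profile) series
        F = []
        for n in scales:
            n_seg = len(y) // n
            segs = y[:n_seg * n].reshape(n_seg, n)
            t = np.arange(n)
            # detrend each segment with a linear least-squares fit
            sq_res = []
            for seg in segs:
                coef = np.polyfit(t, seg, 1)
                sq_res.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            F.append(np.sqrt(np.mean(sq_res)))
        return np.asarray(F)

    def scale_exponents(x, breakpoints=(16, 64, 256, 1024)):
        # placeholder breakpoints; the paper selects scaling intervals
        # objectively with a Quasi-Monte Carlo criterion
        scales = np.unique(np.logspace(np.log10(16), np.log10(1024), 20).astype(int))
        F = dfa_fluctuation(x, scales)
        exps = []
        for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
            m = (scales >= lo) & (scales <= hi)
            exps.append(np.polyfit(np.log(scales[m]), np.log(F[m]), 1)[0])
        return exps  # three exponents -> 3-D feature vector for the classifier

    signal = np.random.default_rng(1).normal(size=8192)  # placeholder vibration record
    print(scale_exponents(signal))
    ```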

    Deep Cellular Recurrent Neural Architecture for Efficient Multidimensional Time-Series Data Processing

    Efficient processing of time series data is a fundamental yet challenging problem in pattern recognition. Although recent developments in machine learning and deep learning have enabled remarkable improvements in processing large-scale datasets in many application domains, most models are designed to handle inputs that are static in time. Many real-world data, such as those in biomedical, surveillance and security, financial, manufacturing, and engineering applications, are rarely static in time and demand models able to recognize patterns in both space and time. Current machine learning (ML) and deep learning (DL) models adapted for time series processing tend to grow in complexity and size to accommodate the additional dimension of time. In particular, the biologically inspired learning models known as artificial neural networks, which have shown extraordinary success in pattern recognition, tend to grow prohibitively large and cumbersome for large-scale multi-dimensional time series biomedical data such as EEG. Consequently, this work aims to develop representative ML and DL models for robust and efficient large-scale time series processing. First, we design a novel ML pipeline with efficient feature engineering to process a large-scale multi-channel scalp EEG dataset for automated detection of epileptic seizures. Using a sophisticated yet computationally efficient time-frequency analysis technique known as the harmonic wavelet packet transform, together with an efficient self-similarity measure based on fractal dimension, we achieve state-of-the-art performance for automated seizure detection in EEG data. Subsequently, we investigate a novel, efficient deep recurrent learning model for large-scale time series processing. To this end, we first study the functionality and training of a biologically inspired neural network architecture known as the cellular simultaneous recurrent neural network (CSRN). We obtain a generalization of this network for multiple topological image processing tasks and investigate the learning efficacy of the complex cellular architecture using several state-of-the-art training methods. Finally, we develop a novel deep cellular recurrent neural network (DCRNN) architecture based on the biologically inspired distributed processing used in the CSRN for processing time series data. The proposed DCRNN leverages the cellular recurrent architecture to promote extensive weight sharing and efficient, individualized, synchronous processing of multi-source time series data. Experiments on a large-scale multi-channel scalp EEG dataset and a machine fault detection dataset show that the proposed DCRNN offers state-of-the-art recognition performance while using substantially fewer trainable recurrent units.
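    One way to read the weight-sharing idea behind the DCRNN is sketched below: a single small recurrent unit (here a PyTorch GRU, an assumed stand-in) is shared across all input channels, each channel is processed independently, and the per-channel states are fused for classification. This is an interpretation of the architecture described in the abstract, not the authors' model.

    ```python
    # Sketch of channel-wise weight sharing with one shared recurrent unit.
    import torch
    import torch.nn as nn

    class SharedChannelRNN(nn.Module):
        def __init__(self, hidden=16, n_channels=22, n_classes=2):
            super().__init__()
            self.cell = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden * n_channels, n_classes)

        def forward(self, x):                  # x: (batch, channels, time)
            b, c, t = x.shape
            x = x.reshape(b * c, t, 1)         # fold channels into the batch
            _, h = self.cell(x)                # same GRU weights for every channel
            h = h.squeeze(0).reshape(b, -1)    # (batch, channels * hidden)
            return self.head(h)

    model = SharedChannelRNN()
    logits = model(torch.randn(4, 22, 500))    # e.g., 22-channel EEG windows
    print(logits.shape)                        # torch.Size([4, 2])
    ```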