
    Intelligent fault classification of rolling bearings using neural network and discrete wavelet transform

    This paper addresses the diagnosis and classification of bearing faults using neural networks (NN) and nondestructive tests. Vibration signals are acquired on a bearing test machine and preprocessed using discrete wavelet analysis. The standard deviation of the discrete wavelet coefficients is chosen as the distinguishing feature of the faults, and the resulting feature vector is normalized before being fed to the designed network as input. There are four output neurons, each corresponding to: 1) a bearing with an inner race fault, 2) a bearing with an outer race fault, 3) a bearing with a ball defect, and 4) a normal bearing. The NN has a 6:20:4 structure and achieves 99% classification performance.
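    As a rough illustration of the described feature pipeline (standard deviation of discrete-wavelet coefficients per decomposition level, normalization, small 6:20:4 classifier), here is a minimal sketch on toy data; the wavelet choice, signal lengths, and classifier settings are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch, not the paper's code: DWT std features + small MLP classifier.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

def dwt_std_features(signal, wavelet="db4", level=5):
    """6 features: std of the approximation + 5 detail coefficient arrays."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([c.std() for c in coeffs])

# Toy data standing in for vibration signals of the 4 bearing conditions
rng = np.random.default_rng(0)
X = np.vstack([dwt_std_features(rng.normal(0, 1 + 0.3 * k, 2048))
               for k in range(4) for _ in range(50)])
y = np.repeat(np.arange(4), 50)

X = StandardScaler().fit_transform(X)                # normalize the input vector
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)  # 6:20:4 network
print(clf.score(X, y))
```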

    Semi-supervised multiscale dual-encoding method for faulty traffic data detection

    Inspired by the recent success of deep learning in multiscale information encoding, we introduce a variational autoencoder (VAE) based semi-supervised method for the detection of faulty traffic data, cast as a classification problem. The continuous wavelet transform (CWT) is applied to the time series of traffic volume data to obtain rich features embodied in a time-frequency representation, followed by twin VAE models that separately encode normal data and faulty data. The resulting multiscale dual encodings are concatenated and fed to an attention-based classifier consisting of a self-attention module and a multilayer perceptron. For comparison, the proposed architecture is evaluated against five different encoding schemes: (1) a VAE with only normal data encoding, (2) a VAE with only faulty data encoding, (3) a VAE with both normal and faulty data encodings but without the attention module in the classifier, (4) siamese encoding, and (5) cross-vision transformer (CViT) encoding. The first four encoding schemes adopt the same convolutional neural network (CNN) architecture, while the fifth follows the transformer architecture of CViT. Our experiments show that the proposed architecture with the dual encoding scheme, coupled with the attention module, outperforms the other encoding schemes and achieves a classification accuracy of 96.4%, a precision of 95.5%, and a recall of 97.7%.
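    As a rough illustration of the pipeline described above (CWT time-frequency features, two separate encoders whose outputs are concatenated, and a self-attention plus MLP classifier), here is a minimal sketch with plain CNN encoders standing in for the trained VAE branches; module sizes and names are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch, not the authors' code: CWT features, dual encoders, attention classifier.
import numpy as np
import pywt
import torch
import torch.nn as nn

def cwt_image(x, scales=np.arange(1, 65), wavelet="morl"):
    """Continuous wavelet transform of a 1-D series -> (scales, time) image."""
    coefs, _ = pywt.cwt(x, scales, wavelet)
    return torch.tensor(np.abs(coefs), dtype=torch.float32).unsqueeze(0)  # (1, S, T)

class Encoder(nn.Module):
    """Small CNN encoder standing in for one VAE branch (normal or faulty)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )
    def forward(self, x):
        return self.net(x)

class DualEncodingClassifier(nn.Module):
    def __init__(self, dim=64, n_classes=2):
        super().__init__()
        self.enc_normal = Encoder(dim)   # trained on normal data in the paper
        self.enc_faulty = Encoder(dim)   # trained on faulty data in the paper
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, n_classes))
    def forward(self, x):
        tokens = torch.stack([self.enc_normal(x), self.enc_faulty(x)], dim=1)  # (B, 2, dim)
        attended, _ = self.attn(tokens, tokens, tokens)                        # self-attention
        return self.mlp(attended.flatten(1))                                   # (B, n_classes)

# Usage on a synthetic traffic-volume window
x = np.sin(np.linspace(0, 20, 256)) + 0.1 * np.random.randn(256)
img = cwt_image(x).unsqueeze(0)            # (1, 1, S, T)
logits = DualEncodingClassifier()(img)
```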

    On-Line Learning and Wavelet-Based Feature Extraction Methodology for Process Monitoring using High-Dimensional Functional Data

    Recent advances in information technology, such as automatic data acquisition systems and sensor systems, have created tremendous opportunities for collecting valuable process data, but timely processing of such data into meaningful information remains a challenge. In this research, several data mining methodologies that aid the streaming analysis of high-dimensional functional data are developed. For on-line implementations, two weighting functions for updating support vector regression parameters were developed. The functions use parameters that can be set a priori with only minimal knowledge of the data involved and provide lower and upper bounds for the parameters; they are applicable to time series, on-line, and batch predictions. To apply these functions to on-line prediction, a new on-line support vector regression algorithm that uses adaptive weighting parameters was presented. The new algorithm uses a varying rather than a fixed regularization constant and accuracy parameter, which makes it more robust to the volume of data available for on-line training and to the relative position of the available data in the training sequence. It improves prediction accuracy by removing the uncertainty of using fixed parameter values, or values based on experts' knowledge rather than on the characteristics of the incoming training data. The functions and algorithm were applied to feedwater flow rate data and two benchmark time series; the results show that adaptive regression parameters outperform fixed ones.
    To reduce the dimension of data with hundreds or thousands of predictors and to enhance prediction accuracy, a wavelet-based feature extraction procedure, called the step-down thresholding procedure, was developed for identifying and extracting significant features from a single curve. The procedure transforms the original spectra into wavelet coefficients and, based on a multiple hypothesis testing approach, controls the family-wise error rate so as to guard against selecting insignificant features regardless of the amount of noise present in the data; it is therefore applicable to data reduction and/or data denoising. Compared with six other data-reduction and data-denoising methods from the literature, the developed procedure consistently performs better than most of the popular methods and on a par with the others.
    Many real-world datasets with high-dimensional explanatory variables also have multiple response variables, so selecting the fewest explanatory variables that are highly sensitive for predicting the response variable(s) yet insensitive to the noise in the data is important for performance and for reducing computational burden. To select the fewest explanatory variables that best predict each response variable, a two-stage wavelet-based feature extraction procedure is proposed. The first stage uses the step-down procedure to extract significant features for each curve; representative features are then selected from the extracted features of all curves using a voting selection strategy, with union and intersection strategies also described and implemented. The essence of the first stage is to reduce the dimension of the data without regard to whether the retained features can predict the response variables accurately; the second stage uses a Bayesian decision theory approach to select, from the extracted wavelet coefficients, those that predict each response variable accurately. The two-stage procedure was implemented on near-infrared spectroscopy data and shaft misalignment data; the results show that the second stage further reduces the dimensionality, and the prediction results are encouraging.
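    The step-down thresholding procedure itself is specific to the thesis; as a rough illustration of the underlying idea (testing the wavelet coefficients of a single curve against a noise-calibrated cutoff that controls the family-wise error rate), here is a minimal Bonferroni-style sketch, with all function names and parameter values being illustrative assumptions.

```python
# Hedged sketch, not the thesis's exact step-down procedure: it only illustrates
# testing wavelet coefficients against a noise-calibrated threshold and keeping
# the significant ones as extracted features.
import numpy as np
import pywt
from scipy.stats import norm

def wavelet_feature_extraction(curve, wavelet="db4", level=4, alpha=0.05):
    """Decompose one curve, test each detail coefficient against noise,
    and return the indices/values of coefficients judged significant."""
    coeffs = pywt.wavedec(curve, wavelet, level=level)
    detail = np.concatenate(coeffs[1:])                 # all detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # robust noise estimate (MAD)
    n = detail.size
    # Bonferroni-style cutoff controlling the family-wise error rate at alpha:
    # under H0 each coefficient is N(0, sigma^2), so compare |d_k| to a
    # normal quantile adjusted for n simultaneous tests.
    cutoff = sigma * norm.ppf(1 - alpha / (2 * n))
    keep = np.abs(detail) > cutoff
    return np.where(keep)[0], detail[keep]

# Example: a noisy spectrum-like curve with a sharp feature
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)
curve = np.exp(-((t - 0.3) ** 2) / 0.001) + 0.05 * rng.standard_normal(t.size)
idx, features = wavelet_feature_extraction(curve)
print(f"kept {features.size} significant coefficients")
```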

    Laplace-domain analysis of fluid line networks with applications to time-domain simulation and system parameter identification.

    Networks of closed conduits containing pressurised fluid flow occur in many different instances throughout the natural and man-made world. The dynamics of such networks depend not only on the complex interactions between the fluid body and the conduit material within each fluid line, but also on the coupling between different lines as they influence each other through their common junctions. The forward modelling (time-domain simulation) and inverse modelling (system parameter identification) of such systems are of great interest to many different research fields. An alternative to time-domain descriptions of fluid line networks is the Laplace-domain representation of these systems. A long-standing limitation of these methods is that the frameworks for constructing Laplace-domain models have not been suitable for pipeline networks of arbitrary topology. The objective of this thesis is to fundamentally extend the existing theory for Laplace-domain descriptions of hydraulic networks and to explore the applications of this theory to forward and inverse modelling. The extensions use graph theory concepts to construct network admittance matrices based on the Laplace-domain solutions of the fundamental pipeline dynamics, and the framework is extended to incorporate a very broad class of hydraulic elements. Through the use of the numerical inverse Laplace transform, the proposed theory forms the basis for an accurate and computationally efficient time-domain simulation methodology for hydraulic networks. The compact analytic nature of the network admittance matrix representation also facilitates the development of two successful, statistically based parameter identification methodologies: one based on an oblique filtering approach combined with maximum likelihood estimation, and the other based on the expectation-maximisation algorithm. Thesis (Ph.D.) -- University of Adelaide, School of Civil, Environmental and Mining Engineering, 201
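    The time-domain simulation step relies on a numerical inverse Laplace transform; the sketch below shows one common generic choice, the Gaver-Stehfest algorithm, applied to a simple transfer function. It is not claimed to be the specific inversion method used in the thesis.

```python
# Hedged sketch: a generic Gaver-Stehfest numerical inverse Laplace transform,
# one common way to turn a Laplace-domain response F(s) back into f(t).
from math import factorial, log
import numpy as np

def stehfest_coefficients(N=12):
    """Gaver-Stehfest weights V_k for an even number of terms N."""
    V = np.zeros(N)
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * factorial(2 * j)
                  / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V[k - 1] = (-1) ** (k + N // 2) * s
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) at a single time t > 0."""
    V = stehfest_coefficients(N)
    ln2_t = log(2.0) / t
    return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

# Sanity check: F(s) = 1/(s + 1) inverts to f(t) = exp(-t)
for t in (0.5, 1.0, 2.0):
    print(t, invert_laplace(lambda s: 1.0 / (s + 1.0), t), np.exp(-t))
```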

    Knowledge-based fault detection using time-frequency analysis

    This work studies a fault detection method which analyzes sensor data for changes in their characteristics to detect the occurrence of faults in a dynamic system. The test system considered in this research is a Boeing-747 aircraft system, and the faults considered are actuator faults in the aircraft. The method is an alternative to conventional fault detection methods: it does not rely on analytical mathematical models but acquires knowledge about the system through experiments. In this work, we test the concept that the energy distribution of the sensor signals in the time-frequency domain can serve as a fault indicator, using a representation with finer resolution than the windowed Fourier transform. Verification of the proposed methodology is carried out in two parts. The first set of experiments treats the entire data record as a single window; the results show that the method correctly classifies more than 85% of the indicators. The second set of experiments verifies the method for online fault detection, where the mean detection delay was observed to be less than 8 seconds. We also developed a simple graphical user interface to run the online fault detection.
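    As a rough sketch of the general idea (tracking how signal energy is distributed over frequency in sliding windows and flagging deviations from a healthy baseline), the following toy detector is illustrative only; the baseline, distance measure, and threshold are assumptions, not the paper's design.

```python
# Hedged sketch (not the paper's implementation): a generic online fault
# indicator based on the time-frequency energy distribution of a sensor signal.
import numpy as np
from scipy.signal import spectrogram

def band_energy_profile(x, fs, nperseg=256):
    """Normalized energy per frequency bin for each short-time window."""
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=nperseg)
    energy = Sxx / (Sxx.sum(axis=0, keepdims=True) + 1e-12)
    return f, t, energy                      # energy: (freq_bins, windows)

def online_detector(x, fs, baseline, threshold=0.3):
    """Flag windows whose energy profile drifts away from the healthy baseline."""
    _, t, energy = band_energy_profile(x, fs)
    drift = np.abs(energy - baseline[:, None]).sum(axis=0)   # L1 distance per window
    return t[drift > threshold]              # times at which a fault is indicated

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
healthy = np.sin(2 * np.pi * 5 * t) + 0.05 * np.random.randn(t.size)
_, _, e = band_energy_profile(healthy, fs)
baseline = e.mean(axis=1)                    # average healthy energy profile

faulty = healthy.copy()
faulty[5000:] += 0.5 * np.sin(2 * np.pi * 80 * t[5000:])     # fault after t = 5 s
print(online_detector(faulty, fs, baseline)[:3])              # first flagged times
```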

    Fault Detection and Diagnosis with Imbalanced and Noisy Data: A Hybrid Framework for Rotating Machinery

    Fault diagnosis plays an essential role in reducing the maintenance costs of rotating machinery manufacturing systems. In many real applications of fault detection and diagnosis, data tend to be imbalanced, meaning that the number of samples for some fault classes is much smaller than the number of normal data samples. At the same time, in industrial conditions, accelerometers encounter high levels of disruptive signals, so the collected samples are heavily noisy. As a consequence, many traditional Fault Detection and Diagnosis (FDD) frameworks show poor classification performance in real-world circumstances. Three main solutions have been proposed in the literature to cope with this problem: (1) generative algorithms that increase the number of under-represented input samples, (2) classifiers powerful enough to learn from imbalanced and noisy data, and (3) efficient data pre-processing, including feature extraction and data augmentation. This paper proposes a hybrid framework which uses these three components to achieve an effective signal-based FDD system for imbalanced conditions. Specifically, it first extracts fault features using Fourier and wavelet transforms to make full use of the signals. It then employs Wasserstein Generative Adversarial Networks (WGAN) to generate synthetic samples that populate the rare fault classes and enhance the training set. Moreover, to achieve higher performance, a novel combination of Convolutional Long Short-Term Memory (CLSTM) and Weighted Extreme Learning Machine (WELM) is proposed. To verify the effectiveness of the developed framework, dataset settings with different imbalance severities and noise levels were used. The comparative results demonstrate that, across these scenarios, GAN-CLSTM-ELM outperforms the other state-of-the-art FDD frameworks.
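    Of the components above, the Weighted Extreme Learning Machine is the simplest to illustrate; the sketch below is a generic textbook WELM with class-frequency weights, not the authors' exact implementation, and the toy data and hyperparameters are assumptions.

```python
# Hedged sketch: a minimal Weighted Extreme Learning Machine (WELM), the kind
# of class-weighted classifier the paper combines with CLSTM.
import numpy as np

class WeightedELM:
    def __init__(self, n_hidden=200, C=1.0, seed=0):
        self.n_hidden, self.C, self.rng = n_hidden, C, np.random.default_rng(seed)

    def fit(self, X, y):
        n, d = X.shape
        classes, counts = np.unique(y, return_counts=True)
        # Per-sample weights inversely proportional to class frequency,
        # so rare fault classes are not drowned out by normal samples.
        w = {c: 1.0 / cnt for c, cnt in zip(classes, counts)}
        W = np.diag([w[c] for c in y])
        self.W_in = self.rng.standard_normal((d, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W_in + self.b)                    # random hidden layer
        T = np.eye(classes.size)[np.searchsorted(classes, y)]  # one-hot targets
        # Weighted regularized least squares: beta = (H' W H + I/C)^-1 H' W T
        A = H.T @ W @ H + np.eye(self.n_hidden) / self.C
        self.beta = np.linalg.solve(A, H.T @ W @ T)
        self.classes = classes
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W_in + self.b)
        return self.classes[np.argmax(H @ self.beta, axis=1)]

# Toy imbalanced example: 950 "normal" vs 50 "faulty" feature vectors
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (950, 8)), rng.normal(2, 1, (50, 8))])
y = np.array([0] * 950 + [1] * 50)
print((WeightedELM().fit(X, y).predict(X) == y).mean())
```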

    Power Quality Management and Classification for Smart Grid Application using Machine Learning

    An Efficient Wavelet-based Convolutional Transformer network (EWT-ConvT) is proposed to detect power quality disturbances in the time-frequency domain using an attention mechanism. Machine learning support further improves the network's accuracy through synthetic signal generation while keeping system complexity low in practical environments. The proposed EWT-ConvT achieves 94.42% accuracy, which is superior to other deep learning models. Disturbance detection with EWT-ConvT can also be deployed in smart grid applications for real-time embedded system development.
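    The synthetic signal generation mentioned above is typically done with parametric disturbance models; the following sketch generates labelled sag, swell, and harmonic waveforms as a stand-in for such a training-set generator, with all parameter ranges being illustrative assumptions rather than the paper's settings.

```python
# Hedged sketch: a generic parametric generator for synthetic power quality
# disturbance waveforms (sag, swell, harmonics); not the paper's exact generator.
import numpy as np

def pq_waveform(kind="sag", f0=50.0, fs=3200.0, duration=0.2, rng=None):
    rng = rng or np.random.default_rng()
    t = np.arange(0, duration, 1 / fs)
    v = np.sin(2 * np.pi * f0 * t)                     # clean fundamental
    t1, t2 = sorted(rng.uniform(0.02, duration - 0.02, size=2))
    window = (t >= t1) & (t <= t2)
    if kind == "sag":                                  # temporary amplitude dip
        v[window] *= rng.uniform(0.1, 0.9)
    elif kind == "swell":                              # temporary amplitude rise
        v[window] *= rng.uniform(1.1, 1.8)
    elif kind == "harmonics":                          # 3rd/5th/7th harmonic content
        for h in (3, 5, 7):
            v += rng.uniform(0.05, 0.2) * np.sin(2 * np.pi * h * f0 * t)
    return t, v + 0.01 * rng.standard_normal(t.size)   # small measurement noise

# Build a labelled synthetic training set
rng = np.random.default_rng(0)
X, y = [], []
for label, kind in enumerate(["sag", "swell", "harmonics"]):
    for _ in range(100):
        _, v = pq_waveform(kind, rng=rng)
        X.append(v)
        y.append(label)
X, y = np.array(X), np.array(y)
print(X.shape, y.shape)     # (300, 640) (300,)
```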