523 research outputs found

    Research of multi-concurrent fault diagnosis of rotating machinery based on VMD and KICA

    To improve multi-concurrent fault diagnosis of rotating machinery, a feature extraction method based on variational mode decomposition (VMD) and kernel independent component analysis (KICA) is proposed. First, VMD is used to raise the dimension of the single-channel vibration signal. Then, the correlation coefficient between each decomposed component and the original signal is calculated. Finally, the highly correlated components form a new observation signal, from which the fault signals are extracted by KICA. Compared with ensemble empirical mode decomposition (EEMD) combined with fast independent component analysis (FastICA), the better performance of the proposed method is demonstrated by an analysis of a rolling bearing with mixed inner-ring and outer-ring faults. Furthermore, an experiment with a mixed rolling-bearing outer-ring fault and gear-breaking fault verifies the effectiveness of the method. The results demonstrate that the proposed method is efficient for fault diagnosis of single-channel vibration signals of rotating machinery with multi-concurrent faults.
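    The decompose-select-separate pipeline described above can be sketched as follows. This is only an illustration: `toy_decompose` is a crude FFT band-split stand-in for VMD (a real implementation such as the `vmdpy` package would replace it), and scikit-learn's FastICA stands in for KICA, which scikit-learn does not provide. The signal and threshold are invented for the example.

```python
import numpy as np
from sklearn.decomposition import FastICA

def toy_decompose(x, n_modes=4):
    """Split a 1-D signal into frequency bands (stand-in for VMD)."""
    X = np.fft.rfft(x)
    edges = np.linspace(0, len(X), n_modes + 1, dtype=int)
    modes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        Y = np.zeros_like(X)
        Y[lo:hi] = X[lo:hi]
        modes.append(np.fft.irfft(Y, n=len(x)))
    return np.array(modes)                       # (n_modes, n_samples)

def select_modes(modes, x, threshold=0.1):
    """Keep modes whose correlation with the original signal is high."""
    corr = np.array([abs(np.corrcoef(m, x)[0, 1]) for m in modes])
    return modes[corr > threshold]

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024, endpoint=False)
# Two mixed "fault" tones plus a little noise, on a single channel.
x = (np.sin(2 * np.pi * 13 * t)
     + 0.5 * np.sin(2 * np.pi * 200 * t)
     + 0.05 * rng.standard_normal(1024))

modes = toy_decompose(x)
obs = select_modes(modes, x)                     # new observation signal
# FastICA stands in for KICA here.
sources = FastICA(n_components=len(obs),
                  random_state=0).fit_transform(obs.T)
print(sources.shape)                             # one column per separated source
```

    The key step is the correlation filter: only the decomposed components that resemble the original signal are passed to the ICA stage, which keeps spurious modes out of the separation.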

    Monitoring Nonlinear and Non-Gaussian Processes Using Gaussian Mixture Model-Based Weighted Kernel Independent Component Analysis


    Two view learning: SVM-2K, theory and practice

    Kernel methods make it relatively easy to define complex high-dimensional feature spaces. This raises the question of how we can identify the relevant subspaces for a particular learning task. When two views of the same phenomenon are available, kernel Canonical Correlation Analysis (KCCA) has been shown to be an effective preprocessing step that can improve the performance of classification algorithms such as the Support Vector Machine (SVM). This paper takes this observation to its logical conclusion and proposes a method that combines this two-stage learning (KCCA followed by SVM) into a single optimisation termed SVM-2K. We present both experimental and theoretical analysis of the approach, showing encouraging results and insights.

    Common Representation Learning Using Step-based Correlation Multi-Modal CNN

    Deep learning techniques have been successfully used in learning a common representation for multi-view data, wherein the different modalities are projected onto a common subspace. In a broader perspective, the techniques used to investigate common representation learning fall under the categories of canonical correlation-based approaches and autoencoder-based approaches. In this paper, we investigate the performance of deep autoencoder-based methods on multi-view data. We propose a novel step-based correlation multi-modal CNN (CorrMCNN) which reconstructs one view of the data given the other while increasing the interaction between the representations at each hidden layer or every intermediate step. Finally, we evaluate the performance of the proposed model on two benchmark datasets - MNIST and XRMB. Through extensive experiments, we find that the proposed model achieves better performance than the current state-of-the-art techniques on joint common representation learning and transfer learning tasks.
    Comment: Accepted at the Asian Conference on Pattern Recognition (ACPR 2017).

    Kernel based speaker specific feature extraction and its applications in iTaukei cross language speaker recognition

    Extraction and classification algorithms based on kernel nonlinear features are a popular new direction of research in machine learning. This paper considers their practical application in an iTaukei automatic speaker recognition (ASR) system for cross-language speaker recognition. Nonlinear speaker-specific extraction methods such as kernel principal component analysis (KPCA), kernel independent component analysis (KICA), and kernel linear discriminant analysis (KLDA) are summarized. Their effect on subsequent classification was tested in conjunction with Gaussian mixture model (GMM) learning algorithms; in most cases, the kernel transformations were found to have a beneficial effect on classification performance, and the best results were achieved by the KLDA algorithm. The performance of the ASR system is evaluated from clear speech across a wide range of speech qualities using the ATR Japanese C-language corpus and a self-recorded iTaukei corpus. For 6 s of the ATR Japanese C-language corpus, the ASR accuracies of the KLDA, KICA, and KPCA techniques are 99.7%, 99.6%, and 99.1%, and the equal error rates (EER) are 1.95%, 2.31%, and 3.41%, respectively. The EER improvement of the KLDA-based ASR system compared with KICA and KPCA is 4.25% and 8.51%, respectively.
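    The kernel-feature-plus-GMM pattern evaluated in the paper can be sketched as follows, using KPCA (the one of the three methods available directly in scikit-learn): map features into a kernel subspace, fit one GMM per speaker, and score a test vector by log-likelihood. The data below are synthetic stand-ins for real speech features, and all dimensions and parameters are invented for the example.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
spk0 = rng.standard_normal((100, 12)) + 2.0      # "speaker 0" feature vectors
spk1 = rng.standard_normal((100, 12)) - 2.0      # "speaker 1" feature vectors

# Nonlinear speaker-specific feature extraction (KPCA with an RBF kernel).
kpca = KernelPCA(n_components=4, kernel="rbf").fit(np.vstack([spk0, spk1]))
f0, f1 = kpca.transform(spk0), kpca.transform(spk1)

# One GMM per speaker, trained on that speaker's kernel features.
gmms = [GaussianMixture(n_components=2, random_state=0).fit(f)
        for f in (f0, f1)]

# Classify a new utterance by the highest average log-likelihood.
test = kpca.transform(rng.standard_normal((1, 12)) + 2.0)  # drawn near speaker 0
scores = [g.score(test) for g in gmms]
print(int(np.argmax(scores)))                    # predicted speaker index
```

    Swapping KPCA for KICA or KLDA changes only the extraction step; the per-speaker GMM scoring stays the same, which is why the paper can compare the three transforms under one evaluation protocol.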