
    Application of Information-Geometric Support Vector Machine on Fault Diagnosis of Hydraulic Pump

    The growing demand for safety and reliability in industry has driven the development of condition monitoring and fault diagnosis technologies. The hydraulic pump is a critical component of a hydraulic system, and its diagnosis is crucial for reliability. This paper presents a method based on the information-geometric support vector machine (IG-SVM) for fault diagnosis of hydraulic pumps. The IG-SVM uses information geometry to modify the SVM, improving its performance in a data-dependent way. To diagnose pump faults, a residual error generator is designed based on the IG-SVM. This generator is first trained on data from the normal state; it can then be used for fault clustering by analysis of the residual error. The method's feasibility and efficiency have been validated on a plunger pump test bed.
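    The abstract does not spell out the IG-SVM implementation, so the sketch below illustrates only the general residual-error idea it describes: train a one-class model on normal-state data and flag points whose residual falls outside what was seen in training. scikit-learn's OneClassSVM stands in for the IG-SVM, and the data and threshold rule are purely illustrative.

```python
# A minimal sketch of residual-error-based fault detection, with
# scikit-learn's OneClassSVM standing in for the paper's IG-SVM.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 8))   # features from a healthy pump
faulty = rng.normal(2.0, 1.5, size=(50, 8))    # features from a faulted state

model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(normal)

# Treat the signed decision function as the residual error: near zero or
# positive for normal behaviour, increasingly negative as it deviates.
residual_normal = model.decision_function(normal)
residual_faulty = model.decision_function(faulty)

threshold = residual_normal.min()              # simplistic threshold choice
print("faults flagged:", np.sum(residual_faulty < threshold), "of", len(faulty))
```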

    Chaotic information-geometric support vector machine and its application to fault diagnosis of hydraulic pumps

    Fault diagnosis of rotating machinery is becoming increasingly important because of the complexity of modern industrial systems and growing demands for quality, cost efficiency, reliability, and safety. In this study, an information-geometric support vector machine used in conjunction with chaos theory (chaotic IG-SVM) is presented and applied to practical fault diagnosis of hydraulic pumps, which are critical components of aircraft. First, phase-space reconstruction from chaos theory is used to determine the dimensions of the input vectors for the IG-SVM, which uses information geometry to modify the SVM and improves performance in a data-dependent manner without prior knowledge or manual intervention. The chaotic IG-SVM is trained on a dataset from the fault-free normal state, and a residual error generator is then designed to detect failures based on the trained model. Failures are detected by analyzing the residual error, and the same residual analysis supports fault clustering. Finally, two case studies are presented to validate the performance and effectiveness of the proposed method.
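    The phase-space reconstruction step mentioned above is classical Takens delay embedding. A minimal sketch follows, assuming an embedding dimension m and delay tau that are fixed in advance; the paper derives these from chaos-theoretic analysis of the signal, so the values here are only illustrative.

```python
import numpy as np

def delay_embed(x, m, tau):
    """Takens delay embedding: map a scalar series x into m-dimensional
    phase-space vectors [x(t), x(t + tau), ..., x(t + (m - 1) * tau)]."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

# Toy vibration-like signal; m and tau are illustrative, not the values
# the paper would select.
t = np.linspace(0, 20 * np.pi, 4000)
signal = np.sin(t) + 0.3 * np.sin(3.1 * t)
X = delay_embed(signal, m=4, tau=10)
print(X.shape)   # (3970, 4) -- rows become input vectors for the IG-SVM
```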

    Total Jensen divergences: Definition, Properties and k-Means++ Clustering

    We present a novel class of divergences induced by a smooth convex function, called total Jensen divergences. These total Jensen divergences are invariant to rotations by construction, a feature that regularizes ordinary Jensen divergences by a conformal factor. We analyze the relationships between this novel class and the recently introduced total Bregman divergences. We then define the total Jensen centroids as average distortion minimizers and study their robustness to outliers. Finally, we prove that the k-means++ initialization, which bypasses explicit centroid computations, is good enough in practice to probabilistically guarantee a constant approximation factor to the optimal k-means clustering.
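    For orientation, the standard definitions this abstract builds on (stated here from general knowledge, not copied from the paper): the Jensen divergence induced by a strictly convex generator F, and the total Bregman divergence, whose conformal normalization the total Jensen divergence mirrors. The paper's exact conformal factor is not reproduced here.

```latex
% Jensen divergence induced by a strictly convex generator F:
\[
  J_F(p, q) = \frac{F(p) + F(q)}{2} - F\!\left(\frac{p + q}{2}\right).
\]
% Bregman divergence and its "total" (conformally normalized) variant:
\[
  B_F(p, q) = F(p) - F(q) - \langle p - q, \nabla F(q) \rangle,
  \qquad
  tB_F(p, q) = \frac{B_F(p, q)}{\sqrt{1 + \lVert \nabla F(q) \rVert^{2}}}.
\]
```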

    Automatic Defect Detection for TFT-LCD Array Process Using Quasiconformal Kernel Support Vector Data Description

    Defect detection is considered an efficient way to increase the yield rate of panels in thin film transistor liquid crystal display (TFT-LCD) manufacturing. In this study, we focus on the array process, since it is the first and key process in TFT-LCD manufacturing. Various defects occur in the array process, and some of them can cause great damage to the LCD panels. Thus, designing a method that can robustly detect defects in images captured from the surface of LCD panels is crucial. Previously, support vector data description (SVDD) has been successfully applied to LCD defect detection, but its generalization performance is limited. In this paper, we propose a novel one-class machine learning method, called quasiconformal kernel SVDD (QK-SVDD), to address this issue. QK-SVDD significantly improves the generalization performance of traditional SVDD by introducing a quasiconformal transformation into a predefined kernel. Experimental results on real LCD images provided by an LCD manufacturer in Taiwan indicate that the proposed QK-SVDD not only obtains a high defect detection rate of 96%, but also improves the generalization performance of SVDD by over 30%. In addition, the QK-SVDD defect detector can complete defect detection on an LCD image within 60 ms.
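    The quasiconformal transformation referred to above follows the general conformal-kernel construction K~(x, x') = c(x) c(x') K(x, x') for a positive factor c. A hedged sketch, with a Gaussian bump around a reference point standing in for the paper's (unspecified here) choice of c:

```python
# Sketch of a quasiconformal kernel transformation. The factor c(.) below
# is an illustrative Gaussian bump, not the QK-SVDD paper's exact choice.
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def conformal_factor(X, e, tau=1.0):
    return np.exp(-tau * ((X - e) ** 2).sum(-1))

def quasiconformal_kernel(X, Y, e):
    cX, cY = conformal_factor(X, e), conformal_factor(Y, e)
    return cX[:, None] * rbf_kernel(X, Y) * cY[None, :]

X = np.random.default_rng(1).normal(size=(5, 3))
K = quasiconformal_kernel(X, X, e=X.mean(axis=0))
# The transformed matrix is still a valid psd kernel:
print(np.allclose(K, K.T), np.linalg.eigvalsh(K).min() >= -1e-10)
```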

    Adaptive spectrum transformation by topology preserving on indefinite proximity data

    Similarity-based representation generates indefinite matrices, which are inconsistent with classical kernel-based learning frameworks. In this paper, we present an adaptive spectrum transformation method that provides a positive semidefinite (psd) kernel consistent with the intrinsic geometry of proximity data. In the proposed method, an indefinite similarity matrix is rectified by maximizing the Euclidean factor (EF) criterion, which represents the similarity of the resulting feature space to Euclidean space. This maximization is achieved by modifying volume elements through a conformal transform applied over the similarity matrix. We performed several experiments to evaluate the performance of the proposed method in comparison with the flip, clip, shift, and square spectrum transformation techniques on similarity matrices. Applying the resulting psd matrices as kernels in dimensionality reduction and clustering problems confirms the success of the proposed approach in adapting to the data and preserving its topological information. Our experiments show that in classification applications, the superiority of the proposed method is considerable when the negative eigenfraction of the similarity matrix is significant.
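    The four baseline spectrum transformations named above are standard and easy to state; a sketch follows (the paper's own adaptive, EF-maximizing conformal transform is not reconstructed here):

```python
# Baseline spectrum transformations for a symmetric indefinite
# similarity matrix S: flip, clip, shift, and square.
import numpy as np

def transform_spectrum(S, mode):
    lam, V = np.linalg.eigh(S)           # S symmetric, possibly indefinite
    if mode == "flip":
        lam = np.abs(lam)                # reflect negative eigenvalues
    elif mode == "clip":
        lam = np.maximum(lam, 0.0)       # zero out negative eigenvalues
    elif mode == "shift":
        lam = lam - min(lam.min(), 0.0)  # shift the spectrum to be nonnegative
    elif mode == "square":
        lam = lam ** 2                   # equivalent to using S @ S
    return (V * lam) @ V.T               # rebuilt psd kernel matrix

S = np.array([[1.0, 0.9, -0.4],
              [0.9, 1.0, 0.2],
              [-0.4, 0.2, 1.0]])
for mode in ("flip", "clip", "shift", "square"):
    print(mode, np.linalg.eigvalsh(transform_spectrum(S, mode)).min() >= -1e-12)
```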

    Positive semi-definite embedding for dimensionality reduction and out-of-sample extensions

    In machine learning and statistics, it is often desirable to reduce the dimensionality of a sample of data points in a high-dimensional space $\mathbb{R}^d$. This paper introduces a dimensionality reduction method in which the embedding coordinates are the eigenvectors of a positive semi-definite kernel obtained as the solution of an infinite-dimensional analogue of a semi-definite program. This embedding is adaptive and non-linear. A main feature of our approach is the existence of a non-linear out-of-sample extension formula for the embedding coordinates, called a projected Nyström approximation. This extrapolation formula yields an extension of the kernel matrix to a data-dependent Mercer kernel function. Our empirical results indicate that this embedding method is more robust to the influence of outliers than a spectral embedding method.
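    The paper's projected Nyström approximation is a variant of the classical Nyström out-of-sample extension, which embeds a new point by projecting its kernel similarities onto the training eigenvectors. The sketch below is the classical construction, not the paper's projected variant.

```python
# Classical Nystrom out-of-sample extension for a kernel embedding:
#   phi_i(x) = sum_j k(x, x_j) * v_ij / sqrt(lambda_i)
import numpy as np

def rbf(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))              # training sample in R^d
K = rbf(X, X)
lam, V = np.linalg.eigh(K)
lam, V = lam[::-1][:2], V[:, ::-1][:, :2]  # top-2 eigenpairs -> 2-D embedding

train_embedding = V * np.sqrt(lam)         # coordinates of training points

x_new = rng.normal(size=(1, 5))
k_new = rbf(x_new, X)                      # similarities to training points
new_embedding = (k_new @ V) / np.sqrt(lam) # Nystrom extension of x_new
print(train_embedding.shape, new_embedding.shape)
```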

    Kernel learning over the manifold of symmetric positive definite matrices for dimensionality reduction in a BCI application

    In this paper, we propose a kernel for nonlinear dimensionality reduction over the manifold of Symmetric Positive Definite (SPD) matrices in a Motor Imagery (MI)-based Brain Computer Interface (BCI) application. The proposed kernel, which is based on Riemannian geometry, aims to preserve the topology of data points in the feature space; topology preservation is the main challenge in nonlinear dimensionality reduction (NLDR). Our main idea is to decrease the non-Euclidean characteristics of the manifold by modifying the volume elements. We apply a conformal transform over a data-dependent isometric mapping to reduce the negative eigenfraction and thereby learn a data-dependent kernel over the Riemannian manifold. Multiple experiments were carried out using the proposed kernel for dimensionality reduction of the SPD matrices that describe the EEG signals of dataset IIa from BCI Competition IV. The experiments show that this kernel adapts to the input data and yields promising results in comparison with the most popular manifold learning methods and the Common Spatial Pattern (CSP) technique, a reference algorithm in BCI competitions. The proposed kernel performs particularly well when data points have a complex, nonlinearly separable distribution.
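    For context, the Riemannian geometry in question is usually the affine-invariant metric on SPD matrices; a minimal sketch of its distance, applied to toy EEG covariance matrices, follows (the paper's topology-preserving kernel itself is not reconstructed here):

```python
# Affine-invariant Riemannian distance on the SPD manifold:
#   delta(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def spd_distance(A, B):
    A_inv_sqrt = fractional_matrix_power(A, -0.5)
    return np.linalg.norm(logm(A_inv_sqrt @ B @ A_inv_sqrt), "fro")

rng = np.random.default_rng(3)
E = rng.normal(size=(6, 100))              # toy 6-channel EEG epoch
F = rng.normal(size=(6, 100))              # another epoch
A = E @ E.T / 100 + 1e-6 * np.eye(6)       # sample covariance matrices (SPD)
B = F @ F.T / 100 + 1e-6 * np.eye(6)
print(spd_distance(A, B))                  # invariant to congruence transforms
```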

    A Multikernel-Like Learning Algorithm Based on Data Probability Distribution

    In kernel-based machine learning, one argument of a kernel function is typically fixed at the given samples to produce the basis functions of the solution space of a learning problem. If the collection of given samples deviates from the data distribution, the solution space spanned by these basis functions will also deviate from the true solution space of the learning problem. In this paper, a multikernel-like learning algorithm based on data probability distribution (MKDPD) is proposed, in which the parameters of a kernel function are locally adjusted according to the data probability distribution, producing different kernel functions. These kernel functions generate different Reproducing Kernel Hilbert Spaces (RKHS), and the direct sum of subspaces of these RKHS constitutes the solution space of the learning problem. Furthermore, based on the proposed MKDPD algorithm, a new algorithm for labeling newly arriving data is proposed, in which the basis functions are retrained according to the new data while their coefficients remain unchanged. The experimental results presented in this paper show the effectiveness of the proposed algorithms.
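    The abstract's core idea of locally adjusting a kernel parameter from the data distribution can be sketched with a density-adapted bandwidth; the k-nearest-neighbour rule below is an illustrative stand-in, not the MKDPD paper's exact scheme.

```python
# Density-adapted Gram matrix: each sample gets its own RBF bandwidth,
# wide in sparse regions and narrow in dense ones (illustrative rule).
import numpy as np

def local_bandwidths(X, k=5):
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return np.sort(d, axis=1)[:, k]        # distance to the k-th neighbour

def density_adapted_gram(X, k=5):
    sigma = local_bandwidths(X, k)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (sigma[:, None] * sigma[None, :]))

X = np.random.default_rng(4).normal(size=(30, 2))
G = density_adapted_gram(X)
print(G.shape, np.allclose(G, G.T))        # (30, 30) True
```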