
    Classification of EMI discharge sources using time–frequency features and multi-class support vector machine

    This paper introduces the first application of feature extraction and machine learning to Electromagnetic Interference (EMI) signals for discharge source classification in high-voltage power generating plants. This work presents an investigation of signals that represent different discharge sources, measured using EMI techniques from operating electrical machines within a power plant. The analysis involves time–frequency image calculation of the EMI signals using General Linear Chirplet Analysis (GLCT), which reveals both time- and frequency-varying characteristics. Histograms of uniform Local Binary Patterns (LBP) are implemented as a feature reduction and extraction technique for the classification of discharge sources using a Multi-Class Support Vector Machine (MCSVM). The novelty this paper introduces is the combination of GLCT and LBP to develop a new feature extraction algorithm for EMI signal classification. The proposed algorithm is demonstrated to be successful, with excellent classification accuracy achieved. For the first time, this work transfers experts' knowledge of EMI faults to an intelligent system, which could potentially be exploited to develop an automatic condition monitoring system.
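The LBP-histogram stage of a pipeline like this can be sketched in a few lines. The following is illustrative only: the GLCT time–frequency images are replaced by synthetic textures, and all names and parameters (P, R, the smoothing that distinguishes the two "discharge" classes) are assumptions, not the authors' code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

rng = np.random.default_rng(0)
P, R = 8, 1                        # neighbours / radius for the LBP operator
n_bins = P + 2                     # "uniform" LBP yields P + 2 pattern bins

def lbp_histogram(image):
    """Reduce one time-frequency image to a uniform-LBP histogram."""
    img8 = np.uint8(255 * (image - image.min()) / np.ptp(image))
    codes = local_binary_pattern(img8, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Two synthetic "discharge source" textures: raw noise vs smoothed noise.
imgs0 = [rng.normal(size=(32, 32)) for _ in range(20)]
imgs1 = [gaussian_filter(rng.normal(size=(32, 32)), sigma=1.5) for _ in range(20)]
X = np.array([lbp_histogram(im) for im in imgs0 + imgs1])
y = np.repeat([0, 1], 20)

# Multi-class SVM (here trivially two classes) on the histogram features.
clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X, y)
print("training accuracy:", clf.score(X, y))
```

The key design point is that each image, whatever its size, is reduced to a fixed-length P + 2 bin histogram before classification.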

    FEATURE REDUCTION FOR COMPUTATIONALLY EFFICIENT DAMAGE STATE CLASSIFICATION USING BINARY TREE SUPPORT VECTOR MACHINES

    This paper proposes a computationally efficient methodology for classifying damage in structural hotspots. Data collected from a sensor-instrumented lug joint subjected to fatigue loading were preprocessed using linear discriminant analysis (LDA) to extract features relevant for classification and to reduce the dimensionality of the data. The data are then reduced in the feature space by analyzing the structure of the mapped clusters and removing the data points that do not affect the construction of the interclass separating hyperplanes. The reduced data set is used to train a support vector machine (SVM) based classifier, and the classification results are compared to those obtained when the entire data set is used for training. To further improve the efficiency of the classification scheme, the SVM classifiers are arranged in a binary tree format to reduce the number of comparisons necessary. The experimental results show that the data reduction does not diminish the classifier's ability to distinguish between classes while providing a nearly fourfold decrease in the amount of training data processed.
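The LDA-then-SVM front end described above can be sketched as follows. This is a minimal stand-in, not the paper's lug-joint data or code: the dataset is synthetic, and LDA projects to at most (number of classes − 1) discriminant directions before the SVM is trained on the reduced features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

# Synthetic multi-class "damage state" data stands in for the sensor data.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

# LDA projects 20-D data onto at most n_classes - 1 = 2 discriminant axes.
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
X_lda = lda.transform(X)

# Compare an SVM on the full features vs on the LDA-reduced features.
clf_full = SVC(kernel="rbf").fit(X, y)
clf_red = SVC(kernel="rbf").fit(X_lda, y)
print("full:", clf_full.score(X, y), "reduced:", clf_red.score(X_lda, y))
```

The paper's further pruning step (dropping points that cannot become support vectors) and the binary-tree arrangement of classifiers would sit on top of this reduced representation.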

    Impact of feature selection on system identification by means of NARX-SVM

    Support Vector Machines (SVM) are widely used in many fields of science, including system identification. The selection of the feature vector plays a crucial role in the SVM-based model building process. In this paper, we investigate the influence of feature vector selection on model quality. We built an SVM model with a non-linear ARX (NARX) structure. The modelled system had a SISO structure, i.e. one input signal and one output signal. The output signal was temperature, which was controlled by a Peltier module; the supply voltage of the Peltier module was the input signal. The system had a non-linear characteristic. We evaluated model quality with the fit index. Classical feature selection for an SVM with a NARX structure comes down to choosing the length of the regressor vector. For SISO models, this vector is determined by two parameters, nu and ny, which specify how many past samples of the system's input and output signals are used to form the vector of regressors. In the present research we tested two methods of building the vector of regressors, one classic and one using custom regressors. The results show that the vector of regressors obtained by the classical method can be shortened while maintaining acceptable model quality. By using custom regressors, the feature vector of the SVM can be reduced, which also reduces calculation time.
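The classical regressor construction described above (nu past inputs and ny past outputs) can be sketched with support vector regression on a toy plant. Everything below the parameter names nu/ny is an assumption: the plant, the fit-index formula (a common normalized-error definition), and the use of sklearn's SVR rather than the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 200)                     # input: supply voltage (toy)
y = np.zeros(200)
for k in range(1, 200):                         # toy non-linear SISO plant
    y[k] = 0.8 * y[k - 1] + 0.5 * np.tanh(u[k - 1])

nu, ny = 2, 2                                   # regressor lengths

def regressor(u, y, k):
    """phi(k) = [y(k-1..k-ny), u(k-1..k-nu)] -- the classical NARX vector."""
    return np.r_[y[k - ny:k][::-1], u[k - nu:k][::-1]]

start = max(nu, ny)
X = np.array([regressor(u, y, k) for k in range(start, 200)])
t = y[start:200]

model = SVR(kernel="rbf", C=10.0).fit(X, t)
fit = 100 * (1 - np.linalg.norm(t - model.predict(X))
             / np.linalg.norm(t - t.mean()))    # fit index in percent
print("fit index:", round(fit, 1), "%")
```

Shortening the regressor amounts to lowering nu and ny; the paper's custom regressors would replace `regressor()` with hand-picked delayed samples instead of a contiguous history.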

    Music genre visualization and classification exploiting a small set of high-level semantic features

    In this paper a system for continuous analysis, visualization and classification of musical streams is proposed. The system performs its visualization and classification tasks by means of three high-level, semantic features, extracted by reducing a multidimensional low-level feature vector through the use of Gaussian Mixture Models. The visualization of the semantic characteristics of the audio stream is implemented by mapping the values of the high-level features onto a triangular plot and assigning a primary color to each feature. In this manner, besides representing the musical evolution of the signal, we also obtain representative colors for each musical part of the analyzed streams. The classification exploits a set of one-against-one three-dimensional Support Vector Machines trained on target genres. The results obtained on the visualization and classification tasks are very encouraging: our tests on heterogeneous genre streams have shown the validity of the proposed approach.
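The reduction from a multidimensional low-level vector to three semantic scores, followed by one-against-one SVMs, can be sketched as below. This is a loose reading of the abstract, not the authors' system: here each semantic feature is taken to be the log-likelihood under a per-class GMM, and the audio features are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Three toy "genre" clusters in a 10-D low-level feature space.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 10)) for c in (0, 2, 4)])
y = np.repeat([0, 1, 2], 50)

# One GMM per semantic dimension; its log-likelihood is the high-level feature.
gmms = [GaussianMixture(n_components=2, covariance_type="diag",
                        random_state=0).fit(X[y == c]) for c in range(3)]
X_sem = np.column_stack([g.score_samples(X) for g in gmms])   # N x 3

# One-against-one SVMs on the three-dimensional semantic features.
clf = SVC(decision_function_shape="ovo").fit(X_sem, y)
print("training accuracy:", clf.score(X_sem, y))
```

The three columns of `X_sem` are exactly what the paper maps onto a triangular plot with one primary color per feature.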

    Autoencoding the Retrieval Relevance of Medical Images

    Content-based image retrieval (CBIR) of medical images is a crucial task that can contribute to more reliable diagnosis if applied to big data. Recent advances in feature extraction and classification have enormously improved CBIR results for digital images. However, considering the increasing accessibility of big data in medical imaging, we still need to reduce both the memory requirements and the computational expense of image retrieval systems. This work proposes to exclude the features of image blocks that exhibit a low encoding error when learned by an n/p/n autoencoder (p < n). We examine the histogram of autoencoding errors of image blocks for each image class to facilitate the decision as to which image regions, or roughly what percentage of an image, shall be declared relevant for the retrieval task. This leads to a reduction of feature dimensionality and speeds up the retrieval process. To validate the proposed scheme, we employ local binary patterns (LBP) and support vector machines (SVM), both well-established approaches in the CBIR research community, and use the IRMA dataset with 14,410 x-ray images as test data. The results show that the dimensionality of annotated feature vectors can be reduced by up to 50%, resulting in speedups greater than 27% at the expense of less than a 1% decrease in retrieval accuracy when validating the precision and recall of the top 20 hits.
    Comment: To appear in proceedings of The 5th International Conference on Image Processing Theory, Tools and Applications (IPTA'15), Nov 10-13, 2015, Orléans, France
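The block-relevance idea, keeping only blocks with high encoding error under a p < n bottleneck, can be illustrated numerically. As a deliberate simplification, PCA stands in here for the paper's n/p/n autoencoder (PCA is the optimal linear encoder of that shape); the blocks and the median threshold are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n, p = 64, 8                      # block dimension n, bottleneck size p < n

# 250 easy-to-compress blocks (low-rank structure) + 250 full-rank noise blocks.
easy = rng.normal(size=(250, p)) @ rng.normal(size=(p, n))
hard = rng.normal(size=(250, n))
blocks = np.vstack([easy, hard])

# Linear n -> p -> n encoder; per-block reconstruction (encoding) error.
pca = PCA(n_components=p).fit(blocks)
recon = pca.inverse_transform(pca.transform(blocks))
err = np.mean((blocks - recon) ** 2, axis=1)

# Declare "relevant" the blocks the bottleneck cannot explain well.
relevant = err > np.median(err)
print(relevant.sum(), "of", len(blocks), "blocks retained")
```

The histogram of `err`, computed per image class, is what the paper inspects to decide what fraction of each image to keep.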

    Properties of Support Vector Machines

    Support Vector Machines (SVMs) perform pattern recognition between two point classes by finding a decision surface determined by certain points of the training set, termed Support Vectors (SVs). This surface, which in some feature space of possibly infinite dimension can be regarded as a hyperplane, is obtained from the solution of a quadratic programming problem that depends on a regularization parameter. In this paper we study some mathematical properties of support vectors and show that the decision surface can be written as the sum of two orthogonal terms, the first depending only on the margin vectors (the SVs lying on the margin) and the second proportional to the regularization parameter. For almost all values of the parameter, this enables us to predict how the decision surface varies for small parameter changes. In the special but important case of a feature space of finite dimension m, we also show that there are at most m+1 margin vectors and observe that m+1 SVs are usually sufficient to fully determine the decision surface. For relatively small m, this latter result leads to a considerable reduction in the number of SVs.
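The central fact, that the decision surface is a sum over support vectors only, is easy to verify numerically. The sketch below uses sklearn's SVC on synthetic data (not the paper's quadratic program) and reconstructs the decision function f(x) = Σᵢ αᵢ yᵢ ⟨xᵢ, x⟩ + b from the SVs alone.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
m = 2                                      # feature-space dimension
X = np.vstack([rng.normal(-2, 1, (50, m)), rng.normal(2, 1, (50, m))])
y = np.repeat([-1, 1], 50)

clf = SVC(kernel="linear", C=1.0).fit(X, y)
sv = clf.support_vectors_                  # only these points matter
alpha_y = clf.dual_coef_.ravel()           # alpha_i * y_i for each SV

# f(x) = sum_i alpha_i y_i <x_i, x> + b, summed over SVs only.
f_manual = (alpha_y * (sv @ X[0])).sum() + clf.intercept_[0]
assert np.isclose(f_manual, clf.decision_function(X[:1])[0])
print(len(sv), "support vectors in dimension m =", m)
```

On well-separated data the SV count is small, consistent with the paper's observation that m+1 SVs usually suffice in an m-dimensional feature space.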

    A Fast Two-Stage Classification Method of Support Vector Machines

    Classification of high-dimensional data generally requires enormous processing time. In this paper, we present a fast two-stage support vector machine method that comprises a feature reduction algorithm and a fast multiclass method. First, principal component analysis is applied to the data for feature reduction and decorrelation, and a feature selection method is then used to further reduce the feature dimensionality. The criterion based on the Bhattacharyya distance is revised to remove the influence of binary problems with large distances. Moreover, a simple method is proposed to reduce the processing time of multiclass problems: the binary SVM with the fewest support vectors (SVs) is selected iteratively to exclude the less similar class until the final result is obtained. In experiments on the 92AV3C hyperspectral dataset, the results demonstrate that the proposed method achieves much faster classification while preserving the high classification accuracy of SVMs.
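The PCA-plus-Bhattacharyya front end can be sketched as follows. This uses the closed-form Bhattacharyya distance between one-dimensional Gaussians and synthetic data in place of the 92AV3C scene; the paper's revised criterion and SV-count-based multiclass ordering are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# Synthetic binary data stands in for the hyperspectral scene.
X, y = make_classification(n_samples=400, n_features=30, n_informative=6,
                           random_state=0)
X_pca = PCA(n_components=10).fit_transform(X)     # decorrelate + reduce

def bhattacharyya_1d(a, b):
    """Bhattacharyya distance between two 1-D Gaussian class distributions."""
    ma, mb, va, vb = a.mean(), b.mean(), a.var(), b.var()
    return (0.25 * (ma - mb) ** 2 / (va + vb)
            + 0.5 * np.log((va + vb) / (2 * np.sqrt(va * vb))))

# Rank the decorrelated features by class separability, keep the best few.
scores = np.array([bhattacharyya_1d(X_pca[y == 0, j], X_pca[y == 1, j])
                   for j in range(X_pca.shape[1])])
keep = np.argsort(scores)[::-1][:4]

clf = SVC().fit(X_pca[:, keep], y)
print("training accuracy on", len(keep), "features:", clf.score(X_pca[:, keep], y))
```

Because PCA decorrelates the features first, scoring them one at a time with a univariate distance is a reasonable approximation, which is what makes this stage cheap.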