
    Online power quality disturbance detection by support vector machine in smart meter

    Power quality assessment is an important performance measurement in smart grids. Utility companies are interested in power quality monitoring even on the low-voltage distribution side, such as in smart meters. Addressing this issue, in this study we propose segregating power disturbances from regular values using a one-class support vector machine (OCSVM). To precisely detect the power disturbances of a voltage wave, practical wavelet filters are applied. Considering the unlimited types of waveform abnormalities, the OCSVM is chosen as a semi-supervised machine learning algorithm that needs to be trained solely on a relatively large sample of normal data. The model automatically detects the presence of any type of disturbance in real time, including unknown types that were not available at training time. When a disturbance is present, it is further classified into types such as sag, swell, transient, and unbalance. Being lightweight and fast, the proposed technique can be integrated into smart grid devices such as smart meters to perform real-time disturbance monitoring. Continuous monitoring of power quality in smart meters will give helpful insight for quality power transmission and management.
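    As a rough illustration of this one-class approach (a sketch, not the paper's implementation: the wavelet filter stage and meter integration are omitted, and the per-cycle RMS/peak features are assumptions for this toy example), a normal-only training set of voltage cycles can be fit with scikit-learn's OneClassSVM and then used to flag a synthetic voltage sag:

```python
# Illustrative sketch only: trains a one-class SVM on normal voltage
# cycles and flags a 50% sag as a disturbance. Feature choice (RMS and
# peak magnitude per cycle) is an assumption made for this toy example.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

def window_features(v):
    """Simple features of one voltage window: RMS and peak magnitude."""
    return [np.sqrt(np.mean(v ** 2)), np.max(np.abs(v))]

t = np.linspace(0, 1, 64, endpoint=False)  # one AC cycle, 64 samples

# Train only on normal cycles (the semi-supervised, one-class setting):
# random phase plus a little measurement noise.
normal = [np.sin(2 * np.pi * t + rng.uniform(0, 2 * np.pi))
          + 0.01 * rng.standard_normal(64) for _ in range(200)]
X_train = np.array([window_features(v) for v in normal])

model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_train)

# A 50% voltage sag was never seen in training, yet it is flagged (-1).
sag = 0.5 * np.sin(2 * np.pi * t)
print(model.predict([window_features(sag)]))  # [-1]
```

This mirrors the abstract's key property: only normal data is needed for training, and previously unseen disturbance types still fall outside the learned boundary.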

    Fast Incremental SVDD Learning Algorithm with the Gaussian Kernel

    Support vector data description (SVDD) is a machine learning technique used for single-class classification and outlier detection. The idea of SVDD is to find a set of support vectors that defines a boundary around the data. When dealing with online or large data, existing batch SVDD methods have to be rerun in each iteration. We propose an incremental learning algorithm for SVDD, called fast incremental SVDD (FISVDD), that uses the Gaussian kernel. The algorithm builds on the observation that all support vectors on the boundary have the same distance to the center of the sphere in the higher-dimensional feature space induced by the Gaussian kernel. Each iteration involves only the existing support vectors and the new data point. Moreover, the algorithm is based solely on matrix manipulations; the support vectors and their corresponding Lagrange multipliers α_i are automatically selected and determined in each iteration. The complexity of the algorithm in each iteration is only O(k^2), where k is the number of support vectors. Experimental results on several real data sets indicate that FISVDD achieves significant gains in efficiency with almost no loss in either outlier detection accuracy or objective function value. Comment: 18 pages, 1 table, 4 figures.
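    The equidistance observation can be checked numerically. As an assumption for this sketch, suppose every point ends up as a boundary support vector (which holds when the box constraint is loose); then the equality-constrained SVDD dual has the closed form α = K⁻¹1 / (1ᵀK⁻¹1), and the distances from the support vectors to the center all coincide:

```python
# Sketch of the key observation behind FISVDD (not the paper's full
# incremental algorithm): with the Gaussian kernel, K(x, x) = 1 for
# every x, and at the SVDD optimum all boundary support vectors are
# equidistant from the center. Assumes all points are boundary support
# vectors, so alpha has a closed form via matrix manipulation alone.
import numpy as np

rng = np.random.default_rng(1)

def gaussian_kernel(X, Y, sigma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

X = rng.standard_normal((6, 2))      # six candidate support vectors
K = gaussian_kernel(X, X)            # note: diag(K) == 1 for any data
ones = np.ones(len(X))
alpha = np.linalg.solve(K, ones)
alpha /= alpha.sum()                 # enforce sum(alpha) == 1

# Squared distance of each x_i to the center a = sum_j alpha_j phi(x_j):
#   ||phi(x_i) - a||^2 = 1 - 2 (K alpha)_i + alpha^T K alpha
d2 = 1 - 2 * K @ alpha + alpha @ K @ alpha
print(np.ptp(d2))  # spread is ~0: every support vector shares one radius
```

Because K α is a constant vector at this solution, the distance formula gives the same value for every i, which is exactly the property the incremental update exploits.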

    Bounded Coordinate-Descent for Biological Sequence Classification in High Dimensional Predictor Space

    We present a framework for discriminative sequence classification in which the learner works directly in the high-dimensional predictor space of all subsequences in the training set. This is made possible by a new coordinate-descent algorithm that bounds the magnitude of the gradient in order to select discriminative subsequences quickly. We characterize the loss functions to which our generic learning algorithm applies and present concrete implementations for logistic regression (binomial log-likelihood loss) and support vector machines (squared hinge loss). Applying the algorithm to protein remote homology detection and remote fold recognition yields performance comparable to that of state-of-the-art methods (e.g., kernel support vector machines). Unlike state-of-the-art classifiers, the resulting classification models are simply lists of weighted discriminative subsequences and can therefore be interpreted and related to the underlying biological problem.
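    A minimal sketch of the idea with the logistic loss, assuming a made-up toy dataset and exhaustive substring enumeration in place of the paper's gradient-bound pruning:

```python
# Toy sketch of greedy coordinate descent over all substrings of the
# training sequences with logistic (binomial log-likelihood) loss.
# The paper *bounds* the gradient magnitude to prune the substring
# search; here we enumerate exhaustively for clarity. The sequences
# and labels below are invented for illustration.
import math

pos = ["gatt", "agta", "ttga"]   # label +1
neg = ["cctc", "tccc", "ctct"]   # label -1
data = [(s, 1) for s in pos] + [(s, -1) for s in neg]

def substrings(s, max_len=3):
    return {s[i:j] for i in range(len(s))
            for j in range(i + 1, min(i + max_len, len(s)) + 1)}

features = sorted(set().union(*(substrings(s) for s, _ in data)))
w = {f: 0.0 for f in features}

def score(s):
    return sum(w[f] for f in substrings(s))

for _ in range(30):
    # Gradient of the logistic loss w.r.t. each substring's weight.
    grad = {f: 0.0 for f in features}
    for s, y in data:
        g = -y / (1.0 + math.exp(y * score(s)))  # d loss / d score
        for f in substrings(s):
            grad[f] += g
    # Greedily update the coordinate with the largest |gradient| --
    # the selection step the paper makes fast via its gradient bound.
    best = max(features, key=lambda f: abs(grad[f]))
    w[best] -= 0.5 * grad[best]

preds = [1 if score(s) > 0 else -1 for s, _ in data]
acc = sum(p == y for (_, y), p in zip(data, preds)) / len(data)
print(acc)  # 1.0 on this separable toy set
```

The final model is just the dictionary `w`: a list of weighted substrings, which is the interpretability property the abstract highlights.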