
    Support Vector Regression Based S-transform for Prediction of Single and Multiple Power Quality Disturbances

    This paper presents a novel approach using a Support Vector Regression (SVR) based S-transform to predict the classes of single and multiple power quality disturbances in a three-phase industrial power system. Most power quality disturbances recorded in an industrial power system are non-stationary and comprise multiple disturbances that coexist for only a short duration, owing to the network impedances and the types of customers' connected loads. The ability to detect and predict all types of power quality disturbances embedded in a voltage signal is vital for analysing their causes and for identifying incipient faults in the network. In this paper, two types of SVR based S-transform, one using a non-linear radial basis function (RBF) kernel and one using a multilayer perceptron (MLP) kernel, were compared on their ability to predict the classes of single and multiple power quality disturbances. Analysis of 651 single and multiple voltage disturbances gave prediction accuracies of 86.1% (MLP SVR) and 93.9% (RBF SVR). Keywords: Power Quality, Power Quality Prediction, S-transform, SVM, SV
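    The abstract does not include the authors' implementation; as a rough, hypothetical sketch (the sampling parameters and the synthetic sag disturbance are invented for illustration, not taken from the paper), a discrete S-transform applied to a voltage-sag signal might look like:

```python
import numpy as np

def s_transform(x):
    """Discrete Stockwell (S-) transform, frequency-domain formulation.

    Returns an (N//2 + 1, N) complex array: row n is the time-localised
    spectrum at frequency bin n, obtained with a Gaussian window whose
    width scales inversely with frequency.
    """
    N = len(x)
    X = np.fft.fft(x)
    m = np.arange(N)
    p = np.where(m <= N // 2, m, m - N)   # symmetric shift index
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0, :] = x.mean()                    # DC row: the signal mean
    for n in range(1, N // 2 + 1):
        gauss = np.exp(-2.0 * np.pi**2 * p**2 / n**2)
        S[n, :] = np.fft.ifft(X[(n + m) % N] * gauss)
    return S

# Synthetic 50 Hz voltage with a sag (amplitude drops to 0.5 p.u.
# between 60 ms and 140 ms) -- one of the single disturbance types
# the paper classifies.
fs, dur = 3200, 0.2
t = np.arange(int(fs * dur)) / fs
amp = np.where((t >= 0.06) & (t < 0.14), 0.5, 1.0)
x = amp * np.sin(2 * np.pi * 50 * t)

S = s_transform(x)
fund = int(round(50 * dur))               # fundamental frequency bin
envelope = 2 * np.abs(S[fund, :])         # tracks the local amplitude
```

    The S-transform magnitude at the fundamental bin tracks the local signal amplitude, so the sag shows up as a dip in the envelope; time-frequency features of this kind are what would feed the SVR-based classifier.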

    From Cutting Planes Algorithms to Compression Schemes and Active Learning

    Cutting-plane methods are well-studied localization (and optimization) algorithms. We show that they provide a natural framework to perform machine learning, and not just to solve the optimization problems posed by machine learning. In particular, they allow one to learn sparse classifiers and provide good compression schemes. Moreover, we show that very little effort is required to turn them into effective active learning methods. This last property provides a generic way to design a whole family of active learning algorithms from existing passive methods. We present numerical simulations testifying to the relevance of cutting-plane methods for passive and active learning tasks. Comment: IJCNN 2015, Jul 2015, Killarney, Ireland. <http://www.ijcnn.org/>
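    As a toy instance of the cutting-plane view of active learning (a deliberate simplification, not the paper's construction), consider learning a 1D threshold classifier: the version space of consistent thresholds is an interval, each queried label is a cut that removes half of it, and the active strategy reduces to binary search:

```python
def learn_threshold(oracle, lo=0.0, hi=1.0, tol=1e-6):
    """Cutting-plane-style active learning of a 1D threshold classifier.

    The version space (thresholds consistent with all labels seen so
    far) is the interval [lo, hi]. Querying the label at its centre is
    the cut that halves the version space, so only
    O(log((hi - lo) / tol)) labels are needed instead of labelling
    every training point.
    """
    queries = 0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        queries += 1
        if oracle(mid):      # label +1: the true threshold is <= mid
            hi = mid
        else:                # label -1: the true threshold is > mid
            lo = mid
    return (lo + hi) / 2.0, queries

t_star = 0.3141              # hidden ground-truth threshold (illustrative)
est, queries = learn_threshold(lambda x: x >= t_star)
```

    Here about 20 labels pin the threshold down to within 1e-6, whereas a passive learner would need a label for every training point; the paper's contribution is showing how this version-space-cutting view carries over to general cutting-plane solvers.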

    Cholesky-factorized sparse Kernel in support vector machines

    Support Vector Machine (SVM) is one of the most powerful machine learning algorithms, owing to its convex optimization formulation and its ability to handle non-linear classification. However, one of its main drawbacks is the long time it takes to train on large data sets. This limitation often arises when applying non-linear kernels (e.g. the RBF kernel), which are usually required to obtain better separation for linearly inseparable data sets. In this thesis, we study an approach that aims to speed up training by combining the better separation of the RBF kernel with the fast training of a linear solver, LIBLINEAR. The approach uses an RBF kernel with a sparse matrix that is factorized using the Cholesky decomposition. The method is tested on large artificial and real data sets and compared to the standard RBF and linear kernels, with both accuracy and training time reported. For most data sets, the results show a huge reduction in training time, over 90%, whilst maintaining accuracy.
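    The abstract does not reproduce the thesis's code; a minimal numpy sketch of the pipeline it describes (the sparsification threshold, diagonal jitter, and synthetic data are my assumptions, not the thesis's choices) might look like:

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Dense RBF (Gaussian) kernel: K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # stand-in for a training set

K = rbf_kernel(X, gamma=2.0)
# Sparsify by zeroing near-zero kernel entries, then add a small
# diagonal jitter as insurance that the matrix stays positive definite.
K_sparse = np.where(K < 1e-3, 0.0, K) + 1e-6 * np.eye(len(X))

# K_sparse = L @ L.T with L lower triangular. The i-th row of L is an
# explicit feature vector for sample i, since <L[i], L[j]> recovers the
# (sparsified) kernel value -- so a linear solver such as LIBLINEAR
# trained on the rows of L approximates the RBF-kernel SVM.
L = np.linalg.cholesky(K_sparse)
```

    Whether thresholding preserves positive definiteness depends on the kernel parameters and the data; the thesis's actual sparsification scheme may differ from this sketch.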