On-line support vector machines for function approximation
This paper describes an on-line method for building epsilon-insensitive support vector machines for regression, as described by Vapnik (1995). The method extends the approach of Cauwenberghs and Poggio (2000) for building incremental support vector machines for classification. Machines obtained with this approach are equivalent to those obtained by exact methods such as quadratic programming, but they are built more quickly and allow the incremental addition of new points, removal of existing points, and updating of target values for existing data. This development opens the application of SVM regression to areas such as on-line prediction of time series or generalization of value functions in reinforcement learning.
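Although the paper's exact incremental algorithm is not available in common libraries, a minimal sketch of the underlying idea, online epsilon-insensitive regression with streaming updates, can be had with scikit-learn's SGDRegressor and its partial_fit method. This is only an approximation: it is linear and does not maintain exact KKT conditions the way the incremental SVM does, and the data below is synthetic.

import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
# epsilon-insensitive loss: errors within +/- epsilon incur no penalty
model = SGDRegressor(loss="epsilon_insensitive", epsilon=0.1, alpha=1e-4)

for step in range(100):                          # stream of mini-batches
    X = rng.uniform(-3, 3, size=(16, 1))
    y = np.sin(X).ravel() + rng.normal(0, 0.05, size=16)
    model.partial_fit(X, y)                      # incremental update per batch

print(model.predict(np.array([[1.0]])))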
On robustness properties of convex risk minimization methods for pattern recognition
The paper brings together methods from two disciplines: machine learning theory and robust statistics. Robustness properties of machine learning methods based on convex risk minimization are investigated for the problem of pattern recognition. Assumptions are given for the existence of the influence function of the classifiers and for bounds on the influence function. Kernel logistic regression, support vector machines, least squares, and the AdaBoost loss function are treated as special cases. A sensitivity analysis of the support vector machine is given.

Keywords: AdaBoost loss function, influence function, kernel logistic regression, robustness, sensitivity curve, statistical learning, support vector machine, total variation
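For reference, the influence function referred to here is the standard one from robust statistics (in LaTeX; $T$ denotes the functional mapping a distribution $P$ to the learned classifier, and $\delta_z$ the point mass at a contamination point $z$):

\mathrm{IF}(z;\,T,\,P) \;=\; \lim_{\varepsilon \downarrow 0} \frac{T\bigl((1-\varepsilon)P + \varepsilon\,\delta_z\bigr) - T(P)}{\varepsilon}

A bounded influence function means no single contaminating observation $z$ can change the learned classifier arbitrarily, which is the robustness property the paper establishes for the listed loss functions.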
Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets
Bayesian optimization has become a successful tool for hyperparameter optimization of machine learning algorithms, such as support vector machines or deep neural networks. Despite this success, for large datasets, training and validating a single configuration often takes hours, days, or even weeks, which limits the achievable performance. To accelerate hyperparameter optimization, we propose a generative model for the validation error as a function of training set size, which is learned during the optimization process and allows exploration of preliminary configurations on small subsets by extrapolating their performance to the full dataset. We construct a Bayesian optimization procedure, dubbed Fabolas, which models loss and training time as functions of dataset size and automatically trades off high information gain about the global optimum against computational cost. Experiments optimizing support vector machines and deep neural networks show that Fabolas often finds high-quality solutions 10 to 100 times faster than other state-of-the-art Bayesian optimization methods or the recently proposed bandit strategy Hyperband.
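A minimal sketch of the idea behind Fabolas: evaluate a configuration on growing subsets and extrapolate the validation error to the full training set. The actual method learns a Gaussian-process model over (configuration, dataset size) pairs; the simple power-law fit err(s) = a * s**(-b) + c used here is only an illustrative stand-in, and the data and SVM settings are synthetic assumptions.

import numpy as np
from scipy.optimize import curve_fit
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

sizes = np.array([200, 400, 800, 1500])          # growing training subsets
errs = [1.0 - SVC(C=1.0, gamma="scale").fit(X_tr[:s], y_tr[:s]).score(X_val, y_val)
        for s in sizes]

def power_law(s, a, b, c):                       # err(s) ~ a * s**(-b) + c
    return a * s ** (-b) + c

(a, b, c), _ = curve_fit(power_law, sizes, errs, p0=(1.0, 0.5, 0.1), maxfev=10000)
print("extrapolated error on all", len(X_tr), "points:", power_law(len(X_tr), a, b, c))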
Extreme Entropy Machines: Robust information theoretic classification
Most existing classification methods aim to minimize empirical risk (some simple point-based error measured with a loss function) with added regularization. We propose a more information-theoretic approach, investigating the applicability of entropy measures as a classification model's objective function. We focus on quadratic Rényi entropy and the connected Cauchy-Schwarz divergence, which leads to the construction of Extreme Entropy Machines (EEM). The main contribution of this paper is a model based on information-theoretic concepts which, on the one hand, offers a new, entropic perspective on known linear classifiers and, on the other, leads to a very robust method competitive with state-of-the-art non-information-theoretic ones (including Support Vector Machines and Extreme Learning Machines). Evaluation on numerous problems, spanning from small, simple ones from the UCI repository to large (hundreds of thousands of samples), extremely unbalanced (class ratios of up to 100:1) datasets, shows wide applicability of the EEM to real-life problems and that it scales well.
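A minimal sketch of the Cauchy-Schwarz divergence underlying the EEM objective, estimated from samples with Gaussian Parzen windows. The bandwidth sigma and the synthetic two-class data are assumptions; this illustrates the quantity itself, not the authors' full training procedure.

import numpy as np

def gauss_gram(X, Y, sigma):
    # Gram matrix of an isotropic Gaussian kernel of width sigma
    d = X.shape[1]
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return (2 * np.pi * sigma ** 2) ** (-d / 2) * np.exp(-sq / (2 * sigma ** 2))

def cs_divergence(X, Y, sigma=0.5):
    # D_CS(p, q) = -log( int(p q) / sqrt(int(p^2) * int(q^2)) );
    # each Parzen estimate convolves two kernels, hence width sigma*sqrt(2)
    s = sigma * np.sqrt(2)
    v_xy = gauss_gram(X, Y, s).mean()
    v_xx = gauss_gram(X, X, s).mean()
    v_yy = gauss_gram(Y, Y, s).mean()
    return -np.log(v_xy / np.sqrt(v_xx * v_yy))

rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, size=(300, 2))          # class 1 samples
B = rng.normal(2.0, 1.0, size=(300, 2))          # class 2 samples
print(cs_divergence(A, B))                       # larger = better separated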
Using Support Vector Machines for Predicting Dynamic Voltage Collapse in an Actual Power System
Abstract—This paper presents dynamic voltage collapse prediction on an actual power system using support vector machines. Dynamic voltage collapse prediction is first determined based on the PTSI calculated from information in the dynamic simulation output. Simulations were carried out on a practical 87-bus test system, considering load increase as the contingency. The data collected from the time-domain simulation is then used as input to the SVM, in which support vector regression is used as a predictor to determine the dynamic voltage collapse indices of the power system. To reduce training time and improve the accuracy of the SVM, the kernel function type and kernel parameter are considered. To verify the effectiveness of the proposed SVM method, its performance is compared with a multilayer perceptron neural network (MLPNN). Studies show that the SVM gives faster and more accurate results for dynamic voltage collapse prediction than the MLPNN.

Keywords—Dynamic voltage collapse, prediction, artificial neural network, support vector machines
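A minimal sketch of the kernel-selection step mentioned in the abstract: grid-searching the kernel type and its parameters for a support vector regressor. The time-domain simulation features and voltage-collapse-index targets are replaced here by synthetic placeholders.

import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 8))                   # placeholder simulation features
y = rng.uniform(size=500)                        # placeholder collapse indices

param_grid = [
    {"kernel": ["rbf"], "gamma": [0.01, 0.1, 1.0], "C": [1, 10, 100]},
    {"kernel": ["poly"], "degree": [2, 3], "C": [1, 10, 100]},
]
search = GridSearchCV(SVR(epsilon=0.01), param_grid, cv=5).fit(X, y)
print(search.best_params_, search.best_score_)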
Tabu search model selection for SVM
A model selection method based on tabu search is proposed to build support vector machines (binary decision functions) of reduced complexity and efficient generalization. The aim is to build a fast and efficient support vector machine classifier. A criterion is defined to evaluate decision-function quality, blending the recognition rate with the complexity of the binary decision function. The simplification level used in vector quantization, the feature subset, and the support vector machine hyperparameters are selected by tabu search to optimize this quality criterion, finding a good sub-optimal model in tractable time.
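A minimal sketch of tabu search over SVM hyperparameters (log2 C and log2 gamma on a discrete grid). The paper's criterion also folds in decision-function complexity, vector quantization, and feature selection; using plain cross-validated accuracy here is a simplifying assumption.

from collections import deque
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

def score(state):                                # quality of one configuration
    c_exp, g_exp = state
    return cross_val_score(SVC(C=2.0 ** c_exp, gamma=2.0 ** g_exp), X, y, cv=3).mean()

def neighbors(state):                            # one-step moves on the grid
    c, g = state
    return [(c + dc, g + dg) for dc in (-1, 0, 1) for dg in (-1, 0, 1)
            if (dc, dg) != (0, 0) and -5 <= c + dc <= 15 and -15 <= g + dg <= 3]

current = (0, -3)
best, best_score = current, score(current)
tabu = deque(maxlen=10)                          # short-term memory of visited states

for _ in range(20):
    cands = [s for s in neighbors(current) if s not in tabu]
    if not cands:
        break
    # take the best admissible move, even if it is worse than the current one
    cur_score, current = max((score(st), st) for st in cands)
    tabu.append(current)
    if cur_score > best_score:
        best, best_score = current, cur_score

print("best (log2 C, log2 gamma):", best, "cv accuracy:", best_score)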
Speaker-independent emotion recognition exploiting a psychologically-inspired binary cascade classification schema
In this paper, a psychologically-inspired binary cascade classification schema is proposed for speech emotion recognition. Performance is enhanced because commonly confused pairs of emotions become distinguishable from one another. Extracted features are related to statistics of pitch, formants, and energy contours, as well as spectrum, cepstrum, perceptual and temporal features, autocorrelation, MPEG-7 descriptors, Fujisaki's model parameters, voice quality, jitter, and shimmer. Selected features are fed as input to a k-nearest neighbor classifier and to support vector machines. Two kernels are tested for the latter: linear and Gaussian radial basis function. The recently proposed speaker-independent experimental protocol is tested on the Berlin emotional speech database for each gender separately. The best emotion recognition accuracy, achieved by support vector machines with the linear kernel, equals 87.7%, outperforming state-of-the-art approaches. Statistical analysis is carried out first with respect to the classifiers' error rates and then to evaluate the information expressed by the classifiers' confusion matrices.
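A minimal sketch of a binary cascade schema: each stage is a binary SVM that splits the remaining emotion labels in two, so commonly confused pairs can be separated at dedicated stages. The arousal-then-valence grouping and the synthetic features below are hypothetical; the paper's psychologically-inspired splits differ in detail.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
LABELS = np.array(["anger", "happiness", "sadness", "boredom"])
X = rng.normal(size=(400, 12))                   # placeholder acoustic features
y = rng.choice(LABELS, size=400)

HIGH = ["anger", "happiness"]                    # hypothetical high-arousal group
hi = np.isin(y, HIGH)

stage1 = SVC(kernel="linear").fit(X, hi)                # high vs low arousal
stage2_hi = SVC(kernel="linear").fit(X[hi], y[hi])      # anger vs happiness
stage2_lo = SVC(kernel="linear").fit(X[~hi], y[~hi])    # sadness vs boredom

def predict(x):
    x = x.reshape(1, -1)
    branch = stage2_hi if stage1.predict(x)[0] else stage2_lo
    return branch.predict(x)[0]

print(predict(X[0]))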