Peak Criterion for Choosing Gaussian Kernel Bandwidth in Support Vector Data Description
Support Vector Data Description (SVDD) is a machine-learning technique used
for single class classification and outlier detection. SVDD formulation with
kernel function provides a flexible boundary around data. The value of kernel
function parameters affects the nature of the data boundary. For example, it is
observed that with a Gaussian kernel, as the value of kernel bandwidth is
lowered, the data boundary changes from spherical to wiggly. The spherical data
boundary leads to underfitting, and an extremely wiggly data boundary leads to
overfitting. In this paper, we propose an empirical criterion to obtain good
values of the Gaussian kernel bandwidth parameter. This criterion provides a
smooth boundary that captures the essential geometric features of the data.
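The bandwidth effect the abstract describes can be illustrated numerically. The sketch below uses a kernel-mean (Parzen-style) one-class score as a simple stand-in for the full SVDD decision function, which is an assumption for illustration, not the paper's criterion: with a small bandwidth the score surface is spiky ("wiggly"), while a large bandwidth yields a smooth, near-spherical one.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))  # one "target" class in 2-D

def score(p, X, s):
    """Mean Gaussian-kernel similarity of point p to the training set
    (a Parzen-style stand-in for the SVDD decision function)."""
    d2 = ((X - p) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * s ** 2)).mean()

# evaluate the score on a grid covering the data
grid = np.array([[a, b] for a in np.linspace(-2, 2, 40)
                        for b in np.linspace(-2, 2, 40)])

cv = {}
for s in (0.05, 2.0):  # small vs. large bandwidth
    vals = np.array([score(p, X, s) for p in grid])
    cv[s] = vals.std() / vals.mean()  # relative variation of the score surface

# small bandwidth -> spiky (wiggly boundary), large bandwidth -> smooth
print(cv[0.05] > cv[2.0])
```

The coefficient of variation of the score surface is much larger at the small bandwidth, mirroring the spherical-to-wiggly transition the abstract describes as the bandwidth is lowered.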
Multiclass latent locally linear support vector machines
Kernelized Support Vector Machines (SVMs) have gained the status of off-the-shelf classifiers, able to deliver state-of-the-art performance on almost any problem. Still, their practical use is constrained by their computational and memory complexity, which grows super-linearly with the number of training samples. In order to retain the low training and testing complexity of linear classifiers and the flexibility of non-linear ones, a growing, promising alternative is represented by methods that learn non-linear classifiers through local combinations of linear ones. In this paper we propose a new multiclass local classifier, based on a latent SVM formulation. The proposed classifier makes use of a set of linear models that are linearly combined using sample- and class-specific weights. Thanks to the latent formulation, the combination coefficients are modeled as latent variables. We allow soft combinations and we provide a closed-form solution for their estimation, resulting in an efficient prediction rule. This novel formulation allows learning, in a principled way, the sample-specific weights and the linear classifiers within a single optimization problem, using a CCCP optimization procedure. Extensive experiments on ten standard UCI machine learning datasets, one large binary dataset, three character and digit recognition databases, and a visual place categorization dataset show the power of the proposed approach.
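The prediction rule described above, a per-class score built from a convex, sample-specific combination of several linear models, can be sketched as follows. The softmax weighting used here is an assumption for illustration (the paper derives its own closed-form latent weights), and all array names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
C, K, d = 3, 4, 5                 # classes, local linear models per class, features
W = rng.normal(size=(C, K, d))    # linear models w_{c,k} (random stand-ins)
b = rng.normal(size=(C, K))       # per-model biases

def predict_scores(x, temp=1.0):
    """Combine K linear models per class with sample-specific convex weights."""
    s = W @ x + b                               # (C, K) per-model scores
    beta = np.exp(s / temp)
    beta /= beta.sum(axis=1, keepdims=True)     # convex combination weights
    return (beta * s).sum(axis=1)               # one combined score per class

x = rng.normal(size=d)
scores = predict_scores(x)
pred = int(scores.argmax())       # predicted class
```

Because each class score is a convex combination of its K linear model scores, the combined score never exceeds the best local model's score, and prediction costs only C*K dot products per sample.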
One-Class Classification: Taxonomy of Study and Review of Techniques
One-class classification (OCC) algorithms aim to build classification models
when the negative class is either absent, poorly sampled or not well defined.
This unique situation constrains the learning of efficient classifiers by
defining class boundary just with the knowledge of positive class. The OCC
problem has been considered and applied under many research themes, such as
outlier/novelty detection and concept learning. In this paper we present a
unified view of the general problem of OCC by presenting a taxonomy of study
for OCC problems, which is based on the availability of training data,
algorithms used and the application domains applied. We further delve into each
of the categories of the proposed taxonomy and present a comprehensive
literature review of the OCC algorithms, techniques and methodologies with a
focus on their significance, limitations and applications. We conclude our
paper by discussing some open research problems in the field of OCC and present
our vision for future research.
The Default Risk of Firms Examined with Smooth Support Vector Machines
In the era of Basel II a powerful tool for bankruptcy prognosis is vital for banks. The tool must be precise but also easily adaptable to the bank's objectives regarding the relation of false acceptances (Type I errors) and false rejections (Type II errors). We explore the suitability of Smooth Support Vector Machines (SSVM) and investigate how important factors such as the selection of appropriate accounting ratios (predictors), the length of the training period and the structure of the training sample influence the precision of prediction. Furthermore, we show that oversampling can be employed to gear the tradeoff between error types. Finally, we illustrate graphically how different variants of SSVM can be used jointly to support the decision task of loan officers.
Keywords: insolvency prognosis, SVMs, statistical learning theory, non-parametric classification models, local time-homogeneity
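The oversampling step mentioned above can be sketched generically: replicating the rare default class in the training sample raises its weight in the fitted model, which shifts the decision boundary and thus the balance between Type I and Type II errors. This is a minimal numpy sketch of the resampling alone (not the paper's SSVM), and the replication factor is a hypothetical parameter:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))              # accounting ratios (predictors)
y = (rng.random(100) < 0.1).astype(int)    # ~10% defaults (class 1)

def oversample(X, y, factor):
    """Replicate minority-class (default) rows `factor` times in total."""
    idx = np.where(y == 1)[0]
    extra = np.repeat(idx, factor - 1)     # factor-1 extra copies of each default
    return np.vstack([X, X[extra]]), np.concatenate([y, y[extra]])

Xo, yo = oversample(X, y, 5)
print(y.mean(), yo.mean())                 # default share before vs. after
```

Raising the replication factor makes defaults more costly to misclassify, trading fewer false acceptances (Type I errors) for more false rejections (Type II errors), which is the lever the abstract refers to.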