Model Selection for Support Vector Machine Classification
We address the problem of model selection for Support Vector Machine (SVM)
classification. For fixed functional form of the kernel, model selection
amounts to tuning kernel parameters and the slack penalty coefficient C. We
begin by reviewing a recently developed probabilistic framework for SVM
classification. An extension to the case of SVMs with quadratic slack penalties
is given and a simple approximation for the evidence is derived, which can be
used as a criterion for model selection. We also derive the exact gradients of
the evidence in terms of posterior averages and describe how they can be
estimated numerically using Hybrid Monte Carlo techniques. Though
computationally demanding, the resulting gradient ascent algorithm is a useful
baseline tool for probabilistic SVM model selection, since it can locate maxima
of the exact (unapproximated) evidence. We then perform extensive experiments
on several benchmark data sets. The aim of these experiments is to compare the
performance of probabilistic model selection criteria with alternatives based
on estimates of the test error, namely the so-called ``span estimate'' and
Wahba's Generalized Approximate Cross-Validation (GACV) error. We find that all
the ``simple'' model criteria (Laplace evidence approximations, and the Span
and GACV error estimates) exhibit multiple local optima with respect to the
hyperparameters. While some of these give performance that is competitive with
results from other approaches in the literature, a significant fraction lead to
rather higher test errors. The results for the evidence gradient ascent method
show that also the exact evidence exhibits local optima, but these give test
errors which are much less variable and also consistently lower than for the
simpler model selection criteria.
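The hyperparameters being tuned above are the kernel parameters and the penalty C. A minimal, self-contained sketch (all names and the toy error surface are hypothetical, for illustration only) of an RBF kernel and a grid-based selection step over (C, gamma):

```python
import math

def rbf_kernel(x, z, gamma):
    # k(x, z) = exp(-gamma * ||x - z||^2); gamma is the kernel width parameter
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def grid_select(errors):
    # errors: dict mapping (C, gamma) -> estimated test error
    # (e.g. a span or GACV estimate); returns the pair with the smallest value
    return min(errors, key=errors.get)

# toy error surface (values made up for illustration)
errors = {(1.0, 0.1): 0.21, (1.0, 1.0): 0.12,
          (10.0, 0.1): 0.18, (10.0, 1.0): 0.15}
print(grid_select(errors))  # -> (1.0, 1.0)
```

The abstract's point is that such error surfaces typically have multiple local optima, so which optimum a search finds matters.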
Support Vector Machine classification of strong gravitational lenses
The imminent advent of very large-scale optical sky surveys, such as Euclid
and LSST, makes it important to find efficient ways of discovering rare objects
such as strong gravitational lens systems, where a background object is
multiply gravitationally imaged by a foreground mass. As well as finding the
lens systems, it is important to reject false positives due to intrinsic
structure in galaxies, and much work is in progress with machine learning
algorithms such as neural networks in order to achieve both these aims. We
present and discuss a Support Vector Machine (SVM) algorithm which makes use of
a Gabor filterbank in order to provide learning criteria for separation of
lenses and non-lenses, and demonstrate using blind challenges that under
certain circumstances it is a particularly efficient algorithm for rejecting
false positives. We compare the SVM engine with a large-scale human examination
of 100000 simulated lenses in a challenge dataset, and also apply the SVM
method to survey images from the Kilo-Degree Survey.
Comment: Accepted by MNRAS
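A Gabor filterbank of the kind mentioned above samples a sinusoidal carrier under a Gaussian envelope at several orientations and wavelengths. A minimal sketch (parameter names and values hypothetical, not taken from the paper):

```python
import math

def gabor_kernel(size, sigma, theta, lam, psi=0.0, gamma=0.5):
    """Sample a 2-D Gabor filter on a size x size grid:
    g(x, y) = exp(-(x'^2 + gamma^2 y'^2) / (2 sigma^2)) * cos(2 pi x' / lam + psi)
    where (x', y') are the coordinates rotated by theta."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
            carrier = math.cos(2 * math.pi * xr / lam + psi)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

# a filterbank varies the orientation theta (and typically the wavelength lam)
bank = [gabor_kernel(9, sigma=2.0, theta=k * math.pi / 4, lam=4.0) for k in range(4)]
```

Convolving an image cutout with each filter in the bank yields the feature responses that feed the SVM.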
Mean field variational Bayesian inference for support vector machine classification
A mean field variational Bayes approach to support vector machines (SVMs)
using the latent variable representation of Polson & Scott (2012) is presented.
This representation circumvents many of the shortcomings of classical SVMs,
providing automatic penalty parameter selection and the ability to handle
dependent samples, missing data, and variable selection. We
demonstrate on simulated and real datasets that our approach is easily
extendable to non-standard situations and outperforms the classical SVM
approach whilst remaining computationally efficient.
Comment: 18 pages, 4 figures
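The latent-variable representation referred to rests on an integral identity for the exponentiated hinge loss (stated here from the Polson & Scott line of work; the paper's exact notation may differ):

```latex
\exp\bigl\{-2\max(1 - y_i x_i^{\top}\beta,\, 0)\bigr\}
  = \int_0^{\infty} \frac{1}{\sqrt{2\pi\lambda_i}}
    \exp\Bigl\{-\frac{(1 + \lambda_i - y_i x_i^{\top}\beta)^2}{2\lambda_i}\Bigr\}
    \, d\lambda_i
```

Conditional on the latent variable $\lambda_i$, each likelihood contribution is Gaussian in $\beta$, which is what makes a mean field variational factorisation over $(\beta, \lambda)$ tractable.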
Generalizing, Decoding, and Optimizing Support Vector Machine Classification
The classification of complex data usually requires the composition of processing steps. Here, a major challenge is the selection of optimal algorithms for preprocessing and classification. Nowadays, parts of the optimization process are automated, but expert knowledge and manual work are still required. We present three steps to address this process and ease the optimization. Namely, we take a theoretical view on classical classifiers, provide an approach to interpret the classifier together with the preprocessing, and integrate both into one framework which enables a semiautomatic optimization of the processing chain and which interfaces numerous algorithms.
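The processing-chain idea above can be sketched minimally: a chain is a sequence of step functions, and candidate chains are compared by a user-supplied scoring function (all names and the toy steps here are hypothetical, not the paper's framework):

```python
def run_chain(steps, data):
    # apply each processing step in order (preprocessing ... classifier)
    for step in steps:
        data = step(data)
    return data

def select_chain(candidates, data, score):
    # semiautomatic optimization in miniature: pick the chain whose
    # output scores highest (e.g. validation accuracy in practice)
    return max(candidates, key=lambda steps: score(run_chain(steps, data)))

# toy example: two "preprocessing" variants feeding an identity "classifier"
double = lambda xs: [2 * x for x in xs]
square = lambda xs: [x * x for x in xs]
identity = lambda xs: xs
best = select_chain([[double, identity], [square, identity]], [1, 2, 3], score=sum)
```

A real framework would additionally search over each step's hyperparameters, not just over which steps compose the chain.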
Bag-of-Features Image Indexing and Classification in Microsoft SQL Server Relational Database
This paper presents a novel relational database architecture aimed at visual
object classification and retrieval. The framework is based on the
bag-of-features image representation model combined with Support Vector
Machine classification and is integrated in a Microsoft SQL Server database.
Comment: 2015 IEEE 2nd International Conference on Cybernetics (CYBCONF),
Gdynia, Poland, 24-26 June 2015
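The bag-of-features representation mentioned above quantizes an image's local descriptors against a visual-word codebook and keeps the normalized word histogram. A minimal sketch (toy codebook and descriptors are made up; real systems use learned centroids, e.g. from k-means):

```python
def nearest_word(descriptor, codebook):
    # index of the visual word (codebook centroid) closest to a descriptor
    dists = [sum((a - b) ** 2 for a, b in zip(descriptor, c)) for c in codebook]
    return dists.index(min(dists))

def bag_of_features(descriptors, codebook):
    # normalised histogram of visual-word counts: the image's BoF vector,
    # which is what gets indexed and fed to the SVM classifier
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[nearest_word(d, codebook)] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

codebook = [[0.0, 0.0], [1.0, 1.0]]           # toy 2-word vocabulary
descs = [[0.1, 0.0], [0.9, 1.1], [1.0, 0.9]]  # toy local descriptors
print(bag_of_features(descs, codebook))       # -> [0.333..., 0.666...]
```

Storing these fixed-length histograms as table rows is what makes the representation a natural fit for a relational database.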
HILDA: A Discourse Parser Using Support Vector Machine Classification
Discourse structures have a central role in several computational tasks, such as question-answering or dialogue generation. In particular, the framework of the Rhetorical Structure Theory (RST) offers a sound formalism for hierarchical text organization. In this article, we present HILDA, an implemented discourse parser based on RST and Support Vector Machine (SVM) classification. SVM classifiers are trained and applied to discourse segmentation and relation labeling. By combining labeling with a greedy bottom-up tree building approach, we are able to create accurate discourse trees in linear time complexity. Importantly, our parser can parse entire texts, whereas the publicly available parser SPADE (Soricut and Marcu 2003) is limited to sentence level analysis. HILDA outperforms other discourse parsers for tree structure construction and discourse relation labeling. For the discourse parsing task, our system reaches 78.3% of the performance level of human annotators. Compared to a state-of-the-art rule-based discourse parser, our system achieves a performance increase of 11.6%.
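The greedy bottom-up tree building described above can be sketched as repeatedly merging the adjacent pair of spans with the highest merge score; in HILDA that score comes from the trained SVM, whereas the scorer here is a hypothetical stand-in, and this naive version rescans all pairs rather than using the paper's linear-time bookkeeping:

```python
def build_tree(spans, merge_score):
    # greedy bottom-up construction: merge the best adjacent pair until
    # a single tree covering the whole text remains
    nodes = list(spans)
    while len(nodes) > 1:
        i = max(range(len(nodes) - 1),
                key=lambda j: merge_score(nodes[j], nodes[j + 1]))
        nodes[i:i + 2] = [(nodes[i], nodes[i + 1])]  # merge into one subtree
    return nodes[0]

def span_len(node):
    # number of leaf segments under a node
    return 1 if isinstance(node, str) else span_len(node[0]) + span_len(node[1])

# toy scorer standing in for the SVM: prefer merging the shortest spans first
tree = build_tree(["A", "B", "C", "D"],
                  lambda l, r: -(span_len(l) + span_len(r)))
print(tree)  # -> (('A', 'B'), ('C', 'D'))
```

In the full parser each merge would also be labeled with a discourse relation predicted by a second classifier.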