62 research outputs found
On Security and Sparsity of Linear Classifiers for Adversarial Settings
Machine-learning techniques are widely used in security-related applications,
like spam and malware detection. However, in such settings, they have been
shown to be vulnerable to adversarial attacks, including the deliberate
manipulation of data at test time to evade detection. In this work, we focus on
the vulnerability of linear classifiers to evasion attacks. This can be
considered a relevant problem, as linear classifiers have been increasingly
used in embedded systems and mobile devices for their low processing time and
memory requirements. We exploit recent findings in robust optimization to
investigate the link between regularization and security of linear classifiers,
depending on the type of attack. We also analyze the relationship between the
sparsity of feature weights, which is desirable for reducing processing cost,
and the security of linear classifiers. We further propose a novel octagonal
regularizer that allows us to achieve a proper trade-off between them. Finally,
we empirically show how this regularizer can improve classifier security and
sparsity in real-world application examples including spam and malware
detection.
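As a rough illustration of the trade-off above, here is a minimal sketch of an octagonal-style penalty, assuming it is formed as a convex combination of the l1 norm (which promotes sparsity) and the l-infinity norm (which spreads weights evenly, hardening linear classifiers against sparse evasion). The mixing parameter rho and the function name are illustrative choices, not taken from the paper.

```python
# Hedged sketch: "octagonal" penalty as a convex combination of the l1 norm
# (sparsity) and the l-infinity norm (evenly spread, evasion-robust weights).
# Its unit ball is an octagon in two dimensions; rho is an invented parameter.

def octagonal_penalty(w, rho=0.5):
    """rho * ||w||_1 + (1 - rho) * ||w||_inf."""
    l1 = sum(abs(wi) for wi in w)
    linf = max(abs(wi) for wi in w)
    return rho * l1 + (1.0 - rho) * linf

# A peaked weight vector pays a larger l-infinity price than a flat one
# with the same l1 norm:
print(octagonal_penalty([1.0, 0.0, 0.0]))     # 1.0
print(octagonal_penalty([0.34, 0.33, 0.33]))  # ≈ 0.67
```

Under this sketch, decreasing rho pushes the optimizer toward flatter (more secure) weight vectors, while increasing it favours sparser ones.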
An accelerated MDM algorithm for SVM training
This is an electronic version of the paper presented at the 16th European Symposium on Artificial Neural Networks, held in Bruges in 2008.
In this work we will propose an acceleration procedure for the
Mitchell–Demyanov–Malozemov (MDM) algorithm (a fast geometric algorithm
for SVM construction) that may yield quite large training savings.
While decomposition algorithms such as SVMLight or SMO are usually the
SVM methods of choice, we shall show that there is a relationship between
SMO and MDM that suggests that, at least in their simplest implementations,
they should have similar training speeds. Thus, and although we
will not discuss it here, the proposed MDM acceleration might be used as
a starting point for new ways of accelerating SMO.
With partial support of Spain's TIN 2004-07676 and TIN 2007-66862 projects. The first
author is kindly supported by FPU-MEC grant reference AP2006-02285.
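To make the geometric subproblem concrete, here is a didactic sketch of plain MDM-style updates for finding the point of a convex hull nearest to the origin (the computation underlying hull-based SVM training). This is a textbook re-implementation for illustration, not the accelerated variant proposed in the paper; all names and tolerances are invented.

```python
# Hedged sketch of MDM-style iterations: keep convex coefficients alpha over
# the points, and repeatedly transfer weight from the support point with the
# largest projection onto the current iterate w to the point with the
# smallest, using an exact (clipped) line search.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mdm_nearest_point(points, iters=200):
    n, d = len(points), len(points[0])
    alpha = [1.0 / n] * n                       # convex coefficients
    w = [sum(alpha[i] * points[i][k] for i in range(n)) for k in range(d)]
    for _ in range(iters):
        proj = [dot(w, p) for p in points]
        # most violating pair: max projection among support points,
        # min projection over all points
        i_max = max((i for i in range(n) if alpha[i] > 1e-12), key=lambda i: proj[i])
        i_min = min(range(n), key=lambda i: proj[i])
        gap = proj[i_max] - proj[i_min]
        dvec = [points[i_min][k] - points[i_max][k] for k in range(d)]
        dd = dot(dvec, dvec)
        if gap < 1e-12 or dd < 1e-12:
            break                               # optimality gap closed
        t = min(alpha[i_max], gap / dd)         # clipped exact line search
        alpha[i_max] -= t
        alpha[i_min] += t
        w = [w[k] + t * dvec[k] for k in range(d)]
    return w

# Hull of three points whose nearest point to the origin is (1, 0):
print(mdm_nearest_point([(1.0, 1.0), (1.0, -1.0), (3.0, 0.0)]))  # ≈ [1.0, 0.0]
```

The pairwise weight transfer is the structural similarity to SMO alluded to in the abstract: both update exactly two coefficients per iteration.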
Matching Image Sets via Adaptive Multi Convex Hull
Traditional nearest points methods use all the samples in an image set to
construct a single convex or affine hull model for classification. However,
strong artificial features and noisy data may be generated from combinations of
training samples when significant intra-class variations and/or noise occur in
the image set. Existing multi-model approaches extract local models by
clustering each image set individually only once, with fixed clusters used for
matching with various image sets. This may not be optimal for discrimination,
as undesirable environmental conditions (e.g. illumination and pose variations)
may result in the two closest clusters representing different characteristics
of an object (e.g. a frontal face being compared to a non-frontal face). To address
the above problem, we propose a novel approach to enhance nearest points based
methods by integrating affine/convex hull classification with an adapted
multi-model approach. We first extract multiple local convex hulls from a query
image set via maximum margin clustering to diminish the artificial variations
and constrain the noise in local convex hulls. We then propose adaptive
reference clustering (ARC) to constrain the clustering of each gallery image
set by forcing the clusters to have resemblance to the clusters in the query
image set. By applying ARC, noisy clusters in the query set can be discarded.
Experiments on Honda, MoBo and ETH-80 datasets show that the proposed method
outperforms single model approaches and other recent techniques, such as Sparse
Approximated Nearest Points, Mutual Subspace Method and Manifold Discriminant
Analysis.
Comment: IEEE Winter Conference on Applications of Computer Vision (WACV),
201
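As a rough illustration of the adaptive reference clustering (ARC) idea, constraining gallery clusters to resemble query clusters, the sketch below simply assigns each gallery sample to its nearest query-cluster centroid. The actual method uses maximum margin clustering and convex hull distances; the toy centroids and the function name here are invented.

```python
# Hedged sketch: group gallery samples by their nearest query-cluster
# centroid, so the resulting gallery clusters mirror the query clusters.

def arc_assign(gallery, query_centroids):
    clusters = {i: [] for i in range(len(query_centroids))}
    for g in gallery:
        # squared Euclidean distance to each query centroid
        i = min(range(len(query_centroids)),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(g, query_centroids[j])))
        clusters[i].append(g)
    return clusters

centroids = [(0.0, 0.0), (5.0, 5.0)]              # invented query clusters
gallery = [(0.2, -0.1), (4.8, 5.1), (0.1, 0.3)]   # invented gallery samples
print(arc_assign(gallery, centroids))
# {0: [(0.2, -0.1), (0.1, 0.3)], 1: [(4.8, 5.1)]}
```

A matching step would then compare corresponding query/gallery clusters (e.g. by hull distance) rather than whole sets, which is what lets noisy query clusters be discarded.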
Nearest convex hull classification
Consider the classification task of assigning a test object to
one of two or more possible groups, or classes. An intuitive way to proceed
is to assign the object to the class to which the distance is minimal. As
a distance measure to a class, we propose here to use the distance to the
convex hull of that class. Hence the name Nearest Convex Hull (NCH)
classification for the method. Convex-hull overlap is handled through the
introduction of slack variables and kernels. In spirit and computationally
the method is therefore close to the popular Support Vector Machine
(SVM) classifier. Advantages of the NCH classifier are its robustness
to outliers, good regularization properties and relatively easy handling
of multi-class problems. We compare the performance of NCH against
state-of-the-art techniques and report promising results.
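The NCH decision rule can be illustrated in one dimension, where the convex hull of a class collapses to the interval [min, max] of its samples, so the hull distance has a closed form. The full method instead solves a QP with slack variables and kernels, much like an SVM; the class names below are invented.

```python
# Hedged 1-D illustration of Nearest Convex Hull classification:
# the hull of a 1-D sample set is the interval [min, max], and the
# distance to it is zero for points inside the interval.

def hull_distance_1d(x, samples):
    lo, hi = min(samples), max(samples)
    return max(lo - x, 0.0, x - hi)

def nch_classify(x, classes):
    # assign x to the class whose hull is nearest
    return min(classes, key=lambda c: hull_distance_1d(x, classes[c]))

classes = {"A": [0.0, 1.0, 2.0], "B": [5.0, 6.0, 8.0]}
print(nch_classify(3.0, classes))   # "A" (distance 1.0 vs 2.0)
print(nch_classify(4.5, classes))   # "B" (distance 2.5 vs 0.5)
```

The robustness-to-outliers claim is visible even here: an extra sample inside an interval changes neither hull nor decision.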
A randomized algorithm for large scale support vector learning
We propose a randomized algorithm for large-scale SVM learning which solves the problem by iterating over random subsets of the data. Crucial to the scalability of the algorithm is the size of the chosen subsets. In the context of text classification we show that, by using ideas from random projections, a sample size of O(log n) can be used to obtain a solution which is close to the optimal one with high probability. Experiments on synthetic and real-life data sets demonstrate that the algorithm scales up SVM learners without loss in accuracy.
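A rough sketch of the subset-iteration scheme follows, with a perceptron update standing in for the actual SVM solver and an illustrative constant inside the O(log n) sample size; none of the names or constants are from the paper.

```python
# Hedged sketch: repeatedly draw a random O(log n) subset and refine a
# linear separator on it. A perceptron update is used here as a cheap
# stand-in for the SVM subproblem solver.
import math
import random

def train_on_subsets(data, labels, rounds=50, seed=0):
    rng = random.Random(seed)
    n, d = len(data), len(data[0])
    m = max(2, int(8 * math.log(n)))   # O(log n) sample size; constant is illustrative
    w, b = [0.0] * d, 0.0
    for _ in range(rounds):
        idx = rng.sample(range(n), min(m, n))
        for i in idx:                  # perceptron pass over the sampled subset
            margin = labels[i] * (sum(w[k] * data[i][k] for k in range(d)) + b)
            if margin <= 0:            # misclassified: update the separator
                for k in range(d):
                    w[k] += labels[i] * data[i][k]
                b += labels[i]
    return w, b

# Linearly separable toy data: sign of the first coordinate gives the label.
data = [(1.0, 0.5), (2.0, -1.0), (1.5, 1.0), (-1.0, 0.3), (-2.0, -0.5), (-1.2, 1.1)]
labels = [1, 1, 1, -1, -1, -1]
w, b = train_on_subsets(data, labels)
print(all(l * (w[0] * x[0] + w[1] * x[1] + b) > 0
          for x, l in zip(data, labels)))  # True for this separable toy set
```

On a data set this small the sample covers everything; the point of the scheme is that for large n the per-iteration cost depends on log n rather than n.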
An Exponential Lower Bound on the Complexity of Regularization Paths
For a variety of regularized optimization problems in machine learning,
algorithms computing the entire solution path have been developed recently.
Most of these problems are quadratic programs that are parameterized by a single
parameter, as for example the Support Vector Machine (SVM). Solution path
algorithms do not only compute the solution for one particular value of the
regularization parameter but the entire path of solutions, making the selection
of an optimal parameter much easier.
It has been assumed that these piecewise linear solution paths have only
linear complexity, i.e. linearly many bends. We prove that for the support
vector machine this complexity can be exponential in the number of training
points in the worst case. More strongly, we construct a single instance of n
input points in d dimensions for an SVM such that at least \Theta(2^{n/2}) =
\Theta(2^d) many distinct subsets of support vectors occur as the
regularization parameter changes.
Comment: Journal version, 28 pages, 5 figures
An enhanced resampling technique for imbalanced data sets
A data set is considered imbalanced if the instances of one class (the majority class) outnumber those of the other class (the minority class). The main problem with binary imbalanced data sets is that classifiers tend to ignore the minority class. Numerous resampling techniques, such as undersampling, oversampling, and combinations of the two, have been widely used. However, undersampling and oversampling suffer from the elimination and addition of relevant data, which may lead to poor classification results. Hence, this study aims to improve classification metrics by enhancing the undersampling technique and combining it with an existing oversampling technique. To achieve this objective, a Fuzzy Distance-based Undersampling (FDUS) technique is proposed. Entropy estimation is used to produce fuzzy thresholds that categorise the instances of the majority and minority classes into membership functions. FDUS is then combined with the Synthetic Minority Oversampling TEchnique (SMOTE), a combination known as FDUS+SMOTE, which is executed in sequence until a balanced data set is achieved. FDUS and FDUS+SMOTE are compared with four techniques based on classification accuracy, F-measure and G-mean. From the results, FDUS achieved better classification accuracy, F-measure and G-mean than the other techniques, with averages of 80.57%, 0.85 and 0.78, respectively. This showed that fuzzy logic, when incorporated with the distance-based undersampling technique, was able to reduce the elimination of relevant data. Further, the findings showed that FDUS+SMOTE performed better than the combinations of SMOTE with Tomek Links and of SMOTE with Edited Nearest Neighbour on benchmark data sets. FDUS+SMOTE minimised the removal of relevant data from the majority class and avoided overfitting. On average, FDUS and FDUS+SMOTE were able to balance categorical, integer and real data sets and enhanced the performance of binary classification. Furthermore, the techniques performed well on small data sets with instances in the range of approximately 100 to 800.
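As background for the oversampling half of FDUS+SMOTE, here is a minimal sketch of the SMOTE interpolation step: each synthetic minority sample is placed on the segment between a minority point and one of its k nearest minority-class neighbours. The fuzzy undersampling side (FDUS) is not reproduced; the function name and parameters are invented.

```python
# Hedged sketch of SMOTE-style oversampling: synthesise new minority
# samples by linear interpolation between a minority point and one of
# its k nearest minority neighbours.
import random

def smote_like(minority, n_new, k=2, seed=0):
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbours of x (excluding x itself)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)))[:k]
        nb = rng.choice(neighbours)
        t = rng.random()   # interpolation factor in [0, 1)
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(x, nb)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new_pts = smote_like(minority, n_new=4)
print(len(new_pts))  # 4 synthetic points inside the minority region
```

Because every synthetic point is a convex combination of two minority samples, the method densifies the minority region rather than duplicating existing points, which is why it pairs naturally with an undersampling step on the majority class.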