A Survey on Metric Learning for Feature Vectors and Structured Data
The need for appropriate ways to measure the distance or similarity between
data is ubiquitous in machine learning, pattern recognition and data mining,
but handcrafting such good metrics for specific problems is generally
difficult. This has led to the emergence of metric learning, which aims at
automatically learning a metric from data and has attracted a lot of interest
in machine learning and related fields for the past ten years. This survey
paper proposes a systematic review of the metric learning literature,
highlighting the pros and cons of each approach. We pay particular attention to
Mahalanobis distance metric learning, a well-studied and successful framework,
but additionally present a wide range of methods that have recently emerged as
powerful alternatives, including nonlinear metric learning, similarity learning
and local metric learning. Recent trends and extensions, such as
semi-supervised metric learning, metric learning for histogram data and the
derivation of generalization guarantees, are also covered. Finally, this survey
addresses metric learning for structured data, in particular edit distance
learning, and attempts to give an overview of the remaining challenges in
metric learning for the years to come.
Comment: Technical report, 59 pages. Changes in v2: fixed typos and improved presentation. Changes in v3: fixed typos. Changes in v4: fixed typos and new method.
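As a minimal illustration of the Mahalanobis framework the survey centers on (a sketch, not any particular method from the paper): the learned metric has the form d_M(x, x') = sqrt((x - x')^T M (x - x')) with M positive semidefinite, and parameterizing M = L^T L makes it the ordinary Euclidean distance after the linear map L.

    import numpy as np

    def mahalanobis(x, y, M):
        # d_M(x, y) = sqrt((x - y)^T M (x - y))
        d = x - y
        return np.sqrt(d @ M @ d)

    # M = L^T L is positive semidefinite by construction; this random L
    # is a placeholder standing in for a learned linear transform.
    rng = np.random.default_rng(0)
    L = rng.standard_normal((3, 3))
    M = L.T @ L
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    print(mahalanobis(x, y, M))           # distance under the learned metric
    print(np.linalg.norm(L @ x - L @ y))  # identical: Euclidean distance after L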
Positive Semidefinite Metric Learning Using Boosting-like Algorithms
The success of many machine learning and pattern recognition methods relies
heavily upon the identification of an appropriate distance metric on the input
data. It is often beneficial to learn such a metric from the input training
data, instead of using a default one such as the Euclidean distance. In this
work, we propose a boosting-based technique, termed BoostMetric, for learning a
quadratic Mahalanobis distance metric. Learning a valid Mahalanobis distance
metric requires enforcing the constraint that the matrix parameter to the
metric remains positive semidefinite. Semidefinite programming is often used to enforce this constraint, but it does not scale well and is not easy to implement.
BoostMetric is instead based on the observation that any positive semidefinite
matrix can be decomposed into a linear combination of trace-one rank-one
matrices. BoostMetric thus uses rank-one positive semidefinite matrices as weak
learners within an efficient and scalable boosting-based learning process. The
resulting methods are easy to implement, efficient, and can accommodate various
types of constraints. We extend traditional boosting algorithms in that the weak learner is a positive semidefinite matrix with trace one and rank one,
rather than a classifier or regressor. Experiments on various datasets
demonstrate that the proposed algorithms compare favorably to state-of-the-art methods in terms of classification accuracy and running time.
Comment: 30 pages, appearing in the Journal of Machine Learning Research.
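A minimal sketch of the rank-one decomposition idea (a simplified greedy loop under assumed triplet constraints, not the authors' exact BoostMetric algorithm): each iteration adds a trace-one rank-one weak learner u u^T, chosen as the leading eigenvector of the summed constraint matrices of currently violated triplets, so M remains positive semidefinite without any projection step.

    import numpy as np

    def boostmetric_sketch(triplets, dim, iters=50, alpha=0.1):
        # Each triplet (a, p, n) asks that a be closer to p than to n:
        # <A_t, M> >= 1 with A_t = (a-n)(a-n)^T - (a-p)(a-p)^T.
        As = [np.outer(a - n, a - n) - np.outer(a - p, a - p)
              for a, p, n in triplets]
        M = np.zeros((dim, dim))
        for _ in range(iters):
            # Aggregate the constraint matrices of violated triplets.
            G = sum((A for A in As if np.sum(A * M) < 1.0),
                    np.zeros((dim, dim)))
            w, V = np.linalg.eigh(G)
            if w[-1] <= 0:  # no positive ascent direction left
                break
            u = V[:, -1]    # leading eigenvector
            M += alpha * np.outer(u, u)  # trace-one rank-one weak learner
        return M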
Parametric Local Metric Learning for Nearest Neighbor Classification
We study the problem of learning local metrics for nearest neighbor
classification. Most previous works on local metric learning learn a number of unrelated local metrics. While this "independence" approach delivers increased flexibility, its downside is a considerable risk of overfitting. We
present a new parametric local metric learning method in which we learn a
smooth metric matrix function over the data manifold. Using an approximation
error bound of the metric matrix function we learn local metrics as linear
combinations of basis metrics defined on anchor points over different regions
of the instance space. We constrain the metric matrix function by imposing manifold regularization on the linear combinations, which makes the learned metric matrix function vary smoothly along the geodesics of the data manifold. Our
metric learning method has excellent performance in terms of both predictive power and scalability. We experimented with several large-scale classification problems with tens of thousands of instances and compared our method with several state-of-the-art metric learning methods, both global and local, as well as with SVM with automatic kernel selection, all of which it outperforms significantly.
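A minimal sketch of the parametric form described above (assumed shape only; the softmax anchor weights below are a hypothetical stand-in for the paper's learned, manifold-regularized weights): the metric at a point x is a convex combination M(x) = sum_k w_k(x) M_k of PSD basis metrics attached to anchor points, so it varies smoothly with x and stays positive semidefinite.

    import numpy as np

    def local_metric(x, anchors, basis_metrics, tau=1.0):
        # M(x) = sum_k w_k(x) M_k with smooth, convex weights w_k(x).
        sq = np.sum((anchors - x) ** 2, axis=1)
        w = np.exp(-sq / tau)
        w /= w.sum()  # convex weights keep M(x) positive semidefinite
        return sum(wk * Mk for wk, Mk in zip(w, basis_metrics))

    def local_distance(x, y, anchors, basis_metrics):
        # Evaluate the metric at the midpoint so the distance is symmetric.
        M = local_metric((x + y) / 2.0, anchors, basis_metrics)
        d = x - y
        return np.sqrt(d @ M @ d)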
Study and Observation of the Variation of Accuracies of KNN, SVM, LMNN, ENN Algorithms on Eleven Different Datasets from UCI Machine Learning Repository
Machine learning enables computers to learn from data without being explicitly programmed [1, 2]. Machine learning can be classified into supervised and unsupervised learning. In supervised learning, a computer learns a function that maps an input to an output based on example input-output pairs [3]. Among the most efficient and widely used supervised learning algorithms are K-Nearest
Neighbors (KNN), Support Vector Machine (SVM), Large Margin Nearest Neighbor
(LMNN), and Extended Nearest Neighbor (ENN). The main contribution of this
paper is to implement these elegant learning algorithms on eleven different
datasets from the UCI machine learning repository to observe the variation of
accuracies for each of the algorithms on all datasets. Analyzing the accuracies will give a brief idea of the relationship between the machine learning algorithms and data dimensionality. All the algorithms are implemented in Matlab. From these accuracy observations, KNN, SVM, LMNN, and ENN can be compared with respect to their performance on each dataset.
Comment: To be published in the 4th IEEE International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT 2018).
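A minimal Python sketch of this kind of accuracy comparison (the paper uses Matlab; scikit-learn stands in here, covering KNN and SVM directly, while LMNN and ENN would need additional libraries such as metric-learn and are omitted):

    from sklearn.datasets import load_wine
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # One UCI dataset (Wine) as an example; the paper repeats this
    # comparison over eleven datasets.
    X, y = load_wine(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)

    scaler = StandardScaler().fit(X_tr)
    X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

    for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                      ("SVM", SVC(kernel="rbf"))]:
        acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
        print(f"{name}: {acc:.3f}")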