6,223 research outputs found
Modified Large Margin Nearest Neighbor Metric Learning for Regression
The main objective of this letter is to formulate a new approach to learning a Mahalanobis distance metric for nearest neighbor regression from a training sample set. We propose a modified version of the large margin nearest neighbor (LMNN) metric learning method to deal with regression problems. As an application, the prediction of post-operative trunk 3-D shapes in scoliosis surgery using nearest neighbor regression is described. The accuracy of the proposed method is quantitatively evaluated through experiments on real medical data.
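As a point of reference, here is a minimal Python sketch (not the authors' implementation) of how a learned Mahalanobis matrix M plugs into nearest neighbor regression; the function name, the identity-matrix M, and the toy data are all illustrative assumptions.

```python
import numpy as np

def mahalanobis_knn_regress(X_train, y_train, x_query, M, k=5):
    """Average the targets of the k nearest training points under the
    Mahalanobis distance d(x, x')^2 = (x - x')^T M (x - x')."""
    diffs = X_train - x_query                       # (n, d) differences
    d2 = np.einsum('ij,jk,ik->i', diffs, M, diffs)  # squared Mahalanobis distances
    nearest = np.argsort(d2)[:k]                    # indices of the k closest points
    return y_train[nearest].mean()

# Toy usage: M = I reduces to plain Euclidean k-NN regression; in the
# paper's setting, M would instead be learned by the modified LMNN method.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)
print(mahalanobis_knn_regress(X, y, X[0], np.eye(3)))
```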
Study and Observation of the Variation of Accuracies of KNN, SVM, LMNN, ENN Algorithms on Eleven Different Datasets from UCI Machine Learning Repository
Machine learning enables computers to learn from data without being
explicitly programmed [1, 2]. Machine learning can be broadly classified into
supervised and unsupervised learning. In supervised learning, computers learn
a function that maps an input to an output based on training input-output
pairs [3]. Among the most efficient and widely used supervised learning
algorithms are K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Large
Margin Nearest Neighbor (LMNN), and Extended Nearest Neighbor (ENN). The main
contribution of this paper is to implement these learning algorithms on
eleven different datasets from the UCI machine learning repository and to
observe how the accuracy of each algorithm varies across the datasets.
Analyzing these accuracies gives a brief idea of the relationship between the
machine learning algorithms and the data dimensionality. All the algorithms
are implemented in Matlab. Based on the observed accuracies, a comparison can
be drawn among KNN, SVM, LMNN, and ENN regarding their performance on each
dataset.
Comment: To be published in the 4th IEEE International Conference on
Electrical Engineering and Information & Communication Technology (iCEEiCT
2018)
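As a rough illustration of the protocol this abstract describes (train each classifier on several UCI datasets and record test accuracy), the sketch below uses Python with scikit-learn rather than the paper's Matlab; the two UCI datasets and all hyperparameters are placeholder choices, and LMNN and ENN are omitted because scikit-learn does not ship them.

```python
from sklearn.datasets import load_iris, load_wine
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Two of the UCI datasets bundled with scikit-learn stand in for the
# paper's eleven; the loop records each model's test-set accuracy.
datasets = {"iris": load_iris(return_X_y=True),
            "wine": load_wine(return_X_y=True)}
models = {"KNN": KNeighborsClassifier(n_neighbors=5),
          "SVM": SVC(kernel="rbf")}

for dname, (X, y) in datasets.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    for mname, model in models.items():
        acc = model.fit(X_tr, y_tr).score(X_te, y_te)
        print(f"{dname:5s} {mname:3s} accuracy = {acc:.3f}")
```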
An adaptive nearest neighbor rule for classification
We introduce a variant of the $k$-nearest neighbor classifier in which $k$
is chosen adaptively for each query, rather than supplied as a parameter. The
choice of $k$ depends on properties of each neighborhood, and therefore may
significantly vary between different points. (For example, the algorithm will
use larger $k$ for predicting the labels of points in noisy regions.)
We provide theory and experiments that demonstrate that the algorithm
performs comparably to, and sometimes better than, $k$-NN with an optimal
choice of $k$. In particular, we derive bounds on the convergence rates of
our classifier that depend on a local quantity we call the `advantage',
which is significantly weaker than the Lipschitz conditions used in previous
convergence rate proofs. These generalization bounds hinge on a variant of
the seminal Uniform Convergence Theorem due to Vapnik and Chervonenkis; this
variant concerns conditional probabilities and may be of independent
interest.
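To make the adaptive-k idea concrete, here is a toy Python sketch for binary labels: grow k until the average neighbor label clears a rough 1/sqrt(k) noise threshold, then predict its sign. The constant A and the exact stopping rule are simplifications assumed for illustration, not the paper's procedure.

```python
import numpy as np

def adaptive_knn_predict(X_train, y_train, x_query, A=1.0, k_max=None):
    """Toy adaptive-k rule for labels in {-1, +1}: increase k until the
    mean neighbor label exceeds a ~1/sqrt(k) threshold (so noisy regions
    end up with larger k), then predict the sign of that mean."""
    k_max = k_max or len(X_train)
    order = np.argsort(np.linalg.norm(X_train - x_query, axis=1))
    for k in range(1, k_max + 1):
        bias = y_train[order[:k]].mean()
        if abs(bias) > A / np.sqrt(k):        # confident enough at this k
            return np.sign(bias), k
    return np.sign(y_train[order[:k_max]].mean()), k_max  # fallback
```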
Tile2Vec: Unsupervised representation learning for spatially distributed data
Geospatial analysis lacks methods like the word vector representations and
pre-trained networks that significantly boost performance across a wide range
of natural language and computer vision tasks. To fill this gap, we introduce
Tile2Vec, an unsupervised representation learning algorithm that extends the
distributional hypothesis from natural language -- words appearing in similar
contexts tend to have similar meanings -- to spatially distributed data. We
demonstrate empirically that Tile2Vec learns semantically meaningful
representations on three datasets. Our learned representations significantly
improve performance in downstream classification tasks and, as with word
vectors, support visual analogies via simple arithmetic in the latent space.
Comment: 8 pages, 4 figures in main text; 9 pages, 11 figures in appendix
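A brief sketch of the two ideas in this abstract, under stated assumptions: a triplet objective that pulls embeddings of spatially nearby tiles together and pushes distant ones apart (the paper trains a CNN encoder with such a loss; the exact form here is a simplification), and analogy queries by vector arithmetic in the latent space.

```python
import numpy as np

def triplet_loss(z_anchor, z_neighbor, z_distant, margin=1.0):
    """Hinge-style triplet objective: a spatially nearby tile's embedding
    should be closer to the anchor than a distant tile's, by `margin`."""
    d_near = np.linalg.norm(z_anchor - z_neighbor)
    d_far = np.linalg.norm(z_anchor - z_distant)
    return max(0.0, d_near - d_far + margin)

def analogy(z_a, z_b, z_c, Z_all):
    """Return the index of the tile whose embedding best completes
    'a is to b as ? is to c' via simple latent-space arithmetic."""
    query = z_a - z_b + z_c
    return int(np.argmin(np.linalg.norm(Z_all - query, axis=1)))
```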
Maximum Margin Multiclass Nearest Neighbors
We develop a general framework for margin-based multicategory classification
in metric spaces. The basic work-horse is a margin-regularized version of the
nearest-neighbor classifier. We prove generalization bounds that match the
state of the art in sample size $n$ and significantly improve the dependence
on the number of classes $k$. Our point of departure is a nearly
Bayes-optimal finite-sample risk bound independent of $k$. Although $k$-free,
this bound is unregularized and non-adaptive, which motivates our main
result: Rademacher and scale-sensitive margin bounds with a logarithmic
dependence on $k$. As the best previous risk estimates in this setting were
of order $\sqrt{k}$, our bound is exponentially sharper. From the algorithmic
standpoint, in doubling metric spaces our classifier may be trained on $n$
examples in $O(n^2 \log n)$ time and evaluated on new points in $O(\log n)$
time.
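As a toy illustration only (not the paper's construction), one way to read "margin-regularized nearest neighbor" is: discard training points whose distance to the nearest differently labeled point falls below a threshold t, then classify by the nearest survivor. The sketch below assumes that reading; the threshold t and the pruning rule are hypothetical.

```python
import numpy as np

def margin_pruned_1nn(X_train, y_train, x_query, t=0.5):
    """Keep only training points whose 'margin' (distance to the nearest
    point of another class) is at least t, then 1-NN classify among the
    survivors. X_train and y_train are NumPy arrays."""
    n = len(X_train)
    D = np.linalg.norm(X_train[:, None, :] - X_train[None, :, :], axis=2)
    keep = []
    for i in range(n):
        rivals = D[i][y_train != y_train[i]]   # distances to other classes
        if rivals.size == 0 or rivals.min() >= t:
            keep.append(i)                     # point has margin >= t
    keep = np.array(keep) if keep else np.arange(n)
    dists = np.linalg.norm(X_train[keep] - x_query, axis=1)
    return y_train[keep][np.argmin(dists)]
```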
- …