Large Margin Nearest Neighbor Embedding for Knowledge Representation
The traditional way of storing facts in triplets ({\it head\_entity, relation,
tail\_entity}), abbreviated as ({\it h, r, t}), makes knowledge intuitive to
display and easy for humans to acquire, but hard for AI systems to compute
with or reason over. Inspired by the success of applying {\it Distributed
Representations} to AI-related fields, recent studies represent each
entity and relation with a unique low-dimensional embedding, in contrast to
the symbolic and atomic framework of displaying knowledge in triplets. In
this way, knowledge computation and reasoning can be greatly facilitated
by means of a simple {\it vector calculation}, i.e. {\bf h} + {\bf r} \approx {\bf t}. We thus contribute an effective model to learn better embeddings
satisfying the formula by pulling the positive tail entities together and
close to {\bf h} + {\bf r} ({\it Nearest Neighbor}), while
simultaneously pushing the negatives away from the positives
by keeping a {\it Large Margin}. We also design a corresponding
learning algorithm that efficiently finds the optimal solution via {\it
Stochastic Gradient Descent} in an iterative fashion. Quantitative experiments
illustrate that our approach achieves state-of-the-art performance
compared with several recent methods on benchmark datasets for two
classical applications, i.e. {\it Link prediction} and {\it Triplet
classification}. Moreover, we analyze the parameter complexities of all the
evaluated models, and the analysis indicates that our model needs fewer
computational resources while outperforming the other methods.
Comment: arXiv admin note: text overlap with arXiv:1503.0815
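The margin-based objective sketched in the abstract can be illustrated as a hinge loss over a positive triplet and a corrupted negative. The embeddings, dimensionality, and margin value below are toy values for illustration, not the paper's actual setup:

```python
import numpy as np

def margin_loss(h, r, t, t_neg, margin=1.0):
    """Hinge loss for one triplet (h, r, t) and a corrupted tail t_neg:
    the true tail should sit closer to h + r than the corrupted one,
    by at least `margin`."""
    pos = np.linalg.norm(h + r - t)      # distance of the true tail to h + r
    neg = np.linalg.norm(h + r - t_neg)  # distance of the corrupted tail
    return max(0.0, margin + pos - neg)  # zero once the margin is satisfied

# toy 2-d embeddings: the true tail already equals h + r
h = np.array([1.0, 0.0])
r = np.array([0.0, 1.0])
t = np.array([1.0, 1.0])
t_neg = np.array([3.0, 3.0])
print(margin_loss(h, r, t, t_neg))  # -> 0.0 (negative is outside the margin)
```

Training would minimize this loss, summed over observed triplets and their corruptions, via stochastic gradient descent on the embedding vectors.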
Modified Large Margin Nearest Neighbor Metric Learning for Regression
The main objective of this letter is to formulate a new approach to learning a Mahalanobis distance metric for nearest neighbor regression from a training sample set. We propose a modified version of the large margin nearest neighbor metric learning method to deal with regression problems. As an application, the prediction of post-operative trunk 3-D shapes in scoliosis surgery using nearest neighbor regression is described. The accuracy of the proposed method is quantitatively evaluated through experiments on real medical data.
IRSC / CIH
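Nearest neighbor regression under a learned Mahalanobis metric can be sketched as follows; the metric matrix M and the toy data here are invented for illustration (in the paper, M would come from the modified LMNN training step):

```python
import numpy as np

def mahalanobis_knn_regress(X_train, y_train, x, M, k=3):
    """Predict by averaging the targets of the k nearest training points
    under the Mahalanobis distance d(a, b)^2 = (a - b)^T M (a - b)."""
    diff = X_train - x
    d2 = np.einsum('ij,jk,ik->i', diff, M, diff)  # squared distances to x
    nearest = np.argsort(d2)[:k]
    return y_train[nearest].mean()

# toy data: the target depends on feature 0; feature 1 is noise,
# so a sensible learned metric down-weights it
X = np.array([[0.0, 5.0], [1.0, -5.0], [10.0, 0.1]])
y = np.array([0.0, 1.0, 10.0])
M = np.diag([1.0, 0.01])  # hypothetical learned metric
print(mahalanobis_knn_regress(X, y, np.array([0.5, 0.0]), M, k=2))  # -> 0.5
```

Under the plain Euclidean metric (M = I), the noisy second feature would dominate the distances and pull in the wrong neighbors; the learned metric is what makes the k-nearest average meaningful.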
Convergence of Multi-pass Large Margin Nearest Neighbor Metric Learning
Göpfert C, Paaßen B, Hammer B. Convergence of Multi-pass Large Margin Nearest Neighbor Metric Learning. In: Villa AEP, Masulli P, Pons Rivero AJ, eds. Artificial Neural Networks and Machine Learning – ICANN 2016: 25th International Conference on Artificial Neural Networks, Barcelona, Spain, September 6-9, 2016, Proceedings, Part II. Lecture Notes in Computer Science. Vol 9887. Cham: Springer Nature; 2016: 510-517.
Large margin nearest neighbor classification (LMNN) is a popular technique to learn a metric that improves the accuracy of a simple k-nearest neighbor classifier via a convex optimization scheme. However, the optimization problem is convex only under the assumption that the nearest neighbors within classes remain constant. In this contribution we show that an iterated LMNN scheme (multi-pass LMNN) is a valid optimization technique for the original LMNN cost function without this assumption. We further provide an empirical evaluation of multi-pass LMNN, demonstrating that multi-pass LMNN can lead to notable improvements in classification accuracy for some datasets and does not necessarily show strong overfitting tendencies as reported before.
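A rough sketch of the multi-pass idea, assuming a deliberately simplified LMNN objective (a pull term plus a hinge against only the single nearest impostor; the full method handles k target neighbors and all impostors, and the learning rate, pass counts, and toy data below are illustrative):

```python
import numpy as np

def proj_dists(X, L):
    """Squared pairwise distances in the space projected by L."""
    Z = X @ L.T
    return ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)

def multi_pass_lmnn(X, y, passes=3, steps=100, lr=0.005, margin=1.0):
    """Each pass fixes the target neighbors under the current metric, runs
    gradient steps on the pull term plus a hinged push term against the
    nearest impostor, then recomputes the neighbors -- the recomputation
    between passes is what makes the scheme 'multi-pass'."""
    n, d = X.shape
    L = np.eye(d)
    for _ in range(passes):
        D = proj_dists(X, L)
        np.fill_diagonal(D, np.inf)
        # target neighbor: nearest point of the same class
        tn = [int(np.argmin(np.where(y == y[i], D[i], np.inf)))
              for i in range(n)]
        for _ in range(steps):
            D = proj_dists(X, L)
            grad = np.zeros_like(L)
            for i in range(n):
                dj = X[i] - X[tn[i]]
                grad += 2 * L @ np.outer(dj, dj)  # pull target neighbor in
                # nearest impostor: closest point of a different class
                imp = int(np.argmin(np.where(y != y[i], D[i], np.inf)))
                if margin + D[i, tn[i]] - D[i, imp] > 0:  # inside the margin
                    di = X[i] - X[imp]
                    grad += 2 * L @ (np.outer(dj, dj) - np.outer(di, di))
            L -= lr * grad / n
    return L

# toy data: feature 0 separates the classes, feature 1 is noise
X = np.array([[0.0, 1.0], [0.2, -1.0], [0.1, 0.0],
              [2.0, -1.0], [2.2, 1.0], [2.1, 0.0]])
y = np.array([0, 0, 0, 1, 1, 1])
L = multi_pass_lmnn(X, y)
```

Freezing the target neighbors is exactly the assumption that makes single-pass LMNN convex; recomputing them between passes is the non-convex outer loop whose convergence the paper analyzes.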
Study and Observation of the Variation of Accuracies of KNN, SVM, LMNN, ENN Algorithms on Eleven Different Datasets from UCI Machine Learning Repository
Machine learning enables computers to learn from data without being
explicitly programmed [1, 2]. Machine learning can be classified into
supervised and unsupervised learning. In supervised learning, computers
learn a function that maps an input to an output based on training
input-output pairs [3]. Among the most efficient and widely used supervised
learning algorithms are K-Nearest Neighbors (KNN), Support Vector Machine
(SVM), Large Margin Nearest Neighbor (LMNN), and Extended Nearest Neighbor
(ENN). The main contribution of this paper is to implement these learning
algorithms on eleven different datasets from the UCI machine learning
repository and observe how the accuracy of each algorithm varies across
datasets. Analyzing the accuracies gives a brief idea of the relationship
between the machine learning algorithms and data dimensionality. All the
algorithms are implemented in MATLAB. From these accuracy observations, a
comparison can be drawn among KNN, SVM, LMNN, and ENN regarding their
performance on each dataset.
Comment: To be published in the 4th IEEE International Conference on
Electrical Engineering and Information & Communication Technology (iCEEiCT
2018)
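As a minimal illustration of the kind of accuracy comparison described above, here is a plain KNN classifier evaluated on a synthetic two-cluster dataset; the paper's experiments use MATLAB and real UCI data, so this Python/numpy sketch with invented data only shows the evaluation pattern:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Plain k-nearest-neighbor majority vote under Euclidean distance."""
    preds = []
    for x in X_test:
        nearest = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
        preds.append(np.bincount(y_train[nearest]).argmax())
    return np.array(preds)

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

# two well-separated Gaussian clusters stand in for a UCI dataset
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, size=(20, 2)),
               rng.normal(3.0, 0.5, size=(20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(accuracy(y, knn_predict(X, y, X, k=3)))
```

Repeating this loop per algorithm and per dataset, and tabulating the accuracies against each dataset's dimensionality, is the comparison structure the paper describes.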
A LightGBM-Based EEG Analysis Method for Driver Mental States Classification
Fatigue driving can easily lead to road traffic accidents and bring great harm to individuals and families. Recently, electroencephalography- (EEG-) based physiological and brain activities for fatigue detection have been increasingly investigated. However, how to find an effective method or model to timely and efficiently detect the mental states of drivers still remains a challenge. In this paper, we combine common spatial pattern (CSP) with a proposed light-weighted classifier, LightFD, which is based on the gradient boosting framework, for EEG mental state identification. Comparisons with traditional classifiers, such as support vector machine (SVM), convolutional neural network (CNN), gated recurrent unit (GRU), and large margin nearest neighbor (LMNN), show that the proposed model achieves better classification performance as well as higher decision efficiency. Furthermore, we test and validate that LightFD has better transfer-learning performance in EEG classification of driver mental states. In summary, our proposed LightFD classifier performs better in real-time EEG mental state prediction, and it is expected to have broad application prospects in practical brain-computer interaction (BCI).
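LightFD itself is not specified in detail here, but the underlying gradient boosting idea it builds on can be sketched with decision stumps and a logistic loss; this is a generic toy illustration, not the LightFD implementation, and the 1-d data is invented:

```python
import numpy as np

def fit_stump(X, residual):
    """Least-squares decision stump fit to the residual: returns the best
    (feature, threshold, left_value, right_value)."""
    best, best_err = None, np.inf
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f])[:-1]:  # splitting on the max is useless
            left = X[:, f] <= thr
            lv, rv = residual[left].mean(), residual[~left].mean()
            err = ((residual - np.where(left, lv, rv)) ** 2).sum()
            if err < best_err:
                best_err, best = err, (f, thr, lv, rv)
    return best

def boost(X, y, rounds=20, lr=0.5):
    """Gradient boosting with logistic loss: each stump fits the negative
    gradient y - p of the loss at the current additive score F."""
    F = np.zeros(len(y))
    for _ in range(rounds):
        p = 1.0 / (1.0 + np.exp(-F))       # current class-1 probabilities
        f, thr, lv, rv = fit_stump(X, y - p)
        F += lr * np.where(X[:, f] <= thr, lv, rv)
    return F

# toy 1-d "feature": the class flips at x = 1.5
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
scores = boost(X, y)
print((scores > 0).astype(int))  # -> [0 0 1 1]
```

A production framework like the one the paper builds on adds histogram-based split finding, regularization, and leaf-wise tree growth on top of this basic fit-the-gradient loop; in the paper's pipeline, the input features would be CSP features extracted from EEG rather than raw signals.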