
    Parametric Local Metric Learning for Nearest Neighbor Classification

    We study the problem of learning local metrics for nearest neighbor classification. Most previous work on local metric learning learns a number of unrelated local metrics. While this "independence" approach delivers increased flexibility, its downside is a considerable risk of overfitting. We present a new parametric local metric learning method in which we learn a smooth metric matrix function over the data manifold. Using an approximation error bound of the metric matrix function, we learn local metrics as linear combinations of basis metrics defined on anchor points over different regions of the instance space. We constrain the metric matrix function by imposing manifold regularization on the linear combinations, which makes the learned metric matrix function vary smoothly along the geodesics of the data manifold. Our metric learning method has excellent performance in terms of both predictive power and scalability. We experimented with several large-scale classification problems of tens of thousands of instances and compared it with several state-of-the-art metric learning methods, both global and local, as well as with SVM with automatic kernel selection, all of which it outperforms by a significant margin.
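    The core construction, a local metric formed as a smooth, non-negative combination of basis metrics attached to anchor points, can be illustrated in a few lines. The sketch below is not the paper's method: the Gaussian weighting and the names local_metric and metric_distance are assumptions standing in for the learned, manifold-regularized weight functions described in the abstract.

```python
import numpy as np

def local_metric(x, anchors, basis_metrics, bandwidth=1.0):
    """Hypothetical sketch: a local metric at x as a convex combination of
    basis metrics. Gaussian weights are a stand-in for the paper's learned,
    manifold-regularized weighting."""
    d2 = np.sum((anchors - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))   # smooth, non-negative weights
    w /= w.sum()
    # A convex combination of PSD basis metrics is itself PSD
    return np.einsum("k,kij->ij", w, basis_metrics)

def metric_distance(x, y, M):
    """Squared Mahalanobis-style distance under metric matrix M."""
    diff = x - y
    return float(diff @ M @ diff)

# Toy usage: 3 anchor points in 2-D with scaled-identity basis metrics
rng = np.random.default_rng(0)
anchors = rng.normal(size=(3, 2))
basis = np.stack([np.eye(2) * s for s in (0.5, 1.0, 2.0)])
x, y = rng.normal(size=2), rng.normal(size=2)
M = local_metric(x, anchors, basis)
print(metric_distance(x, y, M))
```

    Because the weights vary smoothly with x, nearby points receive similar metrics, which is the intuition behind the smoothness constraint that guards against the overfitting of fully independent local metrics.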

    Study and Observation of the Variation of Accuracies of KNN, SVM, LMNN, ENN Algorithms on Eleven Different Datasets from UCI Machine Learning Repository

    Machine learning enables computers to learn from data without being explicitly programmed [1, 2]. Machine learning can be broadly classified into supervised and unsupervised learning. In supervised learning, computers learn a function that maps an input to an output based on training input-output pairs [3]. Among the most efficient and widely used supervised learning algorithms are K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Large Margin Nearest Neighbor (LMNN), and Extended Nearest Neighbor (ENN). The main contribution of this paper is to implement these learning algorithms on eleven different datasets from the UCI machine learning repository and to observe how the accuracy of each algorithm varies across the datasets. Analyzing the accuracies gives insight into the relationship between the machine learning algorithms and data dimensionality. All the algorithms are developed in Matlab. From these accuracy observations, KNN, SVM, LMNN, and ENN can be compared with respect to their performance on each dataset.
    Comment: To be published in the 4th IEEE International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT 2018).
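    The comparison protocol, per-dataset cross-validated accuracy for each classifier, is easy to mirror in Python with scikit-learn, although the paper's implementation is in Matlab. This minimal sketch covers only KNN and SVM on two UCI-style datasets bundled with scikit-learn; LMNN and ENN are omitted since scikit-learn does not ship them.

```python
from sklearn.datasets import load_iris, load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Two UCI-style datasets as stand-ins; the paper evaluates eleven
datasets = {"iris": load_iris(), "wine": load_wine()}
models = {
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}

for dname, data in datasets.items():
    for mname, model in models.items():
        # 5-fold cross-validated accuracy, analogous to a per-dataset comparison
        acc = cross_val_score(model, data.data, data.target, cv=5).mean()
        print(f"{dname:>5s}  {mname}: {acc:.3f}")
```

    A table of such accuracies, one row per dataset and one column per algorithm, is the kind of comparison the paper builds across its eleven datasets.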

    The Traveling Salesman Problem in the Natural Environment

    Is it possible for humans to navigate the natural environment such that the path taken between various destinations is 'optimal' in some way? In the domain of optimization this challenge is traditionally framed as the "Traveling Salesman Problem" (TSP). What strategies and ecological considerations are plausible for human navigation? When given a two-dimensional, map-like presentation of the destinations, participants solve this optimization exceptionally well (only 2-3% longer than optimum) [1, 2]. In the following experiments we investigate the effect of effort and its environmental affordance on navigation decisions when humans solve the TSP in the natural environment. Fifteen locations were marked on two outdoor landscapes with flat and varied terrain, respectively. Performance in the flat-field condition was excellent (∼6% error) and worse but still quite good in the variable-terrain condition (∼20% error), suggesting participants do not globally pre-plan routes but rather develop them on the fly. We suggest that perceived effort guides participant solutions due to the dynamic constraints of effortful locomotion and obstacle avoidance.
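    The "% longer than optimum" error metric can be made concrete with a small sketch: compute the exact shortest closed tour by brute force for a handful of sites and measure how much a candidate route exceeds it. The nearest-neighbor heuristic below is a hypothetical stand-in for an on-the-fly strategy, not the authors' model of human behavior, and brute force is only feasible at small problem sizes.

```python
import itertools
import numpy as np

def tour_length(points, order):
    """Total length of a closed tour visiting points in the given order."""
    p = points[list(order)]
    return float(np.linalg.norm(p - np.roll(p, -1, axis=0), axis=1).sum())

def brute_force_optimum(points):
    """Exact shortest closed tour; tractable only for small n."""
    n = len(points)
    best = min(itertools.permutations(range(1, n)),
               key=lambda rest: tour_length(points, (0,) + rest))
    return tour_length(points, (0,) + best)

def nearest_neighbor_tour(points):
    """Greedy 'go to the closest unvisited site' route, a crude stand-in
    for an on-the-fly navigation strategy."""
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
        order.append(nxt)
        unvisited.remove(nxt)
    return tour_length(points, order)

rng = np.random.default_rng(1)
pts = rng.uniform(size=(8, 2))   # 8 sites for tractability; the study used 15
opt = brute_force_optimum(pts)
nn = nearest_neighbor_tour(pts)
print(f"percent over optimum: {100 * (nn / opt - 1):.1f}%")
```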