
    A unified two-level online learning scheme to optimize a distance metric

    We investigate a novel scheme of online multi-modal distance metric learning (OMDML), which explores a unified two-level online learning scheme: (i) it learns to optimize a distance metric on each individual feature space; and (ii) it then learns to find the optimal combination of diverse types of features. To further reduce the expensive cost of DML on high-dimensional feature spaces, we propose a low-rank OMDML algorithm which not only significantly reduces the computational cost but also retains highly competitive or even better learning accuracy.

    An Optimal Combination of Diverse Distance Metrics On Multiple Modalities

    We investigate a novel scheme of online multi-modal distance metric learning (OMDML), which explores a unified two-level online learning scheme: (i) it learns to optimize a distance metric on each individual feature space; and (ii) it then learns to find the optimal combination of diverse types of features. To further reduce the expensive cost of DML on high-dimensional feature spaces, we propose a low-rank OMDML algorithm which not only significantly reduces the computational cost but also retains highly competitive or even better learning accuracy.
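The two-level scheme described in this abstract can be illustrated with a small sketch. This is not the paper's exact algorithm; the class name, the hinge-loss update, the PSD projection, and the Hedge-style down-weighting are all illustrative assumptions about how the two levels could interact:

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y)."""
    d = x - y
    return float(d @ M @ d)

class OMDMLSketch:
    """Hypothetical two-level online learner (illustrative, not the paper's
    exact algorithm): level one keeps one metric per modality, level two
    keeps Hedge-style combination weights over the modalities."""

    def __init__(self, dims, eta=0.1, beta=0.9, b=2.0):
        self.metrics = [np.eye(d) for d in dims]        # level 1: per-modality metrics
        self.weights = np.ones(len(dims)) / len(dims)   # level 2: combination weights
        self.eta, self.beta, self.b = eta, beta, b

    def combined_distance(self, xs, ys):
        """Weighted sum of per-modality squared distances."""
        return sum(w * mahalanobis_sq(x, y, M)
                   for w, x, y, M in zip(self.weights, xs, ys, self.metrics))

    def update(self, xs, ys, similar):
        """One online round on a pair labeled similar (True) or dissimilar."""
        y_t = 1.0 if similar else -1.0
        for k, (x, y) in enumerate(zip(xs, ys)):
            d = mahalanobis_sq(x, y, self.metrics[k])
            if 1.0 - y_t * (self.b - d) > 0:            # hinge margin violated
                u = x - y
                # gradient step: pull similar pairs closer, push others apart
                self.metrics[k] -= self.eta * y_t * np.outer(u, u)
                # project back onto the PSD cone by clipping eigenvalues
                w_eig, V = np.linalg.eigh(self.metrics[k])
                self.metrics[k] = (V * np.clip(w_eig, 0.0, None)) @ V.T
                # penalize the modality whose metric erred
                self.weights[k] *= self.beta
        self.weights /= self.weights.sum()
```

A dissimilar pair that violates the margin increases the corresponding per-modality distances on the next query, while the combination weights shift toward modalities whose metrics already separate the pair well.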

    Regression on fixed-rank positive semidefinite matrices: a Riemannian approach

    The paper addresses the problem of learning a regression model parameterized by a fixed-rank positive semidefinite matrix. The focus is on the nonlinear nature of the search space and on scalability to high-dimensional problems. The mathematical developments rely on the theory of gradient descent algorithms adapted to the Riemannian geometry that underlies the set of fixed-rank positive semidefinite matrices. In contrast with previous contributions in the literature, no restrictions are imposed on the range space of the learned matrix. The resulting algorithms maintain a linear complexity in the problem size and enjoy important invariance properties. We apply the proposed algorithms to the problem of learning a distance function parameterized by a positive semidefinite matrix. Good performance is observed on classical benchmarks.
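The key idea, that gradient descent over fixed-rank PSD matrices can be run on a low-rank factor rather than the full matrix, can be sketched in a toy form. This is a minimal assumption-laden illustration, not the paper's algorithm: it uses the simple factored parameterization M = Y Yᵀ, a single training pair, and a plain Euclidean gradient pulled back to the factor:

```python
import numpy as np

# Toy sketch: learn M = Y Y^T (PSD, rank <= r) so the squared distance
# (x - z)^T M (x - z) matches an observed target, updating only the
# n x r factor Y. Cost per step is linear in n; M is never formed.
n, r = 3, 1
Y = np.full((n, r), 0.5)        # illustrative starting factor

def dist_sq(x, z, Y):
    """||Y^T (x - z)||^2 = (x - z)^T Y Y^T (x - z)."""
    v = Y.T @ (x - z)
    return float(v @ v)

x = np.array([1.0, 0.0, 0.0])
z = np.zeros(3)
target = 1.0                    # desired squared distance for this pair

step = 0.1
for _ in range(100):
    u = x - z
    err = dist_sq(x, z, Y) - target
    # Euclidean gradient in M is err * u u^T; pulled back to the factor Y
    # (since M = Y Y^T) it becomes 2 * err * u (u^T Y)
    Y -= step * 2.0 * err * np.outer(u, u @ Y)
```

After the loop, dist_sq(x, z, Y) is close to the target while M = Y Yᵀ stays positive semidefinite of rank at most r by construction, which is the practical appeal of factored updates over projecting a full matrix.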

    A Survey on Metric Learning for Feature Vectors and Structured Data

    The need for appropriate ways to measure the distance or similarity between data is ubiquitous in machine learning, pattern recognition and data mining, but handcrafting such good metrics for specific problems is generally difficult. This has led to the emergence of metric learning, which aims at automatically learning a metric from data and has attracted a lot of interest in machine learning and related fields for the past ten years. This survey paper proposes a systematic review of the metric learning literature, highlighting the pros and cons of each approach. We pay particular attention to Mahalanobis distance metric learning, a well-studied and successful framework, but additionally present a wide range of methods that have recently emerged as powerful alternatives, including nonlinear metric learning, similarity learning and local metric learning. Recent trends and extensions, such as semi-supervised metric learning, metric learning for histogram data and the derivation of generalization guarantees, are also covered. Finally, this survey addresses metric learning for structured data, in particular edit distance learning, and attempts to give an overview of the remaining challenges in metric learning for the years to come. Comment: Technical report, 59 pages.
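The Mahalanobis framework this survey centers on has a useful identity worth seeing concretely: any Mahalanobis metric with M = Lᵀ L is just Euclidean distance after the linear map L. The matrices and points below are illustrative:

```python
import numpy as np

# A Mahalanobis metric is a Euclidean metric after a linear map:
# with M = L^T L,  d_M(x, y)^2 = (x - y)^T M (x - y) = ||L x - L y||^2.
L = np.array([[2.0, 0.0],
              [0.0, 1.0]])       # illustrative linear transform
M = L.T @ L                      # PSD matrix parameterizing the metric

x = np.array([1.0, 1.0])
y = np.array([0.0, 0.0])

d = x - y
d_mahalanobis_sq = d @ M @ d                     # (x - y)^T M (x - y)
d_euclid_after_L = np.sum((L @ x - L @ y) ** 2)  # ||L(x - y)||^2

# → both equal 5.0 here: 2^2 * 1 + 1^2 * 1
```

This equivalence is why learning M is often described as learning a linear embedding: stretching the first axis by 2 makes differences along it count four times as much in squared distance.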