Sparse metric learning via smooth optimization
Authors
Colin Campbell
Kaizhu Huang
Yiming Ying
Publication date
15 March 2012
Abstract
Copyright © 2009 NIPS Foundation. 23rd Annual Conference on Advances in Neural Information Processing Systems (NIPS 2009), Vancouver, Canada, 7-10 December 2009.

In this paper we study the problem of learning a low-rank (sparse) distance matrix. We propose a novel metric learning model which can simultaneously conduct dimension reduction and learn a distance matrix. The sparse representation involves a mixed-norm regularization which is non-convex. We then show that it can be equivalently formulated as a convex saddle (min-max) problem. From this saddle representation, we develop an efficient smooth optimization approach [15] for sparse metric learning, although the learning model is based on a non-differentiable loss function. This smooth optimization approach has an optimal convergence rate of O(1/t²) for smooth problems, where t is the iteration number. Finally, we run experiments to validate the effectiveness and efficiency of our sparse metric learning model on various datasets.
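The smooth optimization approach [15] cited in the abstract refers to Nesterov-type accelerated gradient methods, which achieve the O(1/t²) rate on smooth convex problems. As an illustration only — this is not the paper's sparse metric learning algorithm, and the function names and the quadratic objective below are hypothetical — a minimal sketch of Nesterov's accelerated gradient method looks like this:

```python
# Illustrative sketch of Nesterov's accelerated gradient method, the class
# of smooth first-order schemes with O(1/t^2) convergence referenced in the
# abstract. This is NOT the paper's sparse metric learning algorithm; the
# quadratic objective below is a hypothetical stand-in for a smooth problem.

def accelerated_gradient(grad, x0, step, iters):
    """Minimize a smooth convex function given its gradient.

    grad  : callable returning the gradient at a point (list of floats)
    x0    : starting point
    step  : step size, typically 1/L for an L-smooth function
    iters : number of iterations t
    """
    x = y = list(x0)
    t = 1.0
    for _ in range(iters):
        # gradient step from the extrapolated point y
        x_new = [yi - step * gi for yi, gi in zip(y, grad(y))]
        t_new = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        # momentum step: extrapolate using the previous two iterates
        y = [xn + ((t - 1.0) / t_new) * (xn - xo)
             for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
    return x

# Example: minimize f(x) = 0.5*(3*x0^2 + x1^2) - x0 - x1.
# The gradient is (3*x0 - 1, x1 - 1), the minimizer is (1/3, 1),
# and the smoothness constant is L = 3, so we use step = 1/3.
grad = lambda v: [3.0 * v[0] - 1.0, v[1] - 1.0]
sol = accelerated_gradient(grad, [0.0, 0.0], step=1.0 / 3.0, iters=200)
```

The momentum coefficient (t − 1)/t_new is what lifts the rate from the O(1/t) of plain gradient descent to O(1/t²); the paper's contribution is making a non-differentiable, non-convex sparse metric learning objective amenable to exactly this kind of smooth scheme via a convex saddle reformulation.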
Available versions
Repository: Open Research Exeter (CORE supporting member)
OAI identifier: oai:ore.exeter.ac.uk:10871/119...
Last updated: 06/08/2013