The Metric Nearness Problem
Metric nearness refers to the problem of optimally restoring metric properties to
distance measurements that happen to be nonmetric due to measurement errors or otherwise. Metric
data can be important in various settings, for example, in clustering, classification, metric-based
indexing, query processing, and graph theoretic approximation algorithms. This paper formulates
and solves the metric nearness problem: Given a set of pairwise dissimilarities, find a "nearest" set
of distances that satisfy the properties of a metric, principally the triangle inequality. For solving
this problem, the paper develops efficient triangle fixing algorithms that are based on an iterative
projection method. An intriguing aspect of the metric nearness problem is that a special case turns
out to be equivalent to the all pairs shortest paths problem. The paper exploits this equivalence and
develops a new algorithm for the latter problem using a primal-dual method. Applications to graph
clustering are provided as an illustration. We include experiments that demonstrate the computational
superiority of triangle fixing over general purpose convex programming software. Finally, we
conclude by suggesting various useful extensions and generalizations to metric nearness.
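The triangle-fixing idea above can be illustrated with a crude sketch: repeatedly scan the triangles and, whenever an edge violates the triangle inequality, redistribute the violation over the triangle's three edges. The paper's actual algorithm adds Dykstra-style correction terms to obtain the exact nearest metric; this simplified sweep is only meant to convey the projection idea.

```python
import numpy as np

def triangle_fix(D, iters=100):
    """Illustrative triangle fixing: sweep over triangles and project
    onto each violated inequality d_ij <= d_ik + d_kj by spreading the
    violation equally across the three edges.  (Simplified sketch; the
    paper's method includes Dykstra-type corrections.)"""
    D = D.astype(float).copy()
    n = D.shape[0]
    for _ in range(iters):
        changed = False
        for i in range(n):
            for j in range(i + 1, n):
                for k in range(n):
                    if k in (i, j):
                        continue
                    viol = D[i, j] - (D[i, k] + D[k, j])
                    if viol > 1e-12:
                        # decrease the long edge, increase the other two
                        D[i, j] -= viol / 3
                        D[i, k] += viol / 3
                        D[k, j] += viol / 3
                        # keep the matrix symmetric
                        D[j, i] = D[i, j]; D[k, i] = D[i, k]; D[j, k] = D[k, j]
                        changed = True
        if not changed:
            break
    return D
```

On a 3-point example with one badly inflated dissimilarity, a single sweep already restores the triangle inequality.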
An efficient algorithm for the norm based metric nearness problem
Given a dissimilarity matrix, the metric nearness problem is to find the
nearest matrix of distances that satisfy the triangle inequalities. This
problem has wide applications, such as sensor networks, image processing, and
so on. But it is challenging even to obtain a moderately accurate
solution, due to the metric constraints and the nonsmooth objective
function, which is usually a weighted norm based distance. In this
paper, we propose a delayed constraint generation method with each subproblem
solved by the semismooth Newton based proximal augmented Lagrangian method
(PALM) for the metric nearness problem. Due to the high memory requirement for
the storage of the matrix related to the metric constraints, we take advantage
of the special structure of the matrix and do not need to store the
corresponding constraint matrix. A pleasing aspect of our algorithm is that we
can solve these problems involving up to variables and
constraints. Numerical experiments demonstrate the efficiency of our algorithm.
In theory, firstly, under a mild condition, we establish a primal-dual error
bound condition, which is essential for analyzing the local convergence
rate of PALM. Secondly, we prove the equivalence between the dual nondegeneracy
condition and the nonsingularity of the generalized Jacobian for the inner
subproblem of PALM. Thirdly, when or
, without the strict complementarity condition, we also
prove the equivalence between the dual nondegeneracy condition and the
uniqueness of the primal solution.
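The separation step at the heart of delayed constraint generation can be sketched as follows: scan the current iterate for violated triangle inequalities, so that only those constraints are added to the next subproblem rather than storing all O(n^3) of them. This helper is hypothetical and purely illustrative; in the paper each subproblem is then solved by PALM.

```python
import numpy as np

def violated_triangles(D, tol=1e-9):
    """Return the triples (i, j, k) for which the current iterate D
    violates d_ij <= d_ik + d_kj.  Only these constraints need to be
    generated explicitly, avoiding the full O(n^3) constraint matrix."""
    n = D.shape[0]
    out = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            for k in range(n):
                if k in (i, j):
                    continue
                if D[i, j] > D[i, k] + D[k, j] + tol:
                    out.append((i, j, k))
    return out
```

For a matrix that is already a metric the scan returns no constraints, which is what lets the delayed scheme stay small.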
A dual basis approach to multidimensional scaling: spectral analysis and graph regularity
Classical multidimensional scaling (CMDS) is a technique that aims to embed a
set of objects in a Euclidean space given their pairwise Euclidean distance
matrix. The main part of CMDS is based on double centering a squared distance
matrix and employing a truncated eigendecomposition to recover the point
coordinates. A central result in CMDS connects the squared Euclidean matrix to
a Gram matrix derived from the set of points. In this paper, we study a dual
basis approach to classical multidimensional scaling. We give an explicit
formula for the dual basis and fully characterize the spectrum of an essential
matrix in the dual basis framework. We make connections to a related problem in
metric nearness.
Comment: 9 pages
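The double-centering step described above can be sketched in a few lines: center the squared distance matrix to obtain the Gram matrix, then read point coordinates off its top eigenpairs. This is the standard CMDS recipe, not the paper's dual basis construction.

```python
import numpy as np

def classical_mds(D2, dim=2):
    """Classical MDS sketch: double-center the squared Euclidean
    distance matrix D2 into a Gram matrix B, then recover coordinates
    from the top `dim` eigenpairs of B."""
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ D2 @ J                 # Gram matrix of centered points
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]       # take the largest `dim`
    L = np.clip(w[idx], 0, None)          # guard against tiny negatives
    return V[:, idx] * np.sqrt(L)
```

Applied to a genuinely Euclidean distance matrix, the recovered embedding reproduces the original pairwise distances up to a rigid motion.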
Learning User Preferences to Incentivize Exploration in the Sharing Economy
We study platforms in the sharing economy and discuss the need for
incentivizing users to explore options that otherwise would not be chosen. For
instance, rental platforms such as Airbnb typically rely on customer reviews to
provide users with relevant information about different options. Yet, often a
large fraction of options does not have any reviews available. Such options are
frequently neglected as viable choices, and in turn are unlikely to be
evaluated, creating a vicious cycle. Platforms can engage users to deviate from
their preferred choice by offering monetary incentives for choosing a different
option instead. To efficiently learn the optimal incentives to offer, we
consider structural information in user preferences and introduce a novel
algorithm - Coordinated Online Learning (CoOL) - for learning with structural
information modeled as convex constraints. We provide formal guarantees on the
performance of our algorithm and test the viability of our approach in a user
study with data of apartments on Airbnb. Our findings suggest that our approach
is well-suited to learn appropriate incentives and increase exploration on the
investigated platform.
Comment: Longer version of AAAI'18 paper.
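The generic pattern behind online learning with convex constraints can be sketched with projected online gradient descent: take a gradient step on each round's loss, then project back onto the feasible set. CoOL itself exploits the constraint structure in a more elaborate way; this toy (with a box standing in for the structural constraints, and all names hypothetical) only shows the projection pattern.

```python
import numpy as np

def online_gd_box(losses_grad, x0, lo, hi, eta=0.1):
    """Projected online gradient descent on a box constraint set.
    `losses_grad` is a sequence of per-round gradient functions; after
    each step the iterate is projected (clipped) back into [lo, hi]."""
    x = np.array(x0, float)
    for g in losses_grad:
        x = x - eta * np.asarray(g(x))
        x = np.clip(x, lo, hi)  # Euclidean projection onto the box
    return x
```

With losses pulling the iterate toward an infeasible point, the projection pins it to the nearest feasible boundary.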