On Correcting Inputs: Inverse Optimization for Online Structured Prediction
Algorithm designers typically assume that the input data is correct, and then
proceed to find "optimal" or "sub-optimal" solutions using this input data.
However, this assumption of correct data does not always hold in practice,
especially in the context of online learning systems where the objective is to
learn appropriate feature weights given some training samples. Such scenarios
necessitate the study of inverse optimization problems where one is given an
input instance as well as a desired output and the task is to adjust the input
data so that the given output is indeed optimal. Motivated by learning
structured prediction models, in this paper we consider inverse optimization
with a margin, i.e., we require the given output to be better than all other
feasible outputs by a desired margin. We consider such inverse optimization
problems for maximum weight matroid basis, matroid intersection, perfect
matchings, minimum cost maximum flows, and shortest paths and derive the first
known results for such problems with a non-zero margin. The effectiveness of
these algorithmic approaches to online learning for structured prediction is
also discussed.
Comment: Conference version to appear in FSTTCS, 201
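As a point of reference, a margin-constrained inverse problem of this kind can be written in the following form (our notation, stated for a maximization objective; the paper treats the specific problem classes listed above, and for minimization problems such as shortest paths the inequality is reversed):

\[
\min_{\Delta}\ \|\Delta\| \quad \text{s.t.} \quad (w+\Delta)^\top y^\ast \;\ge\; (w+\Delta)^\top y + \gamma \quad \forall\, y \in \mathcal{Y}\setminus\{y^\ast\},
\]

where $w$ is the given weight vector, $\mathcal{Y}$ the feasible set (e.g., matroid bases or perfect matchings), $y^\ast$ the desired output, and $\gamma > 0$ the margin. In the online-learning view, each training example contributes such a constraint, and the weights are corrected whenever the margin is violated.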
On the tradeoff between stability and fit
In computing, as in many aspects of life, changes incur cost. Many optimization problems are formulated as a one-time instance starting from scratch. However, a common case that arises is when we already have a set of prior assignments and must decide how to respond to a new set of constraints, given that each change from the current assignment comes at a price. That is, we would like to maximize the fitness or efficiency of our system, but we need to balance it with the changeout cost from the previous state.
We provide a precise formulation for this tradeoff and analyze the resulting stable extensions of some fundamental problems in measurement and analytics. Our main technical contribution is a stable extension of Probability Proportional to Size (PPS) weighted random sampling, with applications to monitoring and anomaly detection problems. We also provide a general framework that applies to top-k, minimum spanning tree, and assignment. In both cases, we are able to provide exact solutions and discuss efficient incremental algorithms that can find new solutions as the input changes.
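A minimal sketch (our construction, not necessarily the paper's exact scheme) of why shared per-item randomness makes threshold-PPS sampling stable: if each item keeps a fixed uniform draw across re-runs, a small change in one weight flips the membership of at most a few items, which is precisely the low changeout cost being traded against fit.

```python
import random

def pps_sample(weights, tau, seeds):
    """Threshold PPS: include item i with probability min(1, w_i / tau).

    Reusing the fixed per-item uniform `seeds[i]` across re-runs means that a
    small change in w_i (or in tau) flips the membership of only a few items.
    In practice tau would be chosen so the expected sample size hits a target.
    """
    return {i for i, w in weights.items() if seeds[i] < min(1.0, w / tau)}

# Illustrative usage: perturb one weight and count membership changes.
rng = random.Random(0)
weights = {i: rng.expovariate(1.0) for i in range(1000)}
seeds = {i: rng.random() for i in weights}       # shared randomness across runs
old = pps_sample(weights, tau=2.0, seeds=seeds)
weights[7] *= 10                                  # the input changes
new = pps_sample(weights, tau=2.0, seeds=seeds)
print(len(old ^ new), "membership changes")      # typically 0 or 1
```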
Pairwise MRF Calibration by Perturbation of the Bethe Reference Point
We investigate different ways of generating approximate solutions to the
pairwise Markov random field (MRF) selection problem. We focus mainly on the
inverse Ising problem, but also discuss the somewhat related inverse Gaussian
problem because both types of MRF are suitable for inference tasks with the
belief propagation algorithm (BP) under certain conditions. Our approach
consists in taking a Bethe mean-field solution obtained with a maximum
spanning tree (MST) of pairwise mutual information, referred to as the
\emph{Bethe reference point}, for further perturbation procedures. We consider
three different ways of following this idea: in the first one, we iteratively
select and calibrate the optimal links to be added, starting from the Bethe
reference point; the second one is based on the observation that the natural
gradient can be computed analytically at the Bethe point; in the third one,
assuming no local field and using a low-temperature expansion, we develop a dual
loop joint model based on a well-chosen fundamental cycle basis. We indeed
identify a subclass of planar models, which we refer to as \emph{Bethe-dual
graph models}, having possibly many loops, but characterized by a singly
connected dual factor graph, for which the partition function and the linear
response can be computed exactly in respectively O(N) and operations,
thanks to a dual weight propagation (DWP) message passing procedure that we set
up. When restricted to this subclass of models, the inverse Ising problem, being
convex, becomes tractable at any temperature. Experimental tests on various
datasets with refined regularization procedures indicate that these approaches
may be competitive and useful alternatives to existing ones.
Comment: 54 pages, 8 figures. Section 5 and refs added in V
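As a point of reference, the maximum spanning tree over pairwise mutual information that serves as the Bethe reference point above might be constructed roughly as follows for binary data (the estimator and helper names are ours; the subsequent perturbation and calibration steps are not shown):

```python
import numpy as np

def pairwise_mi(samples):
    """Empirical mutual information for every pair of {0,1}-valued columns."""
    n, d = samples.shape
    mi = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            pij = np.histogram2d(samples[:, i], samples[:, j], bins=2)[0] / n
            pi, pj = pij.sum(1, keepdims=True), pij.sum(0, keepdims=True)
            mask = pij > 0
            mi[i, j] = mi[j, i] = np.sum(pij[mask] * np.log(pij[mask] / (pi @ pj)[mask]))
    return mi

def maximum_spanning_tree(score):
    """Kruskal's algorithm, taking edges in decreasing score order."""
    d = score.shape[0]
    parent = list(range(d))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    ranked = sorted(((score[i, j], i, j) for i in range(d) for j in range(i + 1, d)),
                    reverse=True)
    tree = []
    for _, i, j in ranked:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Toy usage: a hidden chain x0 -> x1 -> ... -> x5; the MST should typically
# recover the chain edges, since mutual information decays along the chain.
rng = np.random.default_rng(0)
x = np.zeros((5000, 6), dtype=int)
x[:, 0] = rng.integers(0, 2, 5000)
for j in range(1, 6):
    flip = rng.random(5000) < 0.1
    x[:, j] = np.where(flip, 1 - x[:, j - 1], x[:, j - 1])
print(maximum_spanning_tree(pairwise_mi(x)))
```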
Robust Rotation Synchronization via Low-rank and Sparse Matrix Decomposition
This paper deals with the rotation synchronization problem, which arises in
global registration of 3D point-sets and in structure from motion. The problem
is formulated in an unprecedented way as a "low-rank and sparse" matrix
decomposition that handles both outliers and missing data. A minimization
strategy, dubbed R-GoDec, is also proposed and evaluated experimentally against
state-of-the-art algorithms on simulated and real data. The results show that
R-GoDec is the fastest among the robust algorithms.
Comment: The material contained in this paper is part of a manuscript
submitted to CVI
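For intuition, a generic alternating "low-rank plus sparse" splitting can be sketched as follows; this is not the R-GoDec algorithm itself (and it ignores the paper's handling of missing data), and in the synchronization setting X would be the block matrix of pairwise relative rotations:

```python
import numpy as np

def low_rank_plus_sparse(X, rank, lam, iters=50):
    """Alternately fit X ~ L + S with rank(L) <= rank and S sparse.

    L-step: truncated SVD of X - S; S-step: entrywise soft-thresholding of
    X - L. A generic sketch of the low-rank-and-sparse idea only.
    """
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        R = X - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)    # soft threshold
    return L, S

# Toy usage: a rank-2 matrix corrupted by a handful of large outliers.
rng = np.random.default_rng(1)
ground = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
corrupt = ground.copy()
corrupt[rng.integers(0, 30, 15), rng.integers(0, 30, 15)] += 10.0
L, S = low_rank_plus_sparse(corrupt, rank=2, lam=1.0)
print(np.linalg.norm(L - ground) / np.linalg.norm(ground))   # relative error
```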
Graph Sparsification, Spectral Sketches, and Faster Resistance Computation, via Short Cycle Decompositions
We develop a framework for graph sparsification and sketching, based on a new
tool, short cycle decomposition -- a decomposition of an unweighted graph into
an edge-disjoint collection of short cycles, plus few extra edges. A simple
observation gives that every graph G on n vertices with m edges can be
decomposed in time into cycles of length at most , and at most
extra edges. We give an time algorithm for constructing a
short cycle decomposition, with cycles of length , and
extra edges. These decompositions enable us to make progress on several open
questions:
* We give an algorithm to find -approximations to effective
resistances of all edges in time , improving over
the previous best of .
This gives an algorithm to approximate the determinant of a Laplacian up to
in time.
* We show existence and efficient algorithms for constructing graphical
spectral sketches -- a distribution over sparse graphs H such that for a fixed
vector , we have w.h.p. and
. This implies the existence of
resistance-sparsifiers with about edges that preserve the
effective resistances between every pair of vertices up to
* By combining short cycle decompositions with known tools in graph
sparsification, we show the existence of nearly-linear sized degree-preserving
spectral sparsifiers, as well as significantly sparser approximations of
directed graphs. The latter is critical to recent breakthroughs on faster
algorithms for solving linear systems in directed Laplacians.
Improved algorithms for constructing short cycle decompositions will lead to
improvements for each of the above results.
Comment: 80 pages
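For intuition, the "simple observation" mentioned in the abstract can be sketched naively as follows (our implementation, with none of the paper's running-time or size guarantees): peel edges at low-degree vertices into the extra set, then repeatedly remove a cycle found by breadth-first search, which is short because the remaining graph has minimum degree at least 3.

```python
from collections import deque

def short_cycle_decomposition(n, edge_list):
    """Naive sketch: split an undirected graph into short cycles plus extra edges."""
    adj = {v: set() for v in range(n)}
    for u, v in edge_list:
        adj[u].add(v)
        adj[v].add(u)
    extra, cycles = [], []

    def strip_low_degree():
        # Move edges incident to degree <= 2 vertices into the extra set.
        queue = deque(v for v in adj if 0 < len(adj[v]) <= 2)
        while queue:
            v = queue.popleft()
            for u in list(adj[v]):
                adj[v].discard(u)
                adj[u].discard(v)
                extra.append((v, u))
                if 0 < len(adj[u]) <= 2:
                    queue.append(u)

    def path_to_root(parent, x):
        out = []
        while x is not None:
            out.append(x)
            x = parent[x]
        return out

    def bfs_cycle(root):
        # BFS until a non-tree edge is seen; splice the two tree paths into a cycle.
        parent, queue = {root: None}, deque([root])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in parent:
                    parent[w] = u
                    queue.append(w)
                elif parent[u] != w:
                    pu, pw = path_to_root(parent, u), path_to_root(parent, w)
                    idx = {v: i for i, v in enumerate(pu)}
                    k = next(i for i, v in enumerate(pw) if v in idx)
                    return pu[:idx[pw[k]] + 1] + pw[:k][::-1]

    strip_low_degree()
    while any(adj[v] for v in adj):
        cycle = bfs_cycle(next(v for v in adj if adj[v]))
        cycles.append(cycle)
        for a, b in zip(cycle, cycle[1:] + cycle[:1]):
            adj[a].discard(b)
            adj[b].discard(a)
        strip_low_degree()
    return cycles, extra

# Toy usage: the 3-dimensional hypercube (3-regular, 8 vertices, 12 edges).
cube = [(a, b) for a in range(8) for b in range(a + 1, 8)
        if bin(a ^ b).count("1") == 1]
cycles, extra = short_cycle_decomposition(8, cube)
print([len(c) for c in cycles], len(extra))
```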
Combinatorial algorithms for inverse network flow problems
Ravindra K. Ahuja, James B. Orlin. Revised January 25, 1998; dated February 1998.
Includes bibliographical references (p. 23-25). Supported by a grant from the
United Parcel Service and a contract from the Office of Naval Research (ONR
N00014-96-1-0051).
Locality in Network Optimization
In probability theory and statistics, notions of correlation among random
variables, decay of correlation, and bias-variance trade-off are fundamental.
In this work we introduce analogous notions in optimization, and we show their
usefulness in a concrete setting. We propose a general notion of correlation
among variables in optimization procedures that is based on the sensitivity of
optimal points upon (possibly finite) perturbations. We present a canonical
instance in network optimization (the min-cost network flow problem) that
exhibits locality, i.e., a setting where the correlation decays as a function
of the graph-theoretical distance in the network. In the case of warm-start
reoptimization, we develop a general approach to localize a given optimization
routine in order to exploit locality. We show that the localization mechanism
is responsible for introducing a bias in the original algorithm, and that the
bias-variance trade-off that emerges can be exploited to minimize the
computational complexity required to reach a prescribed level of error
accuracy. We provide numerical evidence to support our claims.
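One way to make the sensitivity-based notion of correlation concrete (our notation, not necessarily the paper's exact definition): for an optimal solution $x^\ast(b)$ of a min-cost flow instance with data $b$, set

\[
\operatorname{corr}(v, e) \;:=\; \sup_{\delta \neq 0} \frac{\bigl| x^\ast_v(b + \delta\,\mathbf{1}_e) - x^\ast_v(b) \bigr|}{|\delta|},
\]

the worst-case sensitivity of the solution at component $v$ to a perturbation of the data at component $e$. Locality then means that $\operatorname{corr}(v, e)$ decays with the graph-theoretical distance between $v$ and $e$, which is what justifies re-optimizing only in a neighborhood of the perturbation during warm starts, at the price of the bias discussed above.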
Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems
Optimization methods are at the core of many problems in signal/image
processing, computer vision, and machine learning. For a long time, it has been
recognized that looking at the dual of an optimization problem may drastically
simplify its solution. Deriving efficient strategies which jointly bring into
play the primal and the dual problems is, however, a more recent idea which has
generated many important new contributions in recent years. These novel
developments are grounded in recent advances in convex analysis, discrete
optimization, parallel processing, and non-smooth optimization, with emphasis on
sparsity issues. In this paper, we aim at presenting the principles of
primal-dual approaches, while giving an overview of numerical methods which
have been proposed in different contexts. We show the benefits which can be
drawn from primal-dual algorithms both for solving large-scale convex
optimization problems and discrete ones, and we provide various application
examples to illustrate their usefulness.
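As one concrete instance of the schemes surveyed (our choice of example, not specific to this paper), the Chambolle-Pock primal-dual iteration for a problem of the form $\min_x \tfrac{1}{2}\|x - b\|^2 + \lambda\|Kx\|_1$ alternates a dual projection with a primal proximal step:

```python
import numpy as np

def primal_dual(K, b, lam, iters=500):
    """Chambolle-Pock iteration for min_x 0.5*||x - b||^2 + lam*||K x||_1.

    The dual update is a projection onto the l-infinity ball of radius lam
    (the prox of the conjugate of lam*||.||_1); the primal update is the prox
    of the quadratic data term; the last lines are the usual extrapolation.
    """
    m, n = K.shape
    sigma = tau = 0.9 / np.linalg.norm(K, 2)     # ensures sigma*tau*||K||^2 < 1
    x, x_bar, y = np.zeros(n), np.zeros(n), np.zeros(m)
    for _ in range(iters):
        y = np.clip(y + sigma * (K @ x_bar), -lam, lam)       # dual step
        x_new = (x - tau * (K.T @ y) + tau * b) / (1 + tau)   # primal prox
        x_bar = 2 * x_new - x                                  # extrapolation
        x = x_new
    return x

# Toy usage: denoise a piecewise-constant signal with a total-variation penalty.
n = 100
signal = np.concatenate([np.zeros(50), np.ones(50)])
noisy = signal + 0.1 * np.random.default_rng(0).standard_normal(n)
D = np.diff(np.eye(n), axis=0)    # finite-difference operator plays the role of K
print(np.abs(primal_dual(D, noisy, lam=0.5) - signal).max())
```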