Decentralization Estimators for Instrumental Variable Quantile Regression Models
The instrumental variable quantile regression (IVQR) model (Chernozhukov and
Hansen, 2005) is a popular tool for estimating causal quantile effects with
endogenous covariates. However, estimation is complicated by the non-smoothness
and non-convexity of the IVQR GMM objective function. This paper shows that the
IVQR estimation problem can be decomposed into a set of conventional quantile
regression sub-problems which are convex and can be solved efficiently. This
reformulation leads to new identification results and to fast, easy-to-implement,
and tuning-free estimators that do not require high-level "black box"
optimization routines.
Accelerated first-order methods for a class of semidefinite programs
This paper introduces a new storage-optimal first-order method (FOM),
CertSDP, for solving a special class of semidefinite programs (SDPs) to high
accuracy. The class of SDPs that we consider, the exact QMP-like SDPs, is
characterized by low-rank solutions, a priori knowledge of the restriction of
the SDP solution to a small subspace, and standard regularity assumptions such
as strict complementarity. Crucially, we show how to use a certificate of
strict complementarity to construct a low-dimensional strongly convex minimax
problem whose optimizer coincides with a factorization of the SDP optimizer.
From an algorithmic standpoint, we show how to construct the necessary
certificate and how to solve the minimax problem efficiently. We accompany our
theoretical results with preliminary numerical experiments suggesting that
CertSDP significantly outperforms current state-of-the-art methods on large
sparse exact QMP-like SDPs.
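The reduction to a low-dimensional strongly convex minimax problem matters because such saddle points admit simple, fast first-order methods. A hedged toy illustration of that principle (this is not the CertSDP algorithm, and the objective below is a made-up example): gradient descent-ascent on a strongly-convex-strongly-concave function converges rapidly to the saddle point.

```python
# Toy strongly-convex-strongly-concave minimax problem solved by
# gradient descent-ascent (GDA). Illustration only -- not CertSDP.
#
#   min_x max_y f(x, y) = x^2 + 2*x*y - y^2,  saddle point at (0, 0).

def gda(x, y, step=0.1, iters=200):
    for _ in range(iters):
        gx = 2.0 * x + 2.0 * y              # df/dx
        gy = 2.0 * x - 2.0 * y              # df/dy
        x, y = x - step * gx, y + step * gy  # descend in x, ascend in y
    return x, y

if __name__ == "__main__":
    x, y = gda(1.0, 1.0)
    print(x, y)  # both very close to 0
```

Strong convexity in x and strong concavity in y give the iteration a linear convergence rate for a small enough step size, which is what makes a minimax reformulation of an SDP attractive as a first-order method.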
Certifiably Correct Range-Aided SLAM
We present the first algorithm to efficiently compute certifiably optimal
solutions to range-aided simultaneous localization and mapping (RA-SLAM)
problems. Robotic navigation systems increasingly incorporate point-to-point
ranging sensors, leading to state estimation problems in the form of RA-SLAM.
However, the RA-SLAM problem is significantly more difficult to solve than
traditional pose-graph SLAM: ranging sensor models introduce non-convexity and
single range measurements do not uniquely determine the transform between the
involved sensors. As a result, RA-SLAM inference is sensitive to initial
estimates yet lacks reliable initialization techniques. Our approach,
certifiably correct RA-SLAM (CORA), leverages a novel quadratically constrained
quadratic programming (QCQP) formulation of RA-SLAM to relax the RA-SLAM
problem to a semidefinite program (SDP). CORA solves the SDP efficiently using
the Riemannian Staircase methodology; the SDP solution provides both (i) a
lower bound on the RA-SLAM problem's optimal value, and (ii) an approximate
solution of the RA-SLAM problem, which can be subsequently refined using local
optimization. CORA applies to problems with arbitrary pose-pose, pose-landmark,
and ranging measurements and, owing to its convex relaxation, is insensitive to
initialization. We evaluate CORA on several real-world problems. In contrast to
state-of-the-art approaches, CORA is able to obtain high-quality solutions on
all problems despite being initialized with random values. Additionally, we
study the tightness of the SDP relaxation with respect to important problem
parameters: the number of (i) robots, (ii) landmarks, and (iii) range
measurements. These experiments demonstrate that the SDP relaxation is often
tight and reveal relationships between graph rigidity and the tightness of the
SDP relaxation.
Comment: 17 pages, 9 figures, submitted to T-R
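The certification logic behind QCQP-to-SDP relaxations can be seen in miniature on a problem small enough to solve by hand. In the hand-picked toy below (not the RA-SLAM relaxation itself), minimizing x^T A x over the unit circle has SDP relaxation value lambda_min(A); a unit eigenvector attains that value, so the relaxation is tight and the eigenvalue serves as the certificate of global optimality.

```python
import math

# Tiny certifiable QCQP: minimize x^T A x subject to ||x|| = 1, for a
# symmetric 2x2 matrix A = [[a, b], [b, c]]. The SDP relaxation's
# optimal value is lambda_min(A); the matching eigenvector attains it,
# certifying global optimality. Toy example only -- not CORA.

def certify_min(a, b, c):
    """Return (lower_bound, x): the SDP bound lambda_min(A) and a unit
    vector x achieving x^T A x == lower_bound (tightness certificate)."""
    lam = (a + c) / 2.0 - math.hypot((a - c) / 2.0, b)  # lambda_min
    # Eigenvector for lam: (A - lam*I) x = 0  =>  x proportional to (b, lam - a)
    vx, vy = (b, lam - a) if b != 0 else ((1.0, 0.0) if a <= c else (0.0, 1.0))
    n = math.hypot(vx, vy)
    return lam, (vx / n, vy / n)

if __name__ == "__main__":
    lam, (x1, x2) = certify_min(3.0, 1.0, 3.0)
    value = 3.0 * x1 * x1 + 2.0 * 1.0 * x1 * x2 + 3.0 * x2 * x2  # x^T A x
    print(lam, value)  # equal: the relaxation is tight
```

When the relaxed solution matches the lower bound, optimality is certified regardless of initialization, which is the property CORA exploits at much larger scale via the Riemannian Staircase.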
Positive Definite Kernels in Machine Learning
This survey is an introduction to positive definite kernels and the set of
methods they have inspired in the machine learning literature, namely kernel
methods. We first discuss some properties of positive definite kernels as well
as reproducing kernel Hilbert spaces, the natural extension of the set of
functions associated with a kernel defined on a space. We discuss at length
the construction of kernel
functions that take advantage of well-known statistical models. We provide an
overview of numerous data-analysis methods which take advantage of reproducing
kernel Hilbert spaces and discuss the idea of combining several kernels to
improve the performance on certain tasks. We also provide a short cookbook of
different kernels which are particularly useful for certain data types such as
images, graphs or speech segments.
Comment: draft. corrected a typo in figure
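Two facts the survey relies on can be checked numerically on a tiny example: the Gaussian (RBF) kernel is positive definite, and a sum of positive definite kernels is again positive definite (the basis of kernel combination). A minimal sketch, assuming an arbitrary gamma and three arbitrary 1-D sample points, using Sylvester's criterion on the Gram matrix:

```python
import math

# The Gaussian kernel k(x, y) = exp(-gamma * (x - y)^2) is positive
# definite, and a sum of positive definite kernels is positive definite.
# Sanity check on three distinct 1-D points via Sylvester's criterion
# (all leading principal minors of the Gram matrix positive).
# gamma and the sample points are arbitrary illustrative choices.

def rbf(x, y, gamma=1.0):
    return math.exp(-gamma * (x - y) ** 2)

def linear(x, y):
    return x * y

def gram(points, k):
    return [[k(x, y) for y in points] for x in points]

def leading_minors_positive(m):
    """Sylvester's criterion for a symmetric 3x3 matrix."""
    d1 = m[0][0]
    d2 = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    d3 = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return d1 > 0 and d2 > 0 and d3 > 0

if __name__ == "__main__":
    pts = [0.0, 0.5, 2.0]
    combined = lambda x, y: rbf(x, y) + linear(x, y)  # sum of PD kernels
    print(leading_minors_positive(gram(pts, rbf)))       # True
    print(leading_minors_positive(gram(pts, combined)))  # True
```

The same closure property (sums, products, and certain limits of positive definite kernels remain positive definite) is what makes the multiple-kernel combinations discussed above well-defined.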
Fitting Tractable Convex Sets to Support Function Evaluations
The geometric problem of estimating an unknown compact convex set from
evaluations of its support function arises in a range of scientific and
engineering applications. Traditional approaches typically rely on estimators
that minimize the error over all possible compact convex sets; in particular,
these methods do not allow for the incorporation of prior structural
information about the underlying set and the resulting estimates become
increasingly complicated to describe as the number of available measurements
grows. We address both of these shortcomings by describing a
framework for estimating tractably specified convex sets from support function
evaluations. Building on the literature in convex optimization, our approach is
based on estimators that minimize the error over structured families of convex
sets that are specified as linear images of concisely described sets -- such as
the simplex or the spectraplex -- in a higher-dimensional space that is not
much larger than the ambient space. Convex sets parametrized in this manner are
significant from a computational perspective as one can optimize linear
functionals over such sets efficiently; they serve a different purpose in the
inferential context of the present paper, namely, that of incorporating
regularization in the reconstruction while still offering considerable
expressive power. We provide a geometric characterization of the asymptotic
behavior of our estimators, and our analysis relies on the property that
certain sets which admit semialgebraic descriptions are Vapnik-Chervonenkis
(VC) classes. Our numerical experiments highlight the utility of our framework
over previous approaches in settings in which the measurements available are
noisy or small in number as well as those in which the underlying set to be
reconstructed is non-polyhedral.
Comment: 35 pages, 80 figure
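The computational convenience of the linear-image parameterization is easy to see for the simplex case: if K = A * Delta with Delta the standard simplex, the extreme points of K lie among the columns of A, so the support function h_K(u) = max over x in K of <u, x> reduces to a maximum over columns. A minimal sketch (the square below is an illustrative choice, not an example from the paper):

```python
# Support function of a set given as the linear image of a simplex:
# if K = A * Delta (Delta the standard simplex), the extreme points of
# K lie among the columns of A, so h_K(u) = max_j <u, a_j>. Evaluating
# h_K -- and hence optimizing linear functionals over K -- is just a
# maximum over columns. Illustrative toy, not the paper's estimator.

def support(columns, u):
    """h_K(u) = max over columns a of <u, a>."""
    return max(sum(ui * ai for ui, ai in zip(u, a)) for a in columns)

if __name__ == "__main__":
    # K = the square [-1, 1]^2, written as the image of a 4-vertex simplex:
    square = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
    print(support(square, (1.0, 1.0)))  # -> 2.0
    print(support(square, (1.0, 0.0)))  # -> 1.0  (h of the square is |u1| + |u2|)
```

Fitting then amounts to choosing the columns of A (or, for the spectraplex, a linear map on symmetric matrices) so that these support values match the noisy evaluations, which is how the framework trades expressive power against regularization.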