Robust-to-Dynamics Optimization
A robust-to-dynamics optimization (RDO) problem is an optimization problem
specified by two pieces of input: (i) a mathematical program (an objective
function and a feasible set), and (ii) a dynamical system (a map). Its goal is
to minimize the objective over the set of initial conditions that forever
remain in the feasible set under the dynamics. The focus of this paper is on
the case where the
mathematical program is a linear program and the dynamical system is either a
known linear map, or an uncertain linear map that can change over time. In both
cases, we study a converging sequence of polyhedral outer approximations and
(lifted) spectrahedral inner approximations to this set. Our inner
approximations are optimized with respect to the objective function and
their semidefinite characterization---which has a semidefinite constraint of
fixed size---is obtained by applying polar duality to convex sets that are
invariant under (multiple) linear maps. We characterize three barriers that can
stop convergence of the outer approximations from being finite. We prove that
once these barriers are removed, our inner and outer approximating procedures
find an optimal solution and a certificate of optimality for the RDO problem in
a finite number of steps. Moreover, in the case where the dynamics are linear,
we show that this phenomenon occurs in a number of steps that can be computed
in time polynomial in the bit size of the input data. Our analysis also leads
to a polynomial-time algorithm for RDO instances where the spectral radius of
the linear map is bounded above by any constant less than one. Finally, in our
concluding section, we propose a broader research agenda for studying
optimization problems with dynamical systems constraints, of which RDO is a
special case.
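As a rough illustration of the outer-approximation idea (a sketch under assumed notation, not the paper's full procedure): if the feasible set is a polyhedron {x : Cx <= d} and the dynamics are a known linear map x -> Ax, then the k-th outer approximation can be taken as the set of points whose first k iterates stay in the polyhedron, and minimizing the linear objective over it is a single LP whose value lower-bounds the RDO optimum.

```python
# Hypothetical sketch of the k-th polyhedral outer approximation for a
# linear-dynamics RDO instance: minimize c'x over {x : C A^t x <= d, t = 0..k}.
# The names A, C, d, c, k are illustrative placeholders, not the paper's notation.
import numpy as np
from scipy.optimize import linprog

def rdo_outer_bound(c, C, d, A, k):
    """Lower bound on the RDO value from the k-th polyhedral outer approximation."""
    blocks, rhs = [], []
    At = np.eye(A.shape[0])
    for _ in range(k + 1):
        blocks.append(C @ At)   # constraint C (A^t x) <= d on the t-th iterate
        rhs.append(d)
        At = A @ At             # advance to the next iterate of the dynamics
    res = linprog(c, A_ub=np.vstack(blocks), b_ub=np.hstack(rhs),
                  bounds=[(None, None)] * A.shape[0])
    return res.fun if res.success else None
```

Increasing k shrinks these polyhedra toward the true set of forever-feasible initial conditions, which is the convergence the abstract refers to.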
Contingency-Constrained Unit Commitment With Intervening Time for System Adjustments
The N-1-1 contingency criterion considers the consecutive loss of two
components in a power system, with intervening time for system adjustments. In
this paper, we consider the problem of optimizing generation unit commitment
(UC) while ensuring N-1-1 security. Due to the coupling of time periods
associated with consecutive component losses, the resulting problem is a very
large-scale mixed-integer linear optimization model. For efficient solution, we
introduce a novel branch-and-cut algorithm using a temporally decomposed
bilevel separation oracle. The model and algorithm are assessed using multiple
IEEE test systems, and a comprehensive analysis is performed to compare system
performances across different contingency criteria. Computational results
demonstrate the value of considering intervening time for system adjustments in
terms of total cost and system robustness.
Comment: 8 pages, 5 figures
Net and Prune: A Linear Time Algorithm for Euclidean Distance Problems
We provide a general framework for getting expected linear time constant
factor approximations (and in many cases FPTAS's) to several well known
problems in Computational Geometry, such as k-center clustering and farthest
nearest neighbor. The new approach is robust to variations in the input
problem, and yet it is simple, elegant and practical. In particular, many of
these well studied problems which fit easily into our framework, either
previously had no linear time approximation algorithm, or required rather
involved algorithms and analysis. A short list of the problems we consider
include farthest nearest neighbor, k-center clustering, smallest disk
enclosing k points, kth largest distance, kth smallest m-nearest
neighbor distance, kth heaviest edge in the MST and other spanning forest
type problems, problems involving upward closed set systems, and more. Finally,
we show how to extend our framework such that the linear running time bound
holds with high probability.
Polynomial-time Tensor Decompositions with Sum-of-Squares
We give new algorithms based on the sum-of-squares method for tensor
decomposition. Our results improve the best known running times from
quasi-polynomial to polynomial for several problems, including decomposing
random overcomplete 3-tensors and learning overcomplete dictionaries with
constant relative sparsity. We also give the first robust analysis for
decomposing overcomplete 4-tensors in the smoothed analysis model. A key
ingredient of our analysis is to establish small spectral gaps in moment
matrices derived from solutions to sum-of-squares relaxations. To enable this
analysis we augment sum-of-squares relaxations with spectral analogs of maximum
entropy constraints.
Comment: to appear in FOCS 2016
Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration
Computing optimal transport distances such as the earth mover's distance is a
fundamental problem in machine learning, statistics, and computer vision.
Despite the recent introduction of several algorithms with good empirical
performance, it is unknown whether general optimal transport distances can be
approximated in near-linear time. This paper demonstrates that this ambitious
goal is in fact achieved by Cuturi's Sinkhorn Distances. This result relies on
a new analysis of Sinkhorn iteration, which also directly suggests a new greedy
coordinate descent algorithm, Greenkhorn, with the same theoretical guarantees.
Numerical simulations illustrate that Greenkhorn significantly outperforms the
classical Sinkhorn algorithm in practice.
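For reference, a minimal sketch of the classical Sinkhorn iteration the analysis concerns, for entropy-regularized optimal transport between two discrete distributions; the variable names, regularization strength, and stopping rule below are illustrative choices, not the paper's.

```python
# Minimal Sinkhorn iteration for entropy-regularized optimal transport.
# r, c: source/target marginals (probability vectors); M: cost matrix;
# eta: regularization strength. All parameter choices here are illustrative.
import numpy as np

def sinkhorn(r, c, M, eta=10.0, tol=1e-6, max_iter=10_000):
    K = np.exp(-eta * M)              # Gibbs kernel
    u = np.ones_like(r)
    v = np.ones_like(c)
    for _ in range(max_iter):
        v = c / (K.T @ u)             # rescale columns to match marginal c
        u_new = r / (K @ v)           # rescale rows to match marginal r
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    P = np.diag(u) @ K @ np.diag(v)   # approximate transport plan
    return P, np.sum(P * M)           # plan and its transport cost
```

Greenkhorn, as described in the abstract, is a greedy variant: instead of rescaling all rows and then all columns, it updates only the single row or column whose marginal constraint is currently most violated.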
A fast branch-and-prune algorithm for the position analysis of spherical mechanisms
Different branch-and-prune schemes can be found in the literature for
numerically solving the position analysis of spherical mechanisms. For the
prune operation, they all rely on the propagation of motion intervals. They
differ in the way the problem is algebraically formulated. This paper exploits
the fact that spherical kinematic loop equations can be formulated as sets of
3 multi-affine polynomials. Multi-affinity has an important impact on how the
propagation of motion intervals can be performed, because a multi-affine
polynomial is uniquely determined by its values at the vertices of a closed
hyperbox defined in its domain.
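The multi-affinity property is what makes the prune step cheap: a polynomial that is affine in each variable separately attains its extreme values over an axis-aligned box at the box's vertices, so a range estimate follows from enumerating vertex values. A generic sketch of that bounding step, not the paper's specific formulation:

```python
# Bounding a multi-affine polynomial over an axis-aligned box by enumerating its
# vertices: a function that is affine in each variable separately attains its
# minimum and maximum over the box at vertices. Generic illustration only.
from itertools import product

def box_bounds(poly, box):
    """poly: callable on a point (tuple of coords); box: list of (lo, hi) intervals."""
    values = [poly(v) for v in product(*box)]
    return min(values), max(values)

# Example: p(x, y, z) = x*y*z - 2*x*z + y  (degree <= 1 in each variable)
p = lambda v: v[0] * v[1] * v[2] - 2 * v[0] * v[2] + v[1]
lo, hi = box_bounds(p, [(-1.0, 1.0), (0.0, 2.0), (0.5, 1.5)])
```

In a branch-and-prune loop, a box can be discarded whenever such bounds show that one of the loop equations cannot vanish inside it.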
Online Metric-Weighted Linear Representations for Robust Visual Tracking
In this paper, we propose a visual tracker based on a metric-weighted linear
representation of appearance. In order to capture the interdependence of
different feature dimensions, we develop two online distance metric learning
methods using proximity comparison information and structured output learning.
The learned metric is then incorporated into a linear representation of
appearance.
We show that online distance metric learning significantly improves the
robustness of the tracker, especially on those sequences exhibiting drastic
appearance changes. In order to bound growth in the number of training samples,
we design a time-weighted reservoir sampling method.
Moreover, we enable our tracker to automatically perform object
identification during the process of object tracking, by introducing a
collection of static template samples belonging to several object classes of
interest. Object identification results for an entire video sequence are
achieved by systematically combining the tracking information and visual
recognition at each frame. Experimental results on challenging video sequences
demonstrate the effectiveness of the method for both inter-frame tracking and
object identification.
Comment: 51 pages. Appearing in IEEE Transactions on Pattern Analysis and
Machine Intelligence
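The abstract does not spell out the time-weighted reservoir sampling scheme; as one plausible reading, the sketch below uses a standard weighted reservoir sampler (Efraimidis-Spirakis keys) with an assumed recency-based weight, so newer frames are more likely to be retained. Both the weight form and the parameters are assumptions, not the paper's exact method.

```python
# Weighted reservoir sampling with a recency-based weight (assumed form:
# w = exp(lam * t), so later frames get larger weights). Standard
# Efraimidis-Spirakis keys; illustrative only.
import heapq, math, random

def update_reservoir(reservoir, sample, t, capacity=50, lam=0.01):
    """reservoir: min-heap of (key, t, sample); the largest keys survive."""
    w = math.exp(lam * t)                 # recency weight (assumed form)
    key = random.random() ** (1.0 / w)    # Efraimidis-Spirakis key
    if len(reservoir) < capacity:
        heapq.heappush(reservoir, (key, t, sample))
    elif key > reservoir[0][0]:
        heapq.heapreplace(reservoir, (key, t, sample))
    return reservoir
```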
Sharp analysis of low-rank kernel matrix approximations
We consider supervised learning problems within the positive-definite kernel
framework, such as kernel ridge regression, kernel logistic regression or the
support vector machine. With kernels leading to infinite-dimensional feature
spaces, a common practical limiting difficulty is the necessity of computing
the kernel matrix, which most frequently leads to algorithms with running time
at least quadratic in the number of observations n, i.e., O(n^2). Low-rank
approximations of the kernel matrix are often considered as they allow the
reduction of running time complexities to O(p^2 n), where p is the rank of the
approximation. The practicality of such methods thus depends on the required
rank p. In this paper, we show that in the context of kernel ridge regression,
for approximations based on a random subset of columns of the original kernel
matrix, the rank p may be chosen to be linear in the degrees of freedom
associated with the problem, a quantity which is classically used in the
statistical analysis of such methods, and is often seen as the implicit number
of parameters of non-parametric estimators. This result enables simple
algorithms that have sub-quadratic running time complexity, but provably
exhibit the same predictive performance as existing algorithms, for any given
problem instance, and not only for worst-case situations.
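A hedged sketch of the column-subsampling (Nystrom-type) approximation to kernel ridge regression that the result concerns; the estimator below is the standard subset-of-regressors form, and the names and regularization convention are illustrative rather than the paper's exact procedure.

```python
# Subset-of-columns (Nystrom-type) kernel ridge regression: the estimator is
# restricted to the span of p sampled kernel columns, so training costs roughly
# O(p^2 n) instead of O(n^2) or worse. Illustrative sketch only.
import numpy as np

def nystrom_krr_fit(K_np, y, idx, lam):
    """K_np: n x p matrix of sampled kernel columns K[:, idx]; returns coefficients."""
    n, p = K_np.shape
    K_pp = K_np[idx, :]                                 # p x p block K[idx][:, idx]
    # minimize ||K_np a - y||^2 + lam * a' K_pp a  over a in R^p
    A = K_np.T @ K_np + lam * K_pp
    b = K_np.T @ y
    return np.linalg.solve(A + 1e-10 * np.eye(p), b)    # small jitter for stability

def nystrom_krr_predict(k_test_p, alpha):
    """k_test_p: kernel evaluations between test points and the p sampled points."""
    return k_test_p @ alpha
```

Prediction at a new point needs only kernel evaluations against the p sampled points; the abstract's result says a rank p on the order of the problem's degrees of freedom already preserves predictive performance.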