Polynomial-Time Amoeba Neighborhood Membership and Faster Localized Solving
We derive efficient algorithms for coarse approximation of algebraic
hypersurfaces, useful for estimating the distance between an input polynomial
zero set and a given query point. Our methods work best on sparse polynomials
of high degree (in any number of variables) but are nevertheless completely
general. The underlying ideas, which we take the time to describe in an
elementary way, come from tropical geometry. We thus reduce a hard algebraic
problem to high-precision linear optimization, proving new upper and lower
complexity estimates along the way.Comment: 15 pages, 9 figures. Submitted to a conference proceeding
The Convex Hull Problem in Practice: Improving the Running Time of the Double Description Method
The double description method is a simple but widely used algorithm for the computation of extreme points of polyhedral sets. One key aspect of its implementation is the question of how to test extreme points for adjacency efficiently. In this dissertation, two significant contributions related to adjacency testing are presented. First, the currently used data structures are revisited and various optimizations are proposed. Empirical evidence is provided to demonstrate their competitiveness. Second, a new adjacency test is introduced. It is a refinement of the well-known algebraic test, featuring a technique for avoiding redundant computations. Its correctness is formally proven, and its superiority in multiple degenerate scenarios is demonstrated through experimental results. Parallel computation is one further aspect of the double description method covered in this work: a recently introduced divide-and-conquer technique is revisited and considerable practical limitations are demonstrated.
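For intuition, here is a minimal sketch of the standard combinatorial adjacency test used in double description implementations (not the dissertation's refined algebraic test; the cone and names below are hypothetical): two extreme rays are adjacent exactly when no third ray is tight on every constraint that both of them satisfy with equality.

```python
def adjacent(r1, r2, rays, tight):
    # Combinatorial adjacency test: extreme rays r1 and r2 are adjacent
    # iff no other ray's tight-constraint set contains tight(r1) ∩ tight(r2).
    common = tight[r1] & tight[r2]
    return not any(common <= tight[r] for r in rays if r not in (r1, r2))

# Hypothetical cone: the nonnegative orthant in R^3, with constraint i
# being x_i >= 0 and extreme rays e1 = (1,0,0), e2 = (0,1,0), e3 = (0,0,1).
tight = {"e1": {2, 3}, "e2": {1, 3}, "e3": {1, 2}}
rays = list(tight)
assert adjacent("e1", "e2", rays, tight)   # e1 and e2 span a 2-face
```

The refined test in the dissertation is about avoiding redundant work inside exactly this kind of predicate, which dominates the running time on degenerate inputs.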
Approximate Nearest-Neighbor Search for Line Segments
Approximate nearest-neighbor search is a fundamental algorithmic problem that
continues to inspire study due to its essential role in numerous contexts. In
contrast to most prior work, which has focused on point sets, we consider
nearest-neighbor queries against a set of line segments in $\mathbb{R}^d$, for
constant dimension $d$. Given a set $S$ of $n$ disjoint line segments in
$\mathbb{R}^d$ and an error parameter $\varepsilon > 0$, the objective is to
build a data structure such that for any query point $q$, it is possible to
return a line segment whose Euclidean distance from $q$ is at most
$(1+\varepsilon)$ times the distance from $q$ to its nearest line segment. We
present a data structure for this problem whose storage and query time depend
on $n$, $\varepsilon$, and the spread $\Delta$ of the set of segments $S$. Our
approach is based on a covering of space by anisotropic elements, which align
themselves according to the orientations of nearby segments.
Comment: 20 pages (including appendix), 5 figures.
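To make the query concrete, the exact predicate that a $(1+\varepsilon)$-approximate structure must emulate is point-to-segment Euclidean distance; a brute-force exact baseline (a sketch with hypothetical helper names, not the paper's data structure) looks like this:

```python
import math

def dist_point_segment(q, a, b):
    # Distance from point q to segment [a, b] in any fixed dimension:
    # project q onto the supporting line, clamp the parameter to [0, 1].
    ab = [bi - ai for ai, bi in zip(a, b)]
    denom = sum(x * x for x in ab)
    t = 0.0
    if denom > 0:
        t = sum((qi - ai) * x for ai, qi, x in zip(a, q, ab)) / denom
        t = max(0.0, min(1.0, t))
    closest = [ai + t * x for ai, x in zip(a, ab)]
    return math.dist(q, closest)

def nearest_segment(q, segments):
    # O(n)-time exact search; an approximate structure must return a
    # segment within a (1 + eps) factor of this distance.
    return min(segments, key=lambda s: dist_point_segment(q, *s))

segs = [((1.0, -1.0), (1.0, 1.0)), ((2.0, 0.0), (3.0, 0.0))]
assert nearest_segment((0.0, 0.0), segs) == segs[0]   # distance 1 vs 2
```

The point of the paper's anisotropic covering is to answer this query in polylogarithmic time rather than by the linear scan above.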
Complexity of optimizing over the integers
In the first part of this paper, we present a unified framework for analyzing
the algorithmic complexity of any optimization problem, whether it be
continuous or discrete in nature. This helps to formalize notions like "input",
"size" and "complexity" in the context of general mathematical optimization,
avoiding context-dependent definitions, which are one of the sources of
difference in the treatment of complexity within continuous and discrete
optimization. In the second part of the paper, we employ the language developed
in the first part to study information theoretic and algorithmic complexity of
{\em mixed-integer convex optimization}, which contains as a special case
continuous convex optimization on the one hand and pure integer optimization on
the other. We strive for the maximum possible generality in our exposition.
We hope that this paper contains material that both continuous optimizers and
discrete optimizers find new and interesting, even though almost all of the
material presented is common knowledge in one or the other community. We see
the main merit of this paper as bringing together all of this information under
one unifying umbrella with the hope that this will act as yet another catalyst
for more interaction across the continuous-discrete divide. In fact, our
motivation behind Part I of the paper is to provide a common language for both
communities.
Nonlinear Integer Programming
Research efforts of the past fifty years have led to a development of linear
integer programming as a mature discipline of mathematical optimization. Such a
level of maturity has not been reached when one considers nonlinear systems
subject to integrality requirements for the variables. This chapter is
dedicated to this topic.
The primary goal is a study of a simple version of general nonlinear integer
problems, where all constraints are still linear. Our focus is on the
computational complexity of the problem, which varies significantly with the
type of nonlinear objective function in combination with the underlying
combinatorial structure. Numerous boundary cases of complexity emerge, some of
which, surprisingly, even admit polynomial-time algorithms.
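A toy instance of the problem class just described (a convex quadratic objective over the integer points of a polytope; the instance data here is made up for illustration): enumeration over the feasible integer points is the trivially correct but exponential baseline that the structured polynomial-time results in this chapter improve upon.

```python
from itertools import product

def solve_by_enumeration(f, bound, feasible):
    # Brute-force baseline: minimize f over the integer points of the box
    # [0, bound]^2 that satisfy the linear constraints encoded by `feasible`.
    points = [p for p in product(range(bound + 1), repeat=2) if feasible(p)]
    return min(points, key=f)

# Hypothetical instance: nonlinear (convex quadratic) objective,
# one linear constraint, integrality on both variables.
f = lambda p: (p[0] - 1.3) ** 2 + (p[1] - 2.7) ** 2
feasible = lambda p: p[0] + p[1] <= 3
assert solve_by_enumeration(f, 3, feasible) == (1, 2)
```

Note that the continuous minimizer $(1.3, 2.7)$ is infeasible here; the interplay between the nonlinear objective and the integrality constraints is precisely what drives the complexity boundary cases the chapter studies.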
We also cover recent successful approaches for more general classes of
problems. Though no positive theoretical efficiency results are available, nor
are they likely to ever be available, these seem to be the currently most
successful and interesting approaches for solving practical problems.
It is our belief that the study of algorithms motivated by theoretical
considerations and those motivated by our desire to solve practical instances
should and do inform one another. So it is with this viewpoint that we present
the subject, and it is in this direction that we hope to spark further
research.
Comment: 57 pages. To appear in: M. Jünger, T. Liebling, D. Naddef, G.
Nemhauser, W. Pulleyblank, G. Reinelt, G. Rinaldi, and L. Wolsey (eds.), 50
Years of Integer Programming 1958--2008: The Early Years and State-of-the-Art
Surveys, Springer-Verlag, 2009, ISBN 354068274
Lower Bounds on the Oracle Complexity of Nonsmooth Convex Optimization via Information Theory
We present an information-theoretic approach to lower bound the oracle
complexity of nonsmooth black box convex optimization, unifying previous lower
bounding techniques by identifying a combinatorial problem, namely string
guessing, as a single source of hardness. As a measure of complexity we use
distributional oracle complexity, which subsumes randomized oracle complexity
as well as worst-case oracle complexity. We obtain strong lower bounds on
distributional oracle complexity for the box $[-1,1]^n$, as well as for the
$\ell_p$-ball for $p \geq 1$ (for both low-scale and large-scale regimes),
matching worst-case upper bounds, and hence we close the gap between
distributional complexity, and in particular, randomized complexity, and
worst-case complexity. Furthermore, the bounds remain essentially the same for
high-probability and bounded-error oracle complexity, and even for combination
of the two, i.e., bounded-error high-probability oracle complexity. This
considerably extends the applicability of known bounds.