Space Exploration via Proximity Search
We investigate what computational tasks can be performed on a point set in
d-dimensional Euclidean space if we are only given black-box access to it via nearest-neighbor
search. This is a reasonable assumption if the underlying point set is either
provided implicitly, or it is stored in a data structure that can answer such
queries. In particular, we show the following: (A) One can compute an
approximate bi-criteria k-center clustering of the point set, and more
generally compute a greedy permutation of the point set. (B) One can decide if
a query point is (approximately) inside the convex hull of the point set.
We also investigate the problem of clustering the given point set so that
meaningful proximity queries can be carried out on the centers of the clusters
instead of the whole point set.
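As background for (A), the greedy permutation can be sketched directly. The abstract's point is that it can be computed with only black-box nearest-neighbor queries; the sketch below instead assumes plain distance access to the points, and only illustrates the greedy (farthest-point) rule and the radii that certify a 2-approximate k-center for every prefix:

```python
import math

def greedy_permutation(points):
    """Farthest-point (greedy) permutation: each new point is the one
    farthest from all previously chosen points. For every prefix of
    length k, the chosen points form a 2-approximate k-center, with
    radii[k-1] bounding the covering radius."""
    pts = list(points)
    order = [0]                                   # start from an arbitrary point
    dist = [math.dist(p, pts[0]) for p in pts]    # distance to nearest chosen point
    radii = []
    for _ in range(len(pts) - 1):
        i = max(range(len(pts)), key=dist.__getitem__)
        radii.append(dist[i])
        order.append(i)
        for j, p in enumerate(pts):               # update nearest-center distances
            dist[j] = min(dist[j], math.dist(p, pts[i]))
    return order, radii
```

The radii are nonincreasing, which is what makes a single permutation serve all values of k at once.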
Monotone and Consistent discretization of the Monge-Ampere operator
We introduce a novel discretization of the Monge-Ampere operator,
simultaneously consistent and degenerate elliptic, hence accurate and robust in
applications. These properties are achieved by exploiting the arithmetic
structure of the discrete domain, assumed to be a two dimensional cartesian
grid. The construction of our scheme is simple, but its analysis relies on
original tools seldom encountered in numerical analysis, such as the geometry
of two dimensional lattices, and an arithmetic structure called the
Stern-Brocot tree. Numerical experiments illustrate the method's efficiency.
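To give a flavor of monotone wide-stencil schemes for det(D^2 u), here is a drastically simplified two-stencil sketch: it takes the minimum, over the axis-aligned and the diagonal lattice bases, of the product of positive parts of directional second differences. This is only an illustrative toy, not the Stern-Brocot-based construction described in the abstract:

```python
import numpy as np

def second_diff(u, e, h):
    """Centered second difference of grid function u along lattice
    direction e = (e0, e1), normalized by the squared step length."""
    up = np.roll(u, shift=(-e[0], -e[1]), axis=(0, 1))
    um = np.roll(u, shift=(e[0], e[1]), axis=(0, 1))
    return (up - 2 * u + um) / (h ** 2 * (e[0] ** 2 + e[1] ** 2))

def ma_operator(u, h):
    """Toy monotone approximation of det(D^2 u): minimum over two
    orthogonal direction pairs of products of positive parts of
    second differences. Values wrap at the boundary (np.roll), so
    only interior grid points are meaningful."""
    pairs = [((1, 0), (0, 1)),     # axis-aligned basis
             ((1, 1), (1, -1))]    # diagonal basis
    vals = [np.maximum(second_diff(u, e1, h), 0) *
            np.maximum(second_diff(u, e2, h), 0)
            for e1, e2 in pairs]
    return np.minimum(vals[0], vals[1])
```

Taking positive parts and a minimum over stencils is what makes such schemes degenerate elliptic; enlarging the family of lattice directions is what the arithmetic (Stern-Brocot) structure in the abstract organizes.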
On the convergence of the affine hull of the Chv\'atal-Gomory closures
Given an integral polyhedron P and a rational polyhedron Q living in the same
n-dimensional space and containing the same integer points as P, we investigate
how many iterations of the Chv\'atal-Gomory closure operator have to be
performed on Q to obtain a polyhedron contained in the affine hull of P. We
show that if P contains an integer point in its relative interior, then such a
number of iterations can be bounded by a function depending only on n. On the
other hand, we prove that if P is not full-dimensional and does not contain any
integer point in its relative interior, then no finite bound on the number of
iterations exists.
Comment: 13 pages, 2 figures; the introduction has been extended and an extra
chapter has been added.
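A single Chvatal-Gomory step can be illustrated concretely: take a nonnegative combination of the defining inequalities whose left-hand side is integral, then round the right-hand side down. A minimal sketch with exact rational arithmetic (the function name and interface are illustrative, not from the paper):

```python
import math
from fractions import Fraction

def cg_cut(A, b, lam):
    """One Chvatal-Gomory cut: given rows of A x <= b and nonnegative
    multipliers lam such that lam^T A is integral, the inequality
    (lam^T A) x <= floor(lam^T b) is valid for every integer point
    of the polyhedron {x : A x <= b}."""
    assert all(l >= 0 for l in lam)
    a = [sum(l * aij for l, aij in zip(lam, col)) for col in zip(*A)]
    rhs = sum(l * bi for l, bi in zip(lam, b))
    assert all(Fraction(ai).denominator == 1 for ai in a), "lhs must be integral"
    return [int(ai) for ai in a], math.floor(rhs)

# From 2x + 2y <= 3 with multiplier 1/2 we get x + y <= 3/2,
# hence the CG cut x + y <= 1.
cut = cg_cut([[2, 2]], [Fraction(3)], [Fraction(1, 2)])
```

The closure operator the abstract studies applies all such cuts simultaneously; the convergence question is how many rounds of this rounding are needed.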
Convex Hull of Points Lying on Lines in o(n log n) Time after Preprocessing
Motivated by the desire to cope with data imprecision, we study methods for
taking advantage of preliminary information about point sets in order to speed
up the computation of certain structures associated with them.
In particular, we study the following problem: given a set L of n lines in
the plane, we wish to preprocess L such that later, upon receiving a set P of n
points, each of which lies on a distinct line of L, we can construct the convex
hull of P efficiently. We show that in quadratic time and space it is possible
to construct a data structure on L that enables us to compute the convex hull
of any such point set P in O(n alpha(n) log* n) expected time. If we further
assume that the points are "oblivious" with respect to the data structure, the
running time improves to O(n alpha(n)). The analysis applies almost verbatim
when L is a set of line-segments, and yields similar asymptotic bounds. We
present several extensions, including a trade-off between space and query time
and an output-sensitive algorithm. We also study the "dual problem" where we
show how to efficiently compute the (<= k)-level of n lines in the plane, each
of which lies on a distinct point (given in advance).
We complement our results by Omega(n log n) lower bounds under the algebraic
computation tree model for several related problems, including sorting a set of
points (according to, say, their x-order), each of which lies on a given line
known in advance. Therefore, the convex hull problem under our setting is
easier than sorting, contrary to the "standard" convex hull and sorting
problems, in which the two problems require Theta(n log n) steps in the worst
case (under the algebraic computation tree model).
Comment: 26 pages, 5 figures, 1 appendix; a preliminary version appeared at
SoCG 201
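For reference, the standard Theta(n log n) baseline that the preprocessing-based bounds improve upon is an ordinary planar convex hull computation, e.g. Andrew's monotone chain (this is the textbook algorithm, not the paper's data structure):

```python
def convex_hull(points):
    """Andrew's monotone chain: O(n log n) convex hull of 2-D points,
    returned in counterclockwise order. Collinear boundary points are
    discarded."""
    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h

    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower = half(pts)
    upper = half(reversed(pts))
    return lower[:-1] + upper[:-1]
```

The lower bounds in the abstract show that, unlike in the classical setting, the hull under the known-lines model is provably easier than sorting.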
On the Equivalence between Herding and Conditional Gradient Algorithms
We show that the herding procedure of Welling (2009) takes exactly the form
of a standard convex optimization algorithm--namely a conditional gradient
algorithm minimizing a quadratic moment discrepancy. This link enables us to
invoke convergence results from convex optimization and to consider faster
alternatives for the task of approximating integrals in a reproducing kernel
Hilbert space. We study the behavior of the different variants through
numerical simulations. The experiments indicate that while we can improve over
herding on the task of approximating integrals, the original herding algorithm
more often tends to approach the maximum entropy distribution, shedding more
light on the learning bias behind herding.
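The equivalence is easy to see on a finite domain: each herding step solves a linear maximization over the feature vectors, which is exactly the Frank-Wolfe (conditional gradient) subproblem for minimizing the squared moment discrepancy with step size 1/t. A toy sketch (the interface and finite-domain setup are illustrative, not Welling's original formulation):

```python
import numpy as np

def herd(features, mu, T):
    """Herding on a finite domain. 'features' is an (m, d) array whose
    rows are the feature vectors Phi(x); 'mu' is the target moment
    vector. Each step picks argmax_x <w, Phi(x)> (the conditional-
    gradient linear subproblem) and updates w by the residual moment,
    so the running mean of Phi over the samples tracks mu at rate O(1/T)."""
    w = mu.astype(float).copy()
    samples = []
    for _ in range(T):
        x = int(np.argmax(features @ w))   # greedy linear maximization
        samples.append(x)
        w += mu - features[x]              # residual-moment update
    return samples
```

Because the step size is forced to 1/t, herding inherits the conditional-gradient O(1/T) rate for moment matching; the faster variants studied in the abstract correspond to other step-size choices.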