Data-Collection for the Sloan Digital Sky Survey: a Network-Flow Heuristic
The goal of the Sloan Digital Sky Survey is ``to map in detail one-quarter of
the entire sky, determining the positions and absolute brightnesses of more
than 100 million celestial objects''. The survey will be performed by taking
``snapshots'' through a large telescope. Each snapshot can capture up to 600
objects from a small circle of the sky. This paper describes the design and
implementation of the algorithm that is being used to determine the snapshots
so as to minimize their number. The problem is NP-hard in general; the
algorithm described is a heuristic, based on Lagrangian relaxation and
min-cost network flow. It gets within 5-15% of a naive lower bound, whereas
using a ``uniform'' cover only gets within 25-35%. Comment: proceedings version
appeared in ACM-SIAM Symposium on Discrete Algorithms (1998).
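A minimal sketch of the kind of min-cost network flow subproblem such a heuristic can solve: once candidate snapshot centres are fixed, assigning objects to snapshots under the 600-object capacity is a flow problem. The centres, object positions, and distance-based cost below are invented toy data, and this is not the paper's actual algorithm.

```python
# Hypothetical toy instance: assign sky objects to fixed snapshot centres via min-cost flow.
import math
import networkx as nx

CAPACITY = 600          # objects a single snapshot can capture
RADIUS = 1.5            # angular radius of a snapshot circle (arbitrary toy units)

objects = [(0.2, 0.1), (1.0, 0.4), (2.1, 0.3), (2.4, 1.9)]   # toy object positions
snapshots = [(0.5, 0.2), (2.2, 1.0)]                          # toy snapshot centres

G = nx.DiGraph()
G.add_node("src", demand=-len(objects))   # every object must flow from the source...
G.add_node("snk", demand=len(objects))    # ...into the sink, i.e. every object gets assigned

for i, (ox, oy) in enumerate(objects):
    G.add_edge("src", f"obj{i}", capacity=1, weight=0)
    for j, (sx, sy) in enumerate(snapshots):
        d = math.hypot(ox - sx, oy - sy)
        if d <= RADIUS:                   # an object can only go to a snapshot that covers it
            # integer weights keep the flow solver exact
            G.add_edge(f"obj{i}", f"snap{j}", capacity=1, weight=int(1000 * d))

for j in range(len(snapshots)):
    G.add_edge(f"snap{j}", "snk", capacity=CAPACITY, weight=0)

flow = nx.min_cost_flow(G)                # raises NetworkXUnfeasible if some object is uncoverable
for i in range(len(objects)):
    for tgt, f in flow[f"obj{i}"].items():
        if f:
            print(f"object {i} -> {tgt}")
```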
Engineering Art Galleries
The Art Gallery Problem is one of the most well-known problems in
Computational Geometry, with a rich history in the study of algorithms,
complexity, and variants. Recently there has been a surge in experimental work
on the problem. In this survey, we describe this work, show the chronology of
developments, and compare current algorithms, including two unpublished
versions, in an exhaustive experiment. Furthermore, we show what core
algorithmic ingredients have led to recent successes.
A Numerical Slow Manifold Approach to Model Reduction for Optimal Control of Multiple Time Scale ODE
Time scale separation is a natural property of many control systems that can
be exploited, theoretically and numerically. We present a numerical scheme to
solve optimal control problems with considerable time scale separation that is
based on a model reduction approach that does not need the system to be
explicitly stated in singularly perturbed form. We present examples that
highlight the advantages and disadvantages of the method.
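As background, a minimal sketch of the slow-manifold reduction idea on an invented toy system (not the paper's scheme or examples): the fast variable relaxes onto a slow manifold, and substituting that manifold closes a reduced model for the slow variable alone.

```python
# Toy singularly perturbed system: x is slow, y is fast and attracted to y = x^2.
import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-3   # time-scale separation parameter

def full_rhs(t, z):
    x, y = z
    return [-x + y,             # slow dynamics
            (x**2 - y) / eps]   # fast dynamics, relaxing onto the slow manifold y = x^2

def reduced_rhs(t, x):
    # on the slow manifold y ~ x^2, the slow equation closes in x alone
    return -x + x**2

t_span, t_eval = (0.0, 5.0), np.linspace(0.0, 5.0, 200)
full = solve_ivp(full_rhs, t_span, [0.5, 0.0], method="Radau", t_eval=t_eval)  # stiff solver
red = solve_ivp(reduced_rhs, t_span, [0.5], t_eval=t_eval)

# after an O(eps) initial transient the reduced slow variable tracks the full one closely
print("max |x_full - x_reduced| =", np.max(np.abs(full.y[0] - red.y[0])))
```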
Branch-and-Prune Search Strategies for Numerical Constraint Solving
When solving numerical constraints such as nonlinear equations and
inequalities, solvers often exploit pruning techniques, which remove redundant
value combinations from the domains of variables, at pruning steps. To find the
complete solution set, most of these solvers alternate the pruning steps with
branching steps, which split each problem into subproblems. This forms the
so-called branch-and-prune framework, well known among the approaches for
solving numerical constraints. The basic branch-and-prune search strategy that
uses domain bisections in place of the branching steps is called the bisection
search. In general, the bisection search works well in case (i) the solutions
are isolated, but it can be improved further in case (ii) there are continuums
of solutions (this often occurs when inequalities are involved). In this paper,
we propose a new branch-and-prune search strategy along with several variants,
which not only yield better branching decisions in the latter case,
but also work as well as the bisection search does in the former case. These
new search algorithms enable us to employ various pruning techniques in the
construction of inner and outer approximations of the solution set. Our
experiments show that these algorithms speed up the solving process often by
one order of magnitude or more when solving problems with continuums of
solutions, while keeping the same performance as the bisection search when the
solutions are isolated. Comment: 43 pages, 11 figures.
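For illustration, a minimal branch-and-prune sketch under simple assumptions (interval evaluation of a single inequality, bisection of the widest side); it is not the paper's search strategy, but it shows how pruning plus branching builds inner and outer approximations when the solution set is a continuum, here the unit disc x^2 + y^2 <= 1.

```python
# Boxes are ((xl, xu), (yl, yu)). Pruning uses a crude interval extension of x^2 + y^2.

def square_range(lo, hi):
    """Interval extension of t -> t^2 on [lo, hi]."""
    cands = [lo * lo, hi * hi]
    return (0.0 if lo <= 0.0 <= hi else min(cands)), max(cands)

def constraint_range(box):
    (xl, xu), (yl, yu) = box
    (a, b), (c, d) = square_range(xl, xu), square_range(yl, yu)
    return a + c, b + d            # bounds on x^2 + y^2 over the box

def branch_and_prune(box, tol=0.05):
    inner, boundary = [], []       # inner boxes are certainly feasible; boundary boxes stay undecided
    stack = [box]
    while stack:
        b = stack.pop()
        lo, hi = constraint_range(b)
        if lo > 1.0:               # prune: the whole box violates the constraint
            continue
        if hi <= 1.0:              # the whole box satisfies the constraint
            inner.append(b)
            continue
        (xl, xu), (yl, yu) = b
        if max(xu - xl, yu - yl) < tol:
            boundary.append(b)     # too small to split: part of the outer approximation only
            continue
        if xu - xl >= yu - yl:     # branch: bisect the widest side
            m = 0.5 * (xl + xu)
            stack += [((xl, m), (yl, yu)), ((m, xu), (yl, yu))]
        else:
            m = 0.5 * (yl + yu)
            stack += [((xl, xu), (yl, m)), ((xl, xu), (m, yu))]
    return inner, boundary

inner, boundary = branch_and_prune(((-2.0, 2.0), (-2.0, 2.0)))
area = sum((xu - xl) * (yu - yl) for (xl, xu), (yl, yu) in inner)
print(f"{len(inner)} inner boxes covering area {area:.3f} (disc area is about 3.142)")
```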
On the coupling between an ideal fluid and immersed particles
In this paper we use Lagrange-Poincaré reduction to understand the coupling
between a fluid and a set of Lagrangian particles that are supposed to simulate
it. In particular, we reinterpret the work of Cendra et al. by substituting
velocity interpolation from particle velocities for their principal connection.
The consequence of writing evolution equations in terms of interpolation is
two-fold. First, it gives estimates on the error incurred when interpolation is
used to derive the evolution of the system. Second, this form of the equations
of motion can inspire a family of particle and hybrid particle-spectral methods
where the error analysis is "built-in". We also discuss the influence of other
parameters attached to the particles, such as shape, orientation, or
higher-order deformations, and how they can help with conservation of momenta
in the sense of Kelvin's circulation theorem. Comment: to appear in Physica D;
comments and questions welcome.
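A minimal sketch of the interpolation ingredient the abstract emphasises, reconstructing a velocity field everywhere from finitely many particle velocities; the Gaussian Shepard-style kernel, its width, and the toy rotation field are illustrative choices, not the paper's construction.

```python
# Kernel interpolation of particle velocities at arbitrary query points.
import numpy as np

def interpolate_velocity(x, particle_pos, particle_vel, h=0.3):
    """Shepard-style (normalised Gaussian) interpolation of particle velocities at point(s) x."""
    x = np.atleast_2d(x)                                    # (m, d) query points
    diff = x[:, None, :] - particle_pos[None, :, :]         # (m, n, d) displacements
    w = np.exp(-np.sum(diff**2, axis=-1) / (2.0 * h**2))    # Gaussian weights, shape (m, n)
    w /= w.sum(axis=1, keepdims=True)                       # normalise so weights sum to one
    return w @ particle_vel                                 # (m, d) interpolated velocities

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, size=(50, 2))                        # particle positions in the unit square
vel = np.stack([-pos[:, 1] + 0.5, pos[:, 0] - 0.5], axis=1)      # velocities sampled from a rigid rotation

u = interpolate_velocity([[0.5, 0.75]], pos, vel)
print("interpolated velocity near (0.5, 0.75):", u[0])           # should be close to (-0.25, 0.0)
```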
The Bane of Low-Dimensionality Clustering
In this paper, we give a conditional lower bound of on
running time for the classic k-median and k-means clustering objectives (where
n is the size of the input), even in low-dimensional Euclidean space of
dimension four, assuming the Exponential Time Hypothesis (ETH). We also
consider k-median (and k-means) with penalties where each point need not be
assigned to a center, in which case it must pay a penalty, and extend our lower
bound to at least three-dimensional Euclidean space.
This stands in stark contrast to many other geometric problems such as the
traveling salesman problem, or computing an independent set of unit spheres.
While these problems benefit from the so-called (limited) blessing of
dimensionality, as they can be solved in time or
in d dimensions, our work shows that widely-used clustering
objectives have a lower bound of , even in dimension four.
We complete the picture by considering the two-dimensional case: we show that
there is no algorithm that solves the penalized version in time less than
, and provide a matching upper bound of .
The main tool we use to establish these lower bounds is the placement of
points on the moment curve, which takes its inspiration from constructions of
point sets yielding Delaunay complexes of high complexity
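For intuition, a small illustrative construction (not the paper's exact point set): points placed on the moment curve t -> (t, t^2, t^3, t^4) in R^4, the kind of placement the lower bound draws on, tend to produce Delaunay complexes with many simplices.

```python
# Count Delaunay simplices for points on the moment curve in R^4 (illustration only).
import numpy as np
from scipy.spatial import Delaunay

def moment_curve_points(n, dim=4):
    t = np.linspace(0.1, 1.0, n)
    return np.stack([t**k for k in range(1, dim + 1)], axis=1)   # rows are (t, t^2, ..., t^dim)

for n in (10, 20, 40):
    pts = moment_curve_points(n)
    tri = Delaunay(pts)
    # the simplex count typically grows markedly faster than linearly in n
    print(f"n = {n:2d} points on the moment curve -> {len(tri.simplices)} Delaunay simplices")
```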
Towards Tight Bounds for the Streaming Set Cover Problem
We consider the classic Set Cover problem in the data stream model. For
elements and sets () we give a -pass algorithm with a
strongly sub-linear space and logarithmic
approximation factor. This yields a significant improvement over the earlier
algorithm of Demaine et al. [DIMV14] that uses an exponentially larger number of
passes. We complement this result by showing that the tradeoff between the
number of passes and space exhibited by our algorithm is tight, at least when
the approximation factor is equal to . Specifically, we show that any
algorithm that computes set cover exactly using passes
must use space in the regime of .
Furthermore, we consider the problem in the geometric setting where the
elements are points in and sets are either discs, axis-parallel
rectangles, or fat triangles in the plane, and show that our algorithm (with a
slight modification) uses the optimal space to find a
logarithmic approximation in passes.
Finally, we show that any randomized one-pass algorithm that distinguishes
between covers of size 2 and 3 must use a linear (i.e., ) amount of
space. This is the first result showing that a randomized, approximate
algorithm cannot achieve a space bound that is sublinear in the input size.
This indicates that using multiple passes might be necessary in order to
achieve sub-linear space bounds for this problem while guaranteeing small
approximation factors. Comment: A preliminary version of this paper is to appear in PODS 201
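As background for the pass/space trade-off studied here, a hedged sketch of a textbook-style multi-pass thresholded greedy for streaming Set Cover; this is not the paper's algorithm, and the stream and universe below are toy data. The stream is re-read once per pass, while memory holds only the set of uncovered elements and the indices of chosen sets.

```python
# Multi-pass thresholded greedy: in each pass, take any set covering >= threshold new elements,
# then halve the threshold. Roughly log(n) passes with small working memory.
def streaming_threshold_greedy(stream_sets, universe):
    uncovered = set(universe)
    chosen = []
    threshold = len(uncovered)
    while uncovered and threshold >= 1:
        for idx, s in enumerate(stream_sets):        # one pass over the stream
            gain = len(uncovered & s)
            if gain >= threshold:
                chosen.append(idx)
                uncovered -= s
                if not uncovered:
                    break
        threshold //= 2                              # relax the threshold for the next pass
    return chosen, uncovered

universe = range(1, 13)
stream = [{1, 2, 3, 4, 5, 6}, {7, 8}, {5, 6, 7, 8, 9, 10}, {9, 10, 11, 12}, {1, 11, 12}]
chosen, leftover = streaming_threshold_greedy(stream, universe)
print("chosen sets:", chosen, "uncovered:", sorted(leftover))
```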