The covert set-cover problem with application to Network Discovery
We address a version of the set-cover problem where we do not know the sets
initially (hence referred to as covert), but we can query an element to find
out which sets contain it, as well as query a set to learn its
elements. We want to find a small set-cover using a minimal number of such
queries. We present a Monte Carlo randomized algorithm that, with high
probability, computes a set-cover within an O(log N) factor of the optimal
size OPT using O(OPT log^2 N) queries, where N is the input size.
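As a rough illustration of the query model (a minimal sketch under our own assumptions, not the paper's algorithm), the Python fragment below assumes two hypothetical oracles, query_element(e), returning the ids of the sets containing e, and query_set(s), returning the elements of set s, and runs a naive greedy loop; the paper's Monte Carlo algorithm instead uses randomized sampling to keep the number of queries low.

```python
# Sketch of the covert set-cover query model. The oracles query_element and
# query_set are hypothetical stand-ins for the two query types described in
# the abstract; the greedy loop is only a naive baseline, not the paper's
# algorithm.

def covert_greedy_cover(universe, query_element, query_set):
    uncovered = set(universe)
    known_sets = {}                 # set id -> its elements, once queried
    cover = []
    queries = 0
    while uncovered:
        e = next(iter(uncovered))             # pick any uncovered element
        candidates = query_element(e)         # ids of sets containing e
        queries += 1
        for sid in candidates:
            if sid not in known_sets:
                known_sets[sid] = set(query_set(sid))
                queries += 1
        # take the candidate covering the most still-uncovered elements
        best = max(candidates,
                   key=lambda sid: len(known_sets[sid] & uncovered))
        cover.append(best)
        uncovered -= known_sets[best]
    return cover, queries
```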
We apply this technique to the network discovery problem, which involves
certifying all the edges and non-edges of an unknown n-vertex graph based
on layered-graph queries from a minimal number of vertices. By reducing it to
the covert set-cover problem, we present an O(log^2 n)-competitive Monte
Carlo randomized algorithm for the covert version of the network discovery
problem. The previously best known algorithm has a competitive ratio of
O(sqrt(n log n)), so our result achieves an exponential improvement.
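For intuition about what a single layered-graph query certifies, here is a small simulation under one common reading of the query model (our assumption; the abstract does not spell it out): a query at a vertex v reveals the BFS layers around v, the edges running between adjacent layers are thereby discovered, every pair of vertices whose layers differ by two or more is certified as a non-edge, and pairs inside one layer stay unresolved.

```python
from collections import deque

def layered_graph_query(adj, v):
    """Simulate one layered-graph query at v on a known, connected graph
    given as an adjacency dict with comparable vertex labels. Returns the
    BFS layers, the revealed edges (between adjacent layers), and the pairs
    certified as non-edges (layers differing by >= 2). This reflects our
    assumed reading of the query model, for illustration only."""
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    revealed = {(min(u, w), max(u, w)) for u in adj for w in adj[u]
                if abs(dist[u] - dist[w]) == 1}
    certified_non_edges = {(u, w) for u in dist for w in dist
                           if u < w and abs(dist[u] - dist[w]) >= 2}
    return dist, revealed, certified_non_edges
```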
A Unified Approach to Tail Estimates for Randomized Incremental Construction
By combining several applications of random sampling in geometric algorithms, such as point location, linear programming, segment intersection, and binary space partitioning, Clarkson and Shor [Kenneth L. Clarkson and Peter W. Shor, 1989] developed a general framework of randomized incremental construction (RIC). The basic idea is to add objects in a random order and show that this approach yields efficient or optimal bounds on expected running time. Even quicksort can be viewed as a special case of this paradigm. However, unlike quicksort, for most of these problems sharper tail estimates on the running time are not known. Barring some promising attempts in [Kurt Mehlhorn et al., 1993; Kenneth L. Clarkson et al., 1992; Raimund Seidel, 1991], the general question remains unresolved.
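To make the quicksort remark concrete, here is a minimal sketch (our illustration, not taken from the paper): inserting keys in random order into an unbalanced binary search tree performs the same comparisons as quicksort with random pivots, so sorting via such a tree is randomized incremental construction in miniature, with expected O(n log n) comparisons.

```python
import random

def ric_bst_sort(items, seed=None):
    """Sort by inserting items in a random order into an unbalanced BST and
    reading them back in order. The random insertion order is exactly the
    RIC ingredient: each insertion's comparisons mirror quicksort's random
    pivot choices, giving O(n log n) comparisons in expectation."""
    rng = random.Random(seed)
    order = list(items)
    rng.shuffle(order)
    root = None
    for x in order:
        root = _insert(root, x)
    out = []
    _inorder(root, out)
    return out

def _insert(node, x):
    if node is None:
        return [x, None, None]          # node = [key, left, right]
    side = 1 if x < node[0] else 2
    node[side] = _insert(node[side], x)
    return node

def _inorder(node, out):
    if node is not None:
        _inorder(node[1], out)
        out.append(node[0])
        _inorder(node[2], out)
```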
In this paper we present a general technique to obtain tail estimates for RIC and provide applications to some fundamental problems like Delaunay triangulations and the construction of visibility maps of intersecting line segments. The main result of the paper is derived from a new and careful application of Freedman's [David Freedman, 1975] inequality for martingale concentration, which overcomes the bottleneck of the better-known Azuma-Hoeffding inequality. Further, we explore instances where an RIC-based algorithm may not have inverse-polynomial tail estimates. In particular, we show that the RIC-based algorithm for the trapezoidal map can encounter a running time of Omega(n log n log log n) with probability exceeding 1/sqrt(n). This rules out inverse-polynomial concentration bounds within a constant factor of the O(n log n) expected running time.
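For reference, Freedman's inequality in its standard textbook form reads as follows (the statement below is the classical one, not a bound quoted from this paper). Its advantage over Azuma-Hoeffding is that the exponent is controlled by the predictable quadratic variation rather than by the worst-case increment alone:

```latex
% Freedman's martingale inequality (standard form). Let (Y_k) be a
% martingale with Y_0 = 0, increments bounded by |Y_k - Y_{k-1}| <= R,
% and predictable quadratic variation
\[
  W_n \;=\; \sum_{k=1}^{n}
  \mathbb{E}\!\left[(Y_k - Y_{k-1})^2 \,\middle|\, \mathcal{F}_{k-1}\right].
\]
% Then, for all a > 0 and sigma^2 > 0,
\[
  \Pr\!\left[\,\exists n:\ Y_n \ge a \ \text{and}\ W_n \le \sigma^2\,\right]
  \;\le\; \exp\!\left(-\frac{a^2}{2\left(\sigma^2 + Ra/3\right)}\right).
\]
% Azuma-Hoeffding, by contrast, only yields exp(-a^2 / (2 n R^2)), which is
% much weaker whenever the conditional variances are far below R^2.
```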
Towards a theory of cache-efficient algorithms
We describe a model that enables us to analyze the running time of an algorithm on a computer with a memory hierarchy of limited associativity, in terms of various cache parameters. Our model, an extension of Aggarwal and Vitter's I/O model, enables us to establish useful relationships between the cache complexity and the I/O complexity of computations. As a corollary, we obtain cache-optimal algorithms for some fundamental problems like sorting, FFT, and an important subclass of permutations in the single-level cache model. We also show that ignoring associativity concerns can lead to inferior performance, by analyzing the average-case cache behavior of mergesort. We further extend our model to multiple levels of cache with limited associativity and present optimal algorithms for matrix transpose and sorting. Our techniques may be used for systematic exploitation of the memory hierarchy starting from the algorithm design stage, and for dealing with the hitherto unresolved problem of limited associativity.
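As a flavor of the kind of algorithm such a model rewards, here is a generic cache-blocked matrix transpose (a standard textbook technique sketched under our own choice of tile size, not the paper's specific construction): by working on B x B tiles small enough to fit in cache, each cache block of the input and output is touched a constant number of times rather than once per element.

```python
def blocked_transpose(a, n, B=64):
    """Transpose an n x n row-major matrix (flat list) tile by tile.
    B is a tile size assumed to be tuned so that a B x B tile of the
    source and of the destination fit in cache together; within a tile,
    reads and writes proceed in unit-stride runs of length up to B."""
    out = [0] * (n * n)
    for i0 in range(0, n, B):
        for j0 in range(0, n, B):
            for i in range(i0, min(i0 + B, n)):
                for j in range(j0, min(j0 + B, n)):
                    out[j * n + i] = a[i * n + j]
    return out
```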