
    Multiple sink positioning in sensor networks

    We study the problem of positioning multiple sinks, or data collection stops for a mobile sink, in a sensor network field. Given a sensor network represented by a unit disc graph G = (V, E), we say a set of points U (sink node locations) is an h-hop covering set for G if every node in G is at most h hops away from some point in U. Placing sink nodes at the points of a covering set guarantees that every sensor node has a short path to some sink node. This can increase network lifetime, reduce the occurrence of errors, and reduce latency. We also study variations of the problem where the sink locations are restricted to points of a regular lattice (lattice-based covering set) or to network nodes (graph-based covering set). We give the first polynomial time approximation scheme (PTAS) for the h-hop covering set problem, the h-hop lattice-based covering set problem, and the h-hop graph-based covering set problem. We give a new PTAS for the lattice-based disc cover problem, based on a new approach derived from recent results on dominating sets in unit disc graphs. We show that this yields a (3 + ε)-approximation algorithm for the disc cover problem and gives the first distributed algorithm for this problem. We give a (5 + ε)-approximation algorithm for the h-hop covering set problem in unit disc graphs that does not require a geometric representation of the graph. Finally, we give a (3 + ε)-approximation algorithm for the h-hop covering set problem for unit disc graphs that runs in time quadratic in the number of nodes in the graph, for any constants ε and h. In addition to showing how well a lattice-based approach approximates the optimal solution for the disc cover problem, we prove a geometric theorem that gives an exact relationship between the side length of a triangular lattice and the number of lattice discs that are necessary and sufficient to cover an arbitrary disc in the plane.
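    The h-hop coverage condition in the graph-based variant can be checked directly with a multi-source breadth-first search from the candidate sink nodes. The sketch below is a minimal illustration of that definition; the adjacency-dict representation and the helper name `is_h_hop_covering_set` are assumptions for illustration, not part of the paper.

```python
from collections import deque

def is_h_hop_covering_set(adj, sinks, h):
    """Check whether `sinks` is an h-hop (graph-based) covering set.

    adj   : dict mapping each node to an iterable of its neighbours
            (the unit disc graph G = (V, E) from the abstract).
    sinks : iterable of nodes chosen as sink locations.
    h     : maximum allowed hop distance to the nearest sink.
    """
    # Multi-source BFS: all sinks start at distance 0.
    dist = {s: 0 for s in sinks}
    queue = deque(sinks)
    while queue:
        u = queue.popleft()
        if dist[u] == h:          # no need to expand beyond h hops
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    # Every node must have been reached within h hops of some sink.
    return all(v in dist for v in adj)
```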

    Curse of dimensionality reduction in max-plus based approximation methods: theoretical estimates and improved pruning algorithms

    Max-plus based methods have recently been developed to approximate the value function of possibly high-dimensional optimal control problems. A critical step of these methods consists in approximating a function by a supremum of a small number of functions (max-plus "basis functions") taken from a prescribed dictionary. We study several variants of this approximation problem, which we show to be continuous versions of the facility location and k-center combinatorial optimization problems, in which the connection costs arise from a Bregman distance. We give theoretical error estimates quantifying the number of basis functions needed to reach a prescribed accuracy. We derive from our approach a refinement of the curse-of-dimensionality-free method introduced previously by McEneaney, with a higher accuracy for a comparable computational cost.
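    As a rough illustration of the approximation step described above (keeping a few basis functions whose pointwise maximum tracks a target function), here is a minimal greedy selection sketch. The fixed grid, the candidate dictionary, and the name `greedy_maxplus_basis` are illustrative assumptions; this is not the authors' pruning algorithm.

```python
import numpy as np

def greedy_maxplus_basis(target, dictionary, k):
    """Greedily pick k dictionary functions whose pointwise max approximates `target`.

    target     : target-function values on a fixed grid, shape (n,).
    dictionary : candidate basis-function values on the same grid, shape (m, n).
    k          : number of basis functions to keep (assumes k <= m).

    Returns the selected indices and the resulting sup-norm error.
    """
    n = target.shape[0]
    current = np.full(n, -np.inf)          # running pointwise maximum
    selected = []
    for _ in range(k):
        best_idx, best_err = None, np.inf
        for j in range(dictionary.shape[0]):
            if j in selected:
                continue
            # Sup-norm error if candidate j were added to the selection.
            err = np.max(np.abs(target - np.maximum(current, dictionary[j])))
            if err < best_err:
                best_idx, best_err = j, err
        selected.append(best_idx)
        current = np.maximum(current, dictionary[best_idx])
    return selected, float(np.max(np.abs(target - current)))
```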

    The parallel approximability of a subclass of quadratic programming

    In this paper we deal with the parallel approximability of a special class of Quadratic Programming (QP), called Smooth Positive Quadratic Programming. This subclass of QP is obtained by imposing restrictions on the coefficients of the QP instance. The smoothness condition restricts the magnitudes of the coefficients, while positiveness requires that all the coefficients be non-negative. Interestingly, even with these restrictions several combinatorial problems can be modeled by Smooth QP. We give NC Approximation Schemes for instances of Smooth Positive QP. This is done by reducing the QP instance to an instance of Positive Linear Programming, finding in NC an approximate fractional solution to the obtained program, and then rounding the fractional solution to an integer approximate solution for the original problem. We then show how to extend the result for positive instances of bounded degree to Smooth Integer Programming problems. Finally, we formulate several important combinatorial problems as Positive Quadratic Programs (or Positive Integer Programs) in packing/covering form and show that the techniques presented can be used to obtain NC Approximation Schemes for "dense" instances of such problems.
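    The pipeline sketched above ends with rounding a fractional solution to an integer one. As a generic point of reference, the snippet below shows the standard randomized-rounding idea for a fractional covering solution; it is a textbook-style sketch under assumed names (`randomized_round_cover`, `constraints`), not the NC rounding procedure of the paper.

```python
import random

def randomized_round_cover(x_frac, constraints, rounds=3):
    """Round a fractional covering solution to a 0/1 solution.

    x_frac      : fractional values in [0, 1], one per variable.
    constraints : list of index sets; each set must contain at least one
                  variable rounded to 1 for the constraint to be covered.
    rounds      : independent rounding passes (boosts coverage probability).
    """
    x_int = [0] * len(x_frac)
    for _ in range(rounds):
        for i, p in enumerate(x_frac):
            if random.random() < p:
                x_int[i] = 1
    # Patch any constraint left uncovered by picking its heaviest variable.
    for S in constraints:
        if not any(x_int[i] for i in S):
            x_int[max(S, key=lambda i: x_frac[i])] = 1
    return x_int
```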

    Approximate Convex Optimization by Online Game Playing

    Lagrangian relaxation and approximate optimization algorithms have received much attention in the last two decades. Typically, the running time of these methods to obtain an ε-approximate solution is proportional to 1/ε². Recently, Bienstock and Iyengar, following Nesterov, gave an algorithm for fractional packing linear programs which runs in 1/ε iterations. The latter algorithm requires solving a convex quadratic program in every iteration, an optimization subroutine which dominates the theoretical running time. We give an algorithm for convex programs with strictly convex constraints which runs in time proportional to 1/ε. The algorithm does not require solving any quadratic program, but uses only gradient steps and elementary operations. Problems which have strictly convex constraints include maximum entropy frequency estimation, portfolio optimization with loss risk constraints, and various computational problems in signal processing. As a side product, we also obtain a simpler version of Bienstock and Iyengar's result for general linear programming, with similar running time. We derive these algorithms using a new framework for deriving convex optimization algorithms from online game playing algorithms, which may be of independent interest.
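    For context on the kind of online game playing primitive such frameworks build on, here is a minimal online projected gradient descent loop. The toy problem, step size, and the name `online_gradient_descent` are illustrative assumptions; this is not the algorithm of the paper.

```python
import numpy as np

def online_gradient_descent(grads, project, x0, eta=0.1):
    """Online gradient descent against a sequence of revealed loss gradients.

    grads   : iterable of functions; each returns the gradient of the loss
              revealed at that round, evaluated at the current point.
    project : projection back onto the feasible convex set.
    x0      : starting point (numpy array).
    eta     : step size.

    Returns the average iterate; its regret bound is what lets online
    learning drive approximate convex optimization in such frameworks.
    """
    x = np.asarray(x0, dtype=float)
    iterates = []
    for g in grads:
        iterates.append(x.copy())
        x = project(x - eta * g(x))   # gradient step, then projection
    return np.mean(iterates, axis=0)

# Toy usage: minimize f(x) = ||x - c||^2 over the unit ball (assumed setup).
c = np.array([2.0, 0.0])
grads = [lambda x: 2.0 * (x - c)] * 200
project = lambda x: x / max(1.0, np.linalg.norm(x))
x_avg = online_gradient_descent(grads, project, np.zeros(2))
```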

    Approximating Edit Distance Within Constant Factor in Truly Sub-Quadratic Time

    Edit distance is a measure of similarity of two strings based on the minimum number of character insertions, deletions, and substitutions required to transform one string into the other. The edit distance can be computed exactly using a dynamic programming algorithm that runs in quadratic time. Andoni, Krauthgamer and Onak (2010) gave a nearly linear time algorithm that approximates edit distance within approximation factor poly(log n). In this paper, we provide an algorithm with running time Õ(n^{2-2/7}) that approximates the edit distance within a constant factor.
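    As a concrete reference point for the exact quadratic-time dynamic program mentioned above, here is a standard implementation of the classic Wagner-Fischer recurrence (the function name `edit_distance` is an assumption); it is the exact baseline, not the sub-quadratic approximation algorithm of the paper.

```python
def edit_distance(a: str, b: str) -> int:
    """Exact edit distance via the classic O(len(a) * len(b)) dynamic program."""
    n, m = len(a), len(b)
    # prev[j] holds the edit distance between a[:i-1] and b[:j].
    prev = list(range(m + 1))            # row for i = 0: j insertions
    for i in range(1, n + 1):
        curr = [i] + [0] * m             # column j = 0: i deletions
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # delete a[i-1]
                          curr[j - 1] + 1,    # insert b[j-1]
                          prev[j - 1] + cost) # substitute (or match)
        prev = curr
    return prev[m]

assert edit_distance("kitten", "sitting") == 3
```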