2,970 research outputs found

    Best of Two Local Models: Local Centralized and Local Distributed Algorithms

    We consider two models of computation: centralized local algorithms and local distributed algorithms. Algorithms in one model are adapted to the other model to obtain improved algorithms. Distributed vertex coloring is employed to design improved centralized local algorithms for: maximal independent set, maximal matching, and an approximation scheme for maximum (weighted) matching over bounded degree graphs. The improvement is threefold: the algorithms are deterministic, stateless, and the number of probes grows polynomially in $\log^* n$, where $n$ is the number of vertices of the input graph. The recursive centralized local improvement technique of Nguyen and Onak (2008) is employed to obtain an improved distributed approximation scheme for maximum (weighted) matching. The improvement is twofold: we reduce the number of rounds from $O(\log n)$ to $O(\log^* n)$ for a wide range of instances, and our algorithms are deterministic rather than randomized.
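    As a hedged illustration of the centralized local model only (not the coloring-based method of this paper), the sketch below answers single-vertex queries "is v in the maximal independent set?" by recursively probing lower-id neighbors. The adjacency-dictionary representation and the id-based ranking are assumptions of the example, and its worst-case probe count can be exponential, unlike the poly($\log^* n$) probes achieved here.

        # Sketch of a centralized local MIS oracle: answers "is v in the greedy MIS?"
        # by probing only vertices reachable through lower-id neighbors. Illustrative
        # only; not the poly(log* n)-probe algorithm of the paper.

        def make_mis_oracle(neighbors):
            """neighbors: dict mapping a vertex id to an iterable of neighbor ids."""
            cache = {}

            def in_mis(v):
                if v in cache:
                    return cache[v]
                # v joins the greedy MIS iff no lower-id neighbor is already in it.
                answer = all(not in_mis(u) for u in neighbors[v] if u < v)
                cache[v] = answer
                return answer

            return in_mis

        # Example: a path 0-1-2-3; querying vertex 2 probes only vertices 0, 1, 2.
        oracle = make_mis_oracle({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]})
        print(oracle(2))  # True: 0 joins the MIS, so 1 does not, so 2 joins.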

    Distributed Maximum Matching in Bounded Degree Graphs

    We present deterministic distributed algorithms for computing approximate maximum cardinality matchings and approximate maximum weight matchings. Our algorithm for the unweighted case computes a matching whose size is at least $(1-\epsilon)$ times the optimal in $\Delta^{O(1/\epsilon)} + O(1/\epsilon^2) \cdot \log^*(n)$ rounds, where $n$ is the number of vertices in the graph and $\Delta$ is the maximum degree. Our algorithm for the edge-weighted case computes a matching whose weight is at least $(1-\epsilon)$ times the optimal in $\log(\min\{1/w_{\min}, n/\epsilon\})^{O(1/\epsilon)} \cdot (\Delta^{O(1/\epsilon)} + \log^*(n))$ rounds for edge weights in $[w_{\min}, 1]$. The best previous algorithms for both the unweighted and the weighted case are by Lotker, Patt-Shamir, and Pettie (SPAA 2008). For the unweighted case they give a randomized $(1-\epsilon)$-approximation algorithm that runs in $O((\log n)/\epsilon^3)$ rounds. For the weighted case they give a randomized $(1/2-\epsilon)$-approximation algorithm that runs in $O(\log(\epsilon^{-1}) \cdot \log n)$ rounds. Hence, our results improve on the previous ones when the parameters $\Delta$, $\epsilon$ and $w_{\min}$ are constants (where we reduce the number of rounds from $O(\log n)$ to $O(\log^* n)$), and more generally when $\Delta$, $1/\epsilon$ and $1/w_{\min}$ are sufficiently slowly increasing functions of $n$. Moreover, our algorithms are deterministic rather than randomized.
    Comment: arXiv admin note: substantial text overlap with arXiv:1402.379
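    To make the synchronous round model concrete, here is a hedged sketch that simulates a simple proposal-based algorithm in which every unmatched vertex proposes to its minimum-id unmatched neighbor and mutual proposals become matched edges. It produces a maximal matching (hence only a 1/2-approximation), not the $(1-\epsilon)$-approximation scheme of the paper; the id-based tie-breaking is an assumption of the example.

        # Sketch: synchronous rounds of a greedy proposal matching. Produces a
        # maximal matching (a 1/2-approximation), not the (1-eps) scheme above.

        def simulate_matching(neighbors):
            """neighbors: dict vertex -> set of neighbors. Returns (matching, rounds)."""
            matched = {}          # vertex -> matched partner
            rounds = 0
            while True:
                proposals = {}
                for v, nbrs in neighbors.items():
                    if v in matched:
                        continue
                    candidates = [u for u in nbrs if u not in matched]
                    if candidates:
                        proposals[v] = min(candidates)
                # Mutual proposals become matched edges (each pair recorded once).
                newly = {(v, u) for v, u in proposals.items()
                         if proposals.get(u) == v and v < u}
                if not newly:
                    break
                rounds += 1
                for v, u in newly:
                    matched[v], matched[u] = u, v
            matching = {(v, u) for v, u in matched.items() if v < u}
            return matching, rounds

        # Example: a 6-cycle is fully matched after two rounds of progress.
        cycle = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
        print(simulate_matching(cycle))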

    Quantum and Classical Strong Direct Product Theorems and Optimal Time-Space Tradeoffs

    A strong direct product theorem says that if we want to compute k independent instances of a function, using less than k times the resources needed for one instance, then our overall success probability will be exponentially small in k. We establish such theorems for the classical as well as quantum query complexity of the OR function. This implies slightly weaker direct product results for all total functions. We prove a similar result for quantum communication protocols computing k instances of the Disjointness function. Our direct product theorems imply a time-space tradeoff $T^2 \cdot S = \Omega(N^3)$ for sorting N items on a quantum computer, which is optimal up to polylog factors. They also give several tight time-space and communication-space tradeoffs for the problems of Boolean matrix-vector multiplication and matrix multiplication.
    Comment: 22 pages, LaTeX. 2nd version: some parts rewritten, results are essentially the same. A shorter version will appear in IEEE FOCS 0
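    A schematic way to read the first claim, with placeholder quantities rather than the paper's exact bounds (the fraction $\gamma$ and the single-instance resource measure $R(f)$ are assumptions of this illustration):

        % Schematic strong direct product statement and the derived tradeoff;
        % gamma and R(f) are placeholders, not the paper's exact quantities.
        \[
          \text{resources} \le \gamma\, k \cdot R(f)
          \;\Longrightarrow\;
          \Pr\bigl[\text{all } k \text{ instances answered correctly}\bigr] \le 2^{-\Omega(k)},
        \]
        \[
          \text{which, for sorting } N \text{ items on a quantum computer, yields }
          T^2 \cdot S = \Omega(N^3).
        \]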

    Inapproximability of Truthful Mechanisms via Generalizations of the VC Dimension

    Algorithmic mechanism design (AMD) studies the delicate interplay between computational efficiency, truthfulness, and optimality. We focus on AMD's paradigmatic problem: combinatorial auctions. We present a new generalization of the VC dimension to multivalued collections of functions, which encompasses the classical VC dimension, Natarajan dimension, and Steele dimension. We present a corresponding generalization of the Sauer-Shelah Lemma and harness this VC machinery to establish inapproximability results for deterministic truthful mechanisms. Our results essentially unify all inapproximability results for deterministic truthful mechanisms for combinatorial auctions to date and establish new separation gaps between truthful and non-truthful algorithms.
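    For reference, the classical Sauer-Shelah Lemma that the multivalued generalization extends is stated below (the generalization itself is not reproduced here); $\mathcal{F}|_X$ denotes the restriction of the set family $\mathcal{F}$ to a finite point set $X$.

        % Classical Sauer-Shelah Lemma; the paper's multivalued generalization
        % is not reproduced here.
        \[
          \mathrm{VCdim}(\mathcal{F}) \le d
          \;\Longrightarrow\;
          \bigl|\,\mathcal{F}|_X\,\bigr| \le \sum_{i=0}^{d} \binom{|X|}{i}
          \quad \text{for every finite point set } X.
        \]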

    A framework for space-efficient string kernels

    String kernels are typically used to compare genome-scale sequences whose length makes alignment impractical, yet their computation is based on data structures that are either space-inefficient or incur large slowdowns. We show that a number of exact string kernels, like the $k$-mer kernel, the substring kernels, a number of length-weighted kernels, the minimal absent words kernel, and kernels with Markovian corrections, can all be computed in $O(nd)$ time and in $o(n)$ bits of space in addition to the input, using just a $\mathtt{rangeDistinct}$ data structure on the Burrows-Wheeler transform of the input strings, which takes $O(d)$ time per element in its output. The same bounds hold for a number of measures of compositional complexity based on multiple values of $k$, like the $k$-mer profile and the $k$-th order empirical entropy, and for calibrating the value of $k$ using the data.
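    As a hedged point of reference for what the $k$-mer kernel computes (and nothing more), the sketch below uses the naive hash-map approach, which takes linear space in words rather than the $o(n)$ bits plus $\mathtt{rangeDistinct}$/BWT machinery of the paper.

        # Naive k-mer (spectrum) kernel: the inner product of the k-mer count
        # vectors of two strings. Illustrates only the quantity being computed;
        # not the o(n)-bit BWT-based algorithm described in the paper.
        from collections import Counter

        def kmer_counts(s, k):
            return Counter(s[i:i + k] for i in range(len(s) - k + 1))

        def kmer_kernel(s, t, k):
            cs, ct = kmer_counts(s, k), kmer_counts(t, k)
            return sum(cs[w] * ct[w] for w in cs if w in ct)

        print(kmer_kernel("ACGTACGT", "CGTACG", 3))  # shared 3-mers weighted by counts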

    On active and passive testing

    Given a property of Boolean functions, what is the minimum number of queries required to determine with high probability whether an input function satisfies this property or is "far" from satisfying it? This is a fundamental question in Property Testing, where traditionally the testing algorithm is allowed to pick its queries from the entire set of inputs. Balcan, Blais, Blum and Yang have recently suggested restricting the tester to take its queries from a smaller, random subset of the inputs of polynomial size. This model is called active testing, and in the extreme case, when the size of the set we can query from is exactly the number of queries performed, it is known as passive testing. We prove that passive or active testing of k-linear functions (that is, sums of k variables among n over Z_2) requires Theta(k log n) queries, assuming k is not too large. This extends the case k=1 (that is, dictator functions), analyzed by Balcan et al. We also consider other classes of functions, including low-degree polynomials, juntas, and partially symmetric functions. Our methods combine algebraic, combinatorial, and probabilistic techniques, including the Talagrand concentration inequality and the Erdos--Rado theorem on Delta-systems.
    Comment: 16 pages
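    To fix notation (a hedged illustration of the objects and the access model, not of the lower-bound argument): a k-linear function is the XOR of k fixed coordinates over Z_2, and in passive testing the algorithm only sees uniformly random labeled samples, as in the sketch below; all names and parameters are invented for the example.

        # Illustration of the objects in the abstract: a k-linear function over Z_2
        # is the sum mod 2 of k fixed coordinates, and a passive tester only sees
        # uniformly random labeled samples rather than queries of its choice.
        import random

        def k_linear(relevant):
            """Return f(x) = sum of x_i over the 'relevant' coordinates, mod 2."""
            return lambda x: sum(x[i] for i in relevant) % 2

        def passive_samples(f, n, num_samples, rng=random):
            """Draw uniformly random inputs together with their labels."""
            xs = [[rng.randrange(2) for _ in range(n)] for _ in range(num_samples)]
            return [(x, f(x)) for x in xs]

        n = 16
        f = k_linear(relevant={1, 5, 10})            # a 3-linear function on 16 variables
        samples = passive_samples(f, n, num_samples=8)
        print(samples[0])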

    Searching edges in the overlap of two plane graphs

    Consider a pair of plane straight-line graphs whose edges are colored red and blue, respectively, and let n be the total complexity of both graphs. We present an O(n log n)-time, O(n)-space technique to preprocess such a pair of graphs that enables efficient searches among the red-blue intersections along edges of one of the graphs. Our technique has a number of applications to geometric problems, including: (1) a solution to the batched red-blue search problem [Dehne et al. 2006] in O(n log n) queries to the oracle; (2) an algorithm to compute the maximum vertical distance between a pair of 3D polyhedral terrains, one of which is convex, in O(n log n) time, where n is the total complexity of both terrains; (3) an algorithm to construct the Hausdorff Voronoi diagram of a family of point clusters in the plane in O((n+m) log^3 n) time and O(n+m) space, where n is the total number of points in all clusters and m is the number of crossings between all clusters; (4) an algorithm to construct the farthest-color Voronoi diagram of the corners of n axis-aligned rectangles in O(n log^2 n) time; (5) an algorithm to solve the stabbing circle problem for n parallel line segments in the plane in optimal O(n log n) time. All these results are new or improve on the best known algorithms.
    Comment: 22 pages, 6 figures
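    To illustrate only the objects being searched (red-blue crossings between two edge sets), here is a hedged, naive O(n^2) enumeration using standard orientation tests; it handles proper crossings only and is not the O(n log n) preprocessing technique of the paper.

        # Naive O(n^2) enumeration of red-blue crossings between two sets of
        # segments, using orientation tests (proper crossings only; touching or
        # collinear cases are ignored). Not the paper's O(n log n) technique.

        def orient(p, q, r):
            """Sign of the cross product (q - p) x (r - p)."""
            return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

        def properly_cross(a, b, c, d):
            return (orient(a, b, c) * orient(a, b, d) < 0 and
                    orient(c, d, a) * orient(c, d, b) < 0)

        def red_blue_crossings(red, blue):
            return [(r, b) for r in red for b in blue
                    if properly_cross(r[0], r[1], b[0], b[1])]

        red = [((0, 0), (4, 4))]
        blue = [((0, 4), (4, 0)), ((5, 5), (6, 6))]
        print(red_blue_crossings(red, blue))  # the one red-blue crossing pair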