    Quasiconvex Programming

    We define quasiconvex programming, a form of generalized linear programming in which one seeks the point minimizing the pointwise maximum of a collection of quasiconvex functions. We survey algorithms for solving quasiconvex programs, either numerically or via generalizations of the dual simplex method from linear programming, and describe varied applications of this geometric optimization technique in meshing, scientific computation, information visualization, automated algorithm analysis, and robust statistics.
    Comment: 33 pages, 14 figures
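
    As a toy illustration of the numerical side, the sketch below (the two sample functions and all names are hypothetical, not from the paper) minimizes the pointwise maximum of one-dimensional quasiconvex functions. Since the maximum of quasiconvex functions is itself quasiconvex, and a quasiconvex function of one real variable is unimodal, ternary search suffices in this simple setting:

    ```python
    import math

    def ternary_search(f, lo, hi, tol=1e-9):
        """Minimize a unimodal (e.g., quasiconvex) function f on [lo, hi]."""
        while hi - lo > tol:
            m1 = lo + (hi - lo) / 3
            m2 = hi - (hi - lo) / 3
            if f(m1) < f(m2):
                hi = m2  # minimizer lies in [lo, m2]
            else:
                lo = m1  # minimizer lies in [m1, hi]
        return (lo + hi) / 2

    # Hypothetical instance: each q_i has interval sublevel sets,
    # so their pointwise maximum is quasiconvex as well.
    qs = [
        lambda x: abs(x - 1.0),           # convex, hence quasiconvex
        lambda x: math.sqrt(abs(x + 2)),  # quasiconvex but not convex
    ]
    objective = lambda x: max(q(x) for q in qs)

    x_star = ternary_search(objective, -10.0, 10.0)
    print(x_star, objective(x_star))
    ```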

    Lower Bounds for Structuring Unreliable Radio Networks

    In this paper, we study lower bounds for randomized solutions to the maximal independent set (MIS) and connected dominating set (CDS) problems in the dual graph model of radio networks, a generalization of the standard graph-based model that includes unreliable links controlled by an adversary. We begin by proving that a natural geographic constraint on the network topology is required to solve these problems efficiently (i.e., in time polylogarithmic in the network size). We then prove the importance of the assumption that nodes are provided advance knowledge of their reliable neighbors (i.e., neighbors connected by reliable links). Combined, these results answer an open question by proving that the efficient MIS and CDS algorithms from [Censor-Hillel, PODC 2011] are optimal with respect to their dual graph model assumptions. They also provide insight into what properties of an unreliable network enable efficient local computation.
    Comment: An extended abstract of this work appears in the 2014 proceedings of the International Symposium on Distributed Computing (DISC).
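
    The dual graph topology itself is easy to picture in code. Below is a minimal sketch (class and function names, and the adversary callback, are assumptions for illustration, not from the paper) of a network with reliable edges that are always present and unreliable edges whose per-round activation is chosen adversarially:

    ```python
    import random

    # Sketch of the dual graph model: reliable edges always appear,
    # while unreliable edges are activated per round by an adversary,
    # modeled here as a callback. Radio-specific effects such as
    # collisions are deliberately omitted.
    class DualGraph:
        def __init__(self, nodes, reliable, unreliable):
            self.nodes = nodes
            self.reliable = {frozenset(e) for e in reliable}
            self.unreliable = {frozenset(e) for e in unreliable}

        def round_edges(self, adversary):
            """Edges live this round: all reliable plus the adversary's picks."""
            return self.reliable | adversary(self.unreliable)

    # Example adversary: activates a random subset of the unreliable links.
    def random_adversary(unreliable):
        return {e for e in unreliable if random.random() < 0.5}

    g = DualGraph(nodes={0, 1, 2, 3},
                  reliable=[(0, 1), (2, 3)],
                  unreliable=[(1, 2), (0, 3)])
    print(g.round_edges(random_adversary))
    ```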

    Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable

    There has been significant recent interest in parallel graph processing due to the need to quickly analyze the large graphs available today. Many graph codes have been designed for distributed memory or external memory. However, today even the largest publicly available real-world graph (the Hyperlink Web graph with over 3.5 billion vertices and 128 billion edges) can fit in the memory of a single commodity multicore server. Nevertheless, most experimental work in the literature reports results on much smaller graphs, and the works that do process the Hyperlink graph use distributed or external memory. Therefore, it is natural to ask whether we can efficiently solve a broad class of graph problems on this graph in memory. This paper shows that theoretically-efficient parallel graph algorithms can scale to the largest publicly available graphs using a single machine with a terabyte of RAM, processing them in minutes. We give implementations of theoretically-efficient parallel algorithms for 20 important graph problems. We also present the optimizations and techniques that we used in our implementations, which were crucial in enabling us to process these large graphs quickly. We show that the running times of our implementations outperform existing state-of-the-art implementations on the largest real-world graphs. For many of the problems that we consider, this is the first time they have been solved on graphs at this scale. We have made the implementations developed in this work publicly available as the Graph-Based Benchmark Suite (GBBS).
    Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 2018.
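
    To give a flavor of the frontier-centric style such in-memory graph systems typically use (the sketch below is a generic illustration, not GBBS code), here is breadth-first search written as repeated maps over a frontier; a real parallel implementation would replace the loops with parallel-for and use compare-and-swap to claim parents:

    ```python
    from collections import defaultdict

    # Sequential sketch of frontier-based BFS. In a parallel system the
    # outer loop over the frontier is a parallel_for, and the parent
    # assignment is an atomic compare-and-swap.
    def bfs(adj, source):
        parent = {source: source}
        frontier = [source]
        while frontier:
            next_frontier = []
            for u in frontier:              # parallel_for in a real system
                for v in adj[u]:
                    if v not in parent:     # CAS(parent[v], NONE, u) in parallel
                        parent[v] = u
                        next_frontier.append(v)
            frontier = next_frontier
        return parent

    adj = defaultdict(list)
    for u, v in [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]:
        adj[u].append(v)
        adj[v].append(u)
    print(bfs(adj, 0))
    ```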

    Online Independent Set Beyond the Worst-Case: Secretaries, Prophets, and Periods

    We investigate online algorithms for maximum (weight) independent set on graph classes with bounded inductive independence number, such as interval and disk graphs, with applications to task scheduling and spectrum allocation. In the online setting, it is assumed that the nodes of an unknown graph arrive one by one over time, and an online algorithm has to decide whether an arriving node should be included in the independent set. Unfortunately, this natural and practically relevant online problem cannot be studied in a meaningful way within classical competitive analysis, as the competitive ratio on worst-case input sequences is lower bounded by $\Omega(n)$. As a worst-case analysis is pointless, we study online independent set in a stochastic analysis. Instead of focusing on a particular stochastic input model, we present a generic sampling approach that enables us to devise online algorithms achieving performance guarantees for a variety of input models. In particular, our analysis covers stochastic input models like the secretary model, in which an adversarial graph is presented in random order, and the prophet-inequality model, in which a randomly generated graph is presented in adversarial order. Our sampling approach thus bridges stochastic input models of quite different nature. In addition, we show that our approach can be applied to a practically motivated admission control setting. Our sampling approach yields an online algorithm for maximum independent set with competitive ratio $O(\rho^2)$, with respect to all of the mentioned stochastic input models, for graph classes with inductive independence number $\rho$. The approach can be extended towards maximum-weight independent set by losing only a factor of $O(\log n)$ in the competitive ratio, with $n$ denoting the (expected) number of nodes.
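
    A hedged sketch of the sample-then-accept idea follows (illustrative only; the acceptance rule and all names below are simplifications, not the paper's algorithm). In the secretary setting, a fraction of the arrivals is observed and rejected to gather information, after which nodes are accepted greedily if they conflict with no previously accepted node:

    ```python
    import random

    # Each arrival is (node, edges to previously arrived nodes).
    def online_independent_set(arrivals, n, sample_frac=0.5):
        accepted = set()
        for i, (node, back_edges) in enumerate(arrivals):
            if i < sample_frac * n:
                continue  # sampling phase: observe only
            if all(nbr not in accepted for nbr in back_edges):
                accepted.add(node)
        return accepted

    # Toy interval-graph instance presented in random order (secretary model):
    intervals = {0: (0, 2), 1: (1, 3), 2: (4, 6), 3: (5, 7)}
    order = list(intervals)
    random.shuffle(order)
    seen, arrivals = [], []
    for v in order:
        a, b = intervals[v]
        edges = [u for u in seen
                 if not (b <= intervals[u][0] or intervals[u][1] <= a)]
        arrivals.append((v, edges))
        seen.append(v)
    print(online_independent_set(arrivals, n=len(intervals)))
    ```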

    Self-Improving Algorithms

    We investigate ways in which an algorithm can improve its expected performance by fine-tuning itself automatically with respect to an unknown input distribution D. We assume here that D is of product type. More precisely, suppose that we need to process a sequence I_1, I_2, ... of inputs I = (x_1, x_2, ..., x_n) of some fixed length n, where each x_i is drawn independently from some arbitrary, unknown distribution D_i. The goal is to design an algorithm for these inputs so that eventually the expected running time will be optimal for the input distribution D = D_1 * D_2 * ... * D_n. We give such self-improving algorithms for two problems: (i) sorting a sequence of numbers and (ii) computing the Delaunay triangulation of a planar point set. Both algorithms achieve optimal expected limiting complexity. The algorithms begin with a training phase during which they collect information about the input distribution, followed by a stationary regime in which the algorithms settle to their optimized incarnations.
    Comment: 26 pages, 8 figures; preliminary versions appeared at SODA 2006 and SoCG 2008. Thorough revision to improve the presentation of the paper.
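
    The training/stationary structure can be sketched as follows (a deliberately simplified illustration of the sorting case; the learned bucket boundaries below stand in for the paper's per-coordinate search structures, and all names are hypothetical). Training samples from the unknown product distribution fix bucket boundaries; afterwards each input is sorted by distributing values into the learned buckets:

    ```python
    import bisect
    import random

    class SelfImprovingSorter:
        def __init__(self, n_buckets):
            self.n_buckets = n_buckets
            self.boundaries = None

        def train(self, training_inputs):
            """Training phase: learn approximate quantile boundaries."""
            pool = sorted(x for inp in training_inputs for x in inp)
            step = max(1, len(pool) // self.n_buckets)
            self.boundaries = pool[step::step][: self.n_buckets - 1]

        def sort(self, values):
            """Stationary regime: bucket by learned boundaries, then sort."""
            buckets = [[] for _ in range(self.n_buckets)]
            for x in values:
                buckets[bisect.bisect_left(self.boundaries, x)].append(x)
            out = []
            for b in buckets:
                out.extend(sorted(b))  # buckets are small in expectation
            return out

    # Product distribution: coordinate i is Gaussian with mean i (D_i).
    dist = lambda: [random.gauss(mu, 1.0) for mu in range(8)]
    sorter = SelfImprovingSorter(n_buckets=8)
    sorter.train([dist() for _ in range(100)])
    print(sorter.sort(dist()))
    ```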

    Distributed $(\Delta+1)$-Coloring in Sublogarithmic Rounds

    We give a new randomized distributed algorithm for $(\Delta+1)$-coloring in the LOCAL model, running in $O(\sqrt{\log \Delta}) + 2^{O(\sqrt{\log \log n})}$ rounds in a graph of maximum degree $\Delta$. This implies that the $(\Delta+1)$-coloring problem is easier than the maximal independent set problem and the maximal matching problem, due to their lower bounds of $\Omega\left(\min\left(\sqrt{\frac{\log n}{\log \log n}}, \frac{\log \Delta}{\log \log \Delta}\right)\right)$ by Kuhn, Moscibroda, and Wattenhofer [PODC'04]. Our algorithm also extends to list-coloring, where the palette of each node contains $\Delta+1$ colors. We extend the set of distributed symmetry-breaking techniques by performing a decomposition of graphs into dense and sparse parts.
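
    For context, the basic randomized building block that such coloring algorithms refine is the classic color trial, sketched below in a centralized round-by-round simulation (this is the standard $O(\log n)$-round primitive, not the paper's sublogarithmic algorithm; names are illustrative): each round, every uncolored node proposes a random color from its remaining palette and keeps it if no neighbor holds or also proposed that color.

    ```python
    import random

    def random_color_trial(adj):
        n = len(adj)
        delta = max(len(nbrs) for nbrs in adj.values())
        palette = {v: set(range(delta + 1)) for v in adj}
        color = {}
        while len(color) < n:
            # every uncolored node proposes a color from its palette
            proposal = {v: random.choice(sorted(palette[v]))
                        for v in adj if v not in color}
            for v, c in proposal.items():
                # keep c only if no neighbor holds it or proposed it too
                if all(color.get(u) != c and proposal.get(u) != c
                       for u in adj[v]):
                    color[v] = c
                    for u in adj[v]:
                        palette[u].discard(c)  # palette keeps >= 1 color
        return color

    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
    print(random_color_trial(adj))
    ```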