
    On the Probe Complexity of Local Computation Algorithms

    In the Local Computation Algorithms (LCA) model, the algorithm is asked to compute a part of the output while reading as little as possible of the input. For example, an LCA for coloring a graph is given a vertex name (as a "query"), and it should output the color assigned to that vertex after inquiring about some part of the graph topology using "probes"; all outputs must be consistent with the same coloring. LCAs are useful when the input is huge and the output as a whole is not needed at once. Most previous work on LCAs was limited to bounded-degree graphs, which seems inevitable because probes are of the form "what vertex is at the other end of edge i of vertex v?". In this work we study LCAs for unbounded-degree graphs. In particular, such LCAs are expected to probe the graph a number of times that is significantly smaller than the maximum, average, or even minimum degree. We show that there are problems that have very efficient LCAs on any graph: specifically, there is an LCA for the weak coloring problem (where a coloring is legal if every vertex has a neighbor with a different color) that uses log^* n + O(1) probes to answer any query. As another way of dealing with large degrees, we propose a more powerful type of probe, which we call a strong probe: given a vertex name, it returns the list of its neighbors. Lower bounds for strong probes are stronger than those in the edge probe model (which we call weak probes). Our main result in this model is that roughly Omega(sqrt{n}) strong probes are required to compute a maximal matching. Our findings include interesting separations between closely related problems. For weak probes, we show that while weak 3-coloring can be done with probe complexity log^* n + O(1), weak 2-coloring has probe complexity Omega(log n / log log n). For strong probes, our negative result for maximal matching is complemented by an LCA for (1-epsilon)-approximate maximum matching on regular graphs that uses O(1) strong probes, for any constant epsilon > 0.
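
    The abstract describes two probe types: a weak (edge) probe, which answers "what vertex is at the other end of edge i of vertex v?", and a strong probe, which returns the full neighbor list of a vertex. Below is a minimal sketch of those two interfaces with a probe counter, to make concrete what an LCA is charged per query; the class and method names are illustrative and not taken from the paper.

```python
# Minimal sketch of the two probe models described in the abstract.
# The names here are illustrative; they do not come from the paper.

class ProbeGraph:
    """Wraps an adjacency-list graph and counts the probes an LCA makes."""

    def __init__(self, adj):
        self.adj = adj          # adj[v] = list of neighbors of v
        self.probes = 0

    def weak_probe(self, v, i):
        """Weak (edge) probe: which vertex is at the other end of edge i of vertex v?"""
        self.probes += 1
        return self.adj[v][i] if i < len(self.adj[v]) else None

    def strong_probe(self, v):
        """Strong probe: given a vertex name, return the full list of its neighbors."""
        self.probes += 1
        return list(self.adj[v])


if __name__ == "__main__":
    g = ProbeGraph({0: [1, 2], 1: [0], 2: [0]})
    # An LCA answering a query about vertex 0 might issue probes like these:
    print(g.weak_probe(0, 1))   # -> 2        (one edge probe reveals one neighbor)
    print(g.strong_probe(0))    # -> [1, 2]   (one strong probe reveals all neighbors)
    print(g.probes)             # -> 2 probes charged to this query
```

    A single strong probe can reveal arbitrarily many edges, which is why lower bounds against strong probes, such as the roughly Omega(sqrt{n}) bound for maximal matching, are stronger statements than weak-probe lower bounds.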

    Logarithmic Communication for Distributed Optimization in Multi-Agent Systems

    Classically, the design of multi-agent systems is approached with techniques from distributed optimization such as dual descent and consensus algorithms. Such algorithms depend on convergence to a global consensus before any individual agent can determine its local action. This leads to challenges with respect to communication overhead and robustness, and improving algorithms along these measures has been a focus of the community for decades. This paper presents a new approach to multi-agent system design based on ideas from the emerging field of local computation algorithms. The framework we develop, LOcal Convex Optimization (LOCO), is the first local computation algorithm for convex optimization problems and can be applied in a wide variety of settings. We demonstrate the generality of the framework via applications to Network Utility Maximization (NUM) and the distributed training of Support Vector Machines (SVMs), providing numerical results that illustrate the improvement over classical distributed optimization approaches in each case.
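
    The abstract's point of comparison is classical distributed optimization, where agents can only act after a global price or consensus exchange has converged. The sketch below shows a standard dual-descent (dual decomposition) baseline for NUM, the kind of algorithm LOCO is contrasted with; it is not an implementation of LOCO, and the routing matrix, log utilities, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the classical dual-descent (dual decomposition) baseline for
# Network Utility Maximization that the abstract contrasts LOCO with.  This is
# NOT the LOCO algorithm; the problem data below are illustrative assumptions.

R = np.array([[1, 1, 0],      # R[l, s] = 1 if source s routes over link l
              [0, 1, 1]], dtype=float)
c = np.array([1.0, 2.0])      # link capacities
alpha = 0.05                  # dual (price) step size
lam = np.ones(2)              # one dual price per link

for _ in range(2000):
    # Each source s maximizes log(x_s) - x_s * (price of its path) locally;
    # for log utilities the maximizer is x_s = 1 / path_price.
    prices = R.T @ lam
    x = 1.0 / np.maximum(prices, 1e-9)
    # Each link updates its price from the aggregate traffic it observes.
    # This repeated network-wide price exchange is the global step that every
    # agent must wait on before its local action is final.
    lam = np.maximum(0.0, lam + alpha * (R @ x - c))

print("rates:", np.round(x, 3), "prices:", np.round(lam, 3))
```

    Every price update above requires a round of communication across the network, and agents only settle on their actions once this global process converges; reducing that dependence is the motivation the abstract gives for taking a local computation approach instead.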