
    Edge-supported approximate analysis for long running computations

    With the increasing availability of Internet of Things (IoT) devices, and of potential applications that make use of data from such devices, there is a need to better identify appropriate data processing techniques that can be applied to this data. The computational complexity of these applications, and the complexity of the requirements on the data processing techniques, often derive from the capabilities of current IoT devices and the need to integrate data streams across multiple IoT devices, which result in larger data sizes and loads on the computing infrastructure. Furthermore, due to the dynamics and uncertainties of edge environments, it is essential that these techniques are capable of adapting across a range of computational and data transfer requirements (such as execution performance) and infrastructure scales (processing nodes, storage needs, network requirements) to carry out a particular analysis task, in response to changing requirements and constraints. Approximate computing offers techniques that can simplify the overall analysis workflow, trading off loss in quality and optimality of the solution against the time to reach a particular outcome. These techniques have two main advantages: (i) reduced time to execute a particular data analysis; (ii) reduced requirements on the computational infrastructure (e.g., lower energy and computational resource needs) to carry out such analysis. With data processing capabilities available on IoT devices and associated gateway nodes, such approximate computing can be achieved at or close to the network edge. In this paper, we propose in-transit and edge-supported approximation techniques, which can undertake partial/approximate data processing at the data generation/capture or aggregation site, prior to delivery to a cloud data center. We also demonstrate how such an approach can be used in practice by applying it to support energy optimization in built environments (utilizing a combination of sensors and cloud-based data analysis). Several approximation techniques that are relevant in this context are presented, and their relevance is explored and evaluated in the context of an energy simulation application scenario.
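
    As an illustration of the kind of in-transit reduction the abstract describes, the sketch below keeps a bounded uniform sample of sensor readings at an IoT gateway before forwarding them to the cloud. Reservoir sampling is used here only as a stand-in approximation technique, and the class and parameter names are invented for the example, not taken from the paper.

    import random

    class EdgeApproximator:
        """Toy in-transit approximation at an IoT gateway (reservoir sampling).

        Keeps a fixed-size uniform sample of the readings passing through the
        gateway, so the cloud back end receives a bounded payload regardless
        of how many readings arrive; a hypothetical sketch, not the paper's
        actual technique.
        """

        def __init__(self, capacity=1000, seed=0):
            self.capacity = capacity
            self.seen = 0
            self.sample = []
            self.rng = random.Random(seed)

        def observe(self, reading):
            """Process one sensor reading at or near the network edge."""
            self.seen += 1
            if len(self.sample) < self.capacity:
                self.sample.append(reading)
            else:
                j = self.rng.randrange(self.seen)
                if j < self.capacity:
                    self.sample[j] = reading  # classic reservoir replacement step

        def flush_to_cloud(self):
            """Hand the reduced payload to the cloud data center and reset."""
            payload, self.sample, self.seen = self.sample, [], 0
            return payload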

    Low Diameter Graph Decompositions by Approximate Distance Computation

    In many models for large-scale computation, decomposition of the problem is key to efficient algorithms. For distance-related graph problems, it is often crucial that such a decomposition results in clusters of small diameter, while the probability that an edge is cut by the decomposition scales linearly with the length of the edge. There is a large body of literature on low diameter graph decomposition with small edge cutting probabilities, with all existing techniques heavily building on single source shortest paths (SSSP) computations. Unfortunately, in many theoretical models for large-scale computations, the SSSP task constitutes a complexity bottleneck. Therefore, it is desirable to replace exact SSSP computations with approximate ones. However, this imposes a fundamental challenge since the existing constructions of low diameter graph decomposition with small edge cutting probabilities inherently rely on the subtractive form of the triangle inequality, which fails to hold under distance approximation. The current paper overcomes this obstacle by developing a technique termed blurry ball growing. By combining this technique with a clever algorithmic idea of Miller et al. (SPAA 2013), we obtain a construction of low diameter decompositions with small edge cutting probabilities which replaces exact SSSP computations by (a small number of) approximate ones. The utility of our approach is showcased by deriving efficient algorithms that work in the CONGEST, PRAM, and semi-streaming models of computation. As an application, we obtain metric tree embedding algorithms in the vein of Bartal (FOCS 1996) whose computational complexities in these models are optimal up to polylogarithmic factors. Our embeddings have the additional useful property that the tree can be mapped back to the original graph such that each edge is "used" only logarithmically many times, which is of interest for capacitated problems and simulating CONGEST algorithms on the tree into which the graph is embedded.
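
    For context, the sketch below implements the exponential-shift clustering idea of Miller et al. (SPAA 2013) that the abstract builds on, using exact shortest-path distances; the paper's contribution (blurry ball growing) is precisely what allows approximate distances to be used instead, and is not reproduced here. The function and parameter names are illustrative.

    import heapq
    import random

    def mpx_decomposition(adj, beta, seed=0):
        """Low-diameter decomposition via exponential shifts (Miller et al. style).

        adj: dict mapping each vertex to a list of (neighbor, edge_length) pairs.
        beta: clusters have diameter O(log n / beta) w.h.p., and an edge of
              length l is cut with probability O(beta * l).
        Returns a dict mapping each vertex to its cluster center.
        """
        rng = random.Random(seed)
        # Each vertex gets an exponential head start for growing its ball.
        delta = {v: rng.expovariate(beta) for v in adj}
        dmax = max(delta.values())

        dist, center = {}, {}
        # Multi-source Dijkstra: source v starts at shifted time dmax - delta[v].
        pq = [(dmax - delta[v], v, v) for v in adj]
        heapq.heapify(pq)
        while pq:
            d, u, c = heapq.heappop(pq)
            if u in dist:
                continue
            dist[u], center[u] = d, c
            for w, length in adj[u]:
                if w not in dist:
                    heapq.heappush(pq, (d + length, w, c))
        return center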

    Fast Local Computation Algorithms

    For input x, let F(x) denote the set of outputs that are the "legal" answers for a computational problem F. Suppose x and members of F(x) are so large that there is not time to read them in their entirety. We propose a model of local computation algorithms which, for a given input x, support queries by a user to values of specified locations y_i in a legal output y ∈ F(x). When more than one legal output y exists for a given x, the local computation algorithm should output in a way that is consistent with at least one such y. Local computation algorithms are intended to distill the common features of several concepts that have appeared in various algorithmic subfields, including local distributed computation, local algorithms, locally decodable codes, and local reconstruction. We develop a technique, based on known constructions of small sample spaces of k-wise independent random variables and Beck's analysis in his algorithmic approach to the Lovász Local Lemma, which under certain conditions can be applied to construct local computation algorithms that run in polylogarithmic time and space. We apply this technique to maximal independent set computations, scheduling radio network broadcasts, hypergraph coloring and satisfying k-SAT formulas. Comment: A preliminary version of this paper appeared in ICS 2011, pp. 223-23
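
    The toy oracle below conveys the local computation model from the abstract for maximal independent set: a query "is vertex v in the MIS?" is answered by exploring only the part of the graph it needs, using a random rank per vertex. The paper's polylogarithmic bounds come from k-wise independent ranks and Beck's LLL-style analysis; the plain random ranks and the names used here are assumptions made for the sketch.

    import random

    def make_mis_oracle(adj, seed=0):
        """Toy local computation oracle for maximal independent set membership.

        adj: dict mapping each vertex to an iterable of neighbors.
        Returns in_mis(v): True iff v belongs to the greedy, rank-ordered
        maximal independent set, i.e. no lower-ranked neighbor is in it.
        """
        rng = random.Random(seed)
        rank = {v: rng.random() for v in adj}  # stand-in for a k-wise independent hash
        memo = {}

        def in_mis(v):
            if v in memo:
                return memo[v]
            # v joins the MIS unless some lower-ranked neighbor already did.
            answer = all(not in_mis(u) for u in adj[v] if rank[u] < rank[v])
            memo[v] = answer
            return answer

        return in_mis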

    Fast and Deterministic Approximations for k-Cut

    In an undirected graph, a k-cut is a set of edges whose removal breaks the graph into at least k connected components. The minimum weight k-cut can be computed in n^O(k) time, but when k is treated as part of the input, computing the minimum weight k-cut is NP-Hard [Goldschmidt and Hochbaum, 1994]. For poly(m,n,k)-time algorithms, the best possible approximation factor is essentially 2 under the small set expansion hypothesis [Manurangsi, 2017]. Saran and Vazirani [1995] showed that a (2 - 2/k)-approximately minimum weight k-cut can be computed via O(k) minimum cuts, which implies an O~(km) randomized running time via the nearly linear time randomized min-cut algorithm of Karger [2000]. Nagamochi and Kamidoi [2007] showed that a (2 - 2/k)-approximately minimum weight k-cut can be computed deterministically in O(mn + n^2 log n) time. These results prompt two basic questions. The first concerns the role of randomization. Is there a deterministic algorithm for 2-approximate k-cuts matching the randomized running time of O~(km)? The second question qualitatively compares minimum cut to 2-approximate minimum k-cut. Can 2-approximate k-cuts be computed as fast as the minimum cut - in O~(m) randomized time? We give a deterministic approximation algorithm that computes (2 + eps)-approximate minimum k-cuts in O(m log^3 n / eps^2) time, via a (1 + eps)-approximation for an LP relaxation of k-cut.
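
    For reference, the Saran-Vazirani baseline mentioned in the abstract can be sketched as a greedy splitting procedure: keep removing the cheapest minimum cut among the current components until there are k of them. The code below uses networkx's Stoer-Wagner minimum cut for this; it illustrates the (2 - 2/k) baseline only, not the paper's deterministic LP-based (2 + eps)-approximation.

    import networkx as nx

    def greedy_k_cut(G, k):
        """Greedy (2 - 2/k)-approximate minimum k-cut (Saran-Vazirani style).

        G: connected undirected nx.Graph with 'weight' edge attributes
        (missing weights are treated as 1 by stoer_wagner).
        Returns the list of removed edges whose deletion leaves >= k components.
        """
        H = G.copy()
        removed = []
        while nx.number_connected_components(H) < k:
            best = None
            # Cheapest minimum cut over all current components with >= 2 nodes.
            for comp in nx.connected_components(H):
                if len(comp) < 2:
                    continue
                value, (side_a, _) = nx.stoer_wagner(H.subgraph(comp), weight="weight")
                if best is None or value < best[0]:
                    best = (value, set(side_a), comp)
            _, side_a, comp = best
            crossing = [(u, v) for u, v in H.subgraph(comp).edges()
                        if (u in side_a) != (v in side_a)]
            H.remove_edges_from(crossing)
            removed.extend(crossing)
        return removed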

    The Query Complexity of Correlated Equilibria

    We consider the complexity of finding a correlated equilibrium of an n-player game in a model that allows the algorithm to make queries on players' payoffs at pure strategy profiles. Randomized regret-based dynamics are known to yield an approximate correlated equilibrium efficiently, namely, in time that is polynomial in the number of players n. Here we show that both randomization and approximation are necessary: no efficient deterministic algorithm can reach even an approximate correlated equilibrium, and no efficient randomized algorithm can reach an exact correlated equilibrium. The results are obtained by bounding from below the number of payoff queries that are needed.
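
    The randomized dynamics the abstract refers to can be sketched with Hart-Mas-Colell regret matching, where every payoff evaluation is exactly a query at a pure strategy profile; the empirical distribution of joint play then approaches the set of approximate correlated equilibria. The constants, parameter names, and the payoff-range assumption below are illustrative choices, not taken from the paper.

    import random
    from collections import defaultdict

    def regret_matching(payoff, num_actions, rounds, mu=None, seed=0):
        """Regret-matching dynamics driven by payoff queries.

        payoff(i, profile): player i's payoff at a pure strategy profile
        (a tuple of one action per player) -- one call = one payoff query.
        num_actions[i]: number of actions of player i.  Payoffs are assumed
        to lie in [0, 1].  Returns the empirical distribution of joint play.
        """
        rng = random.Random(seed)
        n = len(num_actions)
        mu = mu or 2 * max(num_actions)  # normalizer keeping switch probabilities small
        regret = [defaultdict(float) for _ in range(n)]  # cumulative regret R_i[(j, k)]
        last = tuple(rng.randrange(num_actions[i]) for i in range(n))
        empirical = defaultdict(int)

        for t in range(1, rounds + 1):
            profile = []
            for i in range(n):
                j = last[i]
                # Switch from j to k with probability proportional to positive regret.
                probs = [max(regret[i][(j, k)], 0.0) / (mu * t) for k in range(num_actions[i])]
                probs[j] = max(0.0, 1.0 - sum(probs))
                profile.append(rng.choices(range(num_actions[i]), weights=probs)[0])
            profile = tuple(profile)
            empirical[profile] += 1
            for i in range(n):
                j = profile[i]
                base = payoff(i, profile)  # payoff query at the realized profile
                for k in range(num_actions[i]):
                    if k != j:
                        alt = profile[:i] + (k,) + profile[i + 1:]
                        regret[i][(j, k)] += payoff(i, alt) - base  # more payoff queries
            last = profile
        return {p: c / rounds for p, c in empirical.items()}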