
    Streaming Verification of Graph Computations via Graph Structure

    We give new algorithms in the annotated data streaming setting - also known as verifiable data stream computation - for certain graph problems. This setting is meant to model outsourced computation, where a space-bounded verifier limited to sequential data access seeks to overcome its computational limitations by engaging a powerful prover, without needing to trust the prover. As is well established, several problems that admit no sublinear-space algorithms under traditional streaming do allow protocols using a sublinear amount of prover/verifier communication and sublinear-space verification. We give algorithms for many well-studied graph problems, including triangle counting, its generalization to subgraph counting, maximum matching, problems about the existence (or not) of short paths, finding the shortest path between two vertices, and testing for an independent set. While some of these problems have been studied before, our results achieve tradeoffs between space and communication costs that were hitherto unknown. In particular, two of our results disprove explicit conjectures of Thaler (ICALP, 2016) by giving triangle counting and maximum matching algorithms for n-vertex graphs using o(n) space and o(n^2) communication.

    An Improved Interactive Streaming Algorithm for the Distinct Elements Problem

    The exact computation of the number of distinct elements (the frequency moment $F_0$) is a fundamental problem in the study of data streaming algorithms. We denote the length of the stream by $n$, where each symbol is drawn from a universe of size $m$. While it is well known that the moments $F_0$, $F_1$, and $F_2$ can be approximated by efficient streaming algorithms, it is easy to see that exact computation of $F_0$ or $F_2$ requires $\Omega(m)$ space. In previous work, Cormode et al. therefore considered a model where the data stream is also processed by a powerful helper, who provides an interactive proof of the result. They gave such protocols with a polylogarithmic number of rounds of communication between helper and verifier for all functions in NC. This number of rounds ($O(\log^2 m)$ in the case of $F_0$) can quickly make such protocols impractical. Cormode et al. also gave a protocol with $\log m + 1$ rounds for the exact computation of $F_0$, where the space complexity is $O(\log m \log n + \log^2 m)$ but the total communication is $O(\sqrt{n} \log m (\log n + \log m))$. They managed to give $\log m$-round protocols with $\operatorname{polylog}(m,n)$ complexity for many other interesting problems, including $F_2$, inner product, and range-sum, but computing $F_0$ exactly with polylogarithmic space and communication in $O(\log m)$ rounds remained open. In this work, we give a streaming interactive protocol with $\log m$ rounds for the exact computation of $F_0$, using $O(\log m (\log n + \log m \log\log m))$ bits of space and $O(\log m (\log n + \log^3 m (\log\log m)^2))$ bits of communication. The verifier's update time per received symbol is $O(\log^2 m)$. Comment: Submitted to ICALP 201
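
    The core streaming primitive behind protocols of this kind is a polynomial fingerprint of the frequency vector, which lets a space-bounded verifier later test a prover's claims against the stream it actually saw. The sketch below is my own minimal illustration of that fingerprinting idea only, not the paper's log m-round protocol; all names, values, and parameters are invented for the example.

```python
# Toy illustration (mine, not the paper's protocol) of the fingerprinting
# primitive used by streaming verifiers: the verifier keeps a single field
# element encoding the frequency vector f as the value
# sum_a f[a] * r**a (mod P) for a secret random r. Any prover message that
# misstates the frequency vector is detected except with probability at most
# roughly m / P (Schwartz-Zippel).
import random

P = (1 << 61) - 1                       # a large prime field modulus

def fingerprint_stream(stream, r):
    """One streaming pass: add r**a for every occurrence of element a."""
    fp = 0
    for a in stream:                    # each a lies in {0, ..., m-1}
        fp = (fp + pow(r, a, P)) % P
    return fp

def fingerprint_vector(f, r):
    """Fingerprint of a claimed frequency vector f[0..m-1]."""
    return sum(f_a * pow(r, a, P) for a, f_a in enumerate(f)) % P

m = 8
stream = [3, 5, 3, 0, 5, 5]
r = random.randrange(1, P)              # chosen by the verifier, kept secret
v_fp = fingerprint_stream(stream, r)

honest   = [1, 0, 0, 2, 0, 3, 0, 0]     # true frequencies of elements 0..7
cheating = [1, 0, 0, 2, 0, 2, 0, 0]     # lies about element 5's frequency
assert fingerprint_vector(honest, r) == v_fp
assert fingerprint_vector(cheating, r) != v_fp   # caught w.h.p.
```

    Once a claimed frequency vector is verified this way, F_0 can be read off it directly, but such a proof is m field elements long; the protocol in the paper instead reaches polylogarithmic space and communication in log m rounds.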

    Streaming Verification of Graph Properties

    Streaming interactive proofs (SIPs) are a framework for outsourced computation. A computationally limited streaming client (the verifier) hands over a large data set to an untrusted server (the prover) in the cloud, and the two parties run a protocol to confirm the correctness of the result with high probability. SIPs are particularly interesting for problems that are hard to solve (or even approximate well) in a streaming setting. The most notable of these problems is finding maximum matchings, which has received intense interest in recent years but has strong lower bounds even for constant-factor approximations. In this paper, we present efficient streaming interactive proofs that can verify maximum matchings exactly. Our results cover all flavors of matchings (bipartite/non-bipartite and weighted). In addition, we also present streaming verifiers for approximate metric TSP. In particular, these are the first efficient results for weighted matchings and for metric TSP in any streaming verification model. Comment: 26 pages, 2 figures, 1 table
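
    To make "verifying a maximum matching exactly" concrete, the toy check below uses the classical Koenig certificate for bipartite graphs: a matching is maximum if and only if it is accompanied by a vertex cover of the same size. This is my own non-streaming illustration under that assumption; the paper's SIPs additionally have to check, with small space and a short interaction, that every claimed edge actually appeared in the stream (for example via fingerprints), and they also handle the non-bipartite and weighted cases.

```python
# Toy (non-streaming) sketch of the certificate idea behind verifying a maximum
# bipartite matching: by Koenig's theorem, a matching M is maximum iff there is
# a vertex cover C with |C| == |M|. The verifier trusts neither object and
# checks both locally. Edges are given as (left_vertex, right_vertex) pairs.
def verify_max_bipartite_matching(edges, matching, cover):
    edge_set = set(map(tuple, edges))
    # 1. M consists of graph edges and is pairwise vertex-disjoint.
    used = set()
    for u, v in matching:
        if (u, v) not in edge_set or u in used or v in used:
            return False
        used.update((u, v))
    # 2. C covers every edge of the graph.
    cov = set(cover)
    if any(u not in cov and v not in cov for u, v in edge_set):
        return False
    # 3. Equal sizes certify that M is maximum (and C is minimum).
    return len(matching) == len(cover)

edges = [("a", "x"), ("a", "y"), ("b", "x"), ("c", "z")]
print(verify_max_bipartite_matching(edges,
                                    [("a", "y"), ("b", "x"), ("c", "z")],
                                    ["a", "x", "z"]))   # True
```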

    Semi-Streaming Algorithms for Annotated Graph Streams

    Considerable effort has been devoted to the development of streaming algorithms for analyzing massive graphs. Unfortunately, many results have been negative, establishing that a wide variety of problems require $\Omega(n^2)$ space to solve. One of the few bright spots has been the development of semi-streaming algorithms for a handful of graph problems: these algorithms use space $O(n \cdot \text{polylog}(n))$. In the annotated data streaming model of Chakrabarti et al., a computationally limited client wants to compute some property of a massive input, but lacks the resources to store even a small fraction of the input, and hence cannot perform the desired computation locally. The client therefore accesses a powerful but untrusted service provider, who not only performs the requested computation, but also proves that the answer is correct. We put forth the notion of semi-streaming algorithms for annotated graph streams (semi-streaming annotation schemes for short). These are protocols in which both the client's space usage and the length of the proof are $O(n \cdot \text{polylog}(n))$. We give evidence that semi-streaming annotation schemes represent a substantially more robust solution concept than does the standard semi-streaming model. On the positive side, we give semi-streaming annotation schemes for two dynamic graph problems that are intractable in the standard model: (exactly) counting triangles, and (exactly) computing maximum matchings. The former scheme answers a question of Cormode. On the negative side, we identify for the first time two natural graph problems (connectivity and bipartiteness in a certain edge update model) that can be solved in the standard semi-streaming model, but cannot be solved by annotation schemes of "sub-semi-streaming" cost. That is, these problems are just as hard in the annotation model as they are in the standard model. Comment: This update includes some additional discussion of the results proven. The result on counting triangles was previously included in an ECCC technical report by Chakrabarti et al., available at http://eccc.hpi-web.de/report/2013/180/. That report has been superseded by this manuscript and the CCC 2015 paper "Verifiable Stream Computation and Arthur-Merlin Communication" by Chakrabarti et al.

    New Verification Schemes for Frequency-Based Functions on Data Streams

    We study the general problem of computing frequency-based functions, i.e., the sum of any given function of the data stream's frequencies. Special cases include fundamental data stream problems such as computing the number of distinct elements ($F_0$), frequency moments ($F_k$), and heavy hitters. It can also be applied to calculate the maximum frequency of an element ($F_{\infty}$). Given that exact computation of most of these special cases provably does not admit any sublinear-space algorithm, a natural approach is to consider them in an enhanced data streaming model, where we have a computationally unbounded but untrusted prover sending proofs or help messages to ease the computation. Think of a memory-restricted client delegating the computation to a stronger cloud service that it doesn't want to trust blindly. Using its limited memory, it wants to verify the proof that the cloud sends. Chakrabarti et al. (ICALP '09) introduced this setting as the "annotated data streaming model" and showed that multiple problems, including exact computation of frequency-based functions (which have no sublinear algorithms in basic streaming), do have annotated streaming algorithms, also called "schemes", with both space and proof length sublinear in the input size. We give a general scheme for computing any frequency-based function with both space usage and proof size of $O(n^{2/3}\log n)$ bits, where $n$ is the size of the universe. This improves upon the best known bound of $O(n^{2/3}\log^{4/3} n)$ given by the seminal paper of Chakrabarti et al. and, as a result, also improves upon the best known bounds for the important special cases of computing $F_0$ and $F_{\infty}$. We emphasize that while being quantitatively better, our scheme is also qualitatively better in the sense that it is simpler than the previously best scheme, which uses intricate data structures and elaborate subroutines. Comment: To appear in FSTTCS 202
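
    As a small aside on terminology, the snippet below (my own illustration, not the paper's scheme) spells out what "frequency-based" means: every such statistic is a sum of a fixed function g applied to each universe element's frequency, which is why one general scheme covers F_0, F_k, and heavy-hitter-style counts at once.

```python
# Frequency-based functions: sum_i g(f_i), where f_i is the frequency of
# universe element i in the stream and g is an arbitrary fixed function.
from collections import Counter

def frequency_based(stream, g, universe_size):
    """Compute sum over the universe of g(frequency of element i)."""
    f = Counter(stream)
    return sum(g(f[i]) for i in range(universe_size))

stream = [3, 5, 3, 0, 5, 5]
m = 8                                                   # universe {0, ..., 7}
F0    = frequency_based(stream, lambda x: int(x > 0), m)   # distinct elements
F2    = frequency_based(stream, lambda x: x * x, m)        # second moment
heavy = frequency_based(stream, lambda x: int(x >= 3), m)  # count of freq >= 3
print(F0, F2, heavy)                                    # 3 14 1
# F_infinity (the maximum frequency) is not itself such a sum, but it can be
# recovered from threshold counts like `heavy` by searching over the threshold.
```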

    Streaming Verification for Graph Problems: Optimal Tradeoffs and Nonlinear Sketches

    We study graph computations in an enhanced data streaming setting, where a space-bounded client reading the edge stream of a massive graph may delegate some of its work to a cloud service. We seek algorithms that allow the client to verify a purported proof sent by the cloud service that the work done in the cloud is correct. A line of work starting with Chakrabarti et al. (ICALP 2009) has provided such algorithms, which we call schemes, for several statistical and graph-theoretic problems, many of which exhibit a tradeoff between the length of the proof and the space used by the streaming verifier. This work designs new schemes for a number of basic graph problems (including triangle counting, maximum matching, topological sorting, and single-source shortest paths) where past work had either failed to obtain smooth tradeoffs between these two key complexity measures or only obtained suboptimal tradeoffs. Our key innovation is having the verifier compute certain nonlinear sketches of the input stream, leading to either new or improved tradeoffs. In many cases, our schemes in fact provide optimal tradeoffs up to logarithmic factors. Specifically, for most graph problems that we study, it is known that the product of the verifier's space cost $v$ and the proof length $h$ must be at least $\Omega(n^2)$ for $n$-vertex graphs. However, matching upper bounds are only known for a handful of settings of $h$ and $v$ on the curve $h \cdot v = \tilde{\Theta}(n^2)$. For example, for counting triangles and maximum matching, schemes with costs lying on this curve are only known for $(h = \tilde{O}(n^2), v = \tilde{O}(1))$, $(h = \tilde{O}(n), v = \tilde{O}(n))$, and the trivial $(h = \tilde{O}(1), v = \tilde{O}(n^2))$. A major message of this work is that, by exploiting nonlinear sketches, a significant "portion" of costs on the tradeoff curve $h \cdot v = n^2$ can be achieved.
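
    To make the curve concrete, one can parameterize its points as below; this display is purely illustrative notation of mine, and the abstract does not commit to which intermediate exponents beyond the three listed points are attained.

```latex
% One-parameter family of points on the proof-length/space tradeoff curve;
% alpha = 0, 1, 2 recover the three previously known upper bounds above.
\[
  v = \tilde{\Theta}\!\left(n^{\alpha}\right), \qquad
  h = \tilde{\Theta}\!\left(n^{2-\alpha}\right), \qquad
  h \cdot v = \tilde{\Theta}\!\left(n^{2}\right),
  \qquad 0 \le \alpha \le 2 .
\]
```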

    SCALABLE INTEGRATED CIRCUIT SIMULATION ALGORITHMS FOR ENERGY-EFFICIENT TERAFLOP HETEROGENEOUS PARALLEL COMPUTING PLATFORMS

    Integrated circuit technology has gone through several decades of aggressive scaling. It is increasingly challenging to analyze the growing design complexity. Post-layout SPICE simulation can be computationally prohibitive due to the huge number of parasitic elements, which can easily boost the computation and memory cost. As device sizes decrease, circuits become more vulnerable to process variations. Designers need to statistically estimate the probability that a circuit does not meet its performance metric, which requires millions of simulations to capture rare failure events. Recently, multiprocessors with heterogeneous architectures have emerged as mainstream computing platforms. Such heterogeneous platforms can achieve high-throughput, energy-efficient computing. However, applying such platforms is not trivial and requires reinventing existing algorithms to fully utilize the computing resources. This dissertation presents several new algorithms to address these two significant and challenging issues on heterogeneous platforms. Harmonic balance (HB) analysis is essential for the efficient verification of large post-layout RF and microwave integrated circuits (ICs). However, existing methods either suffer from excessively long simulation times and prohibitively large memory consumption or exhibit poor stability. This dissertation introduces a novel transient-simulation-guided graph sparsification technique, as well as an efficient runtime performance modeling approach tailored for heterogeneous manycore CPU-GPU computing systems, to build nearly optimal subgraph preconditioners that lead to minimum HB simulation runtime. Additionally, we propose a novel heterogeneous parallel sparse block matrix algorithm that takes advantage of the structure of HB Jacobian matrices as well as the GPU's streaming multiprocessors to achieve optimal workload balancing during the preconditioning phase of HB analysis. We also show how the proposed preconditioned iterative algorithm can efficiently adapt to heterogeneous computing systems with different CPU and GPU computing capabilities. Extensive experimental results show that our HB solver can achieve up to 20X speedups and 5X memory reduction when compared with a state-of-the-art direct solver highly optimized for twelve-core CPUs. In today's variation-aware IC designs, cell characterization and SRAM memory yield analysis require many thousands or even millions of repeated SPICE simulations of relatively small nonlinear circuits. In this dissertation, for the first time, we present a massively parallel SPICE simulator on GPU, TinySPICE, for efficiently analyzing small nonlinear circuits. TinySPICE integrates a highly optimized shared-memory-based matrix solver and a fast parametric three-dimensional (3D) LUT-based device evaluation method. A novel circuit clustering method is also proposed to improve the stability and efficiency of the matrix solver. Compared with a CPU-based SPICE simulator, TinySPICE achieves up to 264X speedups for parametric SRAM yield analysis without loss of accuracy.
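
    The numerical core of the HB flow described above is solving large sparse Jacobian systems with a preconditioned Krylov method. The sketch below is a generic stand-in of my own, using SciPy, a synthetic matrix, and a plain Jacobi preconditioner in place of the dissertation's transient-simulation-guided subgraph preconditioners, just to show the solve pattern being accelerated; every matrix, size, and parameter here is made up.

```python
# Minimal sketch (not the dissertation's solver): solve a sparse linear system
# with preconditioned GMRES. A diagonal (Jacobi) preconditioner stands in for
# the subgraph preconditioners built from transient-simulation data.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# A sparse, diagonally dominant stand-in for one HB Jacobian block.
A = sp.diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Preconditioner M ~ A^{-1}: here simply the inverse of the diagonal.
inv_diag = 1.0 / A.diagonal()
M = spla.LinearOperator((n, n), matvec=lambda v: inv_diag * v)

x, info = spla.gmres(A, b, M=M, restart=50, maxiter=200)
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(b - A @ x))
```

    The dissertation's contribution lies in how the preconditioner is constructed and mapped onto heterogeneous CPU-GPU hardware, not in this generic solve loop.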

    The Synchronized Filtering Dataflow

    In the past decade, the world has seen the rise of big data, which calls for a paradigm shift in data processing. Streaming processing, where data are processed in their spatial or temporal order, is increasingly common. Meanwhile, parallel computing has become a household term in the computing world. The combination of streaming processing and parallel computing, streaming computing, has been playing an important role in data processing. A streaming computing system is a network of nodes connected by unidirectional first-in first-out (FIFO) data channels. When a node has multiple input channels, synchronization is required on those channels when the node consumes data, to ensure the deterministic behavior of the whole system. After a streaming computing node finishes a computation, it may choose not to produce output on some of its output channels. This behavior, known as filtering, is data-dependent and unpredictable. When filtered data streams are synchronized, applications can deadlock due to empty and full channel buffers. To avoid deadlocks and ensure bounded-memory execution, we turn to model-based approaches. In this dissertation, we propose the synchronized filtering dataflow (SFDF) to model synchronization and filtering behaviors. We avoid deadlocks in SFDF applications by augmenting data streams with dummy messages. We design decentralized algorithms that compute a dummy interval for each channel at compilation time and schedule dummy messages according to these intervals at runtime. The runtime parts of our algorithms are very efficient, adding little overhead to computing nodes, but computing dummy intervals can be very time-consuming on general dataflow graphs. We therefore design efficient algorithms to compute dummy intervals for streaming applications with special topologies. In particular, we focus on series-parallel directed acyclic graphs (SP-DAGs) and CS4 DAGs, in which each undirected cycle is single-source and single-sink. We further extend our work to describe a set of polyhedral constraints that defines all sets of safe dummy intervals for any dataflow graph, which gives us more flexibility in choosing dummy intervals. We also provide a polynomial-time algorithm to verify the safety of given dummy intervals for SP-DAGs. Dummy messages are only one type of control message used by streaming applications. We extend our SFDF model to support more types of control messages that are precisely synchronized with data streams. We use two types of control messages, dummy messages and credit messages, to guarantee bounded-memory execution. We demonstrate that the extended model can help improve the performance of some applications by adding filtering behavior to non-filtering applications.
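
    The toy sketch below (my own, far simpler than the SFDF algorithms in the dissertation) illustrates the dummy-message idea with the smallest possible dummy interval, d = 1: a filtering node emits an explicit placeholder whenever it drops an item, so a downstream node that blocks for one message per input channel never starves or deadlocks. The dissertation's algorithms compute larger safe intervals (and add credit messages) so that this padding stays cheap.

```python
# Toy sketch of dummy messages in a filtering dataflow. With dummy interval
# d = 1, each output channel carries exactly one message per input item, so a
# downstream node reading one message from every input channel always proceeds.
DUMMY = object()

def filtering_node(items, keep):
    """Emit the item if kept, otherwise a DUMMY placeholder (interval d = 1)."""
    for x in items:
        yield x if keep(x) else DUMMY

def synchronized_join(chan_a, chan_b):
    """Read one message from each channel per step; pair up non-dummy items."""
    for a, b in zip(chan_a, chan_b):
        if a is not DUMMY and b is not DUMMY:
            yield (a, b)

twos   = filtering_node(range(12), lambda x: x % 2 == 0)
threes = filtering_node(range(12), lambda x: x % 3 == 0)
print(list(synchronized_join(twos, threes)))   # [(0, 0), (6, 6)]
```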