Toward practical argument systems for verifiable computation
How can a client extract useful work from a server without trusting it to compute correctly? A modern motivation for this classic question is third party computing models in which customers outsource their computations to service providers (as in cloud computing). In principle, deep results in complexity theory and cryptography imply that it is possible to verify that an untrusted entity executed a computation correctly. For instance, the server can employ probabilistically checkable proofs (PCPs) in conjunction with cryptographic commitments to generate a succinct proof of correct execution, which the client can efficiently check. However, these theoretical solutions are impractical: they require thousands of CPU years to verifiably execute even simple computations. This dissertation describes the design, implementation, and experimental evaluation of a system, called Pepper, that brings this theory into the realm of plausibility. Pepper incorporates a series of algorithmic improvements and systems engineering techniques to improve performance by over 20 orders of magnitude, relative to an implementation of the theory without our refinements. These include a new probabilistically checkable proof encoding with nearly optimal asymptotics, a concise representation for computations, a more efficient cryptographic commitment primitive, and a distributed implementation of the server with GPU acceleration to reduce latency. Additionally, Pepper extends the verification machinery to handle realistic applications of third party computing: those that interact with remote storage or state (e.g., MapReduce jobs, database queries). To do so, Pepper composes techniques from untrusted storage with the aforementioned technical machinery to verifiably offload both computations and state. Furthermore, to make it easy to use this technology, Pepper includes a compiler to automatically transform programs in a subset of C into executables that run verifiably.
One of the chief limitations of Pepper is that verifiable execution is still orders of magnitude slower than an unverifiable native execution. Nonetheless, Pepper takes powerful results from complexity theory and verifiable computation a few steps closer to practicality.
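The commit-then-check interaction described above can be illustrated with a minimal hash-based commitment sketch. This is an assumption for illustration only: Pepper's actual commitment primitive is a more efficient cryptographic construction, and the function and variable names here are hypothetical.

```python
import hashlib
import secrets

def commit(value: bytes) -> tuple[bytes, bytes]:
    """Commit to a value with a random nonce; returns (commitment, nonce).

    The commitment hides the value until the nonce is revealed,
    and binds the committer to it (changing the value later would
    require finding a SHA-256 collision)."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + value).digest()
    return digest, nonce

def verify(commitment: bytes, value: bytes, nonce: bytes) -> bool:
    """Check that (value, nonce) opens the given commitment."""
    return hashlib.sha256(nonce + value).digest() == commitment

# The point of committing first: the server fixes its proof entries
# before the client reveals which positions it will spot-check.
c, nonce = commit(b"proof entry 17")
assert verify(c, b"proof entry 17", nonce)       # honest opening passes
assert not verify(c, b"tampered entry", nonce)   # altered entry is caught
```

The spot-checking style of a PCP relies on exactly this binding property: once committed, the server cannot adapt its answers to the client's queries.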
Aspects of practical implementations of PRAM algorithms
The PRAM is a shared memory model of parallel computation which abstracts away from inessential engineering details. It provides a very simple, architecture-independent model and a good programming environment. Theoreticians of the computer science community have proved that it is possible to emulate the theoretical PRAM model using current technology. Solutions have been found for effectively interconnecting processing elements, for routing data on these networks and for distributing the data among memory modules without hotspots. This thesis reviews this emulation and the possibilities it provides for large scale general purpose parallel computation. The emulation employs a bridging model which acts as an interface between the actual hardware and the PRAM model. We review the evidence that such a scheme can achieve scalable parallel performance and portable parallel software and that PRAM algorithms can be optimally implemented on such practical models. In the course of this review we present the following new results:
1. Concerning parallel approximation algorithms, we describe an NC algorithm for finding an approximation to a minimum weight perfect matching in a complete weighted graph. The algorithm is conceptually very simple and it is also the first NC-approximation algorithm for the task with a sub-linear performance ratio.
2. Concerning graph embedding, we describe dense edge-disjoint embeddings of the complete binary tree with n leaves in the following n-node communication networks: the hypercube, the de Bruijn and shuffle-exchange networks and the 2-dimensional mesh. In the embeddings the maximum distance from a leaf to the root of the tree is asymptotically optimally short. The embeddings facilitate efficient implementation of many PRAM algorithms on networks employing these graphs as interconnection networks.
3. Concerning bulk synchronous algorithmics, we describe scalable, transportable algorithms for the following three commonly required types of computation: balanced tree computations, Fast Fourier Transforms and matrix multiplications.
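The first of the three computation types listed above, a balanced tree computation, can be sketched as a round-by-round reduction. The sketch below simulates the rounds sequentially in Python; on a PRAM (or a bulk synchronous machine), each pairwise sum within a round would be executed by a separate processor, giving O(log n) parallel time. The function name and interface are illustrative, not taken from the thesis.

```python
def tree_sum(values):
    """Simulate a balanced-tree reduction: sum n values in
    ceil(log2(n)) rounds of pairwise additions."""
    level = list(values)
    rounds = 0
    while len(level) > 1:
        nxt = []
        # On a PRAM, every pair in this loop is summed concurrently
        # by its own processor within a single round.
        for i in range(0, len(level) - 1, 2):
            nxt.append(level[i] + level[i + 1])
        if len(level) % 2:          # odd element carries over unchanged
            nxt.append(level[-1])
        level = nxt
        rounds += 1
    return level[0], rounds

total, rounds = tree_sum(range(8))  # 0+1+...+7 = 28, in 3 rounds
```

In a bulk synchronous setting, each while-loop iteration corresponds to one superstep: local pairwise sums, then a communication phase to regroup partial results.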
A likelihood ratio analysis of digital phase modulation
Although the likelihood ratio forms the theoretical basis for maximum likelihood (ML) detection in coherent digital communication systems, it has not been applied directly to the problem of designing good trellis-coded modulation (TCM) schemes. The remarkably simple optimal receiver of minimum shift keying (MSK) has been shown to result from the mathematical simplification of its likelihood ratio into a single term. The log-likelihood ratio then becomes a linear sum of metrics which can be implemented as a so-called simplified receiver, comprising only a few adders and delay elements. This thesis project investigated the possible existence of coded modulation schemes with similarly simplifying likelihood ratios, which would have almost trivially simple receivers compared to the Viterbi decoders which are typically required for maximum likelihood sequence estimation (MLSE). A useful notation, called the likelihood transform, was presented to aid the analysis of likelihood ratios. The work concentrated initially on computer-aided searches, first for trellis codes which may give rise to simplifying likelihood ratios for continuous phase modulation (CPM), and then for mathematical identities which may aid in the simplification of generic likelihood ratios for equal-energy modulation. The first search yielded no simplified receivers, and all the identities produced by the second search had structures similar to the likelihood ratio of MSK. These observations prompted a formal proof of the non-existence of simplified receivers which use information from more than two symbols in their observation period. This result strictly bounds the error performance that is possible with a simplified receiver.
It was also proved that simplified receivers are only optimal for modulation schemes which use no more than two pairs of antipodal signals, and that only binary modulation schemes can have simplified receivers which use information from all the symbols in their observation period.
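The idea that a log-likelihood ratio can reduce to a linear sum of per-symbol metrics is easiest to see for plain antipodal (binary) signaling in Gaussian noise, the simplest case covered by the result above. The sketch below is an illustration of that linear-metric structure, not the MSK receiver itself; all names and parameters are assumptions for the example.

```python
def symbol_llr(r, a, noise_var):
    """Log-likelihood ratio for one antipodal symbol (+a vs -a)
    observed as r in zero-mean Gaussian noise of variance noise_var.

    log p(r|+a) - log p(r|-a) = [(r+a)^2 - (r-a)^2] / (2*noise_var)
                              = 2*a*r / noise_var,
    i.e. the metric is linear in the received sample."""
    return 2.0 * a * r / noise_var

def decide(received, a=1.0, noise_var=1.0):
    """A 'simplified receiver' in this sense: symbol decisions come
    from independent linear metrics, with no trellis search (no
    Viterbi decoding) over the whole sequence."""
    return [1 if symbol_llr(r, a, noise_var) >= 0 else 0 for r in received]

# Noisy observations of bits 1, 0, 1 transmitted as +1 / -1:
bits = decide([0.8, -1.1, 0.4])  # -> [1, 0, 1]
```

The thesis's non-existence result says, in effect, that this additive decomposition does not survive beyond very restricted signal sets: for richer coded modulation schemes, the likelihood ratio does not collapse into such a sum, and a sequence detector remains necessary.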