Delegating Computations with (almost) Minimal Time and Space Overhead
The problem of verifiable delegation of computation considers a setting in which a client wishes to outsource an expensive computation to a powerful, but untrusted, server. Since the client does not trust the server, we would like the server to certify the correctness of the result. Delegation has emerged as a central problem in cryptography, with a flurry of recent activity in both theory and practice. In all of these works, the main bottleneck is the overhead incurred by the server, both in time and in space.
Assuming (sub-exponential) LWE, we construct a one-round argument-system for proving the correctness of any time-T and space-S RAM computation, in which both the verifier and prover are highly efficient. The verifier runs in time n · polylog(T) and space polylog(T), where n is the input length. Assuming S ≥ max(n, polylog(T)), the prover runs in time T · polylog(T) and space S · polylog(T), and in many natural cases even S · (1 + o(1)). Our solution uses somewhat homomorphic encryption but, surprisingly, only requires homomorphic evaluation of arithmetic circuits having multiplicative depth (which is a main bottleneck in homomorphic encryption) log log(T) + O(1).
Prior works based on standard assumptions had a T^c time prover, where c ≥ 3 (at the very least). As for the space usage, we are unaware of any work, even based on non-standard assumptions, that has space usage o(T).
Along the way to constructing our delegation scheme, we introduce several technical tools that we believe may be useful for future work.
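The client/server asymmetry at the heart of delegation can be illustrated with a classical toy check that long predates this line of work: Freivalds' probabilistic verification of an outsourced matrix product, where the client verifies in O(n^2) time per trial a result that costs O(n^3) to recompute. This is an illustrative sketch only, not the paper's protocol:

```python
import random

def freivalds_check(A, B, C, trials=30):
    """Probabilistically verify that C == A @ B over the integers.

    Each trial multiplies by a random 0/1 vector r and checks
    A(Br) == Cr, costing O(n^2) instead of O(n^3) per trial.
    A wrong product slips through with probability at most 2^-trials.
    """
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # the server's answer is certainly wrong
    return True  # accept: overwhelmingly likely correct

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]  # the correct product A @ B
bad = [[19, 22], [43, 51]]   # a server that cheated in one entry
```

The verifier never recomputes the product; it only performs matrix-vector multiplications, mirroring (in a toy way) the goal of keeping the verifier far cheaper than the delegated computation.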
DeepSecure: Scalable Provably-Secure Deep Learning
This paper proposes DeepSecure, a novel framework that enables scalable
execution of the state-of-the-art Deep Learning (DL) models in a
privacy-preserving setting. DeepSecure targets scenarios in which neither
the cloud servers that hold the DL model parameters nor the delegating
clients who own the data are willing to reveal their information. Our
framework is the first to enable accurate and scalable DL analysis of
data generated by distributed clients without sacrificing security for
efficiency. The secure DL computation in DeepSecure is
performed using Yao's Garbled Circuit (GC) protocol. We devise GC-optimized
realization of various components used in DL. Our optimized implementation
achieves more than 58-fold higher throughput per sample compared with the
best-known prior solution. In addition to our optimized GC realization, we
introduce a set of novel low-overhead pre-processing techniques which further
reduce the GC overall runtime in the context of deep learning. Extensive
evaluations of various DL applications demonstrate up to two
orders-of-magnitude additional runtime improvement achieved as a result of our
pre-processing methodology. This paper also provides mechanisms to securely
delegate GC computations to a third party in constrained embedded settings.
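As rough intuition for how a garbled circuit evaluates a gate over encrypted wire values, here is a minimal, didactic garbling of a single AND gate in Python. It uses hash-based encryption with a zero-byte validity tag and omits the point-and-permute, free-XOR, and other optimizations the paper builds on; the structure is illustrative, not DeepSecure's implementation:

```python
import os
import random
import hashlib

def H(a, b):
    # Hash of the two input-wire labels, used as a one-time pad
    return hashlib.sha256(a + b).digest()

def xor(x, y):
    return bytes(p ^ q for p, q in zip(x, y))

def garble_and_gate():
    """Garbler: assign a random 16-byte label to each wire value (0/1),
    then encrypt the correct output label under every input label pair."""
    wa = {v: os.urandom(16) for v in (0, 1)}  # input wire a
    wb = {v: os.urandom(16) for v in (0, 1)}  # input wire b
    wc = {v: os.urandom(16) for v in (0, 1)}  # output wire
    table = []
    for a in (0, 1):
        for b in (0, 1):
            # plaintext = output label || 16 zero bytes (validity tag)
            table.append(xor(H(wa[a], wb[b]), wc[a & b] + b"\x00" * 16))
    random.shuffle(table)  # hide which row corresponds to which inputs
    return table, wa, wb, wc

def evaluate(table, la, lb):
    """Evaluator: holds exactly one label per input wire, so only one
    row decrypts to a plaintext with the all-zero tag."""
    pad = H(la, lb)
    for row in table:
        pt = xor(pad, row)
        if pt[16:] == b"\x00" * 16:
            return pt[:16]  # the output wire label, value still hidden
    raise ValueError("no row decrypted")

table, wa, wb, wc = garble_and_gate()
assert evaluate(table, wa[1], wb[1]) == wc[1]  # AND(1, 1) -> label for 1
```

The evaluator learns one output label without learning the inputs or the other labels, which is the property GC-based secure DL inference relies on gate by gate.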
Unifying Quantum Verification and Error-Detection: Theory and Tools for Optimisations
With the recent availability of cloud quantum computing services, the
question of verifying quantum computations delegated by a client to a quantum
server is becoming of practical interest. While Verifiable Blind Quantum
Computing (VBQC) has emerged as one of the key approaches to address this
challenge, current protocols still need to be optimised before they are truly
practical.
To this end, we establish a fundamental correspondence between
error-detection and verification and provide sufficient conditions to both
achieve security in the Abstract Cryptography framework and optimise resource
overheads of all known VBQC-based protocols. As a direct application, we
demonstrate how to systematise the search for new efficient and robust
verification protocols for computations. While we have chosen
Measurement-Based Quantum Computing (MBQC) as the working model for the
presentation of our results, one could expand the domain of applicability of
our framework via known direct translations between the circuit model and MBQC.
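The connection between error-detection and verification can be caricatured classically: hide trap computations with client-known outcomes among the real delegated tasks, and reject the whole batch if any trap comes back wrong. This toy sketch is not a quantum protocol (VBQC hides the traps via blindness rather than shuffling), and all names are illustrative:

```python
import random

def delegate_with_traps(tasks, server, n_traps=50):
    """Classical caricature of trap-based verification: the client
    precomputes the answers to n_traps dummy inputs and aborts if the
    untrusted server corrupts any of them."""
    traps = [random.randrange(10**6) for _ in range(n_traps)]
    batch = tasks + traps
    random.shuffle(batch)              # server cannot tell real from trap
    results = {x: server(x) for x in batch}
    for t in traps:
        if results[t] != t * t:        # client knows each trap's answer
            raise RuntimeError("trap check failed: computation rejected")
    return [results[x] for x in tasks]

honest_server = lambda x: x * x        # the delegated job: squaring
cheating_server = lambda x: x * x + 1  # corrupts every answer
```

A server that tampers without knowing which inputs are traps is caught with probability growing rapidly in the number of corrupted traps, which is the detection-implies-verification intuition the paper makes precise.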
Enabling GPU Support for the COMPSs-Mobile Framework
Using the GPUs embedded in mobile devices allows for increasing the performance of the applications running on them while reducing the energy consumption of their execution. This article presents a task-based solution for adaptive, collaborative heterogeneous computing in mobile cloud environments. To implement our proposal, we extend the COMPSs-Mobile framework – an implementation of the COMPSs programming model for building mobile applications that offload part of the computation to the Cloud – to support offloading computation to GPUs through OpenCL. To evaluate our solution, we subject the prototype to three benchmark applications representing different application patterns. This work is partially supported by the Joint-Laboratory on Extreme Scale Computing (JLESC), by the European Union through the Horizon 2020 research and innovation programme under contract 687584 (TANGO Project), by the Spanish Government (TIN2015-65316-P, BES-2013-067167, EEBB-2016-11272, SEV-2011-00067) and the Generalitat de Catalunya (2014-SGR-1051).
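The task-based offloading idea can be caricatured in plain Python: register per-device implementations of a task and let a tiny runtime pick one by a cost estimate. All names below are hypothetical; COMPSs-Mobile's actual annotations, scheduler, and OpenCL kernels are far richer:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical registry mimicking task-based heterogeneous scheduling:
# each task name maps to a list of (device, cost, implementation) tuples.
IMPLEMENTATIONS = {}

def task(name, device, cost):
    """Decorator registering one device-specific implementation of a task."""
    def register(fn):
        IMPLEMENTATIONS.setdefault(name, []).append((device, cost, fn))
        return fn
    return register

@task("saxpy", device="cpu", cost=10.0)
def saxpy_cpu(a, xs, ys):
    return [a * x + y for x, y in zip(xs, ys)]

@task("saxpy", device="gpu", cost=1.0)  # stand-in: a real port would launch an OpenCL kernel
def saxpy_gpu(a, xs, ys):
    return [a * x + y for x, y in zip(xs, ys)]

def run(name, *args):
    """Pick the cheapest registered implementation and run it asynchronously."""
    device, _, fn = min(IMPLEMENTATIONS[name], key=lambda impl: impl[1])
    with ThreadPoolExecutor(max_workers=1) as pool:
        return device, pool.submit(fn, *args).result()
```

Keeping the task interface device-agnostic, so the runtime can freely choose CPU, GPU, or Cloud, is the design point the COMPSs extension targets.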