Securely Outsourcing Large Scale Eigen Value Problem to Public Cloud
Cloud computing enables clients with limited computational power to
economically outsource their large-scale computations to a public cloud with
vast computational resources. The cloud offers massive storage, computational
power, and software that clients can use to reduce their computational
overhead and storage limitations. When outsourcing, however, the privacy of a
client's confidential data must be maintained. We have designed a protocol for
outsourcing the large-scale eigenvalue problem to a malicious cloud that
provides input/output data security, result verifiability, and client-side
efficiency. Because direct computation of all eigenvectors is prohibitively
expensive at large dimensionality, we use the power iteration method to find
the largest eigenvalue and the corresponding eigenvector of a matrix. To
protect privacy, transformations are applied to the input matrix to obtain an
encrypted matrix, which is sent to the cloud; the result returned from the
cloud is then decrypted to recover the correct solution of the eigenvalue
problem. We also propose a result-verification mechanism for detecting robust
cheating, and provide theoretical analysis and experimental results that
demonstrate the high efficiency, correctness, security, and robust cheating
resistance of the proposed protocol.
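As a toy sketch of this outsourcing pattern, the following pure-Python example masks a matrix with a diagonal similarity transform (a deliberately simple stand-in for the paper's richer transformations), has the "cloud" run power iteration on the masked matrix only, and then unmasks the eigenvector on the client side. All names and the choice of masking are illustrative assumptions, not the paper's actual construction:

```python
import random

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_iteration(A, iters=300):
    # Dominant eigenvalue/eigenvector by repeated multiplication,
    # normalizing with the infinity norm at each step.
    v = [1.0] * len(A)
    lam = 1.0
    for _ in range(iters):
        w = mat_vec(A, v)
        lam = max(w, key=abs)
        v = [x / lam for x in w]
    return lam, v

def mask(A, d):
    # Diagonal similarity transform B = D A D^{-1}: eigenvalues are
    # preserved, and an eigenvector w of B unmasks to v_i = w_i / d_i.
    n = len(A)
    return [[d[i] * A[i][j] / d[j] for j in range(n)] for i in range(n)]

# Client side: mask the private matrix with random positive scalings.
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
d = [random.uniform(0.5, 2.0) for _ in range(3)]
B = mask(A, d)

# "Cloud side": power iteration sees only the masked matrix B.
lam, w = power_iteration(B)

# Client side: unmask the eigenvector; lam itself needs no unmasking.
v = [wi / di for wi, di in zip(w, d)]
```

Because a similarity transform preserves the spectrum, the eigenvalue returned by the cloud is already the answer; only the eigenvector must be decrypted.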
Private Outsourcing of Polynomial Evaluation and Matrix Multiplication using Multilinear Maps
{\em Verifiable computation} (VC) allows a computationally weak client to
outsource the evaluation of a function on many inputs to a powerful but
untrusted server. The client invests a large amount of off-line computation and
gives an encoding of its function to the server. The server returns both an
evaluation of the function on the client's input and a proof such that the
client can verify the evaluation using substantially less effort than doing the
evaluation on its own. We consider how to privately outsource computations
using {\em privacy preserving} VC schemes whose executions reveal no
information on the client's input or function to the server. We construct VC
schemes with {\em input privacy} for univariate polynomial evaluation and
matrix multiplication and then extend them such that the {\em function privacy}
is also achieved. Our tool is the recently developed {\em multilinear maps}.
The proposed VC schemes can be used in outsourcing {\em private information
retrieval (PIR)}.
Comment: 23 pages. A preliminary version appears in the 12th International
Conference on Cryptology and Network Security (CANS 2013).
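The verifiability goal for outsourced matrix multiplication can be illustrated with a classical, multilinear-map-free technique: Freivalds' probabilistic check, which lets a client test a claimed product with O(n^2) work per round instead of recomputing it. This is a generic sketch named plainly as a different technique, not the paper's construction:

```python
import random

def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def mv(M, r):
    return [sum(row[j] * r[j] for j in range(len(r))) for row in M]

def freivalds_check(A, B, C, rounds=40):
    # Verify the claim C = A * B: for a random 0/1 vector r,
    # A * (B * r) must equal C * r.  Each round costs only three
    # matrix-vector products, and a wrong C survives a round with
    # probability at most 1/2.
    n = len(C[0])
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        if mv(A, mv(B, r)) != mv(C, r):
            return False
    return True
```

With 40 rounds the chance of accepting a wrong result is at most 2^-40, while the verifier never performs a full matrix multiplication.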
DeepSecure: Scalable Provably-Secure Deep Learning
This paper proposes DeepSecure, a novel framework that enables scalable
execution of the state-of-the-art Deep Learning (DL) models in a
privacy-preserving setting. DeepSecure targets scenarios in which neither of
the involved parties, the cloud servers that hold the DL model parameters or
the delegating clients who own the data, is willing to reveal its information.
Our framework is the first to empower accurate and scalable DL analysis of
data generated by distributed clients without sacrificing security for
efficiency. The secure DL computation in DeepSecure is
performed using Yao's Garbled Circuit (GC) protocol. We devise GC-optimized
realization of various components used in DL. Our optimized implementation
achieves more than 58-fold higher throughput per sample compared with the
best-known prior solution. In addition to our optimized GC realization, we
introduce a set of novel low-overhead pre-processing techniques which further
reduce the GC overall runtime in the context of deep learning. Extensive
evaluations of various DL applications demonstrate up to two
orders-of-magnitude additional runtime improvement achieved as a result of our
pre-processing methodology. This paper also provides mechanisms to securely
delegate GC computations to a third party in constrained embedded settings.
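As a minimal illustration of the Yao's GC machinery DeepSecure builds on, the sketch below garbles a single AND gate: random labels per wire, rows encrypted under a SHA-256 hash of the two input labels, and a zero tag so the evaluator can recognize which shuffled row decrypts correctly. All names are illustrative assumptions, and real GC implementations add optimizations (point-and-permute, free-XOR, half-gates) that are omitted here:

```python
import os, random, hashlib

def H(a, b):
    # Hash the two input labels into a 32-byte key stream.
    return hashlib.sha256(a + b).digest()

def xor(x, y):
    return bytes(p ^ q for p, q in zip(x, y))

def garble_and():
    # Garbler: two random 16-byte labels per wire; index = encoded bit.
    wa = (os.urandom(16), os.urandom(16))
    wb = (os.urandom(16), os.urandom(16))
    wo = (os.urandom(16), os.urandom(16))
    table = []
    for ba in (0, 1):
        for bb in (0, 1):
            plain = wo[ba & bb] + b"\x00" * 16  # output label + zero tag
            table.append(xor(H(wa[ba], wb[bb]), plain))
    random.shuffle(table)  # hide which row corresponds to which inputs
    return wa, wb, wo, table

def evaluate(la, lb, table):
    # Evaluator holds one label per input wire and learns only an
    # output label, never the underlying bits.
    for row in table:
        plain = xor(row, H(la, lb))
        if plain[16:] == b"\x00" * 16:
            return plain[:16]
    raise ValueError("no row decrypted")
```

Chaining such gates, with output labels feeding the next gate's inputs, is what scales this idea up to the full DL circuits DeepSecure evaluates.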
Privacy-Preserving Outsourcing of Large-Scale Nonlinear Programming to the Cloud
The increasing massive data generated by various sources has given birth to
big data analytics. Solving large-scale nonlinear programming problems (NLPs)
is one important big data analytics task that has applications in many domains
such as transport and logistics. However, NLPs are usually too computationally
expensive for resource-constrained users. Fortunately, cloud computing provides
an alternative and economical service for resource-constrained users to
outsource their computation tasks to the cloud. However, one major concern with
outsourcing NLPs is the leakage of user's private information contained in NLP
formulations and results. Although much work has been done on
privacy-preserving outsourcing of computation tasks, little attention has been
paid to NLPs. In this paper, we investigate, for the first time, secure
outsourcing of general large-scale NLPs with nonlinear constraints. A secure
and efficient transformation scheme at the user side is proposed to protect
the user's private information; at the cloud side, the generalized reduced
gradient method is applied to efficiently solve the transformed large-scale
NLPs. The
proposed protocol is implemented on a cloud computing testbed. Experimental
evaluations demonstrate that significant time can be saved for users and the
proposed mechanism has the potential for practical use.
Comment: Ang Li and Wei Du contributed equally to this work. This work was
done when Wei Du was at the University of Arkansas. 2018 EAI International
Conference on Security and Privacy in Communication Networks (SecureComm).
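The user-side transformation idea can be sketched with a toy unconstrained problem: an affine change of variables x = M y + t hides the private minimizer, the "cloud" optimizes only the transformed objective, and the user unmasks the answer. Plain gradient descent stands in for the paper's generalized reduced gradient method, and M, t, and all names are illustrative assumptions (fixed here for reproducibility rather than drawn at random):

```python
def num_grad(f, x, h=1e-6):
    # Central-difference gradient.
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def cloud_solve(g, y0, lr=0.1, iters=4000):
    # "Cloud side": gradient descent on the transformed objective only.
    y = list(y0)
    for _ in range(iters):
        grad = num_grad(g, y)
        y = [yi - lr * gi for yi, gi in zip(y, grad)]
    return y

# User side: private objective whose minimizer (3, -1) is confidential.
def f(x):
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

M = [[2.0, 1.0], [1.0, 1.0]]   # secret invertible masking matrix
t = [5.0, -2.0]                # secret shift

def affine(y):
    return [M[0][0] * y[0] + M[0][1] * y[1] + t[0],
            M[1][0] * y[0] + M[1][1] * y[1] + t[1]]

g = lambda y: f(affine(y))     # all the cloud ever sees
y_star = cloud_solve(g, [0.0, 0.0])
x_star = affine(y_star)        # user unmasks the solution
```

The cloud's view, g and its iterates, reveals neither the minimizer nor the original objective's coefficients directly; handling the paper's nonlinear constraints requires a considerably richer scheme than this sketch.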