30 research outputs found
Secure Distributed Matrix Computation with Discrete Fourier Transform
We consider the problem of secure distributed matrix computation (SDMC), where a user queries a function of data matrices generated at distributed source nodes. We assume the availability of N honest-but-curious computation servers, which are connected to the sources, the user, and each other through orthogonal and reliable communication links. Our goal is to minimize the amount of data that must be transmitted from the sources to the servers, called the upload cost, while guaranteeing that no T colluding servers can learn any information about the source matrices, and the user cannot learn any information beyond the computation result. We first focus on secure distributed matrix multiplication (SDMM), considering two matrices, and propose a novel polynomial coding scheme using the properties of the finite field discrete Fourier transform, which achieves an upload cost significantly lower than the existing results in the literature. We then generalize the proposed scheme to include straggler mitigation, and to the multiplication of multiple matrices while keeping the input matrices, the intermediate computation results, as well as the final result secure against any T colluding servers. We also consider a special case, called computation with own data, where the data matrices used for computation belong to the user. In this case, we drop the security requirement against the user, and show that the proposed scheme achieves the minimal upload cost. We then propose methods for performing other common matrix computations securely on distributed servers, including changing the parameters of secret sharing, matrix transpose, matrix exponentiation, solving a linear system, and matrix inversion, which are then used to show how arbitrary matrix polynomials can be computed securely on distributed servers using the proposed procedures.
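The core DFT idea can be illustrated with a toy sketch. This is not the paper's exact scheme: it uses scalars instead of matrix blocks, a single colluding server (T = 1), and hypothetical parameters (p = 13, N = 3). The key property is that when shares are evaluated at N-th roots of unity in GF(p), the powers of the root sum to zero, so a single inverse-DFT average recovers the constant term of the product polynomial.

```python
# Toy sketch (not the paper's full scheme): secret sharing over GF(p)
# with evaluation points at the N-th roots of unity, so the product's
# constant term is recovered by a simple inverse-DFT average.
import random

p = 13       # illustrative prime field; N must divide p - 1
N = 3        # number of servers = order of the root of unity
omega = 3    # 3^3 = 27 = 1 (mod 13), so omega is a primitive 3rd root

def share(secret):
    """Shamir-style shares f(omega^i) = secret + r*omega^i (T = 1)."""
    r = random.randrange(p)
    return [(secret + r * pow(omega, i, p)) % p for i in range(N)]

A, B = 5, 7
shares_A, shares_B = share(A), share(B)

# Each server multiplies its shares of A and B locally.
products = [(a * b) % p for a, b in zip(shares_A, shares_B)]

# Inverse DFT at index 0: (1/N) * sum of the products equals the
# constant term of h(x) = f(x)g(x), i.e. A*B, because
# sum_i omega^{ik} = 0 for every k not divisible by N, and deg h = 2 < N.
inv_N = pow(N, -1, p)
result = (sum(products) * inv_N) % p
assert result == (A * B) % p
```

Note that every server's share is uniformly random on its own (one colluding server learns nothing), yet the user reconstructs A*B from a plain sum, without interpolating the full product polynomial.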
Sparsity and Privacy in Secret Sharing: A Fundamental Trade-Off
This work investigates the design of sparse secret sharing schemes that
encode a sparse private matrix into sparse shares. This investigation is
motivated by distributed computing, where the multiplication of sparse and
private matrices is moved from a computationally weak main node to untrusted
worker machines. Classical secret-sharing schemes produce dense shares.
However, sparsity can help speed up the computation. We show that, for matrices
with i.i.d. entries, sparsity in the shares comes at a fundamental cost of
weaker privacy. We derive a fundamental tradeoff between sparsity and privacy
and construct optimal sparse secret sharing schemes that produce shares that
leak the minimum amount of information for a desired sparsity of the shares. We
apply our schemes to distributed sparse and private matrix multiplication
schemes with no colluding workers while tolerating stragglers. For the setting
of two non-communicating clusters of workers, we design a sparse one-time pad
so that no private information is leaked to a cluster of untrusted and
colluding workers, and the shares with bounded but non-zero leakage are
assigned to a cluster of partially trusted workers. We conclude by discussing
the necessity of using permutations for matrices with correlated entries.
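The sparsity-privacy tension can be seen in a small numerical sketch. The parameters below (field size, densities, masking probability) are hypothetical and chosen only for illustration, not taken from the paper: a dense one-time pad produces near-fully-dense shares, while a "sparse pad" that masks each entry only with probability q keeps the share sparse at the cost of leaking the unmasked positions.

```python
# Illustrative sketch with hypothetical parameters: compare share
# density under a dense one-time pad versus a sparse pad.
import random

random.seed(0)
p = 31      # prime field size (assumption for the demo)
n = 1000    # vector length
s = 0.05    # sparsity of the secret (fraction of nonzero entries)

secret = [random.randrange(1, p) if random.random() < s else 0
          for _ in range(n)]

# Dense pad: a uniform key makes the share uniform over GF(p),
# so roughly a (1 - 1/p) fraction of its entries are nonzero.
dense_key = [random.randrange(p) for _ in range(n)]
dense_share = [(x + k) % p for x, k in zip(secret, dense_key)]

# Sparse pad: mask each position independently with probability q.
# Positions where the key is zero leak the secret entry there.
q = 0.2
sparse_key = [random.randrange(1, p) if random.random() < q else 0
              for _ in range(n)]
sparse_share = [(x + k) % p for x, k in zip(secret, sparse_key)]

def density(v):
    return sum(1 for x in v if x != 0) / len(v)

print(f"secret density:   {density(secret):.3f}")
print(f"dense-pad share:  {density(dense_share):.3f}")   # close to 1
print(f"sparse-pad share: {density(sparse_share):.3f}")  # close to q + s
```

The dense share preserves perfect privacy but destroys sparsity; the sparse share stays usefully sparse while leaking exactly the unmasked coordinates, which is the kind of bounded leakage the trade-off in this work quantifies.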
Distributed and Private Coded Matrix Computation with Flexible Communication Load
Tensor operations, such as matrix multiplication, are central to large-scale
machine learning applications. For user-driven tasks these operations can be
carried out on a distributed computing platform with a master server at the
user side and multiple workers in the cloud operating in parallel. For
distributed platforms, it has been recently shown that coding over the input
data matrices can reduce the computational delay, yielding a trade-off between
recovery threshold and communication load. In this paper we impose an
additional security constraint on the data matrices and assume that workers can
collude to eavesdrop on the content of these data matrices. Specifically, we
introduce a novel class of secure codes, referred to as secure generalized
PolyDot codes, that generalizes previously published non-secure versions of
these codes for matrix multiplication. These codes extend the state-of-the-art
by allowing a flexible trade-off between recovery threshold and communication
load for a fixed maximum number of colluding workers.
Comment: 8 pages, 6 figures, submitted to 2019 IEEE International Symposium on Information Theory (ISIT).
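The non-secure building block that PolyDot-style codes generalize can be sketched as follows. This is a minimal polynomial code for C = AB with hypothetical sizes (m = 2 blocks, 4x4 matrices) and without the security terms: A is split by rows, B by columns, each is encoded as a matrix polynomial, and the master interpolates the workers' products to recover every block A_i B_j.

```python
# Minimal polynomial-code sketch (non-secure baseline, illustrative
# parameters): workers evaluate pA(x) @ pB(x) at distinct points, and
# the master interpolates to recover all block products A_i @ B_j.
import numpy as np

m = 2                                  # row blocks of A, column blocks of B
A = np.arange(16.0).reshape(4, 4)
B = np.arange(16.0).reshape(4, 4)[::-1]

A_blocks = np.split(A, m)              # A_0, A_1 (each 2x4)
B_blocks = np.split(B, m, axis=1)      # B_0, B_1 (each 4x2)

def worker(x):
    """One worker's task: evaluate the encoded product at point x."""
    pA = sum(Ai * x**i for i, Ai in enumerate(A_blocks))
    pB = sum(Bj * x**(m * j) for j, Bj in enumerate(B_blocks))
    return pA @ pB                     # polynomial of degree m*m - 1 = 3

# Recovery threshold m^2 = 4: any 4 distinct evaluation points suffice.
xs = [1.0, 2.0, 3.0, 4.0]
results = [worker(x) for x in xs]

# Interpolate the coefficient matrices: the coefficient of x^{i + m*j}
# is exactly A_i @ B_j.
V = np.vander(xs, m * m, increasing=True)
coeffs = np.linalg.solve(V, np.stack([r.ravel() for r in results]))
blocks = [c.reshape(2, 2) for c in coeffs]   # blocks[i + m*j] = A_i @ B_j
C = np.block([[blocks[0], blocks[2]],
              [blocks[1], blocks[3]]])
assert np.allclose(C, A @ B)
```

Secure variants add random masking blocks at higher-degree terms (raising the recovery threshold), and PolyDot-style constructions further split along the inner dimension, which is what produces the recovery-threshold versus communication-load trade-off described in the abstract.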