A Randomized Kernel-Based Secret Image Sharing Scheme
This paper proposes a (k, n)-threshold secret image sharing scheme that
offers flexibility in meeting contrasting demands such as information
security and storage efficiency with the help of a randomized kernel (binary
matrix) operation. A secret image is split into n shares such that any k or
more shares (k <= n) can be used to reconstruct the image. Each share is at
most the size of the secret image. Security and
share sizes are solely determined by the kernel of the scheme. The kernel
operation is optimized in terms of the security and computational requirements.
The storage overhead of the kernel can further be made independent of its size
by efficiently storing it as a sparse matrix. Moreover, the scheme is free from
any kind of single point of failure (SPOF).

Comment: Accepted in IEEE International Workshop on Information Forensics and
Security (WIFS) 201
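The core idea of splitting an image into shares that individually reveal nothing can be illustrated with a minimal sketch. The paper's kernel-based (k, n) construction is not reproduced here; the sketch below shows only the simpler (n, n) special case, where a secret byte string (e.g. flattened pixel data) is masked with random shares via XOR and all n shares are required for reconstruction:

```python
import os

def make_shares(secret: bytes, n: int) -> list[bytes]:
    """Split `secret` into n XOR-additive shares; all n shares are
    needed to reconstruct -- the (n, n) special case of threshold
    sharing, NOT the paper's randomized-kernel (k, n) scheme."""
    # n - 1 shares are uniformly random, so each share alone leaks nothing.
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = bytes(secret)
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)  # final share fixes the XOR to equal the secret
    return shares

def reconstruct(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the secret."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out
```

Note that each share here has exactly the size of the secret, consistent with the abstract's bound that a share is at most the size of the secret image; achieving a k-of-n threshold with k < n requires the richer kernel construction the paper describes.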
Chameleon: A Hybrid Secure Computation Framework for Machine Learning Applications
We present Chameleon, a novel hybrid (mixed-protocol) framework for secure
function evaluation (SFE) which enables two parties to jointly compute a
function without disclosing their private inputs. Chameleon combines the best
aspects of generic SFE protocols with the ones that are based upon additive
secret sharing. In particular, the framework performs linear operations in the
ring Z_{2^l} (integers modulo 2^l) using additively secret-shared values and
nonlinear operations using Yao's Garbled Circuits or the Goldreich-Micali-Wigderson
protocol. Chameleon departs from the common assumption of additive or linear
secret sharing models where three or more parties need to communicate in the
online phase: the framework allows two parties with private inputs to
communicate in the online phase under the assumption of a third node generating
correlated randomness in an offline phase. Almost all of the heavy
cryptographic operations are precomputed in an offline phase which
substantially reduces the communication overhead. Chameleon is both scalable
and significantly more efficient than the ABY framework (NDSS'15) it is based
on. Our framework supports signed fixed-point numbers. In particular,
Chameleon's vector dot product of signed fixed-point numbers improves the
efficiency of mining and classification of encrypted data for algorithms based
upon heavy matrix multiplications. Our evaluation of Chameleon on a 5-layer
convolutional deep neural network shows 133x and 4.2x faster executions than
Microsoft CryptoNets (ICML'16) and MiniONN (CCS'17), respectively.
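The pattern of a third node supplying correlated randomness offline so that two parties can compute online is commonly realized with Beaver multiplication triples over the additive-sharing ring. The sketch below is an illustrative, single-process simulation of that protocol idea (all "parties" run in one function here; it is not Chameleon's implementation), using the ring Z_{2^64}:

```python
import random

M = 2 ** 64  # the ring Z_{2^l} with l = 64 (illustrative choice)

def share(x: int) -> tuple[int, int]:
    """Additively secret-share x so that x0 + x1 = x (mod M)."""
    x0 = random.randrange(M)
    return x0, (x - x0) % M

def beaver_triple():
    """Offline phase: the third node samples a, b and shares a, b, a*b."""
    a, b = random.randrange(M), random.randrange(M)
    return share(a), share(b), share((a * b) % M)

def secure_mul(x_sh, y_sh):
    """Online phase: multiply shared x and y using one Beaver triple.
    Only the masked values e = x - a and f = y - b are opened, which
    reveal nothing about x or y since a and b are uniformly random."""
    (a0, a1), (b0, b1), (c0, c1) = beaver_triple()
    x0, x1 = x_sh
    y0, y1 = y_sh
    e = (x0 - a0 + x1 - a1) % M  # opened jointly by both parties
    f = (y0 - b0 + y1 - b1) % M
    # Local share computation: z = c + e*b + f*a + e*f = x*y (mod M)
    z0 = (c0 + e * b0 + f * a0 + e * f) % M
    z1 = (c1 + e * b1 + f * a1) % M
    return z0, z1
```

Because the triples are generated entirely in the offline phase, the online multiplication costs only the opening of e and f, which matches the abstract's point that precomputing the heavy cryptographic work substantially reduces online communication.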