PRO-ORAM: Constant Latency Read-Only Oblivious RAM
Oblivious RAM (ORAM) is a well-known cryptographic primitive for hiding data
access patterns. However, the best known ORAM schemes require logarithmic
computation time per access in the general case, which makes them infeasible for real-world applications. In practice, hiding data access patterns should incur a constant
latency per access.
In this work, we present PRO-ORAM --- an ORAM construction that
achieves constant latencies per access in a large class of applications. PRO-ORAM theoretically and empirically
guarantees this for read-only data access
patterns, wherein data is written once followed by read requests.
It makes hiding data access patterns practical
for read-only workloads, incurring sub-second computational latencies
per access for data blocks of 256 KB, over large (gigabyte-sized)
datasets. PRO-ORAM supports throughputs of tens to hundreds of MBps
for fetching blocks, which exceeds the network bandwidth available to
average users today. Our experiments suggest that the dominant factor in
the latency offered by PRO-ORAM is the network throughput of
transferring the final blocks, rather than the computational latencies of
the protocol. At its heart, PRO-ORAM utilizes key observations that
enable an aggressively parallelized algorithm combining an ORAM construction with
a permutation operation, as well as the use of a trusted computing
technique (Intel SGX) that not only provides safety but also lowers
communication costs.
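The access-pattern-hiding goal described above can be illustrated with the trivial linear-scan baseline that logarithmic-time and constant-latency ORAM constructions improve upon. The sketch below is our own minimal illustration, not PRO-ORAM's algorithm: to hide which block is wanted, the reader touches every block, so every query produces an identical observable access pattern, at O(n) cost per read.

```python
# Illustrative sketch (NOT PRO-ORAM's construction): the trivial
# linear-scan ORAM baseline. Every read touches all n blocks, so an
# observer learns nothing about which block was requested.
def oblivious_read(blocks, wanted_index):
    """Return blocks[wanted_index] while accessing every block."""
    result = None
    for i, block in enumerate(blocks):
        # A real implementation would use a constant-time select here;
        # the point is only that every index is accessed on every read.
        if i == wanted_index:
            result = block
    return result

store = [b"block-%d" % i for i in range(8)]
assert oblivious_read(store, 3) == b"block-3"
```

PRO-ORAM's contribution is avoiding this O(n) per-access cost (and the logarithmic cost of general ORAM) for read-only workloads, achieving constant latency per access.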
Re-aligning Shadow Models can Improve White-box Membership Inference Attacks
Machine learning models have been shown to leak sensitive information about
their training datasets. As models are increasingly deployed on devices to
automate tasks and power new applications, concerns have been raised that the
resulting white-box access to a model's parameters, as opposed to the black-box setting, which
only provides query access to the model, increases the attack surface. Directly
extending the shadow modelling technique from the black-box to the white-box
setting has been shown, in general, not to perform better than black-box only
attacks. A key reason is misalignment, a known characteristic of deep neural
networks. We here present the first systematic analysis of the causes of
misalignment in shadow models and show the use of a different weight
initialisation to be the main cause of shadow model misalignment. Second, we
extend several re-alignment techniques, previously developed in the model
fusion literature, to the shadow modelling context, where the goal is to
re-align the layers of a shadow model to those of the target model. We show
re-alignment techniques to significantly reduce the measured misalignment
between the target and shadow models. Finally, we perform a comprehensive
evaluation of white-box membership inference attacks (MIA). Our analysis
reveals that (1) MIAs suffer from misalignment between shadow models, but that
(2) re-aligning the shadow models improves, sometimes significantly, MIA
performance. On the CIFAR10 dataset with a false positive rate of 1\%,
white-box MIA using re-aligned shadow models improves the true positive rate by
4.5\%. Taken together, our results highlight that on-device deployment increases
the attack surface and that the newly available information can be used by an
attacker.
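The shadow-modelling idea underlying these attacks can be sketched as follows. This is a minimal, hypothetical illustration using per-example losses only; the paper's white-box attacks additionally exploit model internals, and its re-alignment techniques operate on shadow-model weights, neither of which is shown here. All names and loss values below are our own toy assumptions.

```python
# Minimal sketch of shadow-model membership inference (illustrative only).
# A shadow model mimics the target model; the losses it assigns to known
# members vs. known non-members calibrate a threshold that is then applied
# to losses from the target model.
import statistics

def calibrate_threshold(member_losses, nonmember_losses):
    """Pick a loss threshold halfway between the two group means."""
    return (statistics.mean(member_losses)
            + statistics.mean(nonmember_losses)) / 2

def infer_membership(target_loss, threshold):
    """Predict 'member' when the target model's loss falls below threshold."""
    return target_loss < threshold

# Hypothetical per-example losses from one shadow model:
shadow_member_losses = [0.05, 0.10, 0.08]      # records in its training set
shadow_nonmember_losses = [0.90, 1.20, 1.05]   # held-out records
t = calibrate_threshold(shadow_member_losses, shadow_nonmember_losses)
assert infer_membership(0.07, t)       # low loss  -> likely a member
assert not infer_membership(1.10, t)   # high loss -> likely a non-member
```

The paper's point is that in the white-box setting this calibration degrades when shadow and target models are misaligned (e.g., due to different weight initialisations), and that re-aligning shadow-model layers to the target recovers, and sometimes significantly improves, attack performance.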