30 research outputs found
Invariant Aggregator for Defending against Federated Backdoor Attacks
Federated learning enables training high-utility models across several
clients without directly sharing their private data. As a downside, the
federated setting makes the model vulnerable to various adversarial attacks in
the presence of malicious clients. Despite the theoretical and empirical
success in defending against attacks that aim to degrade models' utility,
defense against backdoor attacks that increase model accuracy on backdoor
samples exclusively without hurting the utility on other samples remains
challenging. To this end, we first analyze the failure modes of existing
defenses over a flat loss landscape, which is common for well-designed neural
networks such as ResNet (He et al., 2015) but is often overlooked by previous
works. Then, we propose an invariant aggregator that redirects the aggregated
update to invariant directions that are generally useful via selectively
masking out the update elements that favor a few, possibly malicious, clients.
Theoretical results suggest that our approach provably mitigates backdoor
attacks and remains effective over flat loss landscapes. Empirical results on
three datasets with different modalities and varying numbers of clients further
demonstrate that our approach mitigates a broad class of backdoor attacks with
a negligible cost on the model utility.Comment: AISTATS 2024 camera-read
PRO-ORAM: Constant Latency Read-Only Oblivious RAM
Oblivious RAM is a well-known cryptographic primitive to hide data
access patterns. However, the best known ORAM schemes require logarithmic
computation time per access in the general case, which makes them infeasible
for use in real-world applications. In practice, hiding data access patterns
should incur a constant latency per access.
In this work, we present PRO-ORAM --- an ORAM construction that
achieves constant latencies per access in a large class of applications. PRO-ORAM theoretically and empirically
guarantees this for read-only data access
patterns, wherein data is written once followed by read requests.
It makes hiding data access patterns practical
for read-only workloads, incurring sub-second computational latencies
per access for data blocks of 256 KB, over large (gigabyte-sized)
datasets. PRO-ORAM supports throughputs of tens to hundreds of MBps
for fetching blocks, which exceeds network bandwidth available to
average users today. Our experiments suggest that the dominant factor in
the latency offered by PRO-ORAM is the inherent network cost of
transferring the final blocks, rather than the computational latency of
the protocol. At its heart, PRO-ORAM utilizes key observations
enabling an aggressively parallelized algorithm of an ORAM construction and
a permutation operation, as well as the use of a trusted computing
technique (Intel SGX) that not only provides safety but also offers the
advantage of lowering communication costs.
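The abstract does not spell out the construction, but the reason the read-only setting helps can be illustrated with a toy sketch (not PRO-ORAM itself): data written once is stored under a secret random permutation, so a sequence of reads of distinct blocks reveals only uniformly random physical positions to the untrusted server. The class name and structure below are hypothetical:

```python
import secrets

class ReadOnlyStoreSketch:
    """Toy illustration (not PRO-ORAM itself): write-once data is placed
    in untrusted storage under a secret random permutation, so reads of
    distinct logical blocks expose only random, unlinkable physical
    slots. Repeated reads of the *same* block would leak the repetition,
    which is why real constructions re-permute in the background or add
    a small trusted cache."""

    def __init__(self, blocks):
        n = len(blocks)
        # Secret permutation: logical index -> physical slot
        # (Fisher-Yates shuffle with a cryptographic RNG).
        self.perm = list(range(n))
        for i in range(n - 1, 0, -1):
            j = secrets.randbelow(i + 1)
            self.perm[i], self.perm[j] = self.perm[j], self.perm[i]
        self.server = [None] * n               # untrusted storage
        for logical, data in enumerate(blocks):
            self.server[self.perm[logical]] = data

    def read(self, logical):
        # The server observes only self.perm[logical], a random slot.
        return self.server[self.perm[logical]]
```

Each `read` is a single lookup, which is the intuition behind constant latency per access; the expensive step, maintaining fresh permutations, is what PRO-ORAM parallelizes aggressively inside the SGX enclave.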