Using Trusted Execution Environments for Secure Stream Processing of Medical Data
Processing sensitive data, such as those produced by body sensors, on
third-party untrusted clouds is particularly challenging without compromising
the privacy of the users generating them. Typically, these sensors generate
large quantities of continuous data in a streaming fashion. Such vast amounts
of data must be processed efficiently and securely, even under strong adversarial
models. The recent introduction in the mass-market of consumer-grade processors
with Trusted Execution Environments (TEEs), such as Intel SGX, paves the way to
implement solutions that overcome less flexible approaches, such as those atop
homomorphic encryption. We present a secure streaming processing system built
on top of Intel SGX to showcase the viability of this approach with a system
specifically fitted for medical data. We design and fully implement a prototype
system that we evaluate with several realistic datasets. Our experimental
results show that the proposed system achieves modest overhead compared to
vanilla Spark while offering additional protection guarantees under powerful
attackers and threat models.
Comment: 19th International Conference on Distributed Applications and
Interoperable Systems
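The enclave-based design the abstract describes can be illustrated with a toy sketch: a sensor encrypts readings before streaming them to an untrusted host, and only a notional "enclave" function holding the key decrypts and computes windowed aggregates. Everything here is hypothetical (the key, the HMAC-counter-mode toy cipher, the function names); it is not the paper's SGX/Spark implementation, just a minimal illustration of keeping plaintext inside a trust boundary.

```python
import hmac
import hashlib
import struct

# Hypothetical key, assumed provisioned to the enclave via remote attestation.
KEY = b"shared-secret-provisioned-via-attestation"

def keystream_block(key: bytes, counter: int) -> bytes:
    # Toy stream cipher: HMAC-SHA256 in counter mode (sketch only,
    # not a production AEAD scheme).
    return hmac.new(key, struct.pack(">Q", counter), hashlib.sha256).digest()

def xor_crypt(key: bytes, counter: int, data: bytes) -> bytes:
    # XOR with the keystream; encryption and decryption are symmetric.
    ks = keystream_block(key, counter)
    return bytes(a ^ b for a, b in zip(data, ks))

# --- Untrusted side: the sensor encrypts readings before streaming them ---
readings = [72, 75, 74, 80, 78]  # hypothetical heart-rate samples
ciphertexts = [xor_crypt(KEY, i, struct.pack(">H", r))
               for i, r in enumerate(readings)]

# --- Trusted side: the notional enclave decrypts and aggregates windows ---
def enclave_windowed_mean(cts, window=3):
    # Plaintext values exist only inside this function's scope,
    # standing in for the enclave's protected memory.
    values = [struct.unpack(">H", xor_crypt(KEY, i, ct))[0]
              for i, ct in enumerate(cts)]
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

print(enclave_windowed_mean(ciphertexts))
```

The host only ever handles ciphertexts and the released aggregates, which is the property the SGX-based system provides with hardware isolation rather than this toy cipher.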
SoK: A Systematic Review of TEE Usage for Developing Trusted Applications
Trusted Execution Environments (TEEs) are a feature of modern central
processing units (CPUs) that aim to provide a high assurance, isolated
environment in which to run workloads that demand both confidentiality and
integrity. Hardware and software components in the CPU isolate workloads,
commonly referred to as Trusted Applications (TAs), from the main operating
system (OS). This article aims to analyse the TEE ecosystem, determine its
usability, and suggest improvements where necessary to make adoption easier. To
better understand TEE usage, we gathered academic and practical examples from a
total of 223 references. We summarise the literature and provide a publication
timeline, along with insights into the evolution of TEE research and
deployment. We categorise TAs into major groups and analyse the tools available
to developers. Lastly, we evaluate trusted container projects, test
performance, and identify the requirements for migrating applications inside
them.
Comment: In The 18th International Conference on Availability, Reliability and
Security (ARES 2023), August 29 -- September 01, 2023, Benevento, Italy. 15
pages
A Hybrid Approach to Privacy-Preserving Federated Learning
Federated learning facilitates the collaborative training of models without
the sharing of raw data. However, recent attacks demonstrate that simply
maintaining data locality during training processes does not provide sufficient
privacy guarantees. Rather, we need a federated learning system capable of
preventing inference over both the messages exchanged during training and the
final trained model while ensuring the resulting model also has acceptable
predictive accuracy. Existing federated learning approaches either use secure
multiparty computation (SMC), which is vulnerable to inference, or differential
privacy, which can lead to low accuracy when there are many parties each
holding relatively small amounts of data. In this paper, we present an alternative
approach that utilizes both differential privacy and SMC to balance these
trade-offs. Combining differential privacy with secure multiparty computation
enables us to reduce the growth of noise injection as the number of parties
increases without sacrificing privacy while maintaining a pre-defined rate of
trust. Our system is therefore a scalable approach that protects against
inference threats and produces models with high accuracy. Additionally, our
system can be used to train a variety of machine learning models, which we
validate with experimental results on 3 different machine learning algorithms.
Our experiments demonstrate that our approach outperforms state-of-the-art
solutions.
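The noise-reduction argument above can be sketched numerically: if secure aggregation guarantees that only the sum of contributions is revealed, each party can add noise of standard deviation sigma/sqrt(n) instead of the full sigma that naive local DP would require, so the aggregate noise matches a single central-DP draw. This is a simplified illustration (scalar updates, pairwise additive masks, full trust in all parties), not the paper's actual protocol or its trust-rate parameter.

```python
import random
import math

random.seed(0)
n_parties = 10
sigma = 1.0  # hypothetical target noise std for the aggregate

# Each party holds a local model update (a scalar gradient for simplicity).
updates = [random.gauss(0.5, 0.1) for _ in range(n_parties)]

# Secure aggregation via pairwise additive masks: parties i and j agree on
# a random mask m; i adds +m, j adds -m. Masks cancel in the sum, so the
# aggregator learns only the total, never an individual contribution.
masks = [0.0] * n_parties
for i in range(n_parties):
    for j in range(i + 1, n_parties):
        m = random.gauss(0, 1)
        masks[i] += m
        masks[j] -= m

# Because SMC hides individual values, each party only adds noise with
# std sigma / sqrt(n); the summed noise then has std sigma, i.e. one
# central-DP-scale draw instead of n full-scale local draws.
noise = [random.gauss(0, sigma / math.sqrt(n_parties))
         for _ in range(n_parties)]
masked = [u + e + m for u, e, m in zip(updates, noise, masks)]

# Masks cancel: the aggregate equals sum(updates) + sum(noise).
aggregate = sum(masked)

# Naive local DP for comparison: full-scale noise per party, so the
# aggregate noise std grows as sigma * sqrt(n).
naive = [u + random.gauss(0, sigma) for u in updates]
```

The gap between the two regimes is exactly the sqrt(n) factor the abstract refers to when it says combining DP with SMC "reduces the growth of noise injection as the number of parties increases."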