
    Efficient Privacy-Preserving Machine Learning with Lightweight Trusted Hardware

    In this paper, we propose a new secure machine learning inference platform assisted by a small dedicated security processor, which is easier to protect and deploy than today's TEEs integrated into high-performance processors. Our platform provides three main advantages over the state of the art: (i) We achieve significant performance improvements over state-of-the-art distributed Privacy-Preserving Machine Learning (PPML) protocols, using only a small security processor comparable to a discrete security chip such as the Trusted Platform Module (TPM) or to on-chip security subsystems in SoCs similar to the Apple enclave processor. In the semi-honest setting with WAN/GPU, our scheme is 4X-63X faster than Falcon (PoPETs'21) and AriaNN (PoPETs'22) and 3.8X-12X more communication efficient. We achieve even higher performance improvements in the malicious setting. (ii) Our platform guarantees security with abort against malicious adversaries under the honest-majority assumption. (iii) Our technique is not limited by the size of secure memory in a TEE and can support high-capacity modern neural networks like ResNet18 and Transformer. While previous work investigated the use of high-performance TEEs in PPML, this work is the first to show that even tiny secure hardware with very limited performance can significantly speed up distributed PPML protocols if the protocol is carefully designed for lightweight trusted hardware.
    Comment: IEEE S&P'24 submitted
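
    The core idea above, a tiny trusted component producing correlated randomness so that fast untrusted machines can run the heavy secret-sharing arithmetic, can be illustrated with Beaver-triple-assisted multiplication. The Python sketch below is not the paper's protocol: the 64-bit ring, the function names, and the two-server setup are assumptions made purely for illustration.

    import secrets

    MOD = 2 ** 64  # additive secret sharing over a 64-bit ring (assumption)

    def share(x):
        """Split x into two additive shares modulo MOD."""
        r = secrets.randbelow(MOD)
        return r, (x - r) % MOD

    def reconstruct(s0, s1):
        return (s0 + s1) % MOD

    def trusted_chip_triple():
        """Role sketched for the lightweight security processor: pre-generate a
        Beaver triple (a, b, c) with c = a*b, handing each server one share."""
        a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
        return share(a), share(b), share((a * b) % MOD)

    def beaver_multiply(x_shares, y_shares, triple):
        """Two untrusted servers multiply shared x and y using one triple.
        Only the masked openings e = x - a and f = y - b are exchanged."""
        (a0, a1), (b0, b1), (c0, c1) = triple
        (x0, x1), (y0, y1) = x_shares, y_shares
        e = (x0 - a0 + x1 - a1) % MOD  # opened jointly; reveals nothing about x
        f = (y0 - b0 + y1 - b1) % MOD  # opened jointly; reveals nothing about y
        z0 = (c0 + e * b0 + f * a0 + e * f) % MOD  # server 0's share of x*y
        z1 = (c1 + e * b1 + f * a1) % MOD          # server 1's share of x*y
        return z0, z1

    # Example: multiply 6 and 7 without either server seeing the inputs.
    z = beaver_multiply(share(6), share(7), trusted_chip_triple())
    assert reconstruct(*z) == 42

    In this style of protocol, producing the triples is cheap enough for a TPM-class chip, while the share arithmetic itself stays on fast untrusted hardware.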

    Confidential Machine Learning on Untrusted Platforms: a Survey

    With ever-growing data and the need to develop powerful machine learning models, data owners increasingly depend on various untrusted platforms (e.g., public clouds, edges, and machine learning service providers) for scalable processing or collaborative learning. Thus, sensitive data and models are in danger of unauthorized access, misuse, and privacy compromise. A relatively new body of research addresses these concerns by confidentially training machine learning models on protected data. In this survey, we summarize notable studies in this emerging area of research. With a unified framework, we highlight the critical challenges and innovations in outsourcing machine learning confidentially. We focus on cryptographic approaches for confidential machine learning (CML), primarily on model training, while also covering other directions such as perturbation-based approaches and CML in hardware-assisted computing environments. The discussion takes a holistic view of the related threat models, security assumptions, design principles, and the associated trade-offs among data utility, cost, and confidentiality.
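
    As a deliberately simple illustration of the perturbation-based direction mentioned in the survey, the Python sketch below adds calibrated Laplace noise to a statistic before it leaves the data owner. The epsilon, the sensitivity bound, and the data are assumptions for illustration, not values taken from the survey.

    import numpy as np

    rng = np.random.default_rng()

    def laplace_perturb(value, sensitivity, epsilon):
        """Standard Laplace mechanism: noise scaled to sensitivity / epsilon."""
        return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Example: release a noisy mean of a sensitive column to an untrusted cloud.
    salaries = np.array([52_000.0, 61_500.0, 48_200.0, 75_000.0])
    # Assumed bound: each salary lies in [0, 100_000], so one record changes the
    # mean by at most 100_000 / len(salaries).
    noisy_mean = laplace_perturb(salaries.mean(),
                                 sensitivity=100_000 / len(salaries),
                                 epsilon=1.0)
    print(f"true mean {salaries.mean():.0f}, released mean {noisy_mean:.0f}")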

    Server-Aided Privacy-Preserving Proximity Testing

    Proximity testing is at the core of many Location-Based online Services (LBS) that we use in our daily lives to order taxis, find places of interest nearby, and connect with people. Currently, most such services expect a user to submit their location and to trust the LBS not to abuse this information and to use it only to provide the service. Existing cases of such information being misused (e.g., by LBS employees or by criminals who breached its security) motivate the search for better solutions that ensure the privacy of user data and give users control over how their data is used. In this thesis, we address this problem using cryptographic techniques. We propose three cryptographic protocols that allow two users to perform proximity testing (check whether they are close enough to each other) with the help of two servers. In Papers 1 and 2, the servers are introduced so that the users do not have to be online at the same time: one user may submit their location to the servers and go offline, with the other user coming online later and finishing the proximity test. This drastically improves the practicality of such protocols, since users' mobile devices may not always be online. We stress that the servers in these protocols merely aid the users in performing the proximity testing, and neither server can independently extract the user data. In Paper 3, we use the servers to offload the users' computation and communication. The servers pre-generate correlated random data and send it to the users, who can use it to run a secure proximity testing protocol faster. Papers 2 and 3 are highly practical: they provide strong security guarantees and are suitable for execution on resource-constrained mobile devices. In fact, the clients' work in these protocols is close to negligible, as most of the work is done by the servers.
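
    The predicate at the heart of these protocols fits in a few lines. The Python sketch below shows only the plaintext proximity check on a discretised grid; in the thesis's protocols this comparison is evaluated on secret-shared coordinates held by the two servers, so neither server sees a location. The grid, the coordinates, and the radius are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class GridLocation:
        x: int  # coordinates on a discrete grid (cell size is an assumption)
        y: int

    def within_proximity(a: GridLocation, b: GridLocation, radius: int) -> bool:
        """True if the two users are at most `radius` cells apart. Comparing
        squared distances avoids a square root, which keeps the predicate easy
        to evaluate inside integer-based secure computation."""
        dx, dy = a.x - b.x, a.y - b.y
        return dx * dx + dy * dy <= radius * radius

    # Example: Alice and Bob test whether they are within 5 cells of each other.
    alice, bob = GridLocation(120, 87), GridLocation(123, 90)
    print(within_proximity(alice, bob, radius=5))  # True: 3*3 + 3*3 = 18 <= 25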