Separating Two-Round Secure Computation From Oblivious Transfer
We consider the question of minimizing the round complexity of protocols for secure multiparty computation (MPC) with security against an arbitrary number of semi-honest parties. Very recently, Garg and Srinivasan (Eurocrypt 2018) and Benhamouda and Lin (Eurocrypt 2018) constructed such 2-round MPC protocols from minimal assumptions. This was done by showing a round-preserving reduction to the task of secure 2-party computation of the oblivious transfer functionality (OT). These constructions made a novel non-black-box use of the underlying OT protocol. The question remained whether this can be done by making only black-box use of 2-round OT. This is of theoretical and potentially also practical value, as black-box use of primitives tends to lead to more efficient constructions.
Our main result proves that such a black-box construction is impossible, namely that non-black-box use of OT is necessary. As a corollary, a similar separation holds when starting with any 2-party functionality other than OT.
As a secondary contribution, we prove several additional results that further clarify the landscape of black-box MPC with minimal interaction. In particular, we complement the separation from 2-party functionalities by presenting a complete 4-party functionality, give evidence for the difficulty of ruling out a complete 3-party functionality and for the difficulty of ruling out black-box constructions of 3-round MPC from 2-round OT, and separate a relaxed "non-compact" variant of 2-party homomorphic secret sharing from 2-round OT.
Privacy-preserving machine learning for healthcare: open challenges and future perspectives
Machine Learning (ML) has recently shown tremendous success in modeling various healthcare prediction tasks, ranging from disease diagnosis and prognosis to patient treatment. Due to the sensitive nature of medical data, privacy must be considered along the entire ML pipeline, from model training to inference. In this paper, we conduct a review of recent literature concerning Privacy-Preserving Machine Learning (PPML) for healthcare. We primarily focus on privacy-preserving training and inference-as-a-service, and perform a comprehensive review of existing trends, identify challenges, and discuss opportunities for future research directions. The aim of this review is to guide the development of private and efficient ML models in healthcare, with the prospects of translating research efforts into real-world settings.

Comment: ICLR 2023 Workshop on Trustworthy Machine Learning for Healthcare (TML4H)
Comparative Analysis of Data Security and Cloud Storage Models Using NSL KDD Dataset
Cloud computing is becoming increasingly important in many enterprises, and researchers are focusing on safeguarding it. Because of the wide variety of services it offers, cloud computing has attracted significant interest from the scientific community. The two biggest problems with cloud computing are security and privacy; the key challenge is maintaining privacy, which grows rapidly harder with the number of users. A complete security system must efficiently address each security aspect. This study provides a literature review of cloud security with respect to privacy, integrity, confidentiality and availability, along with a comparison table illustrating the differences between various security and storage models in terms of their approaches and components. This study also compares Naïve Bayes and SVM on the accuracy, recall and precision metrics using the NSL-KDD dataset.
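The classifier comparison described in this abstract can be sketched with scikit-learn. The NSL-KDD dataset is not bundled with scikit-learn and the paper's exact preprocessing is not specified, so a synthetic binary-classification dataset stands in for it here; with the real NSL-KDD data one would load the CSV, encode the categorical features, and binarize the attack labels before the same fit/score loop.

```python
# Minimal sketch: Gaussian Naive Bayes vs. SVM on accuracy, precision, recall.
# Synthetic data is used as a stand-in for NSL-KDD (an assumption, not the
# paper's actual pipeline).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Stand-in for the NSL-KDD feature matrix and binary attack/normal labels.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

results = {}
for name, model in [("Naive Bayes", GaussianNB()), ("SVM", SVC())]:
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    results[name] = {
        "accuracy": accuracy_score(y_test, pred),
        "precision": precision_score(y_test, pred),
        "recall": recall_score(y_test, pred),
    }

for name, metrics in results.items():
    print(name, {k: round(v, 3) for k, v in metrics.items()})
```

With real NSL-KDD records the categorical columns (protocol, service, flag) would need one-hot encoding, and the SVM typically benefits from feature scaling; those steps are omitted here for brevity.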