Credit Network Payment Systems: Security, Privacy and Decentralization
A credit network models transitive trust between users and enables transactions between arbitrary pairs of users. With their flexible design and robustness against intrusions, credit networks form the basis of Sybil-tolerant social networks, spam-resistant communication protocols, and payment settlement systems. For instance, the Ripple credit network is used today by various banks worldwide as their backbone for cross-currency transactions. Open credit networks, however, expose users’ credit links as well as the transaction volumes to the public. This raises a significant privacy concern, which has largely been ignored by the research on credit networks so far.
Against this backdrop, this dissertation makes the following contributions. First, we perform a thorough study of the Ripple network that analyzes and characterizes its security and privacy issues. Second, we define a formal model for the security and privacy notions of interest in a credit network. This model lays the foundations for secure and privacy-preserving credit networks. Third, we build PathShuffle, the first protocol for atomic and anonymous transactions in credit networks that is fully compatible with the currently deployed Ripple and Stellar credit networks. Finally, we build SilentWhispers, the first provably secure and privacy-preserving transaction protocol for decentralized credit networks. SilentWhispers can be used to simulate Ripple transactions while preserving the expected security and privacy guarantees.
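The transitive-trust idea behind a credit network can be made concrete: model the network as a directed graph whose edge weights are available credit, route a payment along a path where every link can carry the amount, and settle by shifting credit along that path. The sketch below is a minimal illustration of this routing-and-settlement pattern; the function names and the flat dictionary representation are assumptions for the example, not Ripple's or Stellar's actual data model.

```python
from collections import deque

def find_payment_path(credit, sender, receiver, amount):
    """BFS for a path on which every link holds at least `amount` of credit.
    `credit` maps a directed link (u, v) to the credit available on it."""
    # Keep only links with enough capacity for this payment.
    adj = {}
    for (u, v), c in credit.items():
        if c >= amount:
            adj.setdefault(u, []).append(v)
    parent = {sender: None}
    queue = deque([sender])
    while queue:
        u = queue.popleft()
        if u == receiver:
            path = []
            while u is not None:   # walk parents back to the sender
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None  # no path with sufficient credit

def settle(credit, path, amount):
    """Shift `amount` along the path: debit each forward link and
    credit the reverse link, as in IOU-based settlement."""
    for u, v in zip(path, path[1:]):
        credit[(u, v)] -= amount
        credit[(v, u)] = credit.get((v, u), 0) + amount
```

Note that settlement consumes credit: after routing a payment, links along the path may no longer support a second payment of the same size, which is exactly why transaction volumes on open credit networks leak information.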
Privacy and Confidentiality in an e-Commerce World: Data Mining, Data Warehousing, Matching and Disclosure Limitation
The growing expansion of e-commerce and the widespread availability of online
databases raise many fears regarding loss of privacy and many statistical
challenges. Even with encryption and other nominal forms of protection for
individual databases, we still need to protect against the violation of privacy
through linkages across multiple databases. These issues parallel those that
have arisen and received some attention in the context of homeland security.
Following the events of September 11, 2001, there has been heightened attention
in the United States and elsewhere to the use of multiple government and
private databases for the identification of possible perpetrators of future
attacks, as well as an unprecedented expansion of federal government data
mining activities, many involving databases containing personal information. We
present an overview of some proposals that have surfaced for the search of
multiple databases which supposedly do not compromise possible pledges of
confidentiality to the individuals whose data are included. We also explore
their link to the related literature on privacy-preserving data mining. In
particular, we focus on the matching problem across databases and the concept
of "selective revelation" and their confidentiality implications.

Comment: Published at http://dx.doi.org/10.1214/088342306000000240 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
Conclave: secure multi-party computation on big data (extended TR)
Secure Multi-Party Computation (MPC) allows mutually distrusting parties to
run joint computations without revealing private data. Current MPC algorithms
scale poorly with data size, which makes MPC on "big data" prohibitively slow
and inhibits its practical use.
Many relational analytics queries can maintain MPC's end-to-end security
guarantee without using cryptographic MPC techniques for all operations.
Conclave is a query compiler that accelerates such queries by transforming them
into a combination of data-parallel, local cleartext processing and small MPC
steps. When parties trust others with specific subsets of the data, Conclave
applies new hybrid MPC-cleartext protocols to run additional steps outside of
MPC and improve scalability further.
Our Conclave prototype generates code for cleartext processing in Python and
Spark, and for secure MPC using the Sharemind and Obliv-C frameworks. Conclave
scales to data sets between three and six orders of magnitude larger than
state-of-the-art MPC frameworks support on their own. Thanks to its hybrid
protocols, Conclave also substantially outperforms SMCQL, the most similar
existing system.

Comment: Extended technical report for EuroSys 2019 paper.
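The core planning idea described above — running steps outside MPC whenever they only touch data their owner already holds — can be illustrated with a toy query planner. The operator representation and the single-owner rule below are simplifications invented for this sketch; they are not Conclave's actual planner or cost model.

```python
def plan(operators):
    """Partition a linear query plan into cleartext and MPC phases.

    Each operator is a pair (name, owners), where `owners` is the set of
    parties whose data the operator reads.  An operator reading only one
    party's data can run locally at that party in cleartext; anything
    combining data from multiple parties must run under MPC.
    """
    phases = []
    for name, owners in operators:
        if len(owners) == 1:
            mode = "cleartext@" + next(iter(owners))
        else:
            mode = "mpc"
        phases.append((name, mode))
    return phases
```

For example, per-party filters before a cross-party join stay in fast local processing, and only the join and the final aggregate pay the cost of cryptographic MPC, which is the source of the scalability gains the abstract reports.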
Privacy-Preserving Trust Management Mechanisms from Private Matching Schemes
Cryptographic primitives are essential for constructing privacy-preserving
communication mechanisms. There are situations in which two parties that do not
know each other need to exchange sensitive information on the Internet. Trust
management mechanisms make use of digital credentials and certificates in order
to establish trust among these strangers. We address the problem of choosing
which credentials are exchanged. During this process, each party should learn
no information about the preferences of the other party other than strictly
required for trust establishment. We present a method to reach an agreement on
the credentials to be exchanged that preserves the privacy of the parties. Our
method is based on secure two-party computation protocols for set intersection.
Namely, it is constructed from private matching schemes.

Comment: The material in this paper will be presented in part at the 8th DPM International Workshop on Data Privacy Management (DPM 2013).
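A common way to build the private set intersection underlying such matching schemes is the Diffie-Hellman-style double-blinding construction: each party blinds hashed elements with a secret exponent, and only elements blinded by both exponents can be compared. The sketch below is illustrative only — the 127-bit Mersenne prime is far too small for real security, hashing into the group is simplified, and both secret exponents live in one function purely so the example runs end to end (in a real protocol each party keeps its exponent to itself).

```python
import hashlib
import secrets

# Toy group: the multiplicative group mod a 127-bit Mersenne prime.
# WARNING: illustrative parameters, far too small for real security.
P = 2**127 - 1

def h2g(x):
    """Hash a string into the group (squaring keeps values in the
    quadratic-residue subgroup, so blinded values stay comparable)."""
    d = int.from_bytes(hashlib.sha256(x.encode()).digest(), "big")
    return pow(d % P, 2, P)

def psi(set_a, set_b):
    """Compute the intersection from double-blinded values H(x)^(a*b),
    which neither party can invert on its own."""
    a = secrets.randbelow(P - 2) + 1   # Alice's secret exponent
    b = secrets.randbelow(P - 2) + 1   # Bob's secret exponent
    # Alice sends H(x)^a for her elements; Bob raises each to b.
    a_double = {x: pow(pow(h2g(x), a, P), b, P) for x in set_a}
    # Bob sends H(y)^b for his elements; Alice raises each to a.
    b_double = {pow(pow(h2g(y), b, P), a, P) for y in set_b}
    # Equal double-blinded values identify exactly the common elements.
    return {x for x, v in a_double.items() if v in b_double}
```

Because H(x)^(a*b) is equal for the two parties exactly when the underlying element is, each side learns which of its own credentials are matched and nothing about the rest of the other party's set — the property the trust-negotiation application above requires.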
When the signal is in the noise: Exploiting Diffix's Sticky Noise
Anonymized data is highly valuable to both businesses and researchers. A
large body of research has however shown the strong limits of the
de-identification release-and-forget model, where data is anonymized and
shared. This has led to the development of privacy-preserving query-based
systems. Based on the idea of "sticky noise", Diffix has recently been proposed
as a novel query-based mechanism that, on its own, satisfies the EU Article 29
Working Party's definition of anonymization. According to its authors, Diffix adds less
noise to answers than solutions based on differential privacy while allowing
for an unlimited number of queries.
This paper presents a new class of noise-exploitation attacks, exploiting the
noise added by the system to infer private information about individuals in the
dataset. Our first differential attack uses samples extracted from Diffix in a
likelihood ratio test to discriminate between two probability distributions. We
show that using this attack against a synthetic best-case dataset allows us to
infer private information with 89.4% accuracy using only 5 attributes. Our
second cloning attack uses dummy conditions whose effect on the query output
depends strongly on the value of the private attribute. Using
this attack on four real-world datasets, we show that we can infer private
attributes of at least 93% of the users in the dataset with accuracy between
93.3% and 97.1%, issuing a median of 304 queries per user. We show how to
optimize this attack, targeting 55.4% of the users and achieving 91.7%
accuracy, using a maximum of only 32 queries per user.
Our attacks demonstrate that adding data-dependent noise, as done by Diffix,
is not sufficient to prevent inference of private attributes. We furthermore
argue that Diffix alone fails to satisfy Art. 29 WP's definition of
anonymization. [...]
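The statistical core of the differential attack described above — deciding which of two candidate distributions a set of noisy query answers was drawn from — is a standard likelihood ratio test. The sketch below assumes Gaussian noise with known variance for simplicity; Diffix's sticky noise has more structure, so this is only the shape of the test, not the paper's exact attack.

```python
import math

def gaussian_loglik(samples, mu, sigma):
    """Log-likelihood of the samples under N(mu, sigma^2)."""
    return sum(
        -0.5 * ((x - mu) / sigma) ** 2
        - math.log(sigma * math.sqrt(2 * math.pi))
        for x in samples
    )

def lr_test(samples, mu0, mu1, sigma):
    """Likelihood ratio test between two hypothesized means.
    Returns 1 if the samples are more likely under mu1 (e.g. 'the
    target record is in the query set'), else 0."""
    return int(gaussian_loglik(samples, mu1, sigma)
               > gaussian_loglik(samples, mu0, sigma))
```

Each repeated noisy answer sharpens the test, which is why the attack's accuracy grows with the number of queries issued per user and why bounding queries alone does not close the channel.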