426 research outputs found
On the relation between Differential Privacy and Quantitative Information Flow
Differential privacy is a notion that has emerged in the community of
statistical databases, as a response to the problem of protecting the privacy
of the database's participants when performing statistical queries. The idea is
that a randomized query satisfies differential privacy if the likelihood of
obtaining a certain answer for a database is not too different from the
likelihood of obtaining the same answer on adjacent databases, i.e. databases
which differ in only one individual. Information flow is an area of
security concerned with the problem of controlling the leakage of confidential
information in programs and protocols. Nowadays, one of the most established
approaches to quantifying and reasoning about leakage is based on the
R\'enyi min-entropy version of information theory. In this paper, we critically analyze the
notion of differential privacy in light of the conceptual framework provided by
the R\'enyi min-entropy theory of information. We show that there is a close relation
between differential privacy and leakage, due to the graph symmetries induced
by the adjacency relation. Furthermore, we consider the utility of the
randomized answer, which measures its expected degree of accuracy. We focus on
certain kinds of utility functions called "binary", which have a close
correspondence with the R\'enyi min-entropy mutual information. Again, it turns out
that there can be a tight correspondence between differential privacy and
utility, depending on the symmetries induced by the adjacency relation and by
the query. Depending on these symmetries we can also build an optimal-utility
randomization mechanism while preserving the required level of differential
privacy. Our main contribution is a study of the kind of structures that can be
induced by the adjacency relation and the query, and how to use them to derive
bounds on the leakage and achieve the optimal utility.
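The adjacency-based guarantee described above is standardly formalized as follows: a randomized mechanism $\mathcal{K}$ satisfies $\epsilon$-differential privacy if, for every pair of adjacent databases $D, D'$ and every set $S$ of possible answers,

```latex
\Pr[\mathcal{K}(D) \in S] \;\le\; e^{\epsilon} \cdot \Pr[\mathcal{K}(D') \in S]
```

The graph symmetries studied in the paper arise from the adjacency relation that ranges over the pairs $(D, D')$ in this definition.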
Differentially Private Billing with Rebates
A number of established and novel business models are based on fine-grained billing, including pay-per-view, mobile messaging, voice calls, pay-as-you-drive insurance, smart metering for utility provision, private computing clouds and hosted services. These models apply fine-grained tariffs, dependent on time-of-use or place-of-use, to readings to compute a bill. We extend previously proposed billing protocols to strengthen their privacy in two key ways. First, we study the monetary amount a customer should add to their bill in order to provably hide their activities, within the differential privacy framework. Second, we propose a cryptographic protocol for oblivious billing that ensures any additional expenditure, aimed at protecting privacy, can be tracked and reclaimed in the future, thus minimising its cost. Our proposals can be used together or separately and are backed by provable guarantees of security. © 2011 Springer-Verlag
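The first idea above, adding a spurious amount to a bill to hide activity, can be illustrated with the standard Laplace mechanism of differential privacy. This is a minimal sketch under that framework, not the paper's protocol; function names and parameters are illustrative, and unlike the paper's provably sufficient positive padding, Laplace noise here may be negative.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def obfuscated_bill(true_bill: float, sensitivity: float, epsilon: float) -> float:
    """Report the bill plus Laplace noise with scale sensitivity/epsilon,
    the textbook way to release a numeric value with epsilon-differential
    privacy with respect to one billing period's activity."""
    return true_bill + laplace_noise(sensitivity / epsilon)
```

The noise scale grows as epsilon shrinks, which is exactly the privacy/cost trade-off the rebate protocol is designed to mitigate.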
Modeling Bitcoin Contracts by Timed Automata
Bitcoin is a peer-to-peer cryptographic currency system. Since its
introduction in 2008, Bitcoin has gained noticeable popularity, mostly due to
its following properties: (1) the transaction fees are very low, and (2) it is
not controlled by any central authority, which in particular means that nobody
can "print" the money to generate inflation. Moreover, the transaction syntax
allows users to create so-called contracts, in which a number of
mutually-distrusting parties engage in a protocol to jointly perform some
financial task, and the fairness of this process is guaranteed by the
properties of Bitcoin. Although the Bitcoin contracts have several potential
applications in the digital economy, so far they have not been widely used in
real life. This is partly due to the fact that they are cumbersome to create
and analyze, and hence risky to use.
In this paper we propose to remedy this problem by using the methods
originally developed for the computer-aided analysis of hardware and software
systems, in particular those based on timed automata. More concretely, we
propose a framework for modeling Bitcoin contracts as timed automata
in the UPPAAL model checker. Our method is general and can be used to model
several contracts. As a proof-of-concept we use this framework to model some of
the Bitcoin contracts from our recent previous work. We then automatically
verify their security in UPPAAL, finding (and correcting) some subtle errors
that were difficult to spot by manual analysis. We hope that our work can
draw the attention of the researchers working on formal modeling to the problem
of Bitcoin contract verification, and spark off more research on this
topic.
Quantitative Chevalley-Weil theorem for curves
The classical Chevalley-Weil theorem asserts that for an \'etale covering of
projective varieties over a number field K, the discriminant of the field of
definition of the fiber over a K-rational point is uniformly bounded. We obtain
a fully explicit version of this theorem in dimension 1.
Comment: version 4: minor inaccuracies in Lemma 3.4 and Proposition 5.2 corrected.
Quantitative Information Flow and Applications to Differential Privacy
Secure information flow is the problem of ensuring that the information made publicly available by a computational system does not leak information that should be kept secret. Since it is practically impossible to avoid leakage entirely, in recent years there has been a growing interest in considering the quantitative aspects of information flow, in order to measure and compare the amount of leakage. Information theory is widely regarded as a natural framework to provide firm foundations to quantitative information flow. In these notes we review the two main information-theoretic approaches that have been investigated: the one based on Shannon entropy, and the one based on Rényi min-entropy. Furthermore, we discuss some applications in the area of privacy. In particular, we consider statistical databases and the recently-proposed notion of differential privacy. Using the information-theoretic view, we discuss the bound that differential privacy induces on leakage, and the trade-off between utility and privacy.
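The min-entropy leakage discussed in these notes has a simple operational reading (due to Smith's notion of vulnerability): it is the log-ratio of the adversary's one-try guessing probability after versus before observing the output. A small sketch with illustrative names, for a channel given as a row-stochastic matrix C[x][y]:

```python
import math

def min_entropy_leakage(prior, channel):
    """Rényi min-entropy leakage of channel[x][y] under a prior on x.

    Prior vulnerability: best one-try guess before observing output.
    Posterior vulnerability: expected best guess after observing output.
    Leakage = log2(posterior / prior), measured in bits."""
    v_prior = max(prior)
    v_post = sum(
        max(prior[x] * channel[x][y] for x in range(len(prior)))
        for y in range(len(channel[0]))
    )
    return math.log2(v_post / v_prior)
```

A noiseless channel on a uniform two-valued secret leaks one full bit, while a channel whose output is independent of the secret leaks nothing.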
On the Round Complexity of the Shuffle Model
The shuffle model of differential privacy was proposed as a viable model for
performing distributed differentially private computations. Informally, the
model consists of an untrusted analyzer that receives messages sent by
participating parties via a shuffle functionality, which potentially
disassociates messages from their senders. Prior work focused on one-round
differentially private shuffle model protocols, demonstrating that
functionalities such as addition and histograms can be performed in this model
with accuracy levels similar to that of the curator model of differential
privacy, where the computation is performed by a fully trusted party.
Focusing on the round complexity of the shuffle model, we ask in this work
what can be computed in the shuffle model of differential privacy with two
rounds. Ishai et al. [FOCS 2006] showed how to use one round of the shuffle to
establish secret keys between every two parties. Using this primitive to
simulate a general secure multi-party protocol increases its round complexity
by one. We show how two parties can use one round of the shuffle to send secret
messages without having to first establish a secret key, hence retaining round
complexity. Combining this primitive with the two-round semi-honest protocol of
Applebaum et al. [TCC 2018], we obtain that every randomized functionality can
be computed in the shuffle model with an honest majority, in merely two rounds.
This includes any differentially private computation. We then move to examine
differentially private computations in the shuffle model that (i) do not
require the assumption of an honest majority, or (ii) do not admit one-round
protocols, even with an honest majority. For that, we introduce two
computational tasks: the common-element problem and the nested-common-element
problem, for which we show separations between one-round and two-round
protocols.
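The one-round pipeline described above (local randomizer, then shuffler, then untrusted analyzer) can be sketched for a private binary count. The randomizer, flip probability, and debiasing step below are the textbook randomized-response construction, used only to illustrate the model, not the protocols of this paper:

```python
import random

def local_randomizer(bit: int, p_flip: float) -> int:
    """Randomized response: each party flips its true bit with prob. p_flip."""
    return bit ^ 1 if random.random() < p_flip else bit

def shuffle_round(reports):
    """The shuffler only permutes messages, disassociating them from senders."""
    random.shuffle(reports)
    return reports

def private_count(bits, p_flip=0.25):
    """One shuffle-model round: randomize locally, shuffle, then the analyzer
    debiases the noisy sum using E[noisy] = n*p_flip + true*(1 - 2*p_flip)."""
    reports = shuffle_round([local_randomizer(b, p_flip) for b in bits])
    n = len(reports)
    return (sum(reports) - n * p_flip) / (1.0 - 2.0 * p_flip)
```

The analyzer sees only the shuffled multiset, which is the source of the model's privacy amplification over purely local randomization.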
Real-Time Vector Automata
We study the computational power of real-time finite automata that have been
augmented with a vector of dimension k, and programmed to multiply this vector
at each step by an appropriately selected matrix. Only one entry
of the vector can be tested for equality to 1 at any time. Classes of languages
recognized by deterministic, nondeterministic, and "blind" versions of these
machines are studied and compared with each other and with the associated
classes for multicounter automata, automata with multiplication, and
generalized finite automata.
Comment: 14 pages.
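A "blind" vector automaton of the kind described can be simulated directly: each input symbol selects a matrix that multiplies the k-dimensional vector, and acceptance tests one designated entry against 1. This toy instance (the encoding is illustrative, not taken from the paper) accepts exactly the words with equally many a's and b's:

```python
def run_vector_automaton(word, matrices, v0, accept_entry=0):
    """Simulate a blind real-time vector automaton: multiply the vector by
    the matrix selected by each input symbol; accept iff the designated
    entry equals 1 after the whole input is consumed."""
    v = list(v0)
    for sym in word:
        m = matrices[sym]
        v = [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
    return v[accept_entry] == 1

# Dimension-2 encoding of a counter: vector (c, 1), 'a' increments c,
# 'b' decrements c; start at c = 1 so equal counts end with entry 0 == 1.
COUNT_MATRICES = {
    "a": [[1, 1], [0, 1]],
    "b": [[1, -1], [0, 1]],
}
```

Because the machine is blind (it never tests the entry mid-computation), it cannot distinguish "aabb" from "abab"; both have balanced counts and are accepted.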
Synthetic sequence generator for recommender systems - memory biased random walk on sequence multilayer network
Personalized recommender systems rely on each user's personal usage data in
the system, in order to assist in decision making. However, privacy policies
protecting users' rights prevent these highly personal data from being publicly
available to a wider researcher audience. In this work, we propose a memory
biased random walk model on multilayer sequence network, as a generator of
synthetic sequential data for recommender systems. We demonstrate the
applicability of the synthetic data in training recommender system models for
cases when privacy policies restrict clickstream publishing.
Comment: the new, updated version of the paper.
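A loose sketch of the idea, not the paper's exact model: learn item-to-item transition counts from real sequences, then walk the resulting network, biasing transitions toward recently visited items (the "memory"). All parameter names and the specific biasing rule are illustrative assumptions.

```python
import random
from collections import defaultdict

def build_transitions(sequences):
    """Count item-to-item transitions across all observed sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def generate(transitions, start, length, memory_bias=2.0, memory_len=3):
    """Memory-biased random walk: neighbors visited within the last
    `memory_len` steps get their transition weight multiplied by
    `memory_bias`, mimicking repeat-consumption behavior."""
    walk = [start]
    for _ in range(length - 1):
        nbrs = transitions[walk[-1]]
        if not nbrs:
            break  # dead end: no outgoing transitions observed
        recent = set(walk[-memory_len:])
        items = list(nbrs)
        weights = [nbrs[i] * (memory_bias if i in recent else 1.0) for i in items]
        walk.append(random.choices(items, weights=weights)[0])
    return walk
```

Generated walks only traverse transitions that occur in the training clickstreams, so the synthetic sequences preserve local sequential structure without exposing any individual user's data.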
Secret-Sharing for NP
A computational secret-sharing scheme is a method that enables a dealer, that
has a secret, to distribute this secret among a set of parties such that a
"qualified" subset of parties can efficiently reconstruct the secret while any
"unqualified" subset of parties cannot efficiently learn anything about the
secret. The collection of "qualified" subsets is defined by a Boolean function.
It has been a major open problem to understand which (monotone) functions can
be realized by computational secret-sharing schemes. Yao suggested a method
for secret-sharing for any function that has a polynomial-size monotone circuit
(a class which is strictly smaller than the class of monotone functions in P).
Around 1990 Rudich raised the possibility of obtaining secret-sharing for all
monotone functions in NP: In order to reconstruct the secret a set of parties
must be "qualified" and provide a witness attesting to this fact.
Recently, Garg et al. (STOC 2013) put forward the concept of witness
encryption, where the goal is to encrypt a message relative to a statement "x
in L" for a language L in NP such that anyone holding a witness to the
statement can decrypt the message, however, if x is not in L, then it is
computationally hard to decrypt. Garg et al. showed how to construct several
cryptographic primitives from witness encryption and gave a candidate
construction.
One can show that computational secret-sharing implies witness encryption for
the same language. Our main result is the converse: we give a construction of a
computational secret-sharing scheme for any monotone function in NP assuming
witness encryption for NP and one-way functions. As a consequence we get a
completeness theorem for secret-sharing: a computational secret-sharing scheme
for any single monotone NP-complete function implies a computational
secret-sharing scheme for every monotone function in NP.
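The qualified/unqualified dichotomy is easiest to see in the information-theoretic n-out-of-n case, where the access function is simply "all parties present". A minimal additive-sharing sketch of that special case (the paper's computational schemes for monotone NP functions are far more involved):

```python
import secrets

PRIME = 2**61 - 1  # all arithmetic is modulo this prime

def share(secret: int, n: int):
    """n-out-of-n additive sharing: n-1 shares are uniformly random, and the
    last is chosen so all n shares sum to the secret mod PRIME. Any proper
    subset of shares is uniformly distributed, hence reveals nothing."""
    shares = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Only the qualified set (all n parties) recovers the secret."""
    return sum(shares) % PRIME
```

Here the access structure is the single monotone function "all n inputs are 1"; the results above ask which monotone functions in NP admit an analogous (computational) scheme.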
Order-Revealing Encryption and the Hardness of Private Learning
An order-revealing encryption scheme gives a public procedure by which two
ciphertexts can be compared to reveal the ordering of their underlying
plaintexts. We show how to use order-revealing encryption to separate
computationally efficient PAC learning from efficient differentially private PAC learning. That is, we construct a concept
class that is efficiently PAC learnable, but for which every efficient learner
fails to be differentially private. This answers a question of Kasiviswanathan
et al. (FOCS '08, SIAM J. Comput. '11).
To prove our result, we give a generic transformation from an order-revealing
encryption scheme into one with strongly correct comparison, which enables the
consistent comparison of ciphertexts that are not obtained as the valid
encryption of any message. We believe this construction may be of independent
interest.
Comment: 28 pages.
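The public-comparison interface of order-revealing encryption can be illustrated with a deliberately insecure toy over a small known domain: a keyed, strictly increasing random map whose ciphertexts anyone can compare. All names here are made up, and real ORE constructions (including the strongly correct one built in this paper) are far more involved:

```python
import random

def toy_ore_encrypt(key_seed: int, m: int, domain: int = 1000) -> int:
    """Toy 'order-revealing' encryption: map plaintext m through a secret
    strictly increasing function derived from the key. Insecure (it leaks
    far more than order); shown only to illustrate the interface."""
    rng = random.Random(key_seed)
    # Secret strictly increasing map from [0, domain) into a larger range.
    offsets = sorted(rng.sample(range(domain * 10), domain))
    return offsets[m]

def compare(c1: int, c2: int) -> int:
    """Public comparison procedure: reveals only the plaintext order
    (-1, 0, or 1), without the key."""
    return (c1 > c2) - (c1 < c2)
```

The hardness result hinges on exactly this public `compare` capability: an efficient learner can exploit it, while a differentially private one provably cannot.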