Comparing the expressive power of the Synchronous and the Asynchronous pi-calculus
The Asynchronous pi-calculus, as recently proposed by Boudol and,
independently, by Honda and Tokoro, is a subset of the pi-calculus which
contains no explicit operators for choice and output-prefixing. The
communication mechanism of this calculus, however, is powerful enough to
simulate output-prefixing, as shown by Boudol, and input-guarded choice, as
shown recently by Nestmann and Pierce. A natural question arises, then, whether
or not it is possible to embed in it the full pi-calculus. We show that this is
not possible, i.e. there does not exist any uniform, parallel-preserving,
translation from the pi-calculus into the asynchronous pi-calculus, up to any
``reasonable'' notion of equivalence. This result is based on the incapability
of the asynchronous pi-calculus to break certain symmetries possibly present
in the initial communication graph. By similar arguments, we prove a separation
result between the pi-calculus and CCS.
Comment: 10 pages. Proc. of the POPL'97 symposium
Making Random Choices Invisible to the Scheduler
When dealing with process calculi and automata which express both
nondeterministic and probabilistic behavior, it is customary to introduce the
notion of scheduler to solve the nondeterminism. It has been observed that for
certain applications, notably those in security, the scheduler needs to be
restricted so as not to reveal the outcome of the protocol's random choices;
otherwise the adversary model would be too strong even for ``obviously
correct'' protocols. We propose a process-algebraic framework in which the
control on the scheduler can be specified in syntactic terms, and we show how
to apply it to solve the problem mentioned above. We also consider the
definition of (probabilistic) may and must preorders, and we show that they are
precongruences with respect to the restricted schedulers. Furthermore, we show
that all the operators of the language, except replication, distribute over
probabilistic summation, which is a useful property for verification.
Constructing elastic distinguishability metrics for location privacy
With the increasing popularity of hand-held devices, location-based
applications and services have access to accurate and real-time location
information, raising serious privacy concerns for their users. The recently
introduced notion of geo-indistinguishability tries to address this problem by
adapting the well-known concept of differential privacy to the area of
location-based systems. Although geo-indistinguishability presents various
appealing aspects, it has the problem of treating space in a uniform way,
imposing the addition of the same amount of noise everywhere on the map. In
this paper we propose a novel elastic distinguishability metric that warps the
geometrical distance, capturing the different degrees of density of each area.
As a consequence, the obtained mechanism adapts the level of noise while
achieving the same degree of privacy everywhere. We also show how such an
elastic metric can easily incorporate the concept of a "geographic fence" that
is commonly employed to protect the highly recurrent locations of a user, such
as his home or work. We perform an extensive evaluation of our technique by
building an elastic metric for Paris' wide metropolitan area, using semantic
information from the OpenStreetMap database. We compare the resulting mechanism
against the Planar Laplace mechanism satisfying standard
geo-indistinguishability, using two real-world datasets from the Gowalla and
Brightkite location-based social networks. The results show that the elastic
mechanism adapts well to the semantics of each area, adjusting the noise as we
move outside the city center, hence offering better overall privacy.
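The Planar Laplace mechanism that the elastic metric is compared against can be sketched in a few lines. This is a minimal illustration, assuming coordinates in meters on a local plane; the function name is ours, not from the paper's implementation:

```python
import math
import random

def planar_laplace(x, y, epsilon):
    """Perturb a point with planar Laplace noise (the standard
    geo-indistinguishability baseline; sketch only).

    The noise radius has density eps^2 * r * exp(-eps * r), i.e. a
    Gamma(shape=2, scale=1/eps) distribution, and the angle is uniform.
    Coordinates are assumed to be in meters on a local plane.
    """
    theta = random.uniform(0.0, 2.0 * math.pi)
    r = random.gammavariate(2.0, 1.0 / epsilon)
    return x + r * math.cos(theta), y + r * math.sin(theta)
```

With this baseline the noise scale (average radius 2/epsilon) is the same everywhere on the map; the elastic metric instead warps the underlying distance so that the effective noise grows in sparse areas.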
Model checking probabilistic and stochastic extensions of the pi-calculus
We present an implementation of model checking for probabilistic and stochastic extensions of the pi-calculus, a process algebra which supports modelling of concurrency and mobility. Formal verification techniques for such extensions have clear applications in several domains, including mobile ad-hoc network protocols, probabilistic security protocols and biological pathways. Despite this, no implementation of automated verification exists. Building upon the pi-calculus model checker MMC, we first show an automated procedure for constructing the underlying semantic model of a probabilistic or stochastic pi-calculus process. This can then be verified using existing probabilistic model checkers such as PRISM. Secondly, we demonstrate how for processes of a specific structure a more efficient, compositional approach is applicable, which uses our extension of MMC on each parallel component of the system and then translates the results into a high-level modular description for the PRISM tool. The feasibility of our techniques is demonstrated through a number of case studies from the pi-calculus literature.
Differential Privacy versus Quantitative Information Flow
Differential privacy is a notion of privacy that has become very popular in
the database community. Roughly, the idea is that a randomized query mechanism
provides sufficient privacy protection if the ratio between the probabilities
that two adjacent databases (differing in one entry) produce a given answer is
bounded by e^\epsilon.
In the fields of anonymity and information flow there is a similar concern for
controlling information leakage, i.e. limiting the possibility of inferring the
secret information from the observables. In recent years, researchers have
proposed to quantify the leakage in terms of the information-theoretic notion
of mutual information. There are two main approaches that fall in this
category: One based on Shannon entropy, and one based on R\'enyi's min entropy.
The latter is related to the so-called Bayes risk, which expresses the
probability of guessing the secret. In this paper, we show how to model the
query system in terms of an information-theoretic channel, and we compare the
notion of differential privacy with that of mutual information. We show that
the notion of differential privacy is strictly stronger, in the sense that it
implies a bound on the mutual information, but not vice versa.
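The e^\epsilon ratio bound above can be verified directly for the Laplace mechanism on a sensitivity-1 query. A small illustrative check (the concrete numbers are made up, not from the paper):

```python
import math

def laplace_pdf(z, mu, b):
    """Density of the Laplace(mu, b) distribution at z."""
    return math.exp(-abs(z - mu) / b) / (2.0 * b)

eps = 0.5
b = 1.0 / eps  # noise scale calibrated to a sensitivity-1 query
# adjacent databases whose true answers differ by the sensitivity (= 1):
mu0, mu1 = 10.0, 11.0
for z in (-5.0, 0.0, 10.5, 25.0):
    ratio = laplace_pdf(z, mu0, b) / laplace_pdf(z, mu1, b)
    # the ratio of output densities stays within [e^-eps, e^eps]
    assert math.exp(-eps) - 1e-12 <= ratio <= math.exp(eps) + 1e-12
```

The assertion holds for every output z because |z - mu0| and |z - mu1| differ by at most the sensitivity, so the log-ratio is at most eps in absolute value.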
Differential Privacy for Relational Algebra: Improving the Sensitivity Bounds via Constraint Systems
Differential privacy is a modern approach in privacy-preserving data analysis
to control the amount of information that can be inferred about an individual
by querying a database. The most common techniques are based on the
introduction of probabilistic noise, often drawn from a Laplace distribution
calibrated to the sensitivity of the query. In order to maximize the utility of
the query, it
is crucial to estimate the sensitivity as precisely as possible.
In this paper we consider relational algebra, the classical language for
queries in relational databases, and we propose a method for computing a bound
on the sensitivity of queries in an intuitive and compositional way. We use
constraint-based techniques to accumulate the information on the possible
values for attributes provided by the various components of the query, thus
making it possible to compute tight bounds on the sensitivity.
Comment: In Proceedings QAPL 2012, arXiv:1207.055
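The role of sensitivity in calibrating the noise can be illustrated with simple aggregate queries. The bookkeeping below is a toy stand-in for the paper's constraint-based analysis, using attribute-range constraints; all function names are hypothetical:

```python
def count_sensitivity():
    # adding or removing one row changes a COUNT by at most 1
    return 1.0

def sum_sensitivity(lo, hi):
    # under the constraint lo <= attribute <= hi, one row moves a SUM
    # by at most max(|lo|, |hi|)
    return max(abs(lo), abs(hi))

def laplace_scale(sensitivity, epsilon):
    # Laplace noise scale needed for epsilon-differential privacy
    return sensitivity / epsilon

# e.g. a SUM over an age attribute constrained to [0, 120], eps = 0.1:
b = laplace_scale(sum_sensitivity(0, 120), 0.1)
```

Tighter attribute constraints directly shrink the sensitivity bound and hence the noise scale, which is the utility gain the constraint-based method targets.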