Differential privacy is a notion of privacy that has become very popular in
the database community. Roughly, the idea is that a randomized query mechanism
provides sufficient privacy protection if the ratio between the probabilities
that two adjacent datasets give the same answer is bounded by e^epsilon.
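For concreteness, this condition can be stated as follows (here the mechanism K, the adjacency relation D ~ D', and the set S of possible answers are our notational assumptions, following the standard formulation):

\[ \Pr[\mathcal{K}(D) \in S] \;\le\; e^{\epsilon}\,\Pr[\mathcal{K}(D') \in S] \quad \text{for all adjacent } D \sim D' \text{ and all answer sets } S. \]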
In the field of information flow there is a similar concern for controlling
information leakage, i.e. limiting the possibility of inferring the secret
information from the observables. In recent years, researchers have proposed to
quantify the leakage in terms of Rényi min mutual information, a notion
strictly related to the Bayes risk.
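In standard min-entropy notation (with X the secret and Y the observable; the formulas below are the usual definitions rather than quotations from this paper):

\[ I_\infty(X;Y) \;=\; H_\infty(X) - H_\infty(X \mid Y), \qquad H_\infty(X) \;=\; -\log \max_{x} p(x), \]

where 2^{-H_\infty(X \mid Y)} is the adversary's probability of guessing the secret correctly in one try after the observation, i.e. one minus the Bayes risk.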
In this paper, we show how to model the
query system in terms of an information-theoretic channel, and we compare the
notion of differential privacy with that of mutual information. We show that
differential privacy implies a bound on the mutual information (but not
vice versa). Furthermore, we show that our bound is tight. Then, we consider
the utility of the randomization mechanism, which represents how close the
randomized answers are, on average, to the real ones.
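As a sketch of the quantity involved (the gain function g below is an illustrative assumption, not the paper's exact definition), utility can be expressed as an expected gain:

\[ \mathcal{U} \;=\; \sum_{x,z} p(x)\,p(z \mid x)\,g(x,z), \]

where x ranges over the true answers, z over the reported ones, and g(x,z) measures how close z is to x (for instance the binary gain g(x,z) = 1 if x = z and 0 otherwise).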
We show that the notion of differential privacy implies a bound on utility, which is also tight, and we
propose a method that, under certain conditions, builds an optimal randomization
mechanism, i.e. a mechanism that provides the best utility while guaranteeing
differential privacy.