The correlations and network structure amongst individuals in datasets
today---whether explicitly articulated, or deduced from biological or
behavioral connections---pose new issues around privacy guarantees, because of
inferences that can be made about one individual from another's data. This
motivates quantifying privacy in networked contexts in terms of "inferential
privacy"---which measures the change in beliefs about an individual's data from
the result of a computation---as originally proposed by Dalenius in the 1970s.
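Concretely, one standard formalization (stated here as a sketch; the precise
definition appears in the body of the paper) bounds the shift from prior to
posterior odds: a mechanism $\mathcal{A}$ provides $\nu$-inferential privacy
if for every individual $i$, every pair of values $a, b$, and every output $o$,
\[
\frac{\Pr[X_i = a \mid \mathcal{A} = o]}{\Pr[X_i = b \mid \mathcal{A} = o]}
\;\le\; e^{\nu}\,\frac{\Pr[X_i = a]}{\Pr[X_i = b]},
\]
i.e., observing the output can multiply the odds on $i$'s data by a factor of
at most $e^{\nu}$.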
Inferential privacy is implied by differential privacy when data are
independent, but can be much worse when data are correlated; indeed, simple
examples, as well as a general impossibility theorem of Dwork and Naor,
preclude non-trivial inferential privacy when the adversary can have
arbitrary auxiliary information. In this paper, we ask how
differential privacy guarantees translate to guarantees on inferential privacy
in networked contexts: specifically, under what limitations on the adversary's
information about correlations, modeled as a prior distribution over datasets,
can we deduce an inferential guarantee from a differential one?
We prove two main results. The first result pertains to distributions that
satisfy a natural positive-affiliation condition, and gives an upper bound on
the inferential privacy guarantee for any differentially private mechanism.
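(Here positive affiliation can be read in the standard FKG sense: a prior
$\pi$ is positively affiliated if
\[
\pi(x \vee y)\,\pi(x \wedge y) \;\ge\; \pi(x)\,\pi(y)
\]
for all datasets $x, y$, with $\vee$ and $\wedge$ taken componentwise; the
precise condition is given in the body of the paper.)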
This upper bound is matched by a simple mechanism that adds Laplace noise to
the sum of the data, as sketched below.
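For concreteness, the following Python sketch implements the matching
mechanism; it assumes binary data, for which the sum has global sensitivity
1, so Laplace noise of scale $1/\epsilon$ yields $\epsilon$-differential
privacy (the function name and interface are illustrative only).

    import numpy as np

    def laplace_sum(data, epsilon):
        # Sum of bits in {0,1}^n: changing one individual's bit moves
        # the sum by at most 1, so the global sensitivity is 1.
        sensitivity = 1.0
        # Lap(sensitivity / epsilon) noise gives epsilon-differential privacy.
        noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
        return float(np.sum(data)) + noise

    # Example: a noisy count over 100 random bits at epsilon = 0.5.
    print(laplace_sum(np.random.randint(0, 2, size=100), epsilon=0.5))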
The second result pertains to distributions with weak correlations, defined in
terms of a suitable "influence matrix"; it gives an upper bound on inferential
privacy in terms of the differential privacy parameter and the spectral norm
of this matrix.
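The quantity entering this second bound is easy to compute; the sketch below
evaluates the spectral norm, i.e. the largest singular value, of an influence
matrix. The toy matrix here is our own illustration, with off-diagonal
entries standing in for the strength of pairwise correlations.

    import numpy as np

    def spectral_norm(influence_matrix):
        # ord=2 gives the spectral norm: the largest singular value.
        return float(np.linalg.norm(influence_matrix, ord=2))

    # Toy influence matrix for three individuals: entry (i, j) is meant to
    # quantify how strongly individual j's data constrains individual i's.
    gamma = np.array([[0.0, 0.3, 0.1],
                      [0.3, 0.0, 0.2],
                      [0.1, 0.2, 0.0]])
    print(spectral_norm(gamma))  # a small norm corresponds to weak correlations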