Characterizing the privacy degradation over compositions, i.e., privacy
accounting, is a fundamental topic in differential privacy (DP) with many
applications to differentially private machine learning and federated learning.
We propose a unification of recent advances (Rényi DP, privacy profiles, f-DP,
and the PLD formalism) via the \emph{characteristic function} (ϕ-function)
of a certain \emph{dominating} privacy loss random variable. We show that our
approach allows \emph{natural} adaptive composition like Rényi DP, provides
\emph{exactly tight} privacy accounting like PLD, and can be (often
\emph{losslessly}) converted to privacy profiles and f-DP, thus providing
(ϵ,δ)-DP guarantees and interpretable tradeoff functions.
Algorithmically, we propose an \emph{analytical Fourier accountant} that
represents the \emph{complex} logarithm of ϕ-functions symbolically and
uses Gaussian quadrature for numerical computation. On several popular DP
mechanisms and their subsampled counterparts, we demonstrate the flexibility
and tightness of our approach both in theory and in experiments.
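To illustrate the accounting idea in the simplest setting, the following sketch works through the Gaussian mechanism, whose dominating privacy loss random variable is itself Gaussian with mean μ = 1/(2σ²) and variance 1/σ². Its log ϕ-function is then determined by the pair (μ, σ²-of-the-loss), composition amounts to adding these parameters, and δ(ε) = E[(1 − e^{ε−L})₊] is recovered by numerical quadrature (Gauss–Legendre here, as a stand-in for the paper's analytical Fourier accountant) and checked against the known closed form for the Gaussian mechanism. All function names are hypothetical, and this is a minimal sketch of the composition-then-conversion workflow, not the paper's actual accountant.

```python
import numpy as np
from scipy.stats import norm

def plrv_params_gauss(sigma):
    """(mean, variance) of the dominating privacy loss RV for one run of the
    Gaussian mechanism with sensitivity 1 and noise scale sigma. Its
    characteristic function is phi(t) = exp(i*t*mu - t^2*var/2), so the
    complex logarithm is linear in (mu, var) and composition adds them."""
    var = 1.0 / sigma**2
    return var / 2.0, var  # mu = var/2 for the Gaussian mechanism

def compose(params, k):
    """k-fold adaptive composition: log phi-functions simply add."""
    mu, var = params
    return k * mu, k * var

def delta_via_quadrature(params, eps, n_nodes=100):
    """delta(eps) = E[(1 - e^{eps - L})_+] for the loss RV L ~ N(mu, var),
    computed with Gauss-Legendre quadrature on [eps, eps + 12*s]; the
    truncated upper tail contributes a negligible O(1e-30) amount."""
    mu, var = params
    s = np.sqrt(var)
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    a, b = eps, eps + 12.0 * s
    l = 0.5 * (b - a) * nodes + 0.5 * (b + a)   # map nodes from [-1, 1]
    integrand = (1.0 - np.exp(eps - l)) * norm.pdf(l, loc=mu, scale=s)
    return 0.5 * (b - a) * np.dot(weights, integrand)

def delta_closed_form(params, eps):
    """Analytical (eps, delta) curve of the Gaussian mechanism, valid when
    mu = var/2 (as it is here)."""
    mu, var = params
    s = np.sqrt(var)
    return norm.cdf(s / 2 - eps / s) - np.exp(eps) * norm.cdf(-s / 2 - eps / s)

# 10 adaptive compositions of the Gaussian mechanism with sigma = 2
params = compose(plrv_params_gauss(sigma=2.0), k=10)
d_quad = delta_via_quadrature(params, eps=2.0)
d_exact = delta_closed_form(params, eps=2.0)
```

Because the integrand is smooth on the truncated interval, Gauss–Legendre converges spectrally and `d_quad` matches `d_exact` to near machine precision; for mechanisms without a closed form, the same composed-log-ϕ-plus-quadrature route is the only one available.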