Context-Aware Generative Adversarial Privacy
Preserving the utility of published datasets while simultaneously providing
provable privacy guarantees is a well-known challenge. On the one hand,
context-free privacy solutions, such as differential privacy, provide strong
privacy guarantees, but often lead to a significant reduction in utility. On
the other hand, context-aware privacy solutions, such as information theoretic
privacy, achieve an improved privacy-utility tradeoff, but assume that the data
holder has access to dataset statistics. We circumvent these limitations by
introducing a novel context-aware privacy framework called generative
adversarial privacy (GAP). GAP leverages recent advancements in generative
adversarial networks (GANs) to allow the data holder to learn privatization
schemes from the dataset itself. Under GAP, learning the privacy mechanism is
formulated as a constrained minimax game between two players: a privatizer that
sanitizes the dataset in a way that limits the risk of inference attacks on the
individuals' private variables, and an adversary that tries to infer the
private variables from the sanitized dataset. To evaluate GAP's performance, we
investigate two simple (yet canonical) statistical dataset models: (a) the
binary data model, and (b) the binary Gaussian mixture model. For both models,
we derive game-theoretically optimal minimax privacy mechanisms, and show that
the privacy mechanisms learned from data (in a generative adversarial fashion)
match the theoretically optimal ones. This demonstrates that our framework can
be easily applied in practice, even in the absence of dataset statistics.
Comment: Improved version of a paper accepted by Entropy Journal, Special Issue on Information Theory in Machine Learning and Data Science
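To make the privatizer-versus-adversary game above concrete, the following is a minimal training-loop sketch in PyTorch. It replaces GAP's hard distortion constraint with a penalty term, and the binary-Gaussian-mixture data, network sizes, distortion weight LAMBDA, and training schedule are illustrative assumptions rather than the paper's actual setup.

```python
# Minimal sketch of a GAP-style privatizer/adversary minimax loop (PyTorch).
# The synthetic binary Gaussian mixture, network sizes, LAMBDA, and schedule
# below are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)
N, DIM, LAMBDA = 2048, 4, 2.0

# Synthetic binary Gaussian mixture: private bit s shifts the mean of x.
s = torch.randint(0, 2, (N, 1)).float()
x = torch.randn(N, DIM) + 1.5 * s             # data to be released

privatizer = nn.Sequential(nn.Linear(DIM + 1, 16), nn.ReLU(), nn.Linear(16, DIM))
adversary  = nn.Sequential(nn.Linear(DIM, 16), nn.ReLU(), nn.Linear(16, 1))
opt_p = torch.optim.Adam(privatizer.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    noise = torch.randn(N, 1)                 # randomization seed for the mechanism
    x_hat = privatizer(torch.cat([x, noise], dim=1))

    # Adversary step: try to infer the private bit from the sanitized data.
    adv_loss = bce(adversary(x_hat.detach()), s)
    opt_a.zero_grad(); adv_loss.backward(); opt_a.step()

    # Privatizer step: hurt the adversary while limiting distortion.
    priv_loss = -bce(adversary(x_hat), s) + LAMBDA * ((x_hat - x) ** 2).mean()
    opt_p.zero_grad(); priv_loss.backward(); opt_p.step()

print(f"final adversary loss {adv_loss.item():.3f}, "
      f"distortion {((x_hat - x) ** 2).mean().item():.3f}")
```

The concatenated noise input is what allows the learned mechanism to be stochastic rather than a deterministic map of the data; the penalty weight LAMBDA stands in for the distortion constraint of the constrained minimax formulation.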
Towards Machine Wald
The past century has seen a steady increase in the need of estimating and
predicting complex systems and making (possibly critical) decisions with
limited information. Although computers have made possible the numerical
evaluation of sophisticated statistical models, these models are still designed
\emph{by humans} because there is currently no known recipe or algorithm for
dividing the design of a statistical model into a sequence of arithmetic
operations. Indeed, enabling computers to \emph{think} as \emph{humans} do when
faced with uncertainty is challenging in several major ways:
(1) Finding optimal statistical models remains to be formulated as a well-posed
problem when information on the system of interest is incomplete and comes in
the form of a complex combination of sample data, partial knowledge of
constitutive relations and a limited description of the distribution of input
random variables. (2) The space of admissible scenarios along with the space of
relevant information, assumptions, and/or beliefs, tend to be infinite
dimensional, whereas calculus on a computer is necessarily discrete and finite.
With this purpose in mind, this paper explores the foundations of a rigorous framework
for the scientific computation of optimal statistical estimators/models and
reviews their connections with Decision Theory, Machine Learning, Bayesian
Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty
Quantification, and Information-Based Complexity.
Comment: 37 pages
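As a self-contained illustration of the Wald-style minimax estimators this abstract alludes to, the sketch below works through a classical textbook case (not taken from this paper): for X ~ Binomial(n, p) under squared-error loss, the estimator (X + sqrt(n)/2)/(n + sqrt(n)) has constant risk and is minimax, whereas the MLE X/n has a larger worst-case risk.

```python
# Worked toy example of Wald-style minimax estimation (standard decision
# theory, not drawn from the paper): compare the risk curves of the MLE x/n
# and the minimax estimator d(x) = (x + sqrt(n)/2) / (n + sqrt(n)) for
# X ~ Binomial(n, p) under squared-error loss.
import numpy as np

n = 25
p = np.linspace(0.0, 1.0, 201)

# Risk of the MLE x/n: it is unbiased, so the risk is its variance p(1-p)/n.
risk_mle = p * (1 - p) / n

# Risk of the minimax estimator: bias^2 + variance, computed in closed form.
a = np.sqrt(n) / 2
bias = (n * p + a) / (n + 2 * a) - p
var = n * p * (1 - p) / (n + 2 * a) ** 2
risk_minimax = bias ** 2 + var

print("worst-case risk, MLE     :", risk_mle.max())      # equals 1/(4n)
print("worst-case risk, minimax :", risk_minimax.max())  # equals 1/(4(sqrt(n)+1)^2)
```

The minimax estimator's risk curve is flat in p, which is exactly the constant-risk property that certifies minimaxity in this game against nature.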
From Wald to Savage: homo economicus becomes a Bayesian statistician
Bayesian rationality is the paradigm of rational behavior in neoclassical economics. A rational agent in an economic model is one who maximizes her subjective expected utility and consistently revises her beliefs according to Bayes's rule. The paper raises the question of how, when, and why this characterization of rationality came to be endorsed by mainstream economists. Though no definitive answer is provided, it is argued that the question is far from trivial and of great historiographic importance. The story begins with Abraham Wald's behaviorist approach to statistics and culminates with Leonard J. Savage's elaboration of subjective expected utility theory in his 1954 classic The Foundations of Statistics. It is the latter's acknowledged failure to achieve its planned goal, the reinterpretation of traditional inferential techniques along subjectivist and behaviorist lines, that raises the puzzle of how a failed project in statistics could turn into such a tremendous hit in economics. A couple of tentative answers are also offered, involving the role of the consistency requirement in neoclassical analysis and the impact of the postwar transformation of US business schools.
Keywords: Savage, Wald, rational behavior, Bayesian decision theory, subjective probability, minimax rule, statistical decision functions, neoclassical economics
Robust State Space Filtering under Incremental Model Perturbations Subject to a Relative Entropy Tolerance
This paper considers robust filtering for a nominal Gaussian state-space
model, when a relative entropy tolerance is applied to each time increment of a
dynamical model. The problem is formulated as a dynamic minimax game where the
maximizer adopts a myopic strategy. This game is shown to admit a saddle point
whose structure is characterized by applying and extending results presented
earlier in [1] for static least-squares estimation. The resulting minimax
filter takes the form of a risk-sensitive filter with a time varying risk
sensitivity parameter, which depends on the tolerance bound applied to the
model dynamics and observations at the corresponding time index. The
least-favorable model is constructed and used to evaluate the performance of
alternative filters. Simulations comparing the proposed risk-sensitive filter
to a standard Kalman filter show a significant performance advantage when
applied to the least-favorable model, and only a small performance loss for the
nominal model.
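A rough numerical sketch of the comparison described above is given below, contrasting a standard Kalman filter with a risk-sensitive variant on a toy scalar model. The exponential-cost covariance inflation (P^{-1} - theta I)^{-1} used here is one common form of risk-sensitive filtering; the scalar model, noise levels, and theta schedule are assumptions for illustration, not the paper's least-favorable construction or its entropy-based choice of the risk parameter.

```python
# Sketch: standard Kalman filter vs. a risk-sensitive variant whose predicted
# covariance is inflated as (P^{-1} - theta*I)^{-1}. Model and theta schedule
# are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
A, C, Q, R = 0.95, 1.0, 0.1, 0.5           # nominal scalar state-space model
T = 200
theta = 0.2 * np.ones(T)                   # illustrative risk-sensitivity schedule

# Simulate the nominal model.
x = np.zeros(T); y = np.zeros(T)
for k in range(1, T):
    x[k] = A * x[k-1] + np.sqrt(Q) * rng.standard_normal()
    y[k] = C * x[k] + np.sqrt(R) * rng.standard_normal()

def filter_run(risk_sensitive):
    xh, P = 0.0, 1.0
    est = np.zeros(T)
    for k in range(1, T):
        # Prediction.
        xp = A * xh
        Pp = A * P * A + Q
        # Risk-sensitive inflation of the predicted covariance (scalar form of
        # (Pp^{-1} - theta)^{-1}); skipped for the standard Kalman filter.
        if risk_sensitive and theta[k] * Pp < 1.0:
            Pp = Pp / (1.0 - theta[k] * Pp)
        # Measurement update.
        K = Pp * C / (C * Pp * C + R)
        xh = xp + K * (y[k] - C * xp)
        P = (1 - K * C) * Pp
        est[k] = xh
    return est

for name, flag in [("Kalman", False), ("risk-sensitive", True)]:
    err = filter_run(flag) - x
    print(f"{name:15s} RMSE on nominal model: {np.sqrt(np.mean(err**2)):.3f}")
```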
Cores of Cooperative Games in Information Theory
Cores of cooperative games are ubiquitous in information theory, and arise
most frequently in the characterization of fundamental limits in various
scenarios involving multiple users. Examples include classical settings in
network information theory such as Slepian-Wolf source coding and multiple
access channels, classical settings in statistics such as robust hypothesis
testing, and new settings at the intersection of networking and statistics such
as distributed estimation problems for sensor networks. Cooperative game theory
allows one to understand aspects of all of these problems from a fresh and
unifying perspective that treats users as players in a game, sometimes leading
to new insights. At the heart of these analyses are fundamental dualities that
have been long studied in the context of cooperative games; for information
theoretic purposes, these are dualities between information inequalities on the
one hand and properties of rate, capacity or other resource allocation regions
on the other.
Comment: 12 pages, published at http://www.hindawi.com/GetArticle.aspx?doi=10.1155/2008/318704 in EURASIP Journal on Wireless Communications and Networking, Special Issue on "Theory and Applications in Multiuser/Multiterminal Communications", April 2008
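For readers unfamiliar with the notion, the core of a cooperative game (N, v) is the set of allocations that distribute v(N) exactly while giving every coalition S at least v(S). The short sketch below checks core membership for a made-up 3-player game; it is only a definitional illustration, not one of the information-theoretic games discussed in the article.

```python
# Check whether an allocation x lies in the core of a cooperative game (N, v):
# efficiency (sum x_i = v(N)) plus coalitional rationality (sum_{i in S} x_i >= v(S)
# for every coalition S). The 3-player game below is an illustrative example only.
from itertools import chain, combinations

players = (0, 1, 2)
v = {(): 0, (0,): 1, (1,): 1, (2,): 1,
     (0, 1): 3, (0, 2): 3, (1, 2): 3, (0, 1, 2): 6}

def coalitions(players):
    return chain.from_iterable(combinations(players, r) for r in range(len(players) + 1))

def in_core(x, v, players, tol=1e-9):
    if abs(sum(x) - v[tuple(players)]) > tol:          # efficiency: split v(N) exactly
        return False
    return all(sum(x[i] for i in S) >= v[S] - tol      # no coalition can block
               for S in coalitions(players))

print(in_core((2.0, 2.0, 2.0), v, players))   # True: every coalition is satisfied
print(in_core((4.0, 1.0, 1.0), v, players))   # False: coalition (1, 2) gets only 2 < 3
```

In the information-theoretic settings the article surveys, the role of v(S) is played by rate, capacity, or other resource bounds for the group of users S, and the duality between such inequalities and the achievable regions is what the core formalism captures.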