Generalized variational inference (GVI) provides an optimization-theoretic
framework for statistical estimation that encapsulates many traditional
estimation procedures. The typical GVI problem is to compute a distribution of
parameters that maximizes the expected payoff minus the divergence of the
distribution from a specified prior. In this way, GVI enables likelihood-free
estimation with the ability to control the influence of the prior by tuning the
so-called learning rate.
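Schematically, with candidate distribution $\mu$, payoff $u$, prior $\pi$, divergence $D$, and learning rate $\lambda > 0$ (this notation is ours, for illustration; conventions differ on which term $\lambda$ weights), the problem reads
\[
\sup_{\mu} \; \mathbb{E}_{\theta \sim \mu}\big[ u(\theta) \big] \;-\; \lambda \, D(\mu \,\|\, \pi).
\]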
Recently, GVI was shown to outperform traditional Bayesian inference when the model and prior distribution are misspecified. In
this paper, we introduce and analyze a new GVI formulation based on utility
theory and risk management. Our formulation maximizes the expected payoff subject to constraints on the maximizing distribution. We recover the
original GVI distribution by choosing the feasible set to include a constraint
on the divergence of the distribution from the prior.
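In the same illustrative notation, with a divergence tolerance $\varepsilon > 0$, the constrained problem takes the form
\[
\sup_{\mu} \; \mathbb{E}_{\theta \sim \mu}\big[ u(\theta) \big] \quad \text{subject to} \quad D(\mu \,\|\, \pi) \le \varepsilon.
\]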
In doing so, we automatically determine the learning rate as the Lagrange multiplier for the constraint. In this setting, we transform the infinite-dimensional estimation problem into a two-dimensional convex program. This reformulation
further provides an analytic expression for the optimal density of parameters.
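As a sketch of what such an expression looks like when $D$ is the Kullback--Leibler divergence (an assumption made here for illustration), the optimizer is a Gibbs-type density with respect to the prior,
\[
\frac{d\mu^{*}}{d\pi}(\theta) \;=\; \frac{\exp\big( u(\theta)/\lambda \big)}{\mathbb{E}_{\pi}\big[ \exp( u/\lambda ) \big]},
\]
where the learning rate $\lambda$ is pinned down by the divergence constraint.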
In addition, we prove asymptotic consistency results for empirical
approximations of our optimal distributions. Throughout, we draw connections
between our estimation procedure and risk management. In fact, we demonstrate that our estimation procedure is equivalent to evaluating a risk measure.
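As an illustration of this equivalence in the single-constraint Kullback--Leibler case sketched above, Lagrangian duality yields the representation
\[
\sup_{D_{\mathrm{KL}}(\mu \,\|\, \pi) \le \varepsilon} \mathbb{E}_{\mu}\big[ u \big] \;=\; \inf_{\lambda > 0} \Big\{ \lambda \varepsilon \;+\; \lambda \log \mathbb{E}_{\pi}\big[ \exp( u/\lambda ) \big] \Big\},
\]
whose right-hand side is, up to the tolerance term, the entropic certainty equivalent familiar from risk theory.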
We test our procedure on an estimation problem with a misspecified model and prior distribution, and conclude with some extensions of our approach.