For a Bayesian, the task of defining the likelihood can be as perplexing as the
task of defining the prior. We focus on situations in which the parameter of
interest has been emancipated from the likelihood and is linked to the data
directly through a loss function. We survey existing work on both Bayesian
parametric inference with Gibbs posteriors and Bayesian non-parametric
inference. We then
highlight recent bootstrap computational approaches to approximating
loss-driven posteriors. In particular, we focus on implicit bootstrap
distributions defined through an underlying push-forward mapping. We
investigate iid samplers from approximate posteriors that pass random bootstrap
weights through a trained generative network (see the sketch below). After training the deep-learning
mapping, the simulation cost of such iid samplers is negligible. We compare the
performance of these deep bootstrap samplers with exact bootstrap as well as
MCMC on several examples (including support vector machines and quantile
regression). We also provide theoretical insights into bootstrap posteriors by
drawing upon connections to model mis-specification.
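
As a minimal illustration of the deep bootstrap sampler described above, the following sketch (assuming PyTorch, a linear model with squared-error loss, and hypothetical names; not the authors' implementation) trains a generator network to map random Dirichlet bootstrap weights to weighted-loss minimizers, after which fresh weights pushed through the trained network yield essentially free iid approximate posterior draws. A pinball (quantile regression) or hinge (SVM) loss could be substituted for the squared error.

# Minimal sketch of a deep bootstrap sampler (hypothetical setup, not the authors' code).
import torch

torch.manual_seed(0)
n, d = 200, 3
X = torch.randn(n, d)
theta_true = torch.tensor([1.0, -2.0, 0.5])
y = X @ theta_true + 0.3 * torch.randn(n)

# Generator: an n-vector of bootstrap weights in, a d-vector of parameters out.
G = torch.nn.Sequential(
    torch.nn.Linear(n, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, d),
)
opt = torch.optim.Adam(G.parameters(), lr=1e-3)

for step in range(3000):
    # Random Dirichlet(1, ..., 1) weights, rescaled to sum to n.
    w = torch.distributions.Dirichlet(torch.ones(n)).sample((64,)) * n
    theta_hat = G(w)                                    # (64, d) candidate minimizers
    resid = y.unsqueeze(1) - X @ theta_hat.T            # (n, 64) residuals
    objective = (w * (resid ** 2).T).sum(dim=1).mean()  # weighted loss, averaged over the batch
    opt.zero_grad()
    objective.backward()
    opt.step()

# After training, iid approximate posterior draws cost only a forward pass.
with torch.no_grad():
    w_new = torch.distributions.Dirichlet(torch.ones(n)).sample((1000,)) * n
    posterior_draws = G(w_new)                          # 1000 approximate posterior samples
print(posterior_draws.mean(dim=0), theta_true)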