Structure-based models in the molecular sciences can be highly sensitive to
input geometries, producing predictions with large variance under subtle
coordinate perturbations. We present an approach to mitigate this failure mode
by generating conformations that explicitly minimize uncertainty in a
predictive model. To achieve this, we compute differentiable estimates of
aleatoric \textit{and} epistemic uncertainties directly from learned
embeddings. We then train an optimizer that iteratively samples embeddings to
reduce these uncertainties according to their gradients. As our predictive
model is constructed as a variational autoencoder, the new embeddings can be
decoded to their corresponding inputs, which we call \textit{MoleCLUEs}, or
(molecular) counterfactual latent uncertainty explanations
\citep{antoran2020getting}. We present results of our algorithm for the task of
predicting drug properties with maximum confidence, as well as an analysis of
the differentiable structure simulations.

Comment: Submitted to the Differentiable Almost Everything Workshop, ICML 202