Negation and uncertainty modeling are long-standing tasks in natural language
processing. Linguistic theory postulates that expressions of negation and
uncertainty are semantically independent of each other and of the content they
modify. However, previous work on representation learning does not explicitly
model this independence. We therefore attempt to disentangle the
representations of negation, uncertainty, and content using a Variational
Autoencoder. We find that simply supervising the latent representations results
in good disentanglement, but auxiliary objectives based on adversarial learning
and mutual information minimization can provide additional disentanglement
gains.

Comment: Accepted to ACL 2022. 18 pages, 7 figures. Code and data are
available at https://github.com/jvasilakes/disentanglement-va
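As a concrete illustration of the supervised-latent setup described above, the following is a minimal PyTorch sketch: the encoder splits the latent vector into negation, uncertainty, and content partitions, and small linear heads supervise the first two. All names, dimensions, and architectural choices here are illustrative assumptions rather than the authors' implementation, and the adversarial and mutual-information auxiliary objectives are omitted.

# Minimal sketch of a VAE encoder with supervised latent partitions
# (assumed architecture; not the authors' code).
import torch
import torch.nn as nn

class DisentangledVAEEncoder(nn.Module):
    """Encode an input into three latent partitions:
    negation, uncertainty, and remaining content."""
    def __init__(self, input_dim=768, neg_dim=1, unc_dim=1, content_dim=62):
        super().__init__()
        self.latent_dims = [neg_dim, unc_dim, content_dim]
        total = sum(self.latent_dims)
        self.hidden = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, total)
        self.logvar = nn.Linear(256, total)
        # Linear heads that supervise the negation/uncertainty partitions.
        self.neg_head = nn.Linear(neg_dim, 1)
        self.unc_head = nn.Linear(unc_dim, 1)

    def forward(self, x):
        h = self.hidden(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z ~ N(mu, sigma^2).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        z_neg, z_unc, _ = torch.split(z, self.latent_dims, dim=-1)
        return z, mu, logvar, self.neg_head(z_neg), self.unc_head(z_unc)

def supervised_vae_loss(recon_loss, mu, logvar, neg_logit, unc_logit,
                        neg_label, unc_label, sup_weight=10.0):
    """Standard ELBO terms plus supervision on the labeled latents."""
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
    bce = nn.functional.binary_cross_entropy_with_logits
    sup = (bce(neg_logit.squeeze(-1), neg_label)
           + bce(unc_logit.squeeze(-1), unc_label))
    return recon_loss + kl + sup_weight * sup

# Example usage on dummy sentence embeddings:
# x = torch.randn(8, 768)
# neg = torch.randint(0, 2, (8,)).float()
# unc = torch.randint(0, 2, (8,)).float()
# z, mu, logvar, nl, ul = DisentangledVAEEncoder()(x)
# loss = supervised_vae_loss(torch.tensor(0.0), mu, logvar, nl, ul, neg, unc)

Keeping the negation and uncertainty partitions very small (one dimension each in this sketch) is an illustrative design choice: each is meant to carry only a binary semantic signal, leaving the content partition with the remaining capacity.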