98,063 research outputs found
Soft quantification in statistical relational learning
We present a new statistical relational learning (SRL) framework that supports reasoning with soft quantifiers, such as "most" and "a few." We define the syntax and the semantics of this language, which we call , and present a most probable explanation inference algorithm for it. To the best of our knowledge, is the first SRL framework that combines soft quantifiers with first-order logic rules for modelling uncertain relational data. Our experimental results for two real-world applications, link prediction in social trust networks and user profiling in social networks, demonstrate that the use of soft quantifiers not only allows for a natural and intuitive formulation of domain knowledge, but also improves inference accuracy.
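As an illustration of the idea, a soft quantifier such as "most" can be modelled as a function that maps the fraction of a group satisfying a property to a truth degree in [0, 1], rather than the all-or-nothing semantics of classical quantifiers. The sketch below is not the paper's exact semantics; the piecewise-linear ramp, its thresholds, and the toy trust scores are illustrative assumptions.

```python
def soft_quantifier(fraction, lower, upper):
    """Map a satisfied fraction to a truth degree in [0, 1]:
    0 below `lower`, 1 above `upper`, linear in between.
    (Illustrative ramp; thresholds are assumptions, not the paper's.)"""
    if fraction <= lower:
        return 0.0
    if fraction >= upper:
        return 1.0
    return (fraction - lower) / (upper - lower)

# "Most of Alice's friends trust Bob": hypothetical trust degrees in [0, 1].
trust = {"carol": 0.9, "dave": 0.8, "erin": 0.2, "frank": 0.7}

# Fraction of friends whose trust degree exceeds 0.5.
frac = sum(1 for v in trust.values() if v > 0.5) / len(trust)  # 3/4 = 0.75

most = soft_quantifier(frac, lower=0.3, upper=0.8)   # "most": ramp over [30%, 80%]
a_few = soft_quantifier(frac, lower=0.05, upper=0.3)  # "a few": ramp over [5%, 30%]
```

With 3 of 4 friends trusting Bob, "most friends trust Bob" holds to degree 0.9 and "a few friends trust Bob" is fully satisfied, so a rule body using either quantifier contributes a graded rather than binary truth value to inference.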
CUE: An Uncertainty Interpretation Framework for Text Classifiers Built on Pre-Trained Language Models
Text classifiers built on Pre-trained Language Models (PLMs) have achieved
remarkable progress in various tasks including sentiment analysis, natural
language inference, and question-answering. However, the occurrence of
uncertain predictions by these classifiers poses a challenge to their
reliability when deployed in practical applications. Much effort has been
devoted to designing various probes in order to understand what PLMs capture.
But few studies have delved into factors influencing PLM-based classifiers'
predictive uncertainty. In this paper, we propose a novel framework, called
CUE, which aims to interpret uncertainties inherent in the predictions of
PLM-based models. In particular, we first map PLM-encoded representations to a
latent space via a variational auto-encoder. We then generate text
representations by perturbing the latent space which causes fluctuation in
predictive uncertainty. By comparing the difference in predictive uncertainty
between the perturbed and the original text representations, we are able to
identify the latent dimensions responsible for uncertainty and subsequently
trace back to the input features that contribute to such uncertainty. Our
extensive experiments on four benchmark datasets encompassing linguistic
acceptability classification, emotion classification, and natural language
inference show the feasibility of our proposed framework. Our source code is
available at: https://github.com/lijiazheng99/CUE.
Comment: Accepted to UAI 202
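The perturb-and-compare step can be sketched in miniature: perturb one latent dimension at a time, re-score, and rank dimensions by how much the predictive entropy shifts. Everything below is a toy stand-in, not CUE's implementation: the random latent code replaces the VAE posterior, and a linear softmax head replaces the PLM classifier.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over class logits."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def entropy(p):
    """Predictive entropy of a probability vector (higher = more uncertain)."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def uncertainty_per_dim(z, W, b, eps=0.5):
    """Perturb each latent dimension of z by eps and record the absolute
    change in predictive entropy; large changes flag the dimensions most
    responsible for the classifier's uncertainty."""
    base = entropy(softmax(W @ z + b))
    deltas = []
    for i in range(len(z)):
        z_pert = z.copy()
        z_pert[i] += eps
        deltas.append(abs(entropy(softmax(W @ z_pert + b)) - base))
    return np.array(deltas)

rng = np.random.default_rng(0)
z = rng.normal(size=8)        # toy latent code (stand-in for the VAE encoding)
W = rng.normal(size=(3, 8))   # toy 3-class classifier head (stand-in for the PLM head)
b = np.zeros(3)

scores = uncertainty_per_dim(z, W, b)
top_dim = int(np.argmax(scores))  # latent dimension driving uncertainty the most
```

In the actual framework, the identified latent dimensions are then traced back through the encoder to the input tokens that contribute to the uncertainty; this sketch stops at the ranking step.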
Joint Modeling of Content and Discourse Relations in Dialogues
We present a joint modeling approach to identify salient discussion points in
spoken meetings as well as to label the discourse relations between speaker
turns. A variation of our model is also discussed when discourse relations are
treated as latent variables. Experimental results on two popular meeting
corpora show that our joint model can outperform state-of-the-art approaches
for both phrase-based content selection and discourse relation prediction
tasks. We also evaluate our model on predicting the consistency among team
members' understanding of their group decisions. Classifiers trained with
features constructed from our model achieve significantly better predictive
performance than the state-of-the-art.
Comment: Accepted by ACL 2017. 11 pages