ExClaim: Explainable Neural Claim Verification Using Rationalization
With the advent of deep learning, text generation language models have
improved dramatically and can now produce text at a quality comparable to human-written text. This
can lead to rampant misinformation because content can now be created cheaply
and distributed quickly. Automated claim verification methods exist to validate
claims, but they lack foundational data and often use mainstream news as
evidence sources that are strongly biased towards a specific agenda. Current
claim verification methods use deep neural network models and complex
algorithms to achieve high classification accuracy, but at the expense of model
explainability. The models are black boxes; their decision-making process
and the steps they take to arrive at a final prediction are hidden from the
user. We introduce a novel claim verification approach, ExClaim, that
aims to provide an explainable claim verification system with foundational
evidence. Inspired by the legal system, ExClaim leverages rationalization to
provide a verdict for the claim and justifies the verdict through a natural
language explanation (rationale) to describe the model's decision-making
process. ExClaim treats the verdict classification task as a question-answering
problem and achieves an F1 score of 0.93. It also provides subtask
explanations to justify the intermediate outcomes. Statistical and
Explainable AI (XAI) evaluations are conducted to ensure valid and trustworthy
outcomes. Ensuring claim verification systems are assured, rational, and
explainable is an essential step toward improving Human-AI trust and the
accessibility of black-box systems.
Comment: Published at 2022 IEEE 29th ST
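
The abstract above frames verdict classification as a question-answering problem and pairs the verdict with a natural language rationale. Below is a minimal sketch of that idea, assuming an off-the-shelf instruction-tuned seq2seq model (google/flan-t5-base) accessed through the Hugging Face transformers pipeline; the prompts, model choice, and label set are illustrative assumptions, not the paper's actual architecture.

```python
# Sketch: verdict classification posed as a question-answer problem,
# followed by a free-text rationale. Model, prompts, and labels are
# illustrative assumptions, not the ExClaim pipeline itself.
from transformers import pipeline

# A single instruction-tuned model plays both roles here (assumption).
model = pipeline("text2text-generation", model="google/flan-t5-base")

claim = "Vitamin C cures the common cold."
evidence = (
    "Controlled trials have found that regular vitamin C supplementation "
    "does not reduce the incidence of colds in the general population."
)

# Verdict classification posed as a question-answer problem.
verdict_prompt = (
    f"Evidence: {evidence}\n"
    f"Question: Is the claim '{claim}' supported by the evidence? "
    "Answer with 'supported', 'refuted', or 'not enough information'."
)
verdict = model(verdict_prompt, max_new_tokens=10)[0]["generated_text"]

# Natural language rationale justifying the verdict.
rationale_prompt = (
    f"Evidence: {evidence}\nClaim: {claim}\nVerdict: {verdict}\n"
    "Explain in one sentence why this verdict follows from the evidence."
)
rationale = model(rationale_prompt, max_new_tokens=60)[0]["generated_text"]

print("Verdict:", verdict)
print("Rationale:", rationale)
```
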
Rationalization for Explainable NLP: A Survey
Recent advances in deep learning have improved the performance of many
Natural Language Processing (NLP) tasks such as translation,
question-answering, and text classification. However, this improvement comes at
the expense of model explainability. Black-box models make it difficult to
understand the internals of a system and the process it takes to arrive at an
output. Numerical (LIME, Shapley) and visualization (saliency heatmap)
explainability techniques are helpful; however, they are insufficient because
they require specialized knowledge. These factors led rationalization to emerge
as a more accessible explainable technique in NLP. Rationalization justifies a
model's output by providing a natural language explanation (rationale). Recent
improvements in natural language generation have made rationalization an
attractive technique because it is intuitive, human-comprehensible, and
accessible to non-technical users. Since rationalization is a relatively new
field, its literature is disorganized. This survey, the first of its kind,
analyzes rationalization literature in NLP from 2007 to 2022 and presents the
available methods, explainability evaluations, code, and datasets used across
the various NLP tasks that employ rationalization. Further, a new subfield of
Explainable AI (XAI), Rational AI (RAI), is introduced to advance the current
state of rationalization. A discussion of observed insights, challenges, and
future directions is provided to point to promising research opportunities.
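
The survey contrasts numerical attribution techniques such as LIME and Shapley values with free-text rationales. The sketch below illustrates that contrast on a toy text classifier; the classifier, example texts, and the hand-written rationale at the end are assumptions for illustration only, not artifacts of the survey.

```python
# Sketch: a numerical explanation (LIME token weights) versus the kind of
# free-text rationale the survey discusses. The toy sentiment classifier
# and example texts are illustrative assumptions.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy classifier so the example runs without external data.
texts = [
    "the film was wonderful and moving",
    "a brilliant, heartfelt performance",
    "terrible plot and wooden acting",
    "boring, predictable, and far too long",
]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# LIME produces per-token importance weights: useful, but the reader must
# know how to interpret numerical attributions.
explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance(
    "a wonderful but slightly too long film",
    clf.predict_proba,
    num_features=4,
    num_samples=500,
)
print("LIME token weights:", exp.as_list())

# A rationale, by contrast, is a plain sentence a non-technical user can
# read directly (hand-written here purely for illustration):
print("Rationale: 'The review is positive because it calls the film "
      "wonderful, even though it criticises the length.'")
```
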