Automatic fact verification has received significant attention recently.
Contemporary automatic fact-checking systems focus on estimating truthfulness using numerical scores that are not human-interpretable. A human fact-checker, by contrast, generally follows several logical steps to verify a plausible-sounding claim and conclude whether it is truthful or a mere masquerade. Popular fact-checking websites follow a common structure for fact categorization, with verdicts such as half true, half false, false, pants on fire, etc. Therefore, it is necessary to have an aspect-based, explainable system (one that delineates which part(s) of a claim are true and which are false) that can assist human fact-checkers in asking relevant questions about a fact, each of which can then be validated separately to reach a final verdict.
In this paper, we propose a 5W framework (who, what, when, where, and why) for question-answer-based fact explainability. To that end, we present a semi-automatically generated dataset called FACTIFY-5WQA, which consists of 391,041 facts along with relevant 5W QAs; this dataset is the major contribution of this paper. A semantic role labeling (SRL) system is used to locate the 5Ws in each claim, and QA pairs are then generated from those spans using a masked language model.
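As an illustration of this extraction step, the sketch below maps PropBank-style SRL argument labels onto the 5W slots using AllenNLP's pretrained SRL predictor; the checkpoint and the label-to-W mapping are our assumptions for illustration, not the released FACTIFY-5WQA pipeline.

```python
# Minimal sketch: locate 5W spans in a claim via semantic role labeling.
# The checkpoint and the role-to-W mapping are assumptions, not the
# authors' released pipeline.
from allennlp.predictors.predictor import Predictor

# Pretrained BERT-based SRL model distributed with AllenNLP (assumed checkpoint).
SRL_MODEL = ("https://storage.googleapis.com/allennlp-public-models/"
             "structured-prediction-srl-bert.2020.12.15.tar.gz")

# PropBank argument labels mapped onto the 5W slots.
ROLE_TO_W = {
    "ARG0": "who",        # agent
    "ARG1": "what",       # patient / theme
    "ARGM-TMP": "when",   # temporal modifier
    "ARGM-LOC": "where",  # locative modifier
    "ARGM-CAU": "why",    # cause modifier
}

def extract_5ws(claim: str, predictor: Predictor) -> dict:
    """Return a {w: phrase} dict for each 5W slot found in the claim."""
    out = predictor.predict(sentence=claim)
    slots = {}
    for frame in out["verbs"]:                   # one frame per predicate
        for tag, word in zip(frame["tags"], out["words"]):
            if tag == "O":
                continue
            role = tag.split("-", 1)[1]          # strip the B-/I- BIO prefix
            w = ROLE_TO_W.get(role)
            if w is not None:
                slots.setdefault(w, []).append(word)
    return {w: " ".join(words) for w, words in slots.items()}

predictor = Predictor.from_path(SRL_MODEL)
print(extract_5ws("NASA launched the James Webb telescope in December 2021.",
                  predictor))
# e.g. {'who': 'NASA', 'what': 'the James Webb telescope', 'when': 'in December 2021'}
```

Each extracted span can then serve as the gold answer of a QA pair, with the corresponding question produced by the masked language model mentioned above.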
Finally, we report a QA system that automatically locates those answers in evidence documents, which can serve as a baseline for future research in the field.
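For instance, an off-the-shelf extractive reader can be run over a retrieved evidence passage; the model named below is an assumed stand-in for illustration, not necessarily the paper's exact baseline.

```python
# Minimal sketch: answer a generated 5W question from an evidence document.
# The SQuAD2-tuned model here is an assumption, not the paper's baseline.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

evidence = ("NASA launched the James Webb Space Telescope from Kourou, "
            "French Guiana, on 25 December 2021.")
question = "Who launched the James Webb Space Telescope?"  # a generated 5W question

pred = qa(question=question, context=evidence)
print(pred["answer"], round(pred["score"], 3))  # extracted span plus confidence
```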
Lastly, we propose a robust fact verification system that takes paraphrased claims and automatically validates them; one possible approach is sketched below. The dataset and the baseline model are available at https://github.com/ankuranii/acl-5W-QA
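One way to make validation robust to paraphrasing, shown below with an off-the-shelf NLI model (our illustration, not the paper's released system), is to score whether the evidence entails the claim, since entailment is largely preserved under paraphrase.

```python
# Minimal sketch: validate a possibly paraphrased claim against evidence with
# an NLI model; the checkpoint choice is an assumption for illustration.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

evidence = ("NASA launched the James Webb Space Telescope from Kourou, "
            "French Guiana, on 25 December 2021.")
claims = [
    "The James Webb telescope was sent into space by NASA in late 2021.",  # paraphrase
    "The James Webb telescope was launched by ESA in 2019.",               # refuted
]

for claim in claims:
    # Premise = evidence, hypothesis = claim; labels for this checkpoint are
    # CONTRADICTION / NEUTRAL / ENTAILMENT.
    pred = nli([{"text": evidence, "text_pair": claim}])[0]
    print(f"{pred['label']:>13} ({pred['score']:.3f})  {claim}")
```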