Whatcha lookin' at? DeepLIFTing BERT's Attention in Question Answering
There has been great success recently in tackling challenging NLP tasks with neural networks that are pre-trained on large text corpora and fine-tuned on task data. In this paper, we investigate one such model, BERT for question answering, with the aim of analyzing why it achieves significantly better results than other models. We run DeepLIFT on the model's predictions and test the outcomes to monitor shifts in the attention values for the input. We also cluster the results to analyze possible patterns resembling human reasoning, depending on the kind of input paragraph and question the model is trying to answer.

Comment: 6 pages, 13 figures
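To illustrate the attribution step the abstract refers to, here is a minimal sketch of DeepLIFT's rescale rule on a toy one-layer ReLU network. All weights, inputs, and the zero reference below are illustrative assumptions for the sketch, not the paper's setup (the actual analysis runs DeepLIFT on BERT):

```python
import numpy as np

# Toy network: f(x) = sum(relu(W @ x + b)).
# Hypothetical weights/inputs; only the rescale rule itself is the point.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))
b = rng.normal(size=4)

def forward(x):
    return np.maximum(W @ x + b, 0.0).sum()

x = rng.normal(size=6)   # actual input
ref = np.zeros(6)        # reference (baseline) input, assumed zero here

# Linear layer: contribution of each input j to each pre-activation unit i.
z_x, z_ref = W @ x + b, W @ ref + b
contrib_lin = W * (x - ref)          # shape (4, 6): W[i, j] * (x_j - ref_j)

# Rescale rule through ReLU: multiplier = delta(relu(z)) / delta(z) per unit.
dz = z_x - z_ref
mult = np.where(np.abs(dz) > 1e-12,
                (np.maximum(z_x, 0) - np.maximum(z_ref, 0)) / dz,
                0.0)

# Input attributions: rescaled linear contributions summed over units.
attributions = (mult[:, None] * contrib_lin).sum(axis=0)

# Completeness property: attributions sum to f(x) - f(ref).
print(np.allclose(attributions.sum(), forward(x) - forward(ref)))
```

The final check reflects DeepLIFT's completeness property: the per-input attributions account exactly for the difference between the output on the input and the output on the reference.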