Explanations of neural models aim to reveal a model's decision-making process
for its predictions. However, recent work shows that current explanation
methods, such as saliency maps or counterfactuals, can be misleading, as
they are prone to presenting reasons that are unfaithful to the model's inner
workings. This work explores the challenging question of evaluating the
faithfulness of natural language explanations (NLEs). To this end, we present
two tests. First, we propose a counterfactual input editor for inserting
reasons that lead to counterfactual predictions but are not reflected by the
NLEs. Second, we reconstruct inputs from the reasons stated in the generated
NLEs and check how often they lead to the same predictions. Our tests can
evaluate emerging NLE models, providing a fundamental tool in the development
of faithful NLEs.
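
To make the first test concrete, the following is a minimal sketch of the counterfactual-insertion idea, assuming a hypothetical `model` object exposing `predict(text)` and `explain(text)` methods (these names are illustrative, not the paper's actual interface). The test flags cases where an inserted word flips the prediction yet is never mentioned by the generated NLE.

```python
# Minimal sketch of the counterfactual-insertion test. `model` is a
# hypothetical object with predict(text) -> label and
# explain(text) -> NLE string; names are assumptions for illustration.

def counterfactual_insertion_test(model, text, candidate_words, position):
    """Insert candidate words into `text`; collect cases where the
    prediction flips but the generated NLE never mentions the word."""
    original_label = model.predict(text)
    tokens = text.split()
    unfaithful_cases = []
    for word in candidate_words:
        edited = " ".join(tokens[:position] + [word] + tokens[position:])
        new_label = model.predict(edited)
        if new_label != original_label:
            # The insertion changed the prediction, so a faithful NLE
            # for the edited input should reflect the inserted reason.
            nle = model.explain(edited)
            if word.lower() not in nle.lower():
                unfaithful_cases.append((word, new_label, nle))
    return unfaithful_cases
```

A simple substring check is used here for readability; a real implementation would likely use a softer match (e.g., lemmatized or embedding-based) to decide whether the NLE reflects the inserted reason.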
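The second test can be sketched in the same spirit. Here `extract_reasons` is a hypothetical helper that pulls the stated reasons out of an NLE (for instance via pattern matching); how extracted reasons are assembled back into a model input is task-specific and only stubbed as simple concatenation.

```python
# Minimal sketch of the input-reconstruction test, under the same
# assumed model interface as above. `extract_reasons` is a hypothetical,
# task-specific helper: NLE string -> list of reason phrases.

def reconstruction_test(model, texts, extract_reasons):
    """Rebuild each input from the reasons stated in its NLE and measure
    how often the prediction on the rebuilt input matches the original."""
    matches = 0
    for text in texts:
        label = model.predict(text)
        nle = model.explain(text)
        # Keep only the content the NLE actually cites as reasons.
        reconstructed = " ".join(extract_reasons(nle))
        if model.predict(reconstructed) == label:
            matches += 1
    return matches / len(texts)
```

A high match rate is consistent with (though does not prove) faithfulness: if the reasons stated in the NLE truly drove the prediction, an input built from those reasons alone should tend to yield the same prediction.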