Flexible Model Interpretability through Natural Language Model Editing

Abstract

Model interpretability and model editing are crucial goals in the age of large language models. Interestingly, there is a link between these two goals: if a method can systematically edit model behavior with respect to a human concept of interest, such an editing method can help make internal representations more interpretable by pointing toward the relevant representations and manipulating them systematically.

Comment: Extended Abstract -- work in progress. BlackboxNLP202
