Explanations have attracted increasing interest in the AI and
Machine Learning (ML) communities as a way to improve model transparency and
help users form a mental model of a trained ML model. However, explanations
can go beyond this one-way communication and serve as a mechanism for eliciting
user control: once users understand a model, they can provide feedback on it. The goal of this
paper is to present an overview of research where explanations are combined
with interactive capabilities as a means to learn new models from scratch and to
edit and debug existing ones. To this end, we draw a conceptual map of the
state-of-the-art, grouping relevant approaches based on their intended purpose
and on how they structure the interaction, highlighting similarities and
differences between them. We also discuss open research issues and outline
possible directions forward, with the hope of spurring further work on this
blooming topic.