We present the design of a spoken dialogue system that provides feedback to users of an autonomous system capable of learning different patterns associated with user actions. Our speech interface allows users to verbally refine these patterns, giving the system their feedback about the accuracy of the actions learnt. We focus on improving the naturalness of user interventions by using a stochastic language model and a rule-based language understanding module. The development of a state-based dialogue manager, which decides how to conduct each dialogue, together with the storage of contextual information from previous dialogue turns, allows the user to speak to the system in a highly natural way.