Developers of intelligent tutoring systems would like to know what human tutors do and which activities are responsible for their success in tutoring. We address these questions by comparing episodes where tutoring does and does not cause learning. Approximately 125 hr of tutorial dialog between expert human tutors and physics students are analyzed to see which features of the dialog are associated with learning. Successful learning appears to require that the student reach an impasse. When students were not at an impasse, learning was uncommon regardless of the tutorial explanations employed. On the other hand, once students were at an impasse, tutorial explanations were sometimes associated with learning. Moreover, different types of tutorial explanations were associated with learning different types of knowledge.

In principle, advances in Artificial Intelligence (AI) should make it easy to build tutoring systems that emulate human tutors. However, it is not yet clear what human tutors do. Although prior studies provide an initial picture, one goal of this research is to add more detail to that picture. We argue that it makes sense

Requests for reprints should be sent to Kurt VanLehn, Learning Research and Development Center