While automatic dialogue tutors hold great potential for making education more
personalized and accessible, research on such systems has been hampered by
a lack of sufficiently large and high-quality datasets. Collecting such
datasets remains challenging, as recording tutoring sessions raises privacy
concerns and crowdsourcing leads to insufficient data quality. To address this,
we propose a framework to generate such dialogues by pairing human teachers
with a Large Language Model (LLM) prompted to represent common student errors.
We describe how we use this framework to collect MathDial, a dataset of 3k
one-to-one teacher-student tutoring dialogues grounded in multi-step math
reasoning problems. While models like GPT-3 are good problem solvers, they fail
at tutoring because they generate factually incorrect feedback or are prone to
revealing solutions to students too early. To overcome this, we let teachers
provide learning opportunities to students by guiding them using various
scaffolding questions according to a taxonomy of teacher moves. We demonstrate
that MathDial and its extensive annotations can be used to finetune models to be
more effective tutors (and not just solvers). We confirm this by automatic and
human evaluation, notably in an interactive setting that measures the trade-off
between students' solving success and the tutor directly telling them the
solution. The dataset is released
publicly.

Comment: Jakub Macina, Nico Daheim, and Sankalan Pal Chowdhury contributed
equally to this work. Accepted at EMNLP 2023 Findings. Code and dataset
available at: https://github.com/eth-nlped/mathdial