
    Retrieval-augmented Generation to Improve Math Question-Answering: Trade-offs Between Groundedness and Human Preference

    For middle-school math students, interactive question-answering (QA) with tutors is an effective way to learn. The flexibility and emergent capabilities of generative large language models (LLMs) have led to a surge of interest in automating portions of the tutoring process, including interactive QA to support conceptual discussion of mathematics. However, LLM responses to math questions can be incorrect or mismatched to the educational context, such as being misaligned with a school's curriculum. One potential solution is retrieval-augmented generation (RAG), which incorporates a vetted external knowledge source in the LLM prompt to increase response quality. In this paper, we designed prompts that retrieve and use content from a high-quality open-source math textbook to generate responses to real student questions. We evaluate the efficacy of this RAG system for middle-school algebra and geometry QA by administering a multi-condition survey, finding that humans prefer responses generated using RAG, but not when responses are too grounded in the textbook content. We argue that while RAG is able to improve response quality, designers of math QA systems must consider trade-offs between generating responses preferred by students and responses closely matched to specific educational resources.

    Comment: 6 pages, presented at NeurIPS'23 Workshop on Generative AI for Education (GAIED)
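    As a rough illustration of the RAG setup the abstract describes, here is a minimal Python sketch: a TF-IDF retriever selects the most relevant passage from a hypothetical set of textbook sections, and the passage is spliced into the prompt sent to an LLM. The textbook snippets, question, and prompt template are invented for illustration; the paper's actual retriever, textbook, and prompt design are not specified here.

```python
# Minimal sketch of the retrieval step in a RAG math-QA pipeline.
# All content below (textbook sections, question, prompt) is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical vetted knowledge source: sections from an open-source textbook.
textbook_sections = [
    "The slope of a line measures its steepness: slope = rise / run.",
    "Two angles are supplementary when their measures sum to 180 degrees.",
    "A linear equation in one variable is solved by isolating the variable.",
]

def retrieve(question: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k corpus passages most similar to the question (TF-IDF cosine)."""
    vectorizer = TfidfVectorizer().fit(corpus + [question])
    doc_vecs = vectorizer.transform(corpus)
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vecs)[0]
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

question = "How do I find the slope of a line?"
context = "\n".join(retrieve(question, textbook_sections))

# How strongly the instruction ties the answer to the excerpt is exactly the
# groundedness-vs-preference trade-off the paper studies.
prompt = (
    "Use the following textbook excerpt to answer the student's question.\n"
    f"Excerpt: {context}\n"
    f"Question: {question}\n"
    "Answer in a way a middle-school student can follow."
)
print(prompt)  # this prompt would then be sent to an LLM
```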

    Optimizing Residual Plots for Likert Data

    Likert-type questions are widely used in social science surveys and produce discrete, repeated data. When plotting residuals from a linear model whose dependent variable is measured by a Likert-type question, researchers may have difficulty reading the plot, which always consists of parallel lines. Adding a small random disturbance (jitter) to the dependent variable before plotting can optimize the plot and solve this problem.
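    The fix the abstract describes amounts to jittering the discrete responses before plotting. A minimal Python sketch with simulated Likert data follows; the paper's data and exact procedure are not reproduced here, and the jitter width is an illustrative choice.

```python
# Sketch: parallel residual bands from a Likert-scale outcome, and the
# jittered version. Data are simulated for illustration only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)
# Simulate a latent response, then discretize it to a 1-5 Likert scale.
latent = 0.8 * x + rng.normal(size=n)
y = np.clip(np.round(latent + 3), 1, 5)

# Fit a simple linear model and compute residuals.
slope, intercept = np.polyfit(x, y, 1)
fitted = intercept + slope * x
residuals = y - fitted

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4), sharey=True)
ax1.scatter(fitted, residuals, s=10)
ax1.set_title("Raw residuals: parallel bands")

# Add small uniform noise (jitter) to break up the discrete bands.
jitter = rng.uniform(-0.5, 0.5, size=n)
ax2.scatter(fitted, residuals + jitter, s=10)
ax2.set_title("Jittered residuals")

for ax in (ax1, ax2):
    ax.set_xlabel("Fitted values")
ax1.set_ylabel("Residual")
plt.tight_layout()
plt.show()
```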

    A model of perception of ambient learning environment, perception of online learning environment and learning environment satisfaction: A survey instrument

    This research paper introduced a survey instrument for evaluating learning environment satisfaction based on students' experience of the home ambient environment and the online learning environment. The survey questions examined a range of ambient environment factors together with scales extracted from the Online Learning Environment Survey (OLLES) (Clayton, 2007) to systematically measure learning environment perception. The questionnaire was then tested in a field study. Exploratory and confirmatory factor analyses revealed a six-factor model of students' satisfaction with the learning environment, comprising the ambient environment, student-student interaction, student-interface relationships, student-tutor relationships, student-content relationships, and student reflection activities. Structural equation modeling explained the relationships among perception of the ambient learning environment, perception of the online learning environment, and learning environment satisfaction. The development and field testing of this survey tool enable evaluation of the online learning environment with the ambient environment taken into account, and support learning environment design and management.
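    For readers unfamiliar with the exploratory step mentioned above, here is a minimal Python sketch of an exploratory factor analysis on simulated Likert-style responses. The six-factor count mirrors the abstract, but the data, item count, and preprocessing are hypothetical and not taken from the paper.

```python
# Sketch: exploratory factor analysis on simulated survey responses.
# The six-factor structure follows the abstract; everything else is invented.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items, n_factors = 400, 24, 6

# Simulate item responses driven by 6 latent factors plus noise.
loadings = rng.normal(scale=0.8, size=(n_items, n_factors))
factors = rng.normal(size=(n_respondents, n_factors))
responses = factors @ loadings.T + rng.normal(scale=0.5, size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
fa.fit(responses)

# Inspect which items load on which factor to name the dimensions
# (e.g., ambient environment, student-student interaction, ...).
print(np.round(fa.components_.T, 2))  # items x factors loading matrix
```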