2 research outputs found

    Semantic Matching Evaluation: Optimizing Models for Agreement Between Humans and AutoTutor

    The goal of this thesis is to evaluate the answers that students give to questions asked by an intelligent tutoring system (ITS) on electronics, called ElectronixTutor. One learning resource of ElectronixTutor is AutoTutor, an instructional module that helps students learn by holding a conversation in natural language. The semantic relatedness between a student's verbal input and an ideal answer is a salient feature for assessing the student's performance in AutoTutor. Inaccurate assessment of these verbal contributions creates problems in AutoTutor's adaptation to the student. Therefore, this thesis evaluated the quality of semantic matches between student input and the expected responses in AutoTutor. AutoTutor evaluates semantic matches with a combination of Latent Semantic Analysis (LSA) and Regular Expressions (RegEx) when assessing student verbal input. Analyzing response-expectation pairings and comparing computer scoring with judge ratings allowed us to examine the agreement between humans and computers overall as well as on an item basis. Aggregate analyses of these data allowed us to observe the overall relative agreement between subject-matter experts and the AutoTutor system. Item analyses allowed us to observe variation between items and interactions between human and computer assessment conditions at various threshold levels (i.e., stringent, intermediate, lenient). As expected, RegEx and LSA scores showed a positive relationship, ρ(5202) = .471. Additionally, F1 agreement (the harmonic mean of precision and recall) between the computer and humans was similar to the agreement between humans. In some cases, computer-human F1 agreement came within .006 of human-human agreement.
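    The two quantitative ingredients of this abstract, an LSA-plus-RegEx match score and F1 agreement between raters, can be illustrated with a short sketch. This is not AutoTutor's actual pipeline: the corpus, regular expression, threshold values, and equal weighting below are illustrative assumptions, and scikit-learn's TF-IDF plus truncated SVD stands in for LSA.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Tiny illustrative corpus standing in for a set of expectation texts.
corpus = [
    "current flows from the positive terminal through the circuit",
    "voltage drops across the resistor are proportional to resistance",
    "the capacitor stores charge and opposes changes in voltage",
    "ohm's law relates voltage current and resistance",
]
expectation = "voltage equals current times resistance"
student_response = "the voltage is the current multiplied by the resistance"

# LSA-style component: TF-IDF followed by truncated SVD, then cosine similarity.
vectorizer = TfidfVectorizer().fit(corpus + [expectation, student_response])
svd = TruncatedSVD(n_components=2, random_state=0).fit(vectorizer.transform(corpus))

def lsa_vector(text):
    return svd.transform(vectorizer.transform([text]))

lsa_score = float(cosine_similarity(lsa_vector(student_response),
                                    lsa_vector(expectation))[0, 0])

# RegEx component: 1 if the expectation's pattern appears in the response.
regex_score = 1.0 if re.search(r"voltage.*current.*resistance", student_response) else 0.0

# Combine the two evidence sources and apply a threshold; the equal weighting
# and the threshold values are assumptions for illustration only.
combined = 0.5 * lsa_score + 0.5 * regex_score
thresholds = {"stringent": 0.8, "intermediate": 0.6, "lenient": 0.4}
computer_hit = {name: combined >= t for name, t in thresholds.items()}

def f1(computer, human):
    """F1 = harmonic mean of precision and recall over binary hit/miss codes."""
    tp = sum(c and h for c, h in zip(computer, human))
    precision = tp / max(sum(computer), 1)
    recall = tp / max(sum(human), 1)
    return 2 * precision * recall / max(precision + recall, 1e-9)

# Toy agreement check between computer decisions and one human judge.
print(lsa_score, regex_score, computer_hit)
print(f1([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))
```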

    TOWARDS BUILDING INTELLIGENT COLLABORATIVE PROBLEM SOLVING SYSTEMS

    Historically, Collaborative Problem Solving (CPS) systems focused mainly on Human Computer Interaction (HCI) issues, such as providing a good communication experience among participants, whereas Intelligent Tutoring Systems (ITS) address HCI issues while also leveraging Artificial Intelligence (AI) techniques in their intelligent agents. This dissertation seeks to narrow the gap between CPS systems and ITS by adopting methods used in ITS research. To move towards this goal, we focus on analyzing interactions with textual inputs in online learning systems such as DeepTutor and Virtual Internships (VI) to understand their semantics and underlying intents. To address the problem of assessing student-generated short text, this research first explores data-driven machine learning models coupled with expert-generated as well as general text analysis features. Second, it explores a method that utilizes knowledge graph embeddings to assess student answers in an ITS. Finally, it explores a method that uses only standard reference examples generated by a human teacher; such a method is useful when a system has just been deployed and no student data are yet available.

    To handle negation in tutorial dialogue, this research explored a Long Short-Term Memory (LSTM) based method. The advantage of this method is that it requires no human-engineered features yet performs comparably to models that use them.

    Another important analysis in this research is identifying the speech acts of conversation utterances from multiple players in VI. Among the various models compared, a neural network trained with noisy labels performed better at categorizing the speech acts of the utterances.

    The learners' professional skill development in VI is characterized by the distribution of SKIVE elements, the components of epistemic frames. Inferring the population distribution of these elements could help assess the learners' skill development. This research pursued a Markov method to infer the population distribution of SKIVE elements, namely the stationary distribution of the elements.

    While studying these aspects of interaction in our targeted learning systems, our broader motivation is to replace the human mentor or tutor with an intelligent agent. Introducing an intelligent agent in place of a human reduces cost and allows the system to scale.
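    The LSTM-based negation handling described above uses no hand-engineered features: the utterance is fed to the network as a sequence of word indices and the final hidden state is classified directly. Below is a minimal PyTorch sketch of that kind of classifier; the vocabulary size, embedding and hidden dimensions, and two-class label set are illustrative assumptions, not the dissertation's actual configuration.

```python
import torch
import torch.nn as nn

class NegationLSTM(nn.Module):
    """Sequence classifier: word indices -> embeddings -> LSTM -> class logits."""

    def __init__(self, vocab_size=5000, embed_dim=100, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        embedded = self.embed(token_ids)           # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)       # hidden: (1, batch, hidden_dim)
        return self.classifier(hidden.squeeze(0))  # (batch, num_classes) logits

# Toy forward pass: a batch of two padded utterances of length 6.
model = NegationLSTM()
batch = torch.randint(1, 5000, (2, 6))
logits = model(batch)
print(logits.shape)  # torch.Size([2, 2])
```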
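    The SKIVE analysis infers the population distribution of elements as the stationary distribution of a Markov chain. The stationary distribution π satisfies πP = π with its entries summing to 1, i.e. it is the left eigenvector of the transition matrix P for eigenvalue 1. A minimal sketch of that computation follows; the 3x3 transition matrix is an illustrative assumption, not data from Virtual Internships.

```python
import numpy as np

# Rows are current states, columns are next states; each row sums to 1.
P = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.4, 0.5],
])

# Left eigenvectors of P are right eigenvectors of P.T; pick the one whose
# eigenvalue is closest to 1 and normalize it to a probability distribution.
eigvals, eigvecs = np.linalg.eig(P.T)
stationary = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
stationary = stationary / stationary.sum()
print(stationary)  # long-run proportion of time spent in each state
```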