Assessing student contributions in a simulated human tutor with Latent Semantic Analysis
Inferring the Meaning of Verbs from Context
This paper describes a cross-disciplinary extension of previous work on inferring the meanings of unknown verbs from context. In earlier work, a computational model was developed to incrementally infer meanings while processing texts in an information extraction task setting. To explore the space of possible predictors that the system could use to infer verb meanings, we performed a statistical analysis of the corpus that had been used to test the computational system. Several syntactic and semantic features of the verbs were significantly diagnostic in determining verb meaning. We also evaluated human performance at inferring the verb in the same set of sentences. The overall number of correct predictions for humans was quite similar to that of the computational system, but humans had higher precision scores. The paper concludes with a discussion of the implications of these statistical and experimental findings for future computational work.
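For readers who want to experiment, the feature-based prediction examined here can be sketched with a standard classifier. In the minimal sketch below, the context features, their values, and the candidate verbs are hypothetical placeholders, not the study's actual coding scheme or data.

    # Hypothetical sketch: predicting a masked verb from syntactic/semantic context features.
    # Feature names, values, and verbs are illustrative only.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Each instance encodes features of a sentence whose verb has been removed.
    contexts = [
        {"subj_class": "company", "obj_class": "person",   "has_pp_to": False},
        {"subj_class": "person",  "obj_class": "document", "has_pp_to": True},
        {"subj_class": "company", "obj_class": "person",   "has_pp_to": False},
        {"subj_class": "person",  "obj_class": "document", "has_pp_to": True},
    ]
    verbs = ["hire", "send", "hire", "send"]

    model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(contexts, verbs)

    # Infer the most likely verb for a new, unseen context.
    print(model.predict([{"subj_class": "company", "obj_class": "person", "has_pp_to": True}]))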
Contextual Representation of Abstract Nouns: A Neural Network Approach
This paper explores the use of an artificial neural network to investigate the mental representation of abstract noun meanings. Unlike concrete nouns, abstract nouns refer to entities that cannot be pointed to. Cues to their meaning must therefore be in their context of use. It has frequently been shown that the meaning of a word varies with its contexts of use. It is more difficult, however, to identify which elements of context are relevant to a word's meaning. The present study demonstrates that a connectionist network can be used to examine this problem. A feedforward network learned to distinguish among seven abstract nouns based on characteristics of their verbal contexts in a corpus of randomly selected sentences. The results suggest that, for our sample, contextual representation is in principle sufficient to identify and distinguish abstract nouns and thus meets the functional requirements of concept representation.
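As a companion to the description above, the sketch below shows the general shape of such a setup: a small feedforward network (here scikit-learn's MLPClassifier) trained to pick the target noun from bag-of-words features of its sentence context. The nouns, contexts, and network size are invented placeholders, not the study's materials or architecture.

    # Minimal sketch: classify which abstract noun a sentence context belongs to.
    # All data and hyperparameters here are toy values for illustration.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.neural_network import MLPClassifier

    # Each "document" is a sentence context with the target noun removed.
    contexts = [
        "the committee finally reached a unanimous after long debate",
        "she struggled with the difficult before signing",
        "the treaty brought lasting to the troubled region",
        "negotiators hoped the agreement would secure a durable",
        "he never lost despite repeated setbacks",
        "the good news gave the family renewed",
    ]
    nouns = ["decision", "decision", "peace", "peace", "hope", "hope"]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(contexts)

    # A single hidden layer stands in for the feedforward architecture described above.
    net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    net.fit(X, nouns)

    print(net.predict(vectorizer.transform(["the jury announced its after deliberating"])))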
Using Latent Semantic Analysis to Assess Reader Strategies
We tested a computer-based procedure that uses latent semantic analysis (LSA) to assess reader strategies from verbal protocols. Students were given self-explanation reading training (SERT), which teaches strategies that facilitate self-explanation during reading, such as elaboration based on world knowledge and bridging between text sentences. During a computerized version of SERT practice, students read texts and typed a self-explanation into the computer after each sentence. Use of the SERT strategies during this practice was assessed by determining the extent to which students drew on the current sentence, the prior text, or world knowledge in their self-explanations. This assessment was made both by human judges and by LSA. The two were remarkably similar and indicated that students who were not complying with SERT tended to paraphrase the text sentences, whereas students who were compliant with SERT tended to explain the sentences in terms of world knowledge and information provided in the prior text. The similarity between human judgments and LSA indicates that LSA will be useful for assessing reading strategies in a Web-based version of SERT.
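For readers interested in the mechanics, the sketch below shows one way the similarity comparison could be implemented: project the self-explanation, the current sentence, and the prior text into a reduced semantic space and compare cosine similarities. The corpus, dimensionality, and texts are placeholders; the actual SERT/LSA system used a large pre-trained semantic space, not this toy setup.

    # Sketch: score a self-explanation against the current sentence vs. the prior text.
    # Toy corpus and 2 dimensions; real LSA spaces use large corpora and ~300 dimensions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.pipeline import make_pipeline
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = [
        "the heart pumps blood through the body",
        "blood carries oxygen to the cells",
        "oxygen enters the blood in the lungs",
        "cells need oxygen to produce energy",
        "the lungs exchange oxygen and carbon dioxide",
    ]
    lsa = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2)).fit(corpus)

    def strategy_profile(explanation, current_sentence, prior_text):
        """Cosine similarity of the explanation to the current sentence and to the prior text."""
        vecs = lsa.transform([explanation, current_sentence, prior_text])
        sim_current = cosine_similarity(vecs[:1], vecs[1:2])[0, 0]
        sim_prior = cosine_similarity(vecs[:1], vecs[2:3])[0, 0]
        # High similarity to the current sentence alone suggests paraphrasing;
        # relatively higher similarity to the prior text suggests bridging.
        return {"current": sim_current, "prior": sim_prior}

    print(strategy_profile(
        "so every cell gets oxygen because the blood the heart pumps carries it there",
        "blood carries oxygen to the cells",
        "the heart pumps blood through the body",
    ))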
Improving an intelligent tutor's comprehension of students with Latent Semantic Analysis
AutoTutor is an intelligent tutor that interacts smoothly with the student using natural language dialogue. This type of interaction allows us to extend the domains of tutoring: we are no longer restricted to areas like mathematics and science, where interaction with the student can be limited to typing in numbers or selecting options with a button. Others have tried to implement tutors that interact via natural language in the past, but because of the difficulty of understanding language in a wide domain, their best results came when they limited student answers to single words. Our research directly addresses the problem of understanding what the student naturally says. One solution to this problem that has recently emerged is Latent Semantic Analysis (LSA). LSA is a statistical, corpus-based natural language understanding technique that supports similarity comparisons between texts. The success of this technique has been described elsewhere [3, 5, for example]. […]
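To make the similarity-comparison idea concrete, here is a hedged sketch of how a student contribution might be scored against a set of expected answer aspects in a reduced semantic space. The background corpus, expected aspects, and the 0.7 threshold are invented for illustration and are not AutoTutor's actual space or parameters.

    # Sketch: fraction of expected answer aspects "covered" by the student's contribution.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.pipeline import make_pipeline
    from sklearn.metrics.pairwise import cosine_similarity

    background = [
        "force equals mass times acceleration",
        "an object in motion stays in motion unless acted on by a force",
        "acceleration is the rate of change of velocity",
        "gravity pulls objects toward the earth",
    ]
    space = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2)).fit(background)

    def coverage(student_answer, expected_aspects, threshold=0.7):
        """Return the fraction of expected aspects the student's answer matches."""
        vecs = space.transform([student_answer] + expected_aspects)
        sims = cosine_similarity(vecs[:1], vecs[1:])[0]
        return sum(s >= threshold for s in sims) / len(expected_aspects)

    print(coverage(
        "the object keeps moving unless some force acts on it",
        ["an object in motion stays in motion unless acted on by a force",
         "a net force causes an object to accelerate"],
    ))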
Approximate Natural Language Understanding for an Intelligent Tutor
Intelligent tutoring systems (ITSs) have a rich history of helping students in certain scientific domains, like geometry, chemistry, and programming. These domains are ideal for ITSs because they can be easily represented and because the interaction between the student and the tutor can be limited to entering a few simple numbers, symbols, or keywords. Students need help in other areas, but without the ability to robustly understand a student's input, ITSs in those areas are inherently limited. Recently, a technique called Latent Semantic Analysis has offered a corpus-based approach to understanding textual input that is not sensitive to errors in spelling or grammar; in fact, it pays no attention to word order at all. We are using this technique as part of an ITS that promotes learning through natural, human-like dialogue between the tutor and the student. This paper describes the tutoring system and Latent Semantic Analysis, and how they operate together. […]
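The claim that LSA pays no attention to word order is easy to demonstrate. The toy sketch below builds a small term-by-document space with a truncated SVD and shows that two sentences containing the same words in different orders receive identical vectors; the corpus and dimensionality are placeholders, not the tutor's actual semantic space.

    # Sketch: LSA-style representations are built from word counts, so word order is ignored.
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import TruncatedSVD

    docs = [
        "the tutor asks the student a question",
        "the student types an answer in natural language",
        "the tutor compares the answer to expected answers",
        "latent semantic analysis measures similarity between texts",
    ]
    counts = CountVectorizer().fit(docs)
    svd = TruncatedSVD(n_components=2).fit(counts.transform(docs))

    a = svd.transform(counts.transform(["the student answers the tutor"]))
    b = svd.transform(counts.transform(["the tutor answers the student"]))
    print(np.allclose(a, b))  # True: same words, different order, identical representation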