
    Computational Cognitive Models of Summarization Assessment Skills

    This paper presents a general computational cognitive model of the way a summary is assessed by teachers. It is based on models of two subprocesses: determining the importance of sentences and guessing the cognitive rules that the student may have used. All models are based on Latent Semantic Analysis (LSA), a computational model of the representation of the meaning of words and sentences. The models' performances are compared with data from an experiment conducted with 278 middle school students. The general model was implemented in a learning environment designed to help students write summaries.
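
    The sentence-importance subprocess can be sketched with a minimal LSA pipeline: build a term-by-sentence count matrix, project it into a low-rank latent space with a truncated SVD, and score each sentence by the cosine similarity of its latent vector to the text's centroid. This is a hypothetical illustration of the general technique, not the paper's implementation; the weighting scheme and dimensionality `k` are toy choices.

```python
import numpy as np

def sentence_importance(sentences, k=2):
    """Score sentences by cosine similarity of their LSA vectors
    to the centroid of the text (a common LSA importance heuristic)."""
    # Build a simple term-by-sentence count matrix.
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    M = np.zeros((len(vocab), len(sentences)))
    for j, s in enumerate(sentences):
        for w in s.lower().split():
            M[index[w], j] += 1
    # Truncated SVD projects sentences into the latent semantic space.
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    sent_vecs = (np.diag(S[:k]) @ Vt[:k]).T      # one row per sentence
    centroid = sent_vecs.mean(axis=0)
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return [cos(v, centroid) for v in sent_vecs]
```

    In a real system the matrix would be built from a large corpus with tf-idf weighting and a few hundred latent dimensions.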

    Contextual Effects on Metaphor Comprehension: Experiment and Simulation

    This paper presents a computational model of referential metaphor comprehension. This model is designed on top of Latent Semantic Analysis (LSA), a model of the representation of word and text meanings. Comprehending a referential metaphor consists in scanning the semantic neighbors of the metaphor in order to find words that are also semantically related to the context. The depth of that search is compared to the time it takes for humans to process a metaphor. In particular, we are interested in two independent variables: the nature of the reference (either a literal meaning or a figurative meaning) and the nature of the context (inductive or not inductive). We show that, for both humans and the model, first, metaphors take longer to process than literal meanings and, second, an inductive context can shorten the processing time.
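
    The neighbour-scanning process described above can be sketched as follows: neighbours of the metaphor word are examined in decreasing order of semantic similarity, and the number examined before one relates to the context serves as a proxy for processing time. The toy word vectors and the similarity threshold are assumptions for illustration, standing in for a real LSA space.

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def search_depth(word_vecs, metaphor, context, threshold=0.5):
    """Scan the semantic neighbours of `metaphor` in decreasing
    similarity order; return how many are examined before finding
    one sufficiently related to `context`."""
    target = word_vecs[metaphor]
    ctx = word_vecs[context]
    neighbours = sorted(
        (w for w in word_vecs if w != metaphor),
        key=lambda w: cos(word_vecs[w], target),
        reverse=True,
    )
    for depth, w in enumerate(neighbours, start=1):
        if cos(word_vecs[w], ctx) >= threshold:
            return depth
    return len(neighbours)
```

    A deeper search then corresponds to a longer predicted comprehension time, which is what the model compares to the human data.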

    A Computational Model of Children's Semantic Memory

    A computational model of children's semantic memory is built from the Latent Semantic Analysis (LSA) of a multisource child corpus. Three tests of the model are described, simulating a vocabulary test, an association test and a recall task. For each one, results from experiments with children are presented and compared to the model data. The fit between model and data is adequate, which suggests that this simulation of children's semantic memory can be used to simulate a variety of children's cognitive processes.

    Embedding-based Scientific Literature Discovery in a Text Editor Application

    Each claim in a research paper requires all relevant prior knowledge to be discovered, assimilated, and appropriately cited. However, despite the availability of powerful search engines and sophisticated text editing software, discovering relevant papers and integrating the knowledge into a manuscript remain complex tasks associated with high cognitive load. Defining comprehensive search queries requires strong motivation from authors, irrespective of their familiarity with the research field. Moreover, switching between independent applications for literature discovery, bibliography management, reading papers, and writing text burdens authors further and interrupts their creative process. Here, we present a web application that combines text editing and literature discovery in an interactive user interface. The application is equipped with a search engine that couples Boolean keyword filtering with nearest neighbor search over text embeddings, providing a discovery experience tuned to an author's manuscript and their interests. Our application aims to take a step towards more enjoyable and effortless academic writing. The demo of the application (https://SciEditorDemo2020.herokuapp.com/) and a short video tutorial (https://youtu.be/pkdVU60IcRc) are available online.
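
    The hybrid search the abstract describes, Boolean keyword filtering followed by nearest-neighbour ranking over text embeddings, can be sketched as below. The `hybrid_search` function and its toy vectors are hypothetical, not the application's actual API; a production system would use an approximate nearest-neighbour index rather than an exhaustive scan.

```python
import numpy as np

def hybrid_search(docs, doc_vecs, query_terms, query_vec, top_k=3):
    """Boolean keyword filter, then nearest-neighbour ranking over
    embeddings; returns indices of the top_k matching documents."""
    # Step 1: keep only documents containing every query term.
    keep = [i for i, d in enumerate(docs)
            if all(t.lower() in d.lower() for t in query_terms)]
    # Step 2: rank survivors by cosine similarity to the query embedding.
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    keep.sort(key=lambda i: cos(doc_vecs[i], query_vec), reverse=True)
    return keep[:top_k]
```

    Tuning the results to the author's manuscript then amounts to using an embedding of the manuscript (or the paragraph being written) as `query_vec`.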

    Learning strategies in interpreting text: From comprehension to illustration

    Learning strategies can be described as behaviours and thoughts a learner engages in during learning that are aimed at gaining knowledge. Learners are, to use Mayer’s (1996) constructivist definition, ‘sense makers’. It follows that, if learners are sense makers, then learning strategies are essentially the cognitive processes learners use when striving to make sense of newly presented material. This paper intends to demonstrate that such thoughts and behaviours can be made explicit and that students can co-ordinate the basic cognitive processes of selecting, organising and integrating. I will discuss two learning strategies that were developed during three cycles of an action research enquiry with a group of illustration students. While each cycle had its own particular structure and aims, the main task, that of translating a passage of expository text into an illustration, was a constant factor. The first learning strategy involved helping students develop ‘macropropositions’, personal understandings of the gist or essence of a text (Louwerse and Graesser, 2006; Armbruster, Anderson and Ostertag, 1987; Van Dijk and Kintsch, 1983). The second learning strategy used a form of induction categorised as analogical reasoning (Holyoak, 2005; Sloman and Lagnado, 2005). Both strategies were combined to illustrate the expository text extract. The data suggest that design students benefit from a structured approach to learning, in which thinking processes and approaches can be identified and made accessible for other learning situations. The research methodology is based on semi-structured interviews, questionnaires, developmental design (including student notes) and final design output. All student names used are pseudonyms. The text extract from ‘Through the Magic Door’, an essay by Sir Arthur Conan Doyle (1907), has been included as it provides context for the analysis outcomes, student comments and design outputs.
Keywords: Action Research; Illustration; Macrostructures; Analogical Reasoning; Learning Strategies

    Human assessments of document similarity

    Two studies are reported that examined the reliability of human assessments of document similarity and the association between human ratings and the results of n-gram automatic text analysis (ATA). Human interassessor reliability (IAR) was moderate to poor. However, correlations between average human ratings and n-gram solutions were strong. The average correlation between ATA and individual human solutions was greater than IAR. N-gram length influenced the strength of association, but optimum string length depended on the nature of the text (technical vs. nontechnical). We conclude that the methodology applied in previous studies may have led to overoptimistic views on human reliability, but that an optimal n-gram solution can provide a good approximation of the average human assessment of document similarity, a result that has important implications for future development of document visualization systems.
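
    A minimal version of an n-gram document comparison looks like this: each document is reduced to its set of character n-grams, and two documents are scored by the overlap of those sets. The choice of a Dice coefficient and of character (rather than word) n-grams is an assumption for illustration; the studies' exact ATA method may differ.

```python
def ngrams(text, n):
    """Set of character n-grams of `text`, lowercased."""
    t = text.lower()
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def ngram_similarity(a, b, n=3):
    """Dice coefficient over the character n-gram sets of two documents."""
    A, B = ngrams(a, n), ngrams(b, n)
    if not A and not B:
        return 1.0
    return 2 * len(A & B) / (len(A) + len(B))
```

    Varying `n` reproduces the string-length effect the abstract mentions: short n-grams match loosely, long ones demand near-verbatim overlap.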