
    THE "POWER" OF TEXT PRODUCTION ACTIVITY IN COLLABORATIVE MODELING : NINE RECOMMENDATIONS TO MAKE A COMPUTER SUPPORTED SITUATION WORK

    Language is not a direct translation of a speaker’s or writer’s knowledge or intentions. Various complex processes and strategies are involved in serving the needs of the audience: planning the message, describing some features of a model and not others, organizing an argument, adapting to the knowledge of the reader, meeting linguistic constraints, etc. As a consequence, when communicating about a model, or about knowledge, there is a complex interaction between knowledge and language. In this contribution, we address the role of language in modeling, in the specific case of collaboration over a distance via electronic exchange of written textual information. What problems and dimensions does a language user have to deal with when communicating a (mental) model? What is the relationship between the nature of the knowledge to be communicated and linguistic production? What is the relationship between representations and produced text? In what sense can interactive learning systems serve as mediators or as obstacles to these processes?

    Transcribing and annotating spoken language with EXMARaLDA

    This paper describes EXMARaLDA, an XML-based framework for the construction, dissemination and analysis of corpora of spoken language transcriptions. Starting from a prototypical example of a “partitur” (musical score) transcription, the EXMARaLDA “single timeline, multiple tiers” data model and format are presented alongside the EXMARaLDA Partitur-Editor, a tool for inputting and visualizing such data. This is followed by a discussion of the interaction of EXMARaLDA with other frameworks and tools that work with similar data models. Finally, this paper presents an extension of the “single timeline, multiple tiers” data model and describes its application within the EXMARaLDA system.
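
    The "single timeline, multiple tiers" idea is easy to illustrate in code. The Python sketch below (standard library only) builds a toy transcription in the spirit of EXMARaLDA's basic-transcription XML: one shared timeline of anchor points, and several tiers whose events each span a pair of anchors. The element and attribute names are approximations of the published format rather than a schema-exact document, and the speaker, tier ids and utterance content are invented for illustration.

        # Toy "single timeline, multiple tiers" transcription, loosely modelled
        # on EXMARaLDA's basic-transcription XML (names are approximations).
        import xml.etree.ElementTree as ET

        root = ET.Element("basic-transcription")
        body = ET.SubElement(root, "basic-body")

        # One shared timeline: ordered anchor points that all tiers refer to.
        timeline = ET.SubElement(body, "common-timeline")
        for i, t in enumerate([0.0, 1.2, 2.7]):
            ET.SubElement(timeline, "tli", id=f"T{i}", time=str(t))

        # Multiple tiers: each event spans two anchors, so annotations in
        # different tiers (verbal, gesture, translation, ...) stay aligned.
        tiers = {
            "SPK0-verbal": [("T0", "T1", "so what do we change"),
                            ("T1", "T2", "in the second model")],
            "SPK0-gesture": [("T1", "T2", "points at the screen")],
        }
        for tier_id, events in tiers.items():
            tier = ET.SubElement(body, "tier", id=tier_id, type="t")
            for start, end, text in events:
                event = ET.SubElement(tier, "event", start=start, end=end)
                event.text = text

        ET.indent(root)  # pretty-print; available from Python 3.9
        print(ET.tostring(root, encoding="unicode"))

    Because every event refers to the same anchor ids, annotations in different tiers stay aligned without duplicating timing information, which is the core of the data model described above.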

    Towards automated knowledge-based mapping between individual conceptualisations to empower personalisation of Geospatial Semantic Web

    The geospatial domain is characterised by vagueness, especially in the semantic disambiguation of its concepts, which makes defining a universally accepted geo-ontology an onerous task. This is compounded by the lack of appropriate methods and techniques by which individual semantic conceptualisations can be captured and compared with each other. With multiple user conceptualisations, efforts towards a reliable Geospatial Semantic Web therefore require personalisation, so that user diversity can be incorporated. The work presented in this paper is part of our ongoing research on applying commonsense reasoning to elicit and maintain models that represent users' conceptualisations. Such user models make it possible to take the users' perspective of the real world into account and will empower personalisation algorithms for the Semantic Web. Intelligent information processing over the Semantic Web can be achieved if different conceptualisations can be integrated in a semantic environment and mismatches between them can be outlined. In this paper, a formal approach for detecting mismatches between a user's and an expert's conceptual model is outlined, and the formalisation is used as the basis for algorithms that compare models defined in OWL. The algorithms are illustrated in a geographical domain using concepts from the SPACE ontology, developed as part of NASA's SWEET suite of ontologies for the Semantic Web, and are evaluated on test cases of possible user misconceptions.
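
    As a rough illustration of what comparing a user's and an expert's OWL model can look like in practice, the following Python sketch uses rdflib to contrast the asserted rdfs:subClassOf statements in two models and to flag assertions present in one but not the other. This is a minimal stand-in, not the formal approach or the algorithms developed in the paper; the file names (expert_space_ontology.owl, user_conceptualisation.owl) are hypothetical placeholders, and a real mismatch detector would also have to consider inferred axioms, properties and equivalences rather than asserted subclass links alone.

        # Minimal mismatch check between a user's and an expert's OWL model:
        # compare asserted rdfs:subClassOf pairs with rdflib. Not the paper's
        # algorithm; the file names below are hypothetical placeholders.
        from rdflib import Graph, RDFS, URIRef

        def subclass_pairs(graph):
            """Set of (subclass, superclass) URI pairs asserted in the graph."""
            return {
                (s, o)
                for s, o in graph.subject_objects(RDFS.subClassOf)
                if isinstance(s, URIRef) and isinstance(o, URIRef)
            }

        expert = Graph().parse("expert_space_ontology.owl", format="xml")
        user = Graph().parse("user_conceptualisation.owl", format="xml")

        expert_pairs = subclass_pairs(expert)
        user_pairs = subclass_pairs(user)

        # Candidate misconceptions: the user asserts a specialisation that the
        # expert model does not contain; gaps are the reverse difference.
        possible_misconceptions = user_pairs - expert_pairs
        possible_gaps = expert_pairs - user_pairs

        for sub, sup in sorted(possible_misconceptions):
            print(f"user asserts {sub} rdfs:subClassOf {sup}: not in expert model")
        print(f"{len(possible_gaps)} expert subclass links missing from the user model")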