
    Visualisation of interaction footprints for engagement and motivation in online communities: results of first interviews

    Glahn, C., Specht, M., & Koper, R. (2008). Visualisation of interaction footprints for engagement and motivation in online communities – results of first interviews. In M. Kalz, R. Koper, V. Hornung-Prähauser & M. Luckmann (Eds.), Proceedings of the 1st Workshop on Technology Support for Self-Organized Learners (TSSOL08) (pp. 29-43), June 2-3, 2008, Salzburg, Austria. Available at http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-349/glahn.pdf

    Contextualised and ubiquitous learning are relatively new research areas that combine the latest developments in ubiquitous and context-aware computing with educational approaches in order to support more situated and context-aware learning. The majority of activities in contextualised and ubiquitous learning focus on mobile scenarios, in order to identify the relation between educational paradigms and new classes of mobile applications and devices. However, context-aware learner support is not limited to mobile learning scenarios. The educational paradigms of situated learning and communities of practice highlight these needs for informal learning and for workplace learning. In this paper we analyse learner participation as a contextual dimension for adapting graphical indicators that engage and motivate learners to participate in and contribute to an open community of practice. For this purpose we analyse six interviews with selected participants of that community, comparing the reactions of learners who were provided with different indicators during their interactions with an online system. The results of these interviews illustrate the impact of small variations in the aggregation and visualisation of interaction footprints on the engagement of learners at different contribution levels.

    The work on this publication has been sponsored by the TENCompetence Integrated Project, funded by the European Commission's 6th Framework Programme, priority IST/Technology Enhanced Learning (contract 027087, http://www.tencompetence.org).
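    The abstract's core idea — aggregating a learner's interaction footprints into a graphical indicator — can be sketched as follows. This is an illustrative sketch only, not the authors' implementation; the function name, thresholds, and level semantics are all hypothetical.

```python
# Hypothetical sketch: aggregating interaction footprints into a discrete
# indicator level. Thresholds and level meanings are invented for illustration.

def indicator_level(interactions, thresholds=(5, 20, 50)):
    """Map a learner's logged interactions to a discrete indicator level.

    interactions: iterable of logged actions (views, posts, ratings).
    thresholds: hypothetical cut-offs separating the levels.
    """
    count = len(list(interactions))
    # Count how many thresholds the learner has passed.
    return sum(1 for t in thresholds if count >= t)

# A learner with 23 logged actions passes the first two thresholds.
print(indicator_level(range(23)))  # prints 2
```

    The paper's finding that "small variations in the aggregation" affect engagement would correspond here to changing the thresholds or the aggregation rule itself.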

    Pervasive Personal Information Spaces

    Each user’s electronic information-interaction uniquely matches their information behaviour, activities and work context. In the ubiquitous computing environment, this information-interaction and the underlying personal information are distributed across multiple personal devices. This thesis investigates the idea of Pervasive Personal Information Spaces for improving ubiquitous personal information-interaction. Pervasive Personal Information Spaces integrate information distributed across multiple personal devices to support anytime-anywhere access to an individual’s information. This information is then visualised through context-based, flexible views that are personalised through user activities, diverse annotations and spontaneous information associations. The Spaces model embodies the characteristics of Pervasive Personal Information Spaces, which emphasise integration of the user’s information space, automation and communication, and flexible views. The model forms the basis for InfoMesh, an example implementation developed for desktops, laptops and PDAs. The design of the system was supported by a tool developed during the research, called activity snaps, that captures realistic user activity information to aid the design and evaluation of interactive systems. User evaluation of InfoMesh elicited a positive response from participants for the ideas underlying Pervasive Personal Information Spaces, especially for carrying out work naturally and for visualising, interpreting and retrieving information according to personalised contexts, associations and annotations. The user studies supported the research hypothesis, revealing that context-based flexible views may indeed provide better contextual, ubiquitous access and visualisation of information than current-day systems.
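    The "context-based, flexible view" described above can be pictured as a filter over annotated information items drawn from several devices. This is a minimal sketch under assumed names — `InfoItem` and `view_by_context` are illustrative, not part of the InfoMesh API.

```python
# Hypothetical sketch of a context-based flexible view: items carry user
# annotations (contexts), and a "view" is simply a filter over the unified
# information space gathered from all devices.

from dataclasses import dataclass, field

@dataclass
class InfoItem:
    title: str
    device: str                             # device the item lives on
    tags: set = field(default_factory=set)  # user annotations / contexts

def view_by_context(items, context):
    """Return the items relevant to a given activity context."""
    return [i for i in items if context in i.tags]

items = [
    InfoItem("budget.xls", "laptop", {"work", "finance"}),
    InfoItem("trip-notes.txt", "pda", {"travel"}),
    InfoItem("report.doc", "desktop", {"work"}),
]
print([i.title for i in view_by_context(items, "work")])
```

    The point of the model is that the same unified space supports many such views — by activity, annotation, or association — rather than one fixed per-device hierarchy.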

    Touch-Based Ontology Browsing on Tablets and Surfaces

    Semantic technologies and Linked Data are increasingly adopted as core application modules, in many knowledge domains and involving various stakeholders: ontology engineers, software architects, doctors, employees, etc. Such diffusion calls for better access to models and data, which should be direct, mobile, visual and time-effective. While a relevant core of research efforts has investigated the problem of ontology visualization, exploring different paradigms, layouts, and interaction modalities, few approaches target mobile devices such as tablets and smartphones. Touch interaction, indeed, has the potential to dramatically improve the usability of Linked Data and of semantic-based solutions in real-world applications and mash-ups, by enabling direct and tactile interactions with the involved knowledge objects. In this paper, we move a step towards touch-based, mobile interfaces for semantic models by presenting an ontology browsing platform for Android devices. We exploit state-of-the-art touch-based interaction paradigms, e.g., pie menus, pinch-to-zoom, etc., to enable effective ontology browsing. Our research mainly focuses on interactions, while also supporting different visualization approaches thanks to a clear decoupling between model-level operations and visual representations. Presented results include the design and implementation of a working prototype application, as well as a first validation involving habitual users of semantic technologies. Results show a low learning curve and positive reactions to the proposed paradigms, which are perceived as both innovative and useful.
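    The decoupling the abstract mentions — model-level operations separated from visual representations — can be sketched as two independent layers: a model that answers structural queries and interchangeable views that render them. All class and method names below are hypothetical, not taken from the actual platform.

```python
# Illustrative sketch of decoupling model-level operations from the view.
# The model answers structural queries; any view (tree, graph, radial)
# can render the answers without the model knowing about pixels.

class OntologyModel:
    """Model layer: holds subclass relations, knows nothing of rendering."""
    def __init__(self, subclass_edges):
        self.edges = subclass_edges  # child -> parent, e.g. {"Dog": "Animal"}

    def children(self, cls):
        return sorted(c for c, parent in self.edges.items() if parent == cls)

class TreeView:
    """One possible view; a graph or radial view could plug in instead."""
    def render(self, model, root, depth=0):
        lines = ["  " * depth + root]
        for child in model.children(root):
            lines.extend(self.render(model, child, depth + 1))
        return lines

model = OntologyModel({"Dog": "Animal", "Cat": "Animal", "Animal": "Thing"})
print("\n".join(TreeView().render(model, "Thing")))
```

    Swapping `TreeView` for another renderer leaves `OntologyModel` untouched, which is the benefit the paper attributes to this separation.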