2 research outputs found

    A student-facing dashboard for supporting sensemaking about the brainstorm process at a multi-surface space

    © 2017 Association for Computing Machinery. All rights reserved. We developed a student-facing dashboard designed to support post-hoc sensemaking about participation and group effects in the context of collocated brainstorming. Grounded in foundations of small-group collaboration, open learner modelling, and brainstorming at large interactive displays, we designed a set of models built from behavioural data that can be visually presented to students. We validated the effectiveness of our dashboard in provoking group reflection by addressing two questions: (1) What do group members gain from studying measures of egalitarian contribution? and (2) What do group members gain from modelling how they sparked ideas off each other? We report on outcomes from a study with higher education students performing brainstorming tasks. We present evidence from (i) descriptive quantitative usage patterns and (ii) qualitative experiential accounts reported by the students. We conclude the paper with a discussion that can inform the community's design of collective reflection systems.

    The Development and Validation of the Technology-Supported Reflection Inventory

    Reflection is an often addressed design goal in Human-Computer Interaction (HCI) research, and an increasing number of artefacts for reflection have been developed in recent years. However, evaluating if and how an interactive technology helps a user reflect is still complex. This makes it difficult to compare artefacts (or prototypes) for reflection, impeding future design efforts. To address this issue, we developed the Technology-Supported Reflection Inventory (TSRI), a scale that evaluates how effectively a system supports reflection. We first created a list of possible scale items based on past work defining reflection. The items were then reviewed by experts. Next, we performed exploratory factor analysis to reduce the scale to its final length of nine items. Subsequently, we confirmed the test-retest validity of our instrument, as well as its construct validity. The TSRI enables researchers and practitioners to compare prototypes designed to support reflection. Comment: CHI Conference on Human Factors in Computing Systems (CHI '21), May 8-13, 2021, Yokohama, Japan.