92 research outputs found

    “This is the size of one meter”: Children’s bodily-material collaboration and understanding of scale around touchscreens


    “I cannot explain why I like this shape better than that shape”: Intercorporeality in collaborative learning


    Designing for Cross-Device Interactions

    Driven by technological advancements, we now own and operate an ever-growing number of digital devices, leading to an increasing amount of digital data that we produce, use, and maintain. However, while computing power and the availability of devices and data have increased substantially, many of the tasks we conduct with our devices are not well connected across multiple devices. We conduct our tasks sequentially instead of in parallel, while collaborative work across multiple devices is cumbersome to set up or simply not possible. To address these limitations, this thesis is concerned with cross-device computing. In particular, it aims to conceptualise, prototype, and study interactions in cross-device computing. This thesis contributes to the field of Human-Computer Interaction (HCI), and more specifically to the area of cross-device computing, in three ways. First, this work conceptualises previous work through a taxonomy of cross-device computing, resulting in an in-depth understanding of the field that identifies underexplored research areas and enables the transfer of key insights into the design of interaction techniques. Second, three case studies show how cross-device interactions can support curation work as well as augment users' existing devices for individual and collaborative work; these case studies incorporate novel interaction techniques for supporting cross-device work. Third, through studying cross-device interactions and group collaboration, this thesis provides insights into how researchers can understand and evaluate multi- and cross-device interactions for individual and collaborative work. We provide a visualization and querying tool that supports interaction analysis of spatial measures and video recordings to facilitate such evaluations of cross-device work.
Overall, the work in this thesis advances the field of cross-device computing with its taxonomy guiding research directions, novel interaction techniques and case studies demonstrating cross-device interactions for curation, and insights into and tools for the effective evaluation of cross-device systems.
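The abstract mentions a tool for querying spatial measures of cross-device work. As a minimal illustrative sketch only (not the thesis's actual tool), one such query, pairwise device proximity over time, might look like this; the log format and device names are assumptions:

```python
import math

# Hypothetical tracking log: (timestamp, device_id, x, y) positions in metres,
# as a motion-capture or indoor-positioning system might record them.
log = [
    (0.0, "tablet-A", 0.0, 0.0),
    (0.0, "tablet-B", 0.4, 0.3),
    (1.0, "tablet-A", 0.1, 0.0),
    (1.0, "tablet-B", 2.0, 1.5),
]

def pairwise_distance(log, dev_a, dev_b):
    """Distance between two devices at each timestamp where both were tracked."""
    pos = {}
    for t, dev, x, y in log:
        pos.setdefault(t, {})[dev] = (x, y)
    out = []
    for t in sorted(pos):
        if dev_a in pos[t] and dev_b in pos[t]:
            (xa, ya), (xb, yb) = pos[t][dev_a], pos[t][dev_b]
            out.append((t, math.hypot(xa - xb, ya - yb)))
    return out

distances = pairwise_distance(log, "tablet-A", "tablet-B")
# Flag moments when the devices are within arm's reach (threshold assumed):
close = [t for t, d in distances if d < 1.0]
```

Such spatial queries could then be cross-referenced with video timestamps to locate episodes of cross-device collaboration.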

    Supporting Reflection and Classroom Orchestration with Tangible Tabletops

    Tangible tabletop systems have been shown to enhance participation and engagement and to enable many exciting activities, particularly in the education domain. However, it remains unclear whether students really benefit from using them for tasks that require a high level of reflection. Moreover, most existing tangible tabletops are designed as stand-alone systems or devices. Increasingly, this design assumption is no longer sufficient, especially in realistic learning settings. Due to the technological evolution in schools, multiple activities, resources, and constraints in the classroom ecosystem are now involved in the learning process. The way teachers manage technology-enhanced classrooms and the involved activities and constraints in real time, also known as classroom orchestration, is crucial for the materialization of reflection and learning. This thesis explores how educational tangible tabletop systems affect reflection, how reflection and orchestration are related, and how we can support reflection and orchestration to improve learning. It presents the design, implementation, and evaluation of three tangible tabletop systems, the DockLamp, the TinkerLamp, and the TinkerLamp 2.0, in different learning contexts. Our experience with these systems, both inside and outside the laboratory, results in an insightful understanding of the impact of tangible tabletops on learning and the conditions for their effective use and deployment. These findings can benefit researchers and designers of learning environments that use tangible tabletop and similar interfaces.

    Discoverable Free Space Gesture Sets for Walk-Up-and-Use Interactions

    Advances in technology are fueling a movement toward ubiquity for beyond-the-desktop systems. Novel interaction modalities, such as free-space or full-body gestures, are becoming more common, as demonstrated by the rise of systems such as the Microsoft Kinect. However, much of the interaction design research for such systems is still focused on desktop and touch interactions. Current thinking on free-space gestures is limited in capability and imagination, and most gesture studies have not attempted to identify gestures appropriate for public walk-up-and-use applications. A walk-up-and-use display must be discoverable, such that first-time users can use the system without any training; flexible; and not fatiguing, especially in the case of longer-term interactions. One mechanism for defining gesture sets for walk-up-and-use interactions is a participatory design method called gesture elicitation. This method has been used to identify several user-generated gesture sets and has shown that user-generated sets are preferred by users over those defined by system designers. However, for these studies to be successfully implemented in walk-up-and-use applications, there is a need to understand which components of these gestures are semantically meaningful (i.e., do users distinguish between using their left and right hand, or are those semantically the same thing?). Thus, defining a standardized gesture vocabulary for coding, characterizing, and evaluating gestures is critical. This dissertation presents three gesture elicitation studies for walk-up-and-use displays that employ a novel gesture elicitation methodology, alongside a novel coding scheme for gesture elicitation data that focuses on the features most important to users' mental models. Generalizable design principles, based on the three studies, are then derived and presented (e.g., changes in speed are meaningful for scroll actions in walk-up-and-use displays but not for paging or selection).
The major contributions of this work are: (1) an elicitation methodology that aids users in overcoming biases from existing interaction modalities; (2) a better understanding of the gestural features that matter, i.e., those that capture the intent of the gestures; and (3) generalizable design principles for walk-up-and-use public displays.
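Gesture elicitation studies commonly summarize consensus per referent with an agreement score, the sum of squared proportions of participants proposing identical gestures. As a minimal sketch of that widely used metric (an assumption here, not necessarily the dissertation's own analysis), with invented proposal data:

```python
from collections import Counter

def agreement(proposals):
    """Agreement score for one referent: sum of (group size / total)^2
    over groups of identically coded gesture proposals."""
    total = len(proposals)
    return sum((n / total) ** 2 for n in Counter(proposals).values())

# Hypothetical coded proposals for a "scroll" referent from 10 participants.
scroll = ["swipe-up"] * 6 + ["swipe-down"] * 2 + ["point"] * 2
score = agreement(scroll)  # 0.6^2 + 0.2^2 + 0.2^2 = 0.44
```

A score near 1.0 means near-total consensus; values near 1/n mean every participant proposed something different, which is where a standardized coding vocabulary matters most.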

    Mixed Reality Interfaces for Augmented Text and Speech

    While technology plays a vital role in human communication, many significant challenges remain when using it in everyday life. Modern computing technologies, such as smartphones, offer convenient and swift access to information, facilitating tasks like reading documents or communicating with friends. However, these tools frequently lack adaptability, become distracting, consume excessive time, and impede interactions with people and contextual information. Furthermore, they often require numerous steps and a significant time investment to gather pertinent information. We want to explore an efficient process of contextual information gathering for mixed reality (MR) interfaces that provide information directly in the user’s view. This approach allows for a seamless and flexible transition between language and subsequent contextual references, without disrupting the flow of communication. ’Augmented language’ can be defined as the integration of language and communication with mixed reality to enhance, transform, or manipulate language-related aspects through various forms of linguistic augmentation (such as annotation/referencing, aiding social interactions, translation, localization, etc.). In this thesis, our broad objective is to explore mixed reality interfaces and their potential to enhance augmented language, particularly in the domains of speech and text. Our aim is to create interfaces that offer a more natural, generalizable, on-demand, and real-time experience of accessing contextually relevant information and that provide adaptive interactions. To address this broader objective, we systematically break it down into two instances of augmented language: first, enhancing augmented conversation to support on-the-fly, co-located in-person conversations using embedded references; and second, enhancing digital and physical documents using MR to provide on-demand reading support in the form of different summarization techniques.
To examine the effectiveness of these speech and text interfaces, we conducted two studies in which we asked participants to evaluate our system prototypes in different use cases. The exploratory usability study for the first exploration confirms that our system decreases distraction and friction in conversation compared to smartphone search while providing highly useful and relevant information. For the second project, we conducted an exploratory design workshop to identify categories of document enhancements. We later conducted a user study with a mixed-reality prototype, from which we highlight five broad themes to discuss the benefits of MR document enhancement.

    Tangible user interfaces to support collaborative learning


    Exploring The Impact Of Configuration And Mode Of Input On Group Dynamics In Computing

    Objectives: Large displays and new technologies for interacting with computers offer a rich area for the development of new tools to facilitate collaborative concept mapping activities. In this thesis, WiiConcept is described as a tool designed to allow the use of multiple WiiRemotes for the collaborative creation of concept maps, with and without gestures. A subsequent investigation of participants' use of the system considers the effect of single and multiple input streams when using the software with and without gestures, and the impact on group concept mapping process outcomes and interactions when using a large display. Methods: Data are presented from an exploratory study of twenty-two students who used the tool. Half of the pairs used two WiiRemotes, while the remainder used one WiiRemote. All pairs created one map without gestures and one map with gestures. Data about their maps, interactions, and responses to the tool were collected. Results: Analysis of coded transcripts indicates that one controller afforded higher levels of interaction, with the use of gestures also increasing the number of interactions seen. Additionally, there were significantly more interactions in the 'shows solidarity', 'gives orientation', and 'gives opinion' categories (defined by Bales' interaction process analysis) when using one controller as opposed to two. Furthermore, there were more interactions in the 'shows solidarity', 'tension release', 'gives orientation', and 'shows tension' categories when using gestures as opposed to not using gestures.
Additionally, there were no significant differences in the perceived dominance of individuals, as measured on the social dominance scales, for the amount of interaction displayed. However, there was a significant main effect of group conversational control score on the 'gives orientation' construct, with a higher number of interactions for low, mixed, and high scores of this type when dyads had one controller as opposed to two. There was also a significant interaction effect of group conversational control score on the 'shows solidarity' construct, with a higher number of interactions for all scores of this type when dyads had one controller as opposed to two. The results also indicate that for WiiConcept there was no difference between numbers of controllers in the detail of the maps, and that all users found the tool useful for the collaborative creation of concept maps. At the same time, engaging in disagreement was related to the number of nodes created, with disagreement leading to more nodes being created. Conclusions: Use of one controller afforded higher levels of interaction, with gestures also increasing the number of interactions seen. If a particular type of interaction is associated with more nodes, there might also be some argument for using only one controller with gestures enabled, to promote cognitive conflict within groups. All participants responded that the tool was relatively easy to use and engaging, which suggests that it could be integrated into collaborative concept mapping activities, allowing for greater collaborative knowledge building and sharing of knowledge due to the increased levels of interaction with one controller. As research has shown that concept mapping can be useful for promoting the understanding of complex ideas, the adoption of the WiiConcept tool as part of a small-group learning activity may lead to deeper levels of understanding.
Additionally, the results for gestures suggest that this mode of input does not affect the number of words, nodes, and edges created in a concept map. Further research over a longer period of time may see improvement with this form of interaction, with increased mastery of gestural movement leading to greater detail in concept mapping.
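The transcript analysis above rests on tallying Bales-style interaction categories per condition. A minimal sketch of such a tally, with the category labels taken from the abstract but the coded utterances invented for illustration:

```python
from collections import Counter

# Hypothetical coded utterances: (condition, Bales IPA category).
coded = [
    ("one-controller", "shows solidarity"),
    ("one-controller", "gives orientation"),
    ("one-controller", "gives opinion"),
    ("one-controller", "shows solidarity"),
    ("two-controller", "gives opinion"),
    ("two-controller", "gives orientation"),
]

def tally(coded):
    """Count interactions per condition and category."""
    counts = {}
    for condition, category in coded:
        counts.setdefault(condition, Counter())[category] += 1
    return counts

counts = tally(coded)
# Compare a category across conditions (Counter returns 0 for absent keys):
delta = (counts["one-controller"]["shows solidarity"]
         - counts["two-controller"]["shows solidarity"])
```

Per-category counts like these would then feed the significance tests reported in the Results section.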