4,061 research outputs found

    Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity

    Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet it is not clear how gestures perform this cognitive function. One hypothesis is that gesturing is a means of spatially indexing mental simulations, thereby reducing the need to visually project the mental simulation onto the visual presentation of the task. If that hypothesis is correct, fewer eye movements should be made when participants gesture during problem solving.

    Turn It This Way: Remote Gesturing in Video-Mediated Communication

    Collaborative physical tasks are working tasks characterised by workers 'in-the-field' who manipulate task artefacts under the guidance of a remote expert. Examples of such interactions include paramedics requiring field-surgery consults from hospital surgeons, soldiers requiring support from distant bomb-disposal experts, technicians inspecting and repairing machinery under the guidance of a chief engineer, or scientists examining artefacts with distributed colleagues. This thesis considers the design of technology to support such forms of distributed working. Early research in video-mediated communication (VMC) which sought to support such interactions presumed that video links between remote spaces would improve collaboration. The results of these studies, however, demonstrated that in such tasks audio-video links alone were unlikely to improve performance beyond that achievable by simpler audio-only links. In explanation of these observations, a reading of studies of situated collaborative working practices suggests that to support distributed object-focussed interactions it is beneficial not only to provide visual access to remote spaces but also to present within the task-space the gestural actions of remote collaborators. Remote Gestural Simulacra are advanced video-mediated communication tools that enable remote collaborators both to see and to observably point and gesture at, around and towards shared task artefacts located at another site. Technologies developed to support such activities have been critiqued: their design often fractures the interaction between the collaborating parties, restricting access to aspects of communication which are commonly used in co-present situations to coordinate interaction and ground understanding. This thesis specifically explores the design of remote gesture tools, seeking to understand how remote representations of gesture can be used during collaborative physical tasks. In a series of lab-based studies, the utility of remote gesturing is investigated both qualitatively, examining its collaborative function, and quantitatively, exploring its impact on facets of both task performance and collaborative language. The thesis also discusses how the configuration of remote gesture tools impacts their usability, empirically comparing various gesture tool designs. The thesis constructs and examines an argument that remote gesture tools should be designed from a 'mixed ecologies' perspective (theoretically alleviating the problems engendered by 'fractured ecologies'), in which collaborating partners are given access to the most salient and relevant features of communicative action that are utilised in face-to-face interaction, namely mutual and reciprocal awareness of commonly understood object-focussed actions (hand-based gestures) and mutual and reciprocal awareness of task-space perspectives. The thesis demonstrates experimental support for this position and concludes by discussing how the findings generated from the thesis research can be used to guide the design of future iterations of remote gesture tools, and presents directions for further research.

    On Inter-referential Awareness in Collaborative Augmented Reality

    For successful collaboration to occur, a workspace must support inter-referential awareness: the ability for one participant to refer to a set of artifacts in the environment, and for that reference to be correctly interpreted by others. While referring to objects in our everyday environment is a straightforward task, the non-tangible nature of digital artifacts presents new interaction challenges. Augmented reality (AR) is inextricably linked to the physical world, and it is natural to believe that the re-integration of physical artifacts into the workspace makes referencing tasks easier; however, we find that these environments combine the referencing challenges of several computing disciplines, which compound across scenarios. This dissertation presents our studies of this form of awareness in collaborative AR environments. It stems from our research in developing mixed reality environments for molecular modeling, where we explored spatial and multi-modal referencing techniques. To encapsulate the myriad factors found in collaborative AR, we present a generic theoretical framework and apply it to analyze this domain. Because referencing is a very human-centric activity, we present the results of an exploratory study which examines the behaviors of participants and how they generate references to physical and virtual content in co-located and remote scenarios; we found that participants refer to content using physical and virtual techniques, and that shared video is highly effective in disambiguating references in remote environments. Implementing user feedback from this study, a follow-up study explores how the environment can passively support referencing, where we discovered the role that virtual referencing plays during collaboration. A third study was conducted to better understand the effectiveness of giving and interpreting references using a virtual pointer; the results suggest the need for participants to be parallel with the arrow vector (strengthening the argument for shared viewpoints), as well as the importance of shadows in non-stereoscopic environments. Our contributions include a framework for analyzing the domain of inter-referential awareness, the development of novel referencing techniques, the presentation and analysis of our findings from multiple user studies, and a set of guidelines to help designers support this form of awareness.
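    The arrow-alignment finding lends itself to a concrete geometric check. The Python sketch below is purely illustrative and not code from the dissertation; the function and sample vectors are hypothetical. It computes the angle between a viewer's gaze direction and a virtual pointer's arrow vector, the quantity the third study suggests must stay small for a reference to be interpreted reliably.

        import numpy as np

        def alignment_angle(gaze_dir, arrow_dir):
            """Angle in degrees between the viewer's gaze direction and the
            pointer's arrow vector; 0 means the viewer looks straight along
            the arrow, 90 means the arrow is seen fully side-on."""
            g = gaze_dir / np.linalg.norm(gaze_dir)
            a = arrow_dir / np.linalg.norm(arrow_dir)
            cos_theta = np.clip(np.dot(g, a), -1.0, 1.0)
            return np.degrees(np.arccos(cos_theta))

        # A viewer nearly parallel with the arrow can interpret the reference;
        # a perpendicular viewer sees the arrow foreshortened and ambiguous.
        print(alignment_angle(np.array([0.0, 0.0, 1.0]),
                              np.array([0.1, 0.0, 1.0])))  # ~5.7 degrees
        print(alignment_angle(np.array([1.0, 0.0, 0.0]),
                              np.array([0.0, 0.0, 1.0])))  # 90.0 degrees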

    Meaningful Hand Gestures for Learning with Touch-based I.C.T.

    The role of technology in educational contexts is becoming increasingly ubiquitous, with very few students and teachers able to engage in classroom learning activities without using some form of Information and Communication Technology (ICT). Touch-based computing devices in particular, such as tablets and smartphones, provide an intuitive interface where control and manipulation of content is possible using hand and finger gestures such as taps, swipes and pinches. Whilst these touch-based technologies are being increasingly adopted for classroom use, little is known about how the use of such gestures can support learning. The purpose of this study was to investigate how finger gestures used on a touch-based device could support learning.
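    As a rough illustration of the gesture vocabulary mentioned above, the following Python sketch classifies a touch trace as a tap, swipe, or pinch from its start and end samples. It is a minimal toy under stated assumptions, not the study's instrument: the thresholds, data layout, and function names are invented for the example.

        from math import hypot

        # Each trace is a list of (x, y, t) samples for one finger.
        # Thresholds are illustrative, not values from the study.
        TAP_MAX_DIST = 10      # pixels of travel still counted as a tap
        TAP_MAX_TIME = 0.25    # seconds

        def classify(traces):
            """Classify one- or two-finger input as tap, swipe, or pinch."""
            if len(traces) == 2:
                # Pinch: the gap between the two fingers changes over time.
                (xa0, ya0, _), (xb0, yb0, _) = traces[0][0], traces[1][0]
                (xa1, ya1, _), (xb1, yb1, _) = traces[0][-1], traces[1][-1]
                start_gap = hypot(xa0 - xb0, ya0 - yb0)
                end_gap = hypot(xa1 - xb1, ya1 - yb1)
                return "pinch-in" if end_gap < start_gap else "pinch-out"
            x0, y0, t0 = traces[0][0]
            x1, y1, t1 = traces[0][-1]
            travel, duration = hypot(x1 - x0, y1 - y0), t1 - t0
            if travel < TAP_MAX_DIST and duration < TAP_MAX_TIME:
                return "tap"
            return "swipe"

        print(classify([[(100, 100, 0.0), (102, 101, 0.1)]]))   # tap
        print(classify([[(100, 100, 0.0), (300, 100, 0.2)]]))   # swipe
        print(classify([[(100, 100, 0.0), (140, 100, 0.3)],
                        [(200, 100, 0.0), (160, 100, 0.3)]]))   # pinch-in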

    Online Instructors’ Gestures For Euclidean Transformations

    The purpose of this case study was to explore the nature of instructors' gestures as they teach Euclidean transformations in a synchronous online setting, and to investigate how, if at all, the synchronous online setting impacted the instructors' intentionality and usage of gestures. The participants were two collegiate instructors teaching Euclidean transformations to pre-service elementary teachers. The instructors' gestures were captured in detail via two video cameras: one through the screen-capture software built into the online conference platform used to conduct the class, and a separate auxiliary camera to capture the gestures made outside the view of the screen-capture software. The perceived intentionality of the instructors' gestures was documented via an hour-long video-recorded interview after teaching the Euclidean transformation unit. The findings indicated that synchronous online instructors make representational gestures and pointing gestures while teaching Euclidean transformations: representational gestures served as a second form of communication for the students, while pointing gestures grounded the instructors' responses to student contributions within classroom materials. The findings further indicated that the combination of the instructors' gestures and language provided a more cohesive picture of the Euclidean transformation than the gestures alone. Additionally, the findings specified that synchronous online instructors believe the purpose of their gestures is for the benefit of their students as well as for themselves. Finally, the findings highlighted a connection between instructors having previously thought about the potential impact of gestures in the mathematics classroom and intentionally producing gestures: critically thinking about gestures within the mathematics classroom before teaching appeared to correspond with more intentional gestures while teaching. Based on these findings, there were three recommendations. The first was continued education on gesture as an avenue for communicating mathematical ideas; a professional development workshop may assist collegiate instructors in producing more intentional and mathematically precise gestures. The last two were for synchronous online instructors to utilize technology that affords students the opportunity to view all of their gestures, and to explicitly instruct their students to pay attention to their gestures. Knowing that students can view all of their movements and are specifically looking for gestures might prompt the instructors to gesture with more intentionality and precision.

    Multimodal feedback for mid-air gestures when driving

    Mid-air gestures in cars are being used by an increasing number of drivers on the road. Usability concerns mean good feedback is important, but a balance needs to be found between supporting interaction and reducing distraction in an already demanding environment. Visual feedback is most commonly used, but takes visual attention away from driving. This thesis investigates novel non-visual alternatives to support the driver during mid-air gesture interaction: Cutaneous Push, Peripheral Lights, and Ultrasound feedback. These modalities lack the expressive capabilities of high-resolution screens, but are intended to allow drivers to focus on the driving task. A new form of haptic feedback, Cutaneous Push, was defined: six solenoids were embedded along the rim of the steering wheel, creating three bumps under each palm. Studies 1, 2, and 3 investigated the efficacy of novel static and dynamic Cutaneous Push patterns, and their impact on driving performance. In simulated driving studies, the patterns achieved identification rates of up to 81.3% for static patterns and 73.5% for dynamic patterns, with 100% recognition of directional cues. Cutaneous Push notifications did not impact driving behaviour or workload and showed very high user acceptance. Cutaneous Push patterns have the potential to make driving safer by providing non-visual and instantaneous messages, for example to indicate an approaching cyclist or obstacle. Studies 4 and 5 looked at novel uni- and bimodal feedback combinations of Visual, Auditory, Cutaneous Push, and Peripheral Lights for mid-air gestures, and found that non-visual feedback modalities, especially when combined bimodally, offered just as much support for interaction without negatively affecting driving performance, visual attention, or cognitive demand. These results provide compelling support for using non-visual feedback from in-car systems, supporting input whilst letting drivers focus on driving. Studies 6 and 7 investigated the above bimodal combinations as well as uni- and bimodal Ultrasound feedback during the Lane Change Task, to assess the impact of gesturing and feedback modality on car control during more challenging driving. The results of Study 7 suggest that Visual and Ultrasound feedback are not appropriate for in-car usage unless combined multimodally; if Ultrasound is used unimodally, it is more useful in a binary scenario. Findings from Studies 5, 6, and 7 suggest that multimodal feedback significantly reduces eyes-off-the-road time compared to Visual feedback without compromising driving performance or perceived user workload, and thus can potentially reduce crash risk. Novel design recommendations for providing feedback during mid-air gesture interaction in cars are provided, informed by the experiment findings.
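    The pattern vocabulary described above can be pictured as timed frames of six on/off solenoid states, three under each palm. The Python sketch below is a hypothetical encoding, not the thesis's implementation; the actuator interface, frame timings, and pattern names are assumptions made for illustration.

        import time

        # Hypothetical encoding: solenoids 0-2 sit under the left palm,
        # 3-5 under the right. A static pattern is one frame; a dynamic
        # pattern is a timed sequence, e.g. a sweep that could signal an
        # obstacle approaching from one side.
        STATIC_ALERT = [(0.5, (1, 1, 1, 1, 1, 1))]   # all bumps raised 500 ms
        SWEEP_RIGHT = [(0.1, (1, 0, 0, 0, 0, 0)),
                       (0.1, (0, 1, 0, 0, 0, 0)),
                       (0.1, (0, 0, 1, 0, 0, 0)),
                       (0.1, (0, 0, 0, 1, 0, 0)),
                       (0.1, (0, 0, 0, 0, 1, 0)),
                       (0.1, (0, 0, 0, 0, 0, 1))]

        def play(pattern, set_solenoids=print):
            """Drive a pattern frame by frame. `set_solenoids` stands in
            for the real actuator driver, which the abstract does not
            specify."""
            for duration, frame in pattern:
                set_solenoids(frame)
                time.sleep(duration)
            set_solenoids((0, 0, 0, 0, 0, 0))  # retract all bumps

        play(SWEEP_RIGHT)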

    Exploring The Impact Of Configuration And Mode Of Input On Group Dynamics In Computing

    Objectives: Large displays and new technologies for interacting with computers offer a rich area for the development of new tools to facilitate collaborative concept-mapping activities. In this thesis, WiiConcept is described as a tool designed to allow the use of multiple WiiRemotes for the collaborative creation of concept maps, with and without gestures. Subsequent investigation of participants' use of the system considers the effect of single and multiple input streams when using the software with and without gestures, and the impact upon group concept-mapping process outcomes and interactions when using a large display. Methods: Data are presented from an exploratory study of twenty-two students who used the tool. Half of the pairs used two WiiRemotes, while the remainder used one. All pairs created one map without gestures and one map with gestures. Data about their maps, interactions, and responses to the tool were collected. Results: Analysis of coded transcripts indicates that one controller afforded higher levels of interaction, with the use of gestures also increasing the number of interactions seen. There were significantly more interactions of the 'shows solidarity', 'gives orientation', and 'gives opinion' categories (defined by Bales' interaction process analysis) when using one controller as opposed to two, and more interactions in the 'shows solidarity', 'tension release', 'gives orientation', and 'shows tension' categories when using gestures as opposed to not using them. There were no significant differences in the perceived dominance of individuals, as measured on the social dominance scales, for the amount of interaction displayed. However, there was a significant main effect of group conversational control score on the 'gives orientation' construct, with a higher number of interactions for low, mixed, and high scores of this type when dyads had one controller as opposed to two, and a significant interaction effect of group conversational control score on the 'shows solidarity' construct, with a higher number of interactions for all scores of this type when dyads had one controller as opposed to two. The results also indicate that for WiiConcept there was no difference between numbers of controllers in the detail of the maps, and that all users found the tool useful for the collaborative creation of concept maps. At the same time, engaging in disagreement was related to the number of nodes created, with disagreement leading to more nodes being created. Conclusions: Use of one controller afforded higher levels of interaction, with gestures also increasing the number of interactions seen. If a particular type of interaction is associated with more nodes, there might also be some argument for using only one controller with gestures enabled, to promote cognitive conflict within groups. All participants responded that the tool was relatively easy to use and engaging, which suggests that it could be integrated into collaborative concept-mapping activities, allowing for greater collaborative knowledge building and sharing of knowledge, owing to the increased levels of interaction with one controller. As research has shown that concept mapping can be useful for promoting the understanding of complex ideas, the adoption of the WiiConcept tool as part of a small-group learning activity may lead to deeper levels of understanding. Additionally, the results suggest that gestural input does not affect the number of words, nodes, and edges created in a concept map. Further research over a longer period may see improvement with this form of interaction, as increased mastery of gestural movement leads to greater detail in concept mapping.
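    The transcript analysis reported above reduces to counting coded utterances per condition and Bales category. The Python sketch below shows that tally in minimal form; the coded data are invented stand-ins for the study's real transcripts.

        from collections import Counter

        # Hypothetical coded transcript: one (condition, Bales category)
        # tuple per utterance.
        coded = [
            ("one controller", "shows solidarity"),
            ("one controller", "gives orientation"),
            ("one controller", "gives opinion"),
            ("two controllers", "gives orientation"),
            ("one controller", "shows solidarity"),
        ]

        def tally(coded_utterances):
            """Frequency of each (condition, category) pair, the raw counts
            behind comparisons such as one vs. two controllers."""
            for (cond, cat), n in sorted(Counter(coded_utterances).items()):
                print(f"{cond:16s} {cat:18s} {n}")

        tally(coded)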