    Subtle and Personal Workspace Requirements for Visual Search Tasks on Public Displays

    This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in the Proceedings of the 2017 International Conference on Human Factors in Computing Systems on the ACM Digital Library: http://dx.doi.org/10.1145/3025453.3025500

    We explore how users approach and define personal space on large, public displays. Our results show that users of public displays use one of two strategies for visual search tasks: minimizers create a small window and work up close to the display, and maximizers expand content to its full resolution and work at a distance. We show that these interaction styles match the predicted 'personal' and 'subtle' interaction zones, characterize typical width and height requirements for these interactions, and show that these requirements are independent of the on-screen content's dimensions. Finally, we suggest practical guidelines for defining workspaces during personal and subtle interaction on large, public displays.

    Behavioral patterns of individuals and groups during co-located collaboration on large, high-resolution displays

    Collaboration among multiple users on large screens leads to complicated behavior patterns and group dynamics. To gain a deeper understanding of collaboration on vertical, large, high-resolution screens, this dissertation builds on previous research and gains novel insights through new observational studies. Among other things, the collected results reveal new patterns of collaborative coupling, suggest that territorial behavior is less critical than shown in previous research, and demonstrate that workspace awareness can also negatively affect the effectiveness of individual users.

    Look together: using gaze for assisting co-located collaborative search

    Gaze information indicates a user's focus, which benefits remote collaboration tasks because distant users can see their partner's focus. In this paper, we apply gaze to co-located collaboration, where users' gaze locations are presented on the same display to support collaboration between partners. We integrated various types of gaze indicators into the user interface of a collaborative search system, and we conducted two user studies to understand how gaze enhances coordination and communication between co-located users. Our results show that gaze indeed enhances co-located collaboration, but with a trade-off between the visibility of gaze indicators and user distraction. Users acknowledged that seeing gaze indicators eases communication, because it lets them be aware of their partner's interests and attention. However, users can be reluctant to share their gaze information due to trust and privacy concerns, as gaze potentially divulges their interests.

    Co-Located Collaborative Visual Analytics around a Tabletop Display

    Designing to Support Workspace Awareness in Remote Collaboration using 2D Interactive Surfaces

    The increasing distribution of the global workforce is leading to collaborative work among remote coworkers. The emergence of such remote collaboration is essentially supported by technological advancements in screen-based devices, ranging from tablets or laptops to large displays. However, these devices, especially personal and mobile computers, still suffer from certain limitations caused by their form factors that hinder support for workspace awareness through non-verbal communication such as bodily gestures or gaze. This thesis thus aims to design novel interfaces and interaction techniques to improve remote coworkers' workspace awareness through such non-verbal cues using 2D interactive surfaces.

    The thesis starts off by exploring how visual cues support workspace awareness in facilitated brainstorming of hybrid teams of co-located and remote coworkers. Based on insights from this exploration, the thesis introduces three interfaces for mobile devices that help users maintain and convey workspace awareness with their coworkers. The first interface is a virtual environment that allows a remote person to effectively maintain awareness of their co-located collaborators' activities while interacting with the shared workspace. To help a person better express hand gestures in remote collaboration using a mobile device, the second interface presents a lightweight add-on for capturing hand images on and above the device's screen and overlaying them on collaborators' devices to improve their workspace awareness. The third interface strategically leverages the entire screen space of a conventional laptop to better convey a remote person's gaze to their co-located collaborators.

    Building on top of these three interfaces, the thesis envisions an interface that supports a person using a mobile device to effectively collaborate with remote coworkers working with a large display. Together, these interfaces demonstrate the possibility of innovating on commodity devices to offer richer non-verbal communication and better support workspace awareness in remote collaboration.

    Expanding the bounds of seated virtual workspaces

    Mixed Reality (MR), Augmented Reality (AR) and Virtual Reality (VR) headsets can improve upon existing physical multi-display environments by rendering large, ergonomic virtual display spaces whenever and wherever they are needed. However, given the physical and ergonomic limitations of neck movement, users may need assistance to view these display spaces comfortably. Through two studies, we developed new ways of minimising the physical effort and discomfort of viewing such display spaces. We first explored how the mapping between gaze angle and display position could be manipulated, helping users view wider display spaces than currently possible within an acceptable and comfortable range of neck movement. We then compared our implicit control of display position based on head orientation against explicit user control, finding significant benefits in terms of user preference, workload and comfort for implicit control. Our novel techniques create new opportunities for productive work by leveraging MR headsets to create interactive wide virtual workspaces with improved comfort and usability. These workspaces are flexible and can be used on-the-go, e.g., to improve remote working or make better use of commuter journeys.
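    The kind of gaze-angle manipulation described in this abstract can be illustrated with a minimal sketch: a constant gain amplifies head yaw so that a comfortable neck range covers a wider virtual display span. The gain value, comfort range, and function name below are illustrative assumptions, not the paper's actual mapping.

```python
def head_to_display_angle(head_yaw_deg: float,
                          comfort_limit_deg: float = 25.0,
                          display_limit_deg: float = 60.0) -> float:
    """Map a head yaw angle to a virtual display position.

    A constant gain stretches the comfortable neck range
    [-comfort_limit_deg, +comfort_limit_deg] onto the full display span
    [-display_limit_deg, +display_limit_deg], so wide content can be
    viewed with less neck movement. Limits here are hypothetical.
    """
    gain = display_limit_deg / comfort_limit_deg
    # Clamp head yaw so positions beyond the display edge are never requested.
    yaw = max(-comfort_limit_deg, min(comfort_limit_deg, head_yaw_deg))
    return yaw * gain
```

    Under these assumed limits, turning the head 25 degrees brings the 60-degree edge of the virtual display into view, which is the essence of viewing "wider display spaces than currently possible" within a comfortable range of neck movement.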

    Peripheral Notifications: Effects of Feature Combination and Task Interference

    Visual notifications are integral to interactive computing systems. The design of visual notifications entails two main considerations: first, visual notifications should be noticeable, as they usually aim to attract a user's attention to a location away from their main task; second, their noticeability has to be moderated to prevent user distraction and annoyance. Although notifications have been around for a long time on standard desktop environments, new computing environments such as large screens add new factors that have to be taken into account when designing notifications. With large displays, much of the content is in the user's visual periphery, where the human capacity to notice visual effects is diminished. One design strategy for enhancing noticeability is to combine visual features, such as motion and colour. Yet little is known about how feature combinations affect noticeability across the visual field, or about how peripheral noticeability changes when a user is working on an attention-demanding task. We addressed these questions by conducting two studies. We conducted a laboratory study that tested people's ability to detect popout targets that used combinations of three visual variables. After determining that the noticeability of feature combinations was approximately equal to the better of the individual features, we designed an experiment to investigate peripheral noticeability and distraction when a user is focusing on a primary task. Our results suggest that there can be interference between the demands of primary tasks and the visual features in the notifications. Furthermore, primary task performance is adversely affected by motion effects in the peripheral notifications. Our studies contribute to a better understanding of how visual features operate when used as peripheral notifications. We provide new insights, both in terms of combining features, and interactions with primary tasks.
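    The reported finding that a feature combination is about as noticeable as its best individual feature can be expressed as a simple "max rule". The sketch below is only an illustrative model of that finding; the detection scores and names are hypothetical, not data from the study.

```python
def combined_noticeability(feature_scores: dict) -> float:
    """Illustrative 'max rule': a notification combining several visual
    features (e.g. motion, colour, size) is modelled as being roughly as
    noticeable as its single most noticeable feature.
    """
    if not feature_scores:
        raise ValueError("need at least one feature score")
    return max(feature_scores.values())

# Hypothetical per-feature detection rates for one peripheral location:
scores = {"motion": 0.82, "colour": 0.64, "size": 0.55}
```

    Under this model, adding a weaker feature to a stronger one neither helps nor hurts noticeability, which is one practical reading of "approximately equal to the better of the individual features".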

    Directing Attention in an Augmented Reality Environment: An Attentional Tunneling Evaluation

    Augmented Reality applications use explicit cuing to support visual search. Explicit cues can help improve visual search performance, but they can also cause perceptual issues such as attentional tunneling. An experiment was conducted to evaluate the relationship between directing attention and attentional tunneling in a dual-task structure. One task was tracking a target in motion, and the other was detecting non-target elements. Three conditions were tested: a baseline without cuing the target, cuing the target with the average scene color, and cuing with a red cue. Different cue colors were used to vary the level of attentional tunneling. The results show that directing attention induced attentional tunneling only in the red condition, and that this effect is attributable to the color used for the cue.

    Multi-person Spatial Interaction in a Large Immersive Display Using Smartphones as Touchpads

    In this paper, we present a multi-user interaction interface for a large immersive space that supports simultaneous screen interactions by combining (1) user input via personal smartphones and Bluetooth microphones, (2) spatial tracking via an overhead array of Kinect sensors, and (3) WebSocket interfaces to a webpage running on the large screen. Users are automatically and dynamically assigned personal and shared screen sub-spaces based on their tracked location with respect to the screen, and use a webpage on their personal smartphones for touchpad-type input. We report user experiments using our interaction framework that involve image selection and placement tasks, with the ultimate goal of realizing display-wall environments as viable, interactive workspaces with natural multimodal interfaces.
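    The dynamic assignment of personal screen sub-spaces by tracked location can be sketched as follows. This is a minimal illustration under assumed names and dimensions; the paper's actual partitioning logic and thresholds are not specified here.

```python
def assign_subspaces(user_positions_m: dict,
                     screen_width_m: float,
                     personal_width_m: float = 1.0) -> dict:
    """Assign each tracked user a personal sub-space centred on their
    x-position in front of a display wall; everything outside the
    personal regions can be treated as shared space.

    Returns {user_id: (left_edge_m, right_edge_m)}, clamped to the screen.
    All widths and the 1 m default are hypothetical.
    """
    half = personal_width_m / 2.0
    spaces = {}
    for user_id, x in user_positions_m.items():
        left = max(0.0, x - half)
        right = min(screen_width_m, x + half)
        spaces[user_id] = (left, right)
    return spaces
```

    Re-running this assignment on every tracking update is what makes the sub-spaces "dynamic": as a user walks along the wall, their personal region follows their tracked position.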