4 research outputs found

    Reality Anchors: Bringing Cues from Reality to Increase Acceptance of Immersive Technologies in Transit

    Immersive technologies allow us to control and customise how we experience reality, but they are not widely used in transit due to safety, social acceptability, and comfort barriers. We propose that cues from reality can create reference points in virtuality, which we call Reality Anchors, and that these anchors will reduce these barriers. We used simulated public transportation journeys in a lab setting to explore Reality Anchors using speculative methods in two studies. Our first study (N=20) explored how elements of reality such as objects, furniture, and people could be used as anchors, demonstrating that visibility of other passengers and personal belongings could reduce barriers. Our second study (N=19) focused on journey types that emerged from the first study - self-managed vs. externally managed journeys - revealing that self-managed journeys increased the need for anchors. We conclude that Reality Anchors can reduce concerns associated with immersive technology use in transit, especially for self-managed journeys.

    Gaze Awareness in Computer-Mediated Collaborative Physical Tasks

    Human eyes play an important role in everyday social interactions. However, the cues provided by eye movements are often missing or difficult to interpret in computer-mediated remote collaboration. Motivated by the increasing availability of gaze-tracking devices in the consumer market and the growing need for improved remote-collaboration systems, this thesis evaluated the value of gaze awareness in a number of video-based remote-collaboration situations. The thesis comprises six publications that enhance our understanding of the everyday use of gaze-tracking technology and the value of shared gaze in remote collaboration in the physical world. The studies focused on a variety of collaborative scenarios involving different camera configurations (stationary, handheld, and head-mounted cameras), display setups (screen-based and projection displays), mobility requirements (stationary and mobile tasks), and task characteristics (pointing and procedural tasks). The aim was to understand the costs and benefits of shared gaze in video-based collaborative physical tasks. The findings suggest that gaze awareness is useful in remote collaboration on physical tasks. Shared gaze enables efficient communication of spatial information, helps viewers predict task-relevant intentions, and improves situational awareness. However, different contextual factors can influence the utility of shared gaze. Shared gaze was more useful when the collaborative task involved communicating pointing information rather than procedural information, when the collaborators were mutually aware of the shared gaze, and when gaze-tracking quality was accurate enough to meet the task requirements. In addition, the results suggest that the collaborators' roles can also affect the perceived utility of shared gaze. Methodologically, this thesis sets a precedent in shared-gaze research by reporting the objective gaze data quality achieved in the studies and provides tools for other researchers to objectively assess gaze data quality in different research phases. The findings of this thesis can contribute towards designing future remote-collaboration systems; towards the vision of pervasive gaze-based interaction; and towards improved validity, repeatability, and comparability of research involving gaze trackers.
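
    As an illustration of the shared-gaze concept described above, here is a minimal sketch of how a remote collaborator's gaze point might be overlaid on a video feed. This is not the thesis's implementation; the smoothing window, marker style, and normalised gaze input are illustrative assumptions.

        # Hypothetical shared-gaze overlay: draws the remote collaborator's
        # gaze point onto each frame of a video-based collaboration feed.
        import cv2
        import numpy as np
        from collections import deque

        class GazeOverlay:
            def __init__(self, smoothing_window=5):
                # Short moving average to damp gaze-tracker jitter (assumed value).
                self._recent = deque(maxlen=smoothing_window)

            def render(self, frame, gaze_norm):
                """frame: HxWx3 BGR array; gaze_norm: (x, y) in [0, 1],
                or None when tracking is lost."""
                if gaze_norm is not None:
                    self._recent.append(gaze_norm)
                if not self._recent:
                    return frame  # no gaze data yet; pass the frame through
                h, w = frame.shape[:2]
                x = int(np.mean([g[0] for g in self._recent]) * w)
                y = int(np.mean([g[1] for g in self._recent]) * h)
                # A ring rather than a filled dot, so the marker does not
                # occlude the object the collaborator is looking at.
                cv2.circle(frame, (x, y), 18, (0, 0, 255), 2)
                return frame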

    Haptic feedback to gaze events

    Eyes are our window to the world, and most of the input from the surrounding environment is captured through them. In Human-Computer Interaction, too, gaze-based interactions, in which the user's gaze acts as input to the system, are gaining prominence. Of late, portable and inexpensive eye-tracking devices have made inroads into the market, opening up wider possibilities for gaze-based interaction. However, research on feedback for gaze-based events is limited. This thesis studies vibrotactile feedback for gaze-based interactions. It presents a study conducted to evaluate different types of vibrotactile feedback and their role in responding to a gaze-based event. For this study, an experimental setup was designed in which, when the user fixated their gaze on a functional object, vibrotactile feedback was provided either on the wrist or on the glasses. The study seeks to answer questions such as how helpful vibrotactile feedback is in identifying functional objects, which type of vibrotactile feedback users prefer, and where users prefer to receive the feedback. The results indicate that vibrotactile feedback was an important factor in identifying functional objects. Preferences for the type of vibrotactile feedback were somewhat inconclusive, as they varied widely among users. Personal preference largely influenced the choice of location for receiving the feedback.
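
    To make the setup concrete, the following is a minimal sketch of a dwell-based trigger of the kind the study describes: when gaze rests on a functional object long enough, one vibrotactile pulse is sent. The dwell threshold and the gaze_samples, hit_test, and actuate names are hypothetical placeholders, not the study's actual software.

        # Hypothetical gaze-to-haptics loop: fire one vibrotactile pulse when
        # the gaze dwells on a functional object beyond a threshold.
        DWELL_THRESHOLD_S = 0.4  # assumed fixation duration before feedback

        def run_feedback_loop(gaze_samples, hit_test, actuate):
            """gaze_samples: iterable of (timestamp_s, x, y);
            hit_test(x, y) -> object id or None;
            actuate(object_id) sends one pulse to the wrist or the glasses."""
            current_obj, dwell_start, fired = None, None, False
            for t, x, y in gaze_samples:
                obj = hit_test(x, y)
                if obj != current_obj:
                    # Gaze moved to a new target (or away): restart the dwell timer.
                    current_obj, dwell_start, fired = obj, t, False
                elif obj is not None and not fired and t - dwell_start >= DWELL_THRESHOLD_S:
                    actuate(obj)  # one pulse per fixation, not continuous buzzing
                    fired = True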

    Understanding Mode and Modality Transfer in Unistroke Gesture Input

    Unistroke gestures are an attractive input method with an extensive research history, but one challenge with their usage is that the gestures are not always self-revealing. To help users attain expertise with these gestures, interaction designers often deploy a guided novice mode -- where users can rely on recognizing visual UI elements to perform a gestural command. Once users know the gesture and its associated command, they can perform it without guidance, thus relying on recall. The primary aim of my thesis is to obtain a comprehensive understanding of why, when, and how users transfer from guided modes or modalities to potentially more efficient, or novel, methods of interaction -- through symbolic-abstract unistroke gestures. The goal of my work is not only to study user behaviour as it moves from novice to more efficient interaction mechanisms, but also to expand the concept of intermodal transfer to different contexts. We garner this understanding by empirically evaluating three different use cases of mode and/or modality transitions. Leveraging marking menus, the first piece investigates whether or not designers should force expertise transfer by penalizing use of the guided mode, in an effort to encourage use of the recall mode. Second, we investigate how well users can transfer skills between modalities, particularly when it is impractical to present guidance in the target or recall modality. Lastly, we assess how well users' pre-existing spatial knowledge of an input method (the QWERTY keyboard layout) transfers to performance in a new modality. Applying lessons from these three assessments, we segment intermodal transfer into three possible characterizations -- beyond the traditional novice-to-expert contextualization. This is followed by a series of implications and potential areas of future exploration arising from our work.
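
    To ground the guided-novice-mode idea, here is a minimal sketch of the classic marking-menu pattern the abstract alludes to: the visual guide appears only after a press-and-hold delay, so users who stroke immediately stay in the recall mode. The delay value, eight-way layout, and function names are assumptions for illustration, not the thesis's software.

        # Hypothetical marking-menu dispatch: press-and-hold reveals the novice
        # guide; in either mode the stroke direction selects the command.
        GUIDE_DELAY_S = 0.3  # assumed hold time before the guide is shown

        def dispatch_stroke(hold_time_s, stroke_angle_deg, show_guide, commands):
            """hold_time_s: pause before the stroke began; stroke_angle_deg:
            stroke direction; commands: list of 8 commands, one per 45-degree
            sector."""
            if hold_time_s >= GUIDE_DELAY_S:
                show_guide()  # novice mode: display the radial menu as guidance
            # Recall and guided modes share the same selection mechanics, which
            # is what lets novice practice transfer to expert use.
            sector = int(((stroke_angle_deg % 360) + 22.5) // 45) % 8
            return commands[sector]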