
    Substitutional reality: using the physical environment to design virtual reality experiences

    Experiencing Virtual Reality in domestic and other uncontrolled settings is challenging due to the presence of physical objects and furniture that are not usually defined in the Virtual Environment. To address this challenge, we explore the concept of Substitutional Reality in the context of Virtual Reality: a class of Virtual Environments where every physical object surrounding a user is paired, with some degree of discrepancy, to a virtual counterpart. We present a model of potential substitutions and validate it in two user studies. In the first study, we investigated factors that affect participants' suspension of disbelief and ease of use. We systematically altered the virtual representation of a physical object and recorded responses from 20 participants. The second study investigated users' levels of engagement as the physical proxy for a virtual object varied. From the results, we derive a set of guidelines for the design of future Substitutional Reality experiences.

    Gaze-supported gaming: MAGIC techniques for first person shooters

    MAGIC (Manual And Gaze Input Cascaded) pointing techniques have been proposed as an efficient way for the eyes to support mouse input in pointing tasks. MAGIC Sense is one such technique, in which the cursor speed is modulated by how far the cursor is from the gaze point. In this work, we implemented continuous and discrete adaptations of MAGIC Sense for First-Person Shooter input. We evaluated the performance of these techniques in an experiment with 15 participants and found no significant gain in performance, but a moderate user preference for the discrete technique.
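
    The paper does not include an implementation, but the gain modulation is straightforward to sketch. Below is a minimal Python illustration of the continuous variant, assuming a linear ramp between a slow near-gaze gain and a fast far-from-gaze gain; the constants and the ramp shape are illustrative, not the paper's. A discrete adaptation would instead switch between the two gain levels at a fixed distance threshold.

        import math

        # Hypothetical constants; the paper's actual mapping is not given here.
        SLOW_GAIN = 0.5      # precise control near the gaze point
        FAST_GAIN = 3.0      # rapid travel far from the gaze point
        RAMP_RADIUS = 400.0  # px over which the gain ramps up (continuous variant)

        def magic_sense_gain(cursor, gaze):
            # Gain grows with the cursor's distance from the gaze point.
            dist = math.hypot(cursor[0] - gaze[0], cursor[1] - gaze[1])
            t = min(dist / RAMP_RADIUS, 1.0)  # 0 at the gaze point, 1 far away
            return SLOW_GAIN + t * (FAST_GAIN - SLOW_GAIN)

        def move_cursor(cursor, mouse_delta, gaze):
            g = magic_sense_gain(cursor, gaze)
            return (cursor[0] + mouse_delta[0] * g, cursor[1] + mouse_delta[1] * g)

        # A 10 px mouse motion covers more ground far from the gaze point...
        print(move_cursor((0.0, 0.0), (10.0, 0.0), gaze=(500.0, 0.0)))
        # ...than the same motion near it.
        print(move_cursor((495.0, 0.0), (10.0, 0.0), gaze=(500.0, 0.0)))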

    An empirical characterization of touch-gesture input force on mobile devices

    Designers of force-sensitive user interfaces lack a ground-truth characterization of input force during common touch gestures (zooming, panning, tapping, and rotating). This paper provides such a characterization, first by deriving baseline force profiles in a tightly controlled user study, then by examining how these profiles vary across conditions such as form factor (mobile phone vs. tablet), interaction position (walking vs. sitting), and urgency (timed vs. untimed tasks). We conducted two user studies with 14 and 24 participants respectively and report: (1) force profile graphs that depict the force variations of common touch gestures, (2) the effect of the different conditions on exerted force and gesture completion time, and (3) the most common forces users apply and the time taken to complete the gestures. This characterization is intended to aid the design of interactive devices that integrate force input with common touch gestures in different conditions.
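
    The force profiles mentioned above can be thought of as per-gesture force curves averaged over many trials. The Python sketch below shows one plausible way to compute such a profile from raw force samples; it assumes trials of varying duration are resampled onto a normalised time axis before averaging, which is an assumption rather than the paper's published procedure.

        import numpy as np

        def mean_force_profile(trials, samples=100):
            # trials: list of 1-D arrays of force readings, one per gesture
            # execution, each sampled at a fixed rate but of different length.
            # Resample every trial onto a common normalised time axis [0, 1],
            # then average across trials to get the profile and its spread.
            grid = np.linspace(0.0, 1.0, samples)
            stack = np.vstack([np.interp(grid, np.linspace(0.0, 1.0, len(t)), t)
                               for t in trials])
            return grid, stack.mean(axis=0), stack.std(axis=0)

        # Synthetic tap-like pulses of different durations, for illustration only.
        rng = np.random.default_rng(0)
        trials = [np.sin(np.linspace(0.0, np.pi, n)) + rng.normal(0.0, 0.02, n)
                  for n in (40, 55, 62)]
        time_axis, mean_force, force_sd = mean_force_profile(trials)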

    Interactions under the desk: a characterisation of foot movements for input in a seated position

    We characterise foot movements as input for seated users. First, we built unconstrained foot pointing performance models in a seated desktop setting using ISO 9241-9-compliant Fitts’s Law tasks. Second, we evaluated the effect of foot and direction in one-dimensional tasks, finding no effect of which foot was used, but a significant effect of the direction in which targets are distributed. Third, we compared one foot against two feet for controlling two variables, finding that while one foot is better suited to tasks whose spatial representation matches its movement, there is little difference between the techniques when it does not. Fourth, we analysed the overhead of introducing a feet-controlled variable into a mouse task, finding the feet comparable to the scroll wheel. Our results show that the feet are an effective means of enhancing interaction with desktop systems, and from them we derive a series of design guidelines.
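
    For readers unfamiliar with the ISO 9241-9 methodology, the pointing models mentioned above are fits of Fitts’s Law, MT = a + b * ID with ID = log2(D/W + 1). The Python sketch below fits these coefficients by least squares; the numbers in the example are illustrative, not the study's data.

        import numpy as np

        def index_of_difficulty(distance, width):
            # Shannon formulation used by ISO 9241-9: ID = log2(D/W + 1), in bits.
            return np.log2(np.asarray(distance) / np.asarray(width) + 1.0)

        def fit_fitts_law(distances, widths, movement_times):
            # Least-squares fit of MT = a + b * ID; returns (a, b).
            ids = index_of_difficulty(distances, widths)
            b, a = np.polyfit(ids, np.asarray(movement_times), 1)
            return a, b

        # Illustrative numbers only: amplitudes and widths in px, times in s.
        a, b = fit_fitts_law([200, 400, 800], [40, 40, 40], [0.90, 1.15, 1.40])
        print(f"MT = {a:.2f} + {b:.2f} * ID")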

    Feet movement in desktop 3D interaction

    In this paper, we present an exploratory study of the use of foot movements to support fundamental 3D interaction tasks. Depth cameras such as the Microsoft Kinect can now track users' motion unobtrusively, making it possible to draw on the spatial context of gestures and movements to control 3D UIs. Whereas multitouch and mid-air hand gestures have been explored extensively for this purpose, little work has looked at how the same can be accomplished with the feet. We describe the interaction space of foot movements in a seated position and propose applications of such techniques to three-dimensional navigation, selection, manipulation, and system control tasks in a 3D modelling context. We explore these applications in a user study and discuss the advantages and disadvantages of this modality for 3D UIs.

    An empirical investigation of gaze selection in mid-air gestural 3D manipulation

    In this work, we investigate gaze selection in the context of mid-air hand gestural manipulation of 3D rigid bodies on monoscopic displays. We present the results of a user study with 12 participants in which we compared the performance of Gaze, a Raycasting technique (2D Cursor), and a Virtual Hand technique (3D Cursor) for selecting objects in two 3D mid-air interaction tasks. We also compared selection confirmation times for Gaze when selection is followed by manipulation and when it is not. Our results show that gaze selection is faster than, and preferred over, the 2D and 3D mid-air-controlled cursors, and is particularly well suited to tasks in which users constantly switch between several objects during manipulation. Further, selection confirmation times are longer when selection is followed by manipulation than when it is not.

    AmbiGaze: direct control of ambient devices by gaze

    Eye tracking offers many opportunities for direct device control in smart environments, but issues such as the need for calibration and the Midas touch problem make it impractical. In this paper, we propose AmbiGaze, a smart environment that animates its targets so that users can control devices by gaze alone, through smooth pursuit tracking. We propose a design space of ways to expose functionality through movement and illustrate the concept with four prototypes. We evaluated the system in a user study and found that AmbiGaze enables robust gaze-only interaction with many devices, from multiple positions in the environment, in a spontaneous and comfortable manner.
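
    AmbiGaze's smooth pursuit selection follows the general pursuits approach: rather than computing where the user looks on a screen, the system correlates the raw gaze trajectory with the trajectory of each animated target and selects the best match, which is why no calibration is needed. The Python sketch below illustrates that matching step; the window length, correlation threshold, and target names are assumptions for illustration, not the paper's parameters.

        import numpy as np

        def pursuit_match(gaze, targets, threshold=0.8):
            # gaze: (n, 2) array of recent gaze samples; targets: dict mapping
            # a device control name to the (n, 2) trajectory its animated
            # target traced over the same time window. Returns the control the
            # eyes are following, or None if nothing correlates strongly enough.
            best_name, best_r = None, threshold
            for name, traj in targets.items():
                rx = np.corrcoef(gaze[:, 0], traj[:, 0])[0, 1]
                ry = np.corrcoef(gaze[:, 1], traj[:, 1])[0, 1]
                r = (rx + ry) / 2.0
                if r > best_r:
                    best_name, best_r = name, r
            return best_name

        # The eyes noisily follow a circular "volume" target; a second target
        # sweeps along a diagonal and should not be selected.
        t = np.linspace(0.0, 2.0 * np.pi, 60)
        volume = np.column_stack([np.cos(t), np.sin(t)])
        lamp = np.column_stack([t, 0.3 * t])
        gaze = volume + np.random.default_rng(1).normal(0.0, 0.05, volume.shape)
        print(pursuit_match(gaze, {"volume": volume, "lamp": lamp}))  # -> volume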