
    Can Gaze Beat Touch? A Fitts' Law Evaluation of Gaze, Touch, and Mouse Inputs

    Gaze input has been a promising substitute for mouse input in point-and-select interactions. Individuals with severe motor and speech disabilities rely primarily on gaze input for communication, and gaze also serves as a hands-free input modality in scenarios of situationally-induced impairments and disabilities (SIIDs). Hence, the performance of gaze input has often been compared to mouse input through standardized evaluation procedures such as the Fitts' law task. With the proliferation of touch-enabled devices such as smartphones, tablet PCs, and other computing devices with a touch surface, it is equally important to compare gaze input to touch input. In this study, we conducted an ISO 9241-9 Fitts' law evaluation comparing multimodal gaze- and foot-based input to touch input in a standard desktop environment, with mouse input as the baseline. In a study involving 12 participants, we found that gaze input had the lowest throughput (2.55 bits/s) and the highest movement time (1.04 s) of the three inputs. In addition, although touch input involves the most physical movement, it achieved the highest throughput (6.67 bits/s) and the lowest movement time (0.5 s), and was the most preferred input. While gaze and touch move the pointer from source to target location at similar speeds, target selection takes the longest with gaze input. With a throughput over 160% higher than that of gaze, touch proves to be the superior input modality.
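    For context on how such throughput figures (e.g., 2.55 bits/s for gaze vs. 6.67 bits/s for touch) are typically derived, below is a minimal Python sketch of the standard ISO 9241-9 effective-throughput computation in the Soukoreff and MacKenzie formulation. The function name and inputs are illustrative; the abstract does not spell out the study's exact calculation.

    ```python
    import math
    from statistics import mean, stdev

    def effective_throughput(amplitudes, movement_times, endpoint_errors):
        """ISO 9241-9 effective throughput (bits/s) for one condition.

        amplitudes      -- movement amplitudes D per trial
        movement_times  -- movement times MT per trial, in seconds
        endpoint_errors -- signed endpoint deviations from the target
                           centre along the task axis (same unit as D)
        """
        w_e = 4.133 * stdev(endpoint_errors)          # effective target width
        id_e = math.log2(mean(amplitudes) / w_e + 1)  # effective index of difficulty, bits
        return id_e / mean(movement_times)            # throughput, bits/s
    ```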

    Novel Interaction Techniques for Mobile Augmented Reality Applications: A Systematic Literature Review

    This study reviews research on interaction techniques and methods that could be applied in mobile augmented reality scenarios. The review focuses on the most recent advances and especially considers the use of head-mounted displays. In the review process, we followed a systematic approach, which makes the review transparent, repeatable, and less prone to human error than a review conducted in a more traditional manner. The main research subjects covered are head orientation and gaze tracking, gestures and body-part tracking, and multimodality, insofar as these subjects relate to human-computer interaction. In addition, a number of other related areas of interest are discussed.

    RubikAuth: Fast and Secure Authentication in Virtual Reality

    There is a growing need for usable and secure authentication in virtual reality (VR). Established concepts (e.g., 2D graphical PINs) are vulnerable to observation attacks, and proposed alternatives are relatively slow. We present RubikAuth, a novel authentication scheme for VR in which users authenticate quickly by selecting digits from a virtual 3D cube manipulated with a handheld controller. We report two studies comparing how pointing with gaze, head pose, and controller tapping affects RubikAuth's usability and observation resistance under three realistic threat models. Entering a four-symbol RubikAuth password is fast (1.69 s to 3.5 s using controller tapping, 2.35 s to 4.68 s using head pose, and 2.39 s to 4.92 s using gaze) and highly resilient to observation: 97.78% to 100% of observation attacks were unsuccessful. Our results suggest that providing attackers with support material contributes to more realistic security evaluations.
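    As a back-of-the-envelope illustration of the theoretical password space behind such a scheme, here is a minimal Python sketch. The figure of 45 selectable targets (nine digits on each of five visible cube faces) is an assumption for illustration; the abstract does not state how many digits the cube exposes.

    ```python
    def password_space(targets: int, length: int) -> int:
        """Theoretical password space for `targets` selectable symbols
        and passwords of `length` entries, repetition allowed."""
        return targets ** length

    # Assumed layout, not stated in the abstract: 45 selectable digits,
    # four-symbol passwords.
    print(password_space(45, 4))  # 4100625
    ```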

    Interactive form creation: exploring the creation and manipulation of free form through the use of interactive multiple input interface

    Most current CAD systems support only the two most common input devices, a mouse and a keyboard, which limits the degree of interaction a user can have with the system. However, it is not uncommon for users to work together on the same computer during a collaborative task. Moreover, people tend to use both hands to manipulate 3D objects: one hand orients the object while the other performs some operation on it. The same approach can be applied to computer modelling in the conceptual phase of the design process: a designer can rotate and position an object with one hand while manipulating its shape (deforming it) with the other. Accordingly, the 3D object can be changed easily and intuitively through interactive manipulation with both hands. This research investigates the manipulation and creation of free-form geometries through interactive interfaces with multiple input devices. First, the creation of the 3D model is discussed and several different types of models are illustrated. Furthermore, different tools that allow the user to control the 3D model interactively are presented. Three experiments were conducted using different interactive interfaces; two bimanual techniques were compared with the conventional one-handed approach. Finally, it is demonstrated that new and multiple input devices can offer many opportunities for form creation; the problem is that few, if any, systems make it easy for the user or the programmer to use new input devices.
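    As a minimal sketch of the bimanual division of labour described above (one hand orients the object while the other deforms it), consider the following Python event dispatcher. The device roles, event fields, and Model API are illustrative assumptions, not the system actually built in this research.

    ```python
    class Model:
        """Toy 3D model: an orientation plus editable control points."""
        def __init__(self, control_points):
            self.rotation = [0.0, 0.0, 0.0]       # Euler angles, degrees
            self.control_points = control_points  # list of [x, y, z]

        def rotate(self, dx, dy):
            # Non-dominant hand: orient the whole object.
            self.rotation[0] += dy
            self.rotation[1] += dx

        def deform(self, index, offset):
            # Dominant hand: drag one control point to change the shape.
            point = self.control_points[index]
            self.control_points[index] = [c + o for c, o in zip(point, offset)]

    def dispatch(model, event):
        """Route input events by the hand/device that produced them."""
        if event["device"] == "orienting":        # e.g. a 3D mouse
            model.rotate(event["dx"], event["dy"])
        elif event["device"] == "deforming":      # e.g. a stylus
            model.deform(event["index"], event["offset"])
    ```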

    Large Perceptual Distortions Of Locomotor Action Space Occur In Ground-Based Coordinates: Angular Expansion And The Large-Scale Horizontal-Vertical Illusion

    What is the natural reference frame for seeing large-scale spatial scenes in locomotor action space? Prior studies indicate an asymmetric angular expansion in perceived direction in large-scale environments: angular elevation relative to the horizon is perceptually exaggerated by a factor of 1.5, whereas azimuthal direction is exaggerated by a factor of about 1.25. Here, participants made angular and spatial judgments while upright or lying on their sides to dissociate egocentric from allocentric reference frames. In Experiment 1, body orientation did not affect the magnitude of the up-down exaggeration of direction, suggesting that the relevant orientation reference frame for this directional bias is allocentric rather than egocentric. In Experiment 2, the comparison of large-scale horizontal and vertical extents was somewhat affected by viewer orientation, but only to the extent necessitated by the classic (5%) horizontal-vertical illusion (HVI), which is known to be retinotopic. Large-scale vertical extents continued to appear much larger than horizontal ground extents when observers lay sideways. When the visual world was reoriented in Experiment 3, the bias remained tied to the ground-based allocentric reference frame. The allocentric HVI is quantitatively consistent with the differential angular exaggerations previously measured for elevation and azimuth in locomotor space.
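    To spell out the consistency claim with the gains quoted above (the arithmetic here is ours, for illustration): if a given physical angle is perceptually exaggerated by a factor of about 1.5 in elevation but only about 1.25 in azimuth, the predicted vertical-to-horizontal bias is 1.5 / 1.25 = 1.2, i.e. a roughly 20% large-scale HVI, far larger than the classic 5% retinotopic effect.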

    A Body-and-Mind-Centric Approach to Wearable Personal Assistants


    Augmentative and Alternative Communication in the Intensive Care Unit: A Service Delivery Model

    Patients in the intensive care unit (ICU) often find it difficult or impossible to communicate verbally due to mechanical ventilation, tracheostomy tubes, or increased fatigue and delirium. Augmentative and alternative communication (AAC) can provide ICU patients with a way to communicate during their ICU stay. However, few hospitals currently have a systematic service delivery model in place for providing AAC tools and supports to ICU patients. This resource manual outlines how to create and implement an AAC service delivery model, along with AAC materials and resources appropriate for an ICU. For each AAC method discussed, the manual explains how the material is used, who it is appropriate for, and how it can be modified. A detailed and systematic AAC service delivery model, such as the one outlined in this resource manual, allows ICU patients to communicate effectively and efficiently during a frightening and anxiety-provoking time.