An evaluation of discrete and continuous mid-air loop and marking menu selection in optical see-through HMDs
© 2019 Copyright held by the owner/author(s). This paper investigates discrete and continuous hand-drawn loops and marks in mid-air as a selection input for gesture-based menu systems on optical see-through head-mounted displays (OST HMDs). We explore two fundamental methods of providing menu selection, the marking menu and the loop menu, and a hybrid method which combines the two. The loop menu design uses a selection mechanism with loops to approximate directional selections in a menu system. We evaluate the merits of loop and marking menu selection in a two-phase experiment and report that 1) the loop-based selection mechanism provides smooth and effective interaction; 2) users prioritize accuracy and comfort over speed for mid-air gestures; 3) users can exploit the flexibility of a final hybrid marking/loop menu design; and, finally, 4) users tend to chunk gestures depending on the selection task and their level of familiarity with the menu layout.
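The directional selection that marking menus rely on reduces, at its core, to mapping a stroke's direction to one of N angular sectors. The sketch below illustrates that mapping in Python; it is not the paper's implementation, and the function name, point format, and eight-item layout are assumptions for illustration.

```python
import math

def marking_menu_sector(start, end, n_items=8):
    """Map a stroke direction to one of n_items menu sectors.

    start, end: (x, y) points of the stroke in display space.
    Sector 0 is centred on the +x axis; indices increase
    counter-clockwise.
    """
    dx, dy = end[0] - start[0], end[1] - start[1]
    angle = math.atan2(dy, dx) % (2 * math.pi)       # 0 .. 2*pi
    sector_width = 2 * math.pi / n_items
    # Offset by half a sector so each item is centred on its direction.
    return int((angle + sector_width / 2) // sector_width) % n_items

# A stroke to the right selects item 0; a stroke straight up selects item 2.
print(marking_menu_sector((0, 0), (10, 0)))   # -> 0
print(marking_menu_sector((0, 0), (0, 10)))   # -> 2
```

A loop-based variant would accumulate angle over the whole gesture rather than use a single start-to-end direction, which is what lets loops approximate directional selections.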
An investigation of mid-air gesture interaction for older adults
Older adults (60+) face a natural and gradual decline in cognitive, sensory and motor functions that is often the reason for the difficulties older users come up against when interacting with computers. For that reason, the investigation and design of age-inclusive input methods for computer interaction is much needed and relevant for an ageing population. Advances in motion sensing technologies and mid-air gesture interaction have reinvented how individuals can interact with computer interfaces, and this input modality is often deemed more “natural” and “intuitive” than purely traditional input devices such as the mouse. Although explored in gaming and entertainment, the suitability of mid-air gesture interaction for older users in particular is still little known. The purpose of this research is to investigate the potential of mid-air gesture interaction to facilitate computer use for older users, and to address the challenges that older adults may face when interacting with gestures in mid-air. This doctoral research is presented as a collection of papers that, together, develop the topic of ageing and computer interaction through mid-air gestures. The starting point for this research was to establish how older users differ from younger users and to focus on the challenges faced by older adults when interacting through mid-air gestures. Once these challenges were identified, this work aimed to explore a series of usability challenges and opportunities to further develop age-inclusive interfaces based on mid-air gesture interaction. Through a series of empirical studies, this research provides recommendations for designing mid-air gesture interaction that better takes into consideration the needs and skills of the older population, and aims to contribute to the advance of age-friendly interfaces.
Barehand Mode Switching in Touch and Mid-Air Interfaces
Raskin defines a mode as a distinct setting within an interface where the same user input produces results different from those it would produce in other settings. Most interfaces have multiple modes in which input is mapped to different actions, and mode switching is simply the transition from one mode to another. In touch interfaces, the current mode can change how a single touch is interpreted: for example, it could draw a line, pan the canvas, select a shape, or enter a command. In Virtual Reality (VR), a hand-gesture-based 3D modelling application may have different modes for object creation, selection, and transformation; depending on the mode, the movement of the hand is interpreted differently. One of the crucial factors determining the effectiveness of an interface is user productivity, and the mode-switching time of different input techniques, whether in a touch interface or a mid-air interface, affects that productivity. Moreover, when touch and mid-air interfaces such as VR are combined, making informed decisions about mode assignment becomes even more complicated. This thesis provides an empirical investigation to characterize the mode-switching phenomenon in barehand touch-based and mid-air interfaces. It explores the potential of using these input spaces together for a productivity application in VR, and it concludes with a step towards defining and evaluating the multi-faceted mode concept, its characteristics and its utility when designing user interfaces more generally.
Summon and Select: Rapid Interaction with Interface Controls in Mid-air
Current freehand interactions with large displays rely on point & select as the dominant paradigm. However, constant hand movement in the air for pointer navigation quickly leads to hand fatigue. We introduce summon & select, a new model for freehand interaction where, instead of navigating to the control, the user summons it into focus and then manipulates it. Summon & select solves the problems of constant pointer navigation, the need for precise selection, and the out-of-bounds gestures that plague point & select. We describe the design and conduct two studies to evaluate it and to compare it against point & select in a multi-button selection study. The results show that summon & select is significantly faster and has lower physical and mental demand than point & select.
Usability Analysis of an off-the-shelf Hand Posture Estimation Sensor for Freehand Physical Interaction in Egocentric Mixed Reality
This paper explores freehand physical interaction in egocentric Mixed Reality by performing a usability study on the use of hand posture estimation sensors. We report on precision, interactivity and usability metrics in a task-based user study, exploring the importance of additional visual cues when interacting. A total of 750 interactions were recorded from 30 participants performing 5 different interaction tasks (Move; Rotate: pitch (Y axis) and yaw (Z axis); Uniform scale: enlarge and shrink). Additional visual cues resulted in a shorter average time to interact; however, no consistent statistical differences were found between groups for performance and precision results. The group with additional visual cues gave the system an average System Usability Scale (SUS) score of 72.33 (SD = 16.24) while the other group scored 68.0 (SD = 18.68). Overall, additional visual cues made the system be perceived as more usable, even though the two conditions had limited effect on precision and interactivity metrics.
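The SUS scores quoted above (72.33 and 68.0) come from the standard System Usability Scale scoring procedure, which maps ten 1–5 Likert responses onto a 0–100 scale. A minimal sketch of that computation, independent of the paper's own analysis code:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (1st, 3rd, ...) are positively worded and contribute
    (response - 1); even-numbered items are negatively worded and contribute
    (5 - response). The summed contributions are multiplied by 2.5 to give
    a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# "Agree" (4) on every positive item, "disagree" (2) on every negative item:
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0
```

Note that a SUS score is not a percentage; scores around 68 are commonly treated as average usability, which puts both conditions above that benchmark only in the visual-cues group.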
Interaction Methods for Smart Glasses: A Survey
Since the launch of Google Glass in 2014, smart glasses have mainly been designed to support micro-interactions. The ultimate goal of their becoming an augmented reality interface has not yet been attained due to an encumbrance of controls. Augmented reality involves superimposing interactive computer graphics onto physical objects in the real world. This survey reviews current research issues in the area of human-computer interaction for smart glasses. It first studies the smart glasses available on the market and then investigates the interaction methods proposed in the wide body of literature. The interaction methods can be classified into hand-held, touch, and touchless input; this paper focuses mainly on touch and touchless input. Touch input can be further divided into on-device and on-body, while touchless input can be classified into hands-free and freehand. Next, we summarize the existing research efforts and trends, in which touch and touchless input are evaluated against a total of eight interaction goals. Finally, we discuss several key design challenges and the possibility of multi-modal input for smart glasses.
HandPainter – 3D sketching in VR with hand-based physical proxy
3D sketching in virtual reality (VR) enables users to create 3D virtual objects intuitively and immersively. However, previous studies showed that mid-air drawing may lead to inaccurate sketches. To address this issue, we propose using one hand as a canvas proxy and the index finger of the other hand as a 3D pen. To this end, we first perform a formative study comparing two-handed interaction with tablet-pen interaction for VR sketching. Based on the findings of this study, we design HandPainter, a VR sketching system focused on the direct use of two hands for 3D sketching without requiring any tablet, pen, or VR controller. Our implementation is based on a pair of VR gloves, which provide hand tracking and gesture capture. We devise a set of intuitive gestures to control the various functionalities required during 3D sketching, such as canvas panning and drawing positioning. We show the effectiveness of HandPainter by presenting a number of sketching results and discussing the outcomes of a user study comparing it with mid-air drawing and tablet-based sketching tools.