Agency in mid-air interfaces
Touchless interfaces allow users to view, control, and manipulate digital content without physically touching an interface. They are being explored in a wide range of application scenarios, from medical surgery to car dashboard controllers. One aspect of touchless interaction that has not been explored to date is the Sense of Agency (SoA): the subjective experience of voluntary control over actions in the external world. In this paper, we investigate the SoA in touchless systems using the intentional binding paradigm. We first compare touchless systems with physical interactions and then augment the interface with different types of haptic feedback to explore how different outcome modalities influence users’ SoA. Our experiments demonstrate that an intentional binding effect is observed in both physical and touchless interactions, with no statistical difference between them. Additionally, we find that haptic and auditory feedback increase SoA compared with visual feedback in touchless interfaces. We discuss these findings and identify design opportunities that take agency into consideration.
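As context for the intentional binding paradigm mentioned above, the sketch below shows how a binding effect is commonly quantified in the interval-estimation variant: participants judge the delay between an action and its outcome, and binding is the compression of that judgment relative to a baseline condition. All numbers and names here are hypothetical illustrations, not data or code from the paper.

```python
# Sketch of quantifying an intentional binding effect (interval-estimation
# variant). Binding = mean baseline estimate - mean operant estimate: a
# positive value means the perceived action-outcome interval was compressed
# in the voluntary condition, which is taken as evidence of a stronger SoA.

def binding_effect(baseline_estimates_ms, operant_estimates_ms):
    """Mean compression of the judged action-outcome interval, in ms."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(baseline_estimates_ms) - mean(operant_estimates_ms)

# Hypothetical interval judgments (true interval: 250 ms in both conditions).
baseline = [310, 295, 330, 305]   # passive / involuntary condition
operant = [240, 225, 255, 230]    # voluntary touchless gesture

print(binding_effect(baseline, operant))  # prints 72.5 (binding observed)
```

A per-participant score like this would then be compared across feedback modalities (visual, auditory, haptic) with standard inferential statistics.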
Understanding Visual Feedback in Large-Display Touchless Interactions: An Exploratory Study
Touchless interactions synthesize input and output from physically disconnected motor and display spaces, without any haptic feedback. In its absence, touchless interactions rely primarily on visual cues, yet the properties of visual feedback remain unexplored. This paper systematically investigates how large-display touchless interactions are affected by (1) types of visual feedback (discrete, partial, and continuous); (2) alternative forms of touchless cursors; (3) approaches to visualizing target selection; and (4) persistent visual cues to support out-of-range and drag-and-drop gestures. Results suggest that continuous visual feedback was more effective than partial feedback; users disliked opaque cursors, and efficiency did not increase when cursors were larger than the display artifacts’ size. Semantic visual feedback located at the display border improved users’ efficiency in returning within the display range; however, echoing the path of movement in drag-and-drop operations decreased efficiency. Our findings contribute key ingredients for designing suitable visual feedback in large-display touchless environments. This work was partially supported by an IUPUI Research Support Funds Grant (RSFG).
Understanding interaction mechanics in touchless target selection
Indiana University-Purdue University Indianapolis (IUPUI)
We use gestures frequently in daily life—to interact with people, pets, or objects. But interacting with computers using mid-air gestures continues to challenge the design of touchless systems. Traditional approaches to touchless interaction focus on exploring gesture inputs and evaluating user interfaces. I shift the focus from gesture elicitation and interface evaluation to touchless interaction mechanics. I argue for a novel approach to generating design guidelines for touchless systems: to use fundamental interaction principles instead of a reactive adaptation to the sensing technology. In five sets of experiments, I explore visual and pseudo-haptic feedback, motor intuitiveness, handedness, and perceptual Gestalt effects. In particular, I study the interaction mechanics of touchless target selection. To that end, I introduce two novel interaction techniques: touchless circular menus, which allow command selection using directional strokes, and interface topographies, which use pseudo-haptic feedback to guide steering–targeting tasks. Results illuminate different facets of touchless interaction mechanics. For example, motor-intuitive touchless interactions explain how our sensorimotor abilities inform touchless interface affordances: we often make a single holistic oblique gesture instead of several orthogonal hand gestures while reaching toward a distant display. Following the Gestalt theory of visual perception, we found that similarity between user interface (UI) components decreased user accuracy, while good continuity made users faster. Other findings include hemispheric asymmetry affecting transfer of training between dominant and nondominant hands, and pseudo-haptic feedback improving touchless accuracy. The results of this dissertation contribute design guidelines for future touchless systems.
Practical applications of this work include the use of touchless interaction techniques in various domains, such as entertainment, consumer appliances, surgery, patient-centric health settings, smart cities, interactive visualization, and collaboration
A Thematic and Reference Analysis of Touchless Technologies
The purpose of this research is to explore the utility and current state of touchless technologies. Five categories of technology are identified from the collected and reviewed literature: facial/biometric recognition, gesture recognition, touchless sensing, personal devices, and voice recognition. A thematic analysis was conducted to evaluate the advantages and disadvantages of the five categories, and a reference analysis was conducted to determine the similarity between articles in each category. Touchless sensing was found to have the most advantages and the least similar references; gesture recognition was the opposite. Comparing the two analyses suggests that more reliable technology types are both more beneficial and more diverse.
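A reference analysis like the one described above can be approximated by an overlap measure between articles’ citation sets. Below is a minimal sketch using Jaccard similarity; the article names and reference keys are invented for illustration and are not from the paper.

```python
# Jaccard similarity between two articles' reference lists: the size of the
# shared set of citations divided by the size of their union. Higher values
# mean the articles draw on more of the same prior work.

def jaccard(refs_a, refs_b):
    a, b = set(refs_a), set(refs_b)
    return len(a & b) / len(a | b)

# Hypothetical citation keys for two articles in the same category.
article_1 = ["Fitts1954", "MacKenzie1992", "Wobbrock2009"]
article_2 = ["Fitts1954", "Wobbrock2009", "Norman2010", "Wigdor2011"]

print(jaccard(article_1, article_2))  # prints 0.4 (2 shared of 5 total)
```

Averaging such pairwise scores within a category would yield the kind of "least similar references" ranking the abstract reports.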
Examining the sense of agency in human-computer interaction
Humans are agents: we feel that we control the course of events in our everyday life. This refers to the Sense of Agency (SoA). This experience is crucial not only in our daily life but also in our interaction with technology. When we manipulate a user interface (e.g., on a computer or smartphone), we expect the system to respond to our input commands with feedback, as we desire to feel that we are in charge of the interaction. If this interplay elicits a SoA, the user perceives an instinctive feeling of “I am controlling this”. Although research in Human-Computer Interaction (HCI) pursues the design of intuitive and responsive systems, most current studies have focused mainly on interaction techniques (e.g., software and hardware) and User Experience (UX) (e.g., comfort, usability), and very little has been investigated in terms of the SoA, i.e., the conscious experience of being in control of the interaction. In this thesis, we present an experimental exploration of the role of the SoA in interaction paradigms typical of HCI. After two chapters of introduction and related work, we describe a series of studies that explore the implications of agency in interaction with systems through the human senses of vision, audition, touch, and smell. Chapter 3 explores the SoA in mid-air haptic interaction through touchless actions. Chapter 4 examines agency modulation through smell and its application to olfactory interfaces. Chapter 5 describes two novel timing techniques based on auditory and haptic cues that provide alternatives to the traditional Libet clock. Finally, we conclude with a discussion chapter that highlights the importance of our SoA during interactions with technology, as well as the implications of the results for the design of user interfaces.
The Leap Motion Movement for 2D Pointing Tasks: Characterisation and Comparison to Other Devices
In this paper we present the results of an experiment designed to characterise Leap Motion movement in 2D pointing tasks and compare it to a mouse and a touchpad. We used the ISO 9241-9 multi-directional tapping test to compare the devices, and we analyse the results using standard throughput and error-rate measures as well as additional accuracy measures such as target re-entry, task-axis crossing, movement direction change, orthogonal direction change, movement variability, movement offset, and movement error. We also present the results of the ISO 9241-9 assessment-of-comfort questionnaire and our observations of participants’ postures when using the Leap Motion device. Results indicate that the Leap Motion performs poorly in these tasks when compared to a mouse or touchpad.
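For context, the standard throughput measure from ISO 9241-9 used in device comparisons like this one combines an effective index of difficulty (based on the spread of selection endpoints) with movement time. A minimal sketch, with hypothetical trial data rather than the paper’s measurements:

```python
import math
import statistics

# ISO 9241-9 effective throughput (bits/s) for one distance/width condition:
#   W_e  = 4.133 * SD of endpoint deviations (effective target width)
#   ID_e = log2(D / W_e + 1)                 (effective index of difficulty)
#   TP   = ID_e / mean movement time

def effective_throughput(distance, endpoint_deviations, movement_times_s):
    """Throughput in bits/s from per-trial endpoint deviations and times."""
    w_e = 4.133 * statistics.stdev(endpoint_deviations)
    id_e = math.log2(distance / w_e + 1)
    return id_e / statistics.mean(movement_times_s)

# Hypothetical trials: 512 px target distance, endpoint scatter in px,
# per-trial movement times in seconds.
deviations = [-6.0, 3.0, 8.0, -2.0, 5.0, -4.0]
times = [0.9, 1.1, 1.0, 0.95, 1.05, 1.0]
print(round(effective_throughput(512, deviations, times), 2))
```

Per-device throughputs computed this way (e.g., for the Leap Motion versus the mouse) are then directly comparable because the effective width normalises for how precisely users actually hit the targets.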
An Augmented Reality Platform for Preoperative Surgical Planning
Research into new technologies for diagnosis, planning, and medical treatment has enabled the development of computer tools that provide new ways of representing data obtained from a patient's medical images, such as computed tomography (CT) and magnetic resonance imaging (MRI). In this sense, augmented reality (AR) technologies provide a new form of data representation by combining conventional image-based analysis with the ability to superimpose virtual 3D representations of the organs of the human body onto the real environment. In this paper, the development of a generic computer platform based on augmented reality technology for preoperative surgical planning is presented. In particular, the surgeon can navigate the 3D models of the patient's organs in order to fully understand the anatomy and plan the surgical procedure in the best possible way. In addition, touchless interaction with the virtual organs is available thanks to the use of an armband equipped with electromyographic muscle sensors. To validate the system, we focused on navigation through the aorta for mitral valve repair surgery.