How Haptic Size Sensations Improve Distance Perception
Determining distances to objects is one of the most ubiquitous perceptual tasks in everyday life. Nevertheless, it is challenging because the information from a single image confounds object size and distance. Though our brains frequently judge distances accurately, the underlying computations employed by the brain are not well understood. Our work illuminates these computations by formulating a family of probabilistic models that encompass a variety of distinct hypotheses about distance and size perception. We compare these models' predictions to a set of human distance judgments in an interception experiment and use Bayesian analysis tools to quantitatively select the best hypothesis on the basis of its explanatory power and robustness over experimental data. The central question is whether, and how, human distance perception incorporates size cues to improve accuracy. Our conclusions are: 1) humans incorporate haptic object size sensations for distance perception, 2) the incorporation of haptic sensations is suboptimal given their reliability, 3) humans use environmentally accurate size and distance priors, and 4) distance judgments are produced by perceptual "posterior sampling". In addition, we compared our model's estimated sensory and motor noise parameters with previously reported measurements in the perceptual literature and found good correspondence between them. Taken together, these results represent a major step forward in establishing the computational underpinnings of human distance perception and the role of size information.
Funding: National Institutes of Health (U.S.) (NIH grant R01EY015261); University of Minnesota (UMN Graduate School Fellowship); National Science Foundation (U.S.) (Graduate Research Fellowship); University of Minnesota (UMN Doctoral Dissertation Fellowship); National Institutes of Health (U.S.) (NIH NRSA grant F32EY019228-02); Ruth L. Kirschstein National Research Service Award
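As a concrete illustration of the kind of model family compared here, the Python sketch below implements a toy Bayesian observer that fuses a noisy visual angle with a noisy haptic size cue under size and distance priors, then reports a sample from the posterior over distance ("posterior sampling") rather than the posterior mean. All priors, noise levels, and grid ranges are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretised hypothesis space (metres).
sizes = np.linspace(0.01, 0.30, 200)   # object size s
dists = np.linspace(0.2, 2.0, 200)     # object distance d
S, D = np.meshgrid(sizes, dists)       # rows index distance, cols size

# Environmental priors over size and distance (assumed log-normal).
def lognorm(x, mu, sigma):
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / x

prior = lognorm(S, np.log(0.06), 0.5) * lognorm(D, np.log(0.8), 0.5)

# Generative model: visual angle ~ s / d (small-angle approximation),
# measured with Gaussian noise; haptic size measured with Gaussian noise.
true_s, true_d = 0.05, 1.0
sigma_theta, sigma_haptic = 0.005, 0.01
theta_obs = true_s / true_d + rng.normal(0, sigma_theta)
s_obs = true_s + rng.normal(0, sigma_haptic)

likelihood = (np.exp(-(theta_obs - S / D) ** 2 / (2 * sigma_theta ** 2))
              * np.exp(-(s_obs - S) ** 2 / (2 * sigma_haptic ** 2)))

posterior = prior * likelihood
posterior /= posterior.sum()

# Marginalise out size, then draw one sample: the "judged" distance.
p_dist = posterior.sum(axis=1)
p_dist /= p_dist.sum()
d_sample = rng.choice(dists, p=p_dist)
print(f"posterior-sampled distance: {d_sample:.2f} m")
```

Repeating the draw across trials produces the response variability that distinguishes posterior sampling from mean- or MAP-based read-out, which is the kind of signature such model comparisons exploit.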
Touching the invisible: Localizing ultrasonic haptic cues
While mid-air gestures offer new possibilities to interact with or around devices, some situations, such as interacting with applications, playing games or navigating, may require visual attention to be focused on a main task. Ultrasonic haptic feedback can provide 3D spatial haptic cues that do not demand visual attention in these contexts. In this paper, we present an initial study of active exploration of ultrasonic haptic virtual points that investigates spatial localization with and without the use of the visual modality. Our results show that, when haptic feedback conveys the location of a widget, users perform 50% more accurately than with visual feedback alone. When given the haptic location of a widget alone, users are more than 30% more accurate than when given a visual location. When aware of the location of the haptic feedback, active exploration decreased the minimum recommended widget size from 2 cm² to 1 cm² compared to the passive exploration of previous studies. Our results will allow designers to create better mid-air interactions using this new form of haptic feedback.
Mid-air haptic rendering of 2D geometric shapes with a dynamic tactile pointer
An important challenge that affects ultrasonic mid-air haptics, in contrast to physical touch, is that we lose certain exploratory procedures such as contour following. This makes the task of perceiving geometric properties and shape identification more difficult. Meanwhile, the growing interest in mid-air haptics and their application to various new areas requires an improved understanding of how we perceive specific haptic stimuli, such as icons and control dials in mid-air. We address this challenge by investigating static and dynamic methods of displaying 2D geometric shapes in mid-air. We display a circle, a square, and a triangle, in either a static or dynamic condition, using ultrasonic mid-air haptics. In the static condition, the shapes are presented as a full outline in mid-air, while in the dynamic condition, a tactile pointer is moved around the perimeter of the shapes. We measure participants' accuracy and confidence in identifying shapes in two controlled experiments (n1 = 34, n2 = 25). Results reveal that in the dynamic condition people recognise shapes significantly more accurately and with higher confidence. We also find that representing polygons as a set of individually drawn haptic strokes, with a short pause at the corners, drastically enhances shape recognition accuracy. Our research supports the design of mid-air haptic user interfaces in application scenarios such as in-car interactions or assistive technology in education.
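The dynamic condition lends itself to a simple rendering loop. The Python sketch below moves a single tactile pointer around a polygon's perimeter at constant speed, pausing briefly at each corner, in the spirit of the stroke-wise presentation the authors found most recognisable. The vertex coordinates, speed, pause duration, update rate, and the emit_focal_point stub are assumptions standing in for a real ultrasound-array driver.

```python
import numpy as np

def emit_focal_point(x, y, t):
    """Placeholder for the array's focal-point command (hypothetical)."""
    print(f"t={t:7.3f}s  focus at ({x:+.3f}, {y:+.3f}) m")

def trace_polygon(vertices, speed=0.1, corner_pause=0.2, rate=50):
    """Trace the polygon edge by edge with a single tactile pointer,
    pausing at each corner so strokes read as distinct segments."""
    t, dt = 0.0, 1.0 / rate
    n = len(vertices)
    for i in range(n):
        a = np.asarray(vertices[i], float)
        b = np.asarray(vertices[(i + 1) % n], float)
        # Short pause at the corner before drawing the next stroke.
        for _ in range(int(corner_pause * rate)):
            emit_focal_point(*a, t)
            t += dt
        # Draw the edge as one continuous stroke at constant speed.
        steps = max(1, int(np.linalg.norm(b - a) / speed * rate))
        for k in range(steps):
            p = a + (b - a) * (k + 1) / steps
            emit_focal_point(*p, t)
            t += dt

# A 6 cm equilateral triangle centred above the array (metres).
triangle = [(0.0, 0.035), (-0.03, -0.017), (0.03, -0.017)]
trace_polygon(triangle)
```

A static rendering would instead time-multiplex the focal point over the whole outline at once, which is the contrast the two experimental conditions draw.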
Personalising Vibrotactile Displays through Perceptual Sensitivity Adjustment
Haptic displays are commonly limited to transmitting a discrete set of tactile motifs. In this paper, we explore the transmission of real-valued information through vibrotactile displays. We simulate spatial continuity with three perceptual models commonly used to create phantom sensations: the linear, logarithmic, and power models. We show that these generic models lead to limited decoding precision, and propose a method for model personalisation that adjusts to idiosyncratic and spatial variations in perceptual sensitivity. We evaluate this approach using two haptic display layouts: circular, worn around the wrist and the upper arm, and straight, worn along the forearm. Results of a user study measuring continuous value decoding precision show that users were able to decode continuous values with relatively high accuracy (4.4% mean error), that circular layouts performed particularly well, and that personalisation through sensitivity adjustment increased decoding precision.
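For reference, the sketch below shows common formulations of the three phantom-sensation models the abstract names: given a normalised position beta between two adjacent actuators, each model splits a virtual amplitude across the pair so the sensation appears at beta. These are assumed forms from the phantom-sensation literature; the paper's exact parameterisations, and its personalised sensitivity-adjusted variant, may differ.

```python
import numpy as np

# `beta` in [0, 1] is the target position between actuator 1 (beta = 0)
# and actuator 2 (beta = 1); `a_v` is the virtual amplitude to split.

def linear_model(beta, a_v=1.0):
    # Perceived intensity assumed proportional to amplitude.
    return (1 - beta) * a_v, beta * a_v

def power_model(beta, a_v=1.0):
    # "Energy" model: perceived intensity proportional to amplitude
    # squared, so amplitudes follow a square-root law.
    return np.sqrt(1 - beta) * a_v, np.sqrt(beta) * a_v

def log_model(beta, a_v=1.0, atten_db=20.0):
    # Logarithmic model as linear interpolation in decibels; atten_db
    # (an assumed constant) is the attenuation at the far actuator.
    return (a_v * 10 ** (-atten_db * beta / 20),
            a_v * 10 ** (-atten_db * (1 - beta) / 20))

# A real-valued signal in [0, 1] is displayed by setting beta = value;
# decoding precision then depends on how well the model matches the
# wearer's sensitivity, which is what personalisation adjusts.
for beta in (0.0, 0.25, 0.5):
    print(beta, linear_model(beta), power_model(beta), log_model(beta))
```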
Towards human technology symbiosis in the haptic mode
Search and rescue operations are often undertaken in dark and noisy environments in which rescue teams must rely on haptic feedback for exploration and safe exit. However, little attention has been paid specifically to haptic sensitivity in such contexts or to the possibility of enhancing communicational proficiency in the haptic mode as a life-preserving measure. Here we discuss the design of a haptic guide robot, inspired by careful study of the communication between a blind person and a guide dog. In the case of this partnership, the development of a symbiotic relationship between person and dog, based on mutual trust and confidence, is a prerequisite for successful task performance. We argue that a human-technology symbiosis is equally necessary and possible in the case of the robot guide, but this depends on the robot becoming 'transparent technology' in Andy Clark's sense. We report on initial haptic mode experiments in which a person uses a simple mobile mechanical device (a metal disk fixed with a rigid handle) to explore the immediate environment. These experiments demonstrate the extreme sensitivity and trainability of haptic communication and the speed with which users develop and refine their haptic proficiencies in using the device, permitting reliable and accurate discrimination between objects of different weights. We argue that such trials show the transformation of the mobile device into a transparent information appliance and the beginnings of the development of a symbiotic relationship between device and human user. We discuss how these initial explorations may shed light on the more general question of how a human mind, on being exposed to an unknown environment, may enter into collaboration with an external information source in order to learn about, and navigate, that environment.
Augmenting Graphical User Interfaces with Haptic Assistance for Motion-Impaired Operators
Haptic assistance is an emerging field of research that is designed to improve human-computer interaction (HCI) by reducing error rates and targeting times through the use of force feedback. Haptic feedback has previously been investigated to assist motion-impaired computer users; however, limitations such as target distracters have hampered its integration with graphical user interfaces (GUIs). In this paper, two new haptic assistive techniques are presented that utilise the 3DOF capabilities of the Phantom Omni. These are referred to as deformable haptic cones and deformable virtual switches. The assistance is designed specifically to enable motion-impaired operators to use existing GUIs more effectively. Experiment 1 investigates the performance benefits of the new haptic techniques when used in conjunction with the densely populated Windows on-screen keyboard (OSK). Experiment 2 utilises the ISO 9241-9 point-and-click task to investigate the effects of target size and shape. The results of the study show that the newly proposed techniques improve interaction rates and can be integrated with existing software without many of the drawbacks of traditional haptic assistance. Deformable haptic cones and deformable virtual switches were shown to reduce the mean number of missed clicks by at least 75% and to reduce targeting times by at least 25%.
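The abstract does not give the force laws behind deformable haptic cones or virtual switches, so the Python sketch below shows only the conventional "gravity well" baseline that such techniques refine: a spring force toward a target centre, active only within a basin of attraction so that distant targets do not act as distracters. The stiffness, basin radius, and function names are illustrative assumptions, not the authors' method.

```python
import numpy as np

def assistance_force(cursor, target, basin_radius=0.02, k=150.0):
    """Spring force (N) pulling the haptic cursor toward `target`
    when it is inside the basin of attraction; zero outside, so
    neighbouring targets exert no pull. Positions in metres."""
    offset = np.asarray(target, float) - np.asarray(cursor, float)
    dist = np.linalg.norm(offset)
    if dist == 0.0 or dist > basin_radius:
        return np.zeros_like(offset)
    return k * offset  # linear spring toward the target centre

# Example: cursor 5 mm from an on-screen key's centre.
print(assistance_force(cursor=(0.000, 0.005, 0.0),
                       target=(0.000, 0.000, 0.0)))
```

Deformable variants are typically designed to yield when the user pushes through deliberately, avoiding the trapping effect that makes rigid gravity wells problematic in dense layouts such as an on-screen keyboard.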
Evaluation of Presence in Virtual Environments: Haptic Vest and User's Haptic Skills
This paper presents the integration of a haptic vest with a multimodal virtual environment consisting of video, audio, and haptic feedback, with the main objective of determining how users who interact with the virtual environment benefit from the tactile and thermal stimuli provided by the haptic vest. Experiments are performed using a game application set in a train station after an explosion. Participants have to move inside the environment while receiving several stimuli, to check whether the feedback delivered by the vest improves presence or realism in that environment. This is done by comparing the experimental results with those from similar scenarios obtained without haptic feedback. The experiments are carried out by three groups of participants, classified on the basis of their experience with haptics and virtual reality devices. Some differences among the groups have been found, which can be related to the levels of realism and synchronisation of all the elements in the multimodal environment needed to fulfil expectations and reach maximum satisfaction. According to the participants in the experiment, the system should define two different levels of requirements to meet the expectations of professional and conventional users.
- …