Multibody Dynamics Model of a Human Hand for Haptics Interaction
In this paper we propose a strategy for modelling a human hand for haptic interaction. The strategy consists of a parallel computing architecture that calculates the dynamics of the hand by computing the dynamics of each finger in parallel. In this approach multiple threads (e.g. a haptics thread, a graphics thread, a collision detection thread) run concurrently, so we developed a synchronization mechanism for data exchange between them. We describe the elements of the developed software in detail.
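The per-finger parallelism and synchronized data exchange described in this abstract might be sketched as follows (a minimal illustration under assumed details, not the paper's actual software: the one-value-per-finger "dynamics", the gain, and all names are placeholders):

```python
import threading

NUM_FINGERS = 5

class HandDynamics:
    """Toy model: each finger's dynamics step runs on its own thread."""

    def __init__(self):
        self.state = [0.0] * NUM_FINGERS       # placeholder 1-DoF state per finger
        self.lock = threading.Lock()           # guards the shared state buffer
        self.barrier = threading.Barrier(NUM_FINGERS)

    def step_finger(self, i, force):
        # Placeholder "dynamics": integrate the applied force for one step.
        new_value = self.state[i] + 0.01 * force
        self.barrier.wait()                    # all finger threads finish the step...
        with self.lock:                        # ...then publish atomically, so
            self.state[i] = new_value          # readers never see a half-updated hand

    def step_all(self, forces):
        threads = [threading.Thread(target=self.step_finger, args=(i, f))
                   for i, f in enumerate(forces)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        with self.lock:                        # consumers (haptics/graphics threads)
            return list(self.state)            # would read the state the same way

hand = HandDynamics()
state = hand.step_all([1.0, 2.0, 3.0, 4.0, 5.0])
```

The barrier plus lock stands in for whatever synchronization mechanism the authors built; the point is only that finger dynamics can proceed concurrently while consumers see a consistent snapshot.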
Comparing two haptic interfaces for multimodal graph rendering
This paper describes the evaluation of two multimodal interfaces designed to give visually impaired people access to various types of graphs. The interfaces combine audio with haptic feedback rendered on commercially available force feedback devices. The study compares the usability of two such devices, the SensAble PHANToM and the Logitech WingMan force feedback mouse, for representing graphical data. The graph type used in the experiment is the bar chart, under two experimental conditions: single-mode (haptic only) and multimodal. The results show that the PHANToM provides better performance in the haptic-only condition; however, no significant difference was found between the two devices in the multimodal condition. This confirms the advantages of the multimodal approach in our research and shows that low-cost haptic devices can be successful. This paper introduces our evaluation approach and discusses the findings of the experiment.
Mid-air haptic rendering of 2D geometric shapes with a dynamic tactile pointer
An important challenge for ultrasonic mid-air haptics, in contrast to physical touch, is that certain exploratory procedures, such as contour following, are lost. This makes perceiving geometric properties and identifying shapes more difficult. Meanwhile, the growing interest in mid-air haptics and its application to new areas requires a better understanding of how we perceive specific haptic stimuli, such as icons and control dials, in mid-air. We address this challenge by investigating static and dynamic methods of displaying 2D geometric shapes in mid-air. We display a circle, a square, and a triangle, in either a static or a dynamic condition, using ultrasonic mid-air haptics. In the static condition the shapes are presented as a full outline in mid-air, while in the dynamic condition a tactile pointer is moved around the perimeter of each shape. We measure participants' accuracy and confidence in identifying shapes in two controlled experiments (n1 = 34, n2 = 25). Results reveal that in the dynamic condition people recognise shapes significantly more accurately and with higher confidence. We also find that representing polygons as a set of individually drawn haptic strokes, with a short pause at the corners, drastically enhances shape recognition accuracy. Our research supports the design of mid-air haptic user interfaces in application scenarios such as in-car interactions or assistive technology in education.
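The dynamic condition, where a tactile pointer traces each polygon edge as an individual stroke and pauses at corners, could be sketched like this (an illustrative guess at the trajectory generation only; the speed, timestep, and pause duration are assumed values, not the paper's parameters):

```python
import math

def pointer_path(vertices, speed=0.05, dt=0.01, corner_pause=0.1):
    """Sample (x, y) pointer positions tracing a polygon's perimeter.

    Each edge is traversed as an individual stroke at a fixed speed,
    and the pointer is held at every corner for `corner_pause` seconds,
    mirroring the short pause the study found aids shape recognition.
    """
    path = []
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        length = math.hypot(x1 - x0, y1 - y0)
        steps = max(1, round(length / (speed * dt)))
        # draw the edge as one stroke, sampled once per timestep
        for s in range(steps + 1):
            t = s / steps
            path.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        # short pause: hold the pointer at the corner for a few frames
        path.extend([(x1, y1)] * round(corner_pause / dt))
    return path

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
path = pointer_path(square)
```

In a real ultrasonic mid-air display, each sampled position would drive the focal point of the phased array for one frame; here the path is just a list of coordinates.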
Hands-on haptics: exploring non-visual visualization using the sense of touch
No abstract available
Communicating with feeling
Communication between users in shared editors takes place in a deprived environment: distributed users find it difficult to communicate. While many solutions to the problems this causes have been suggested, this paper presents a novel one. It describes one possible use of haptics as a channel for communication between users. Users' telepointers are treated as haptic avatars, affording interactions such as haptically pushing and pulling each other. The use of homing forces to locate other users is also discussed, as is a proximity sensation based on viscosity. Evaluation of this system is currently underway.
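The homing-force and viscosity-based proximity cues might look like the following (a hedged sketch of plausible mechanics, not the paper's implementation; every gain, radius, and function name is an assumption):

```python
import math

def homing_force(own, other, gain=0.5):
    """Spring-like force pulling one user's telepointer toward another's.

    Rendered on a force feedback device, this would let a user "home in"
    on a collaborator's location in the shared editor.
    """
    dx, dy = other[0] - own[0], other[1] - own[1]
    return (gain * dx, gain * dy)

def proximity_viscosity(own, other, base=0.1, radius=2.0):
    """Damping coefficient that grows as the two pointers get closer.

    Zero beyond `radius`, rising linearly to `base` at zero distance,
    so movement feels "thicker" near another user.
    """
    dist = math.hypot(other[0] - own[0], other[1] - own[1])
    return base * max(0.0, 1.0 - dist / radius)
```

A haptic loop would evaluate these per frame and sum the resulting force with a velocity-proportional damping term before sending it to the device.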
Multimodal virtual reality versus printed medium in visualization for blind people
In this paper, we describe a study comparing the strengths of a multimodal Virtual Reality (VR) interface against traditional tactile diagrams in conveying information to visually impaired and blind people. The multimodal VR interface consists of a force feedback device (the SensAble PHANToM), synthesized speech, and non-speech audio. The potential advantages of VR technology are well known; however, its real usability in comparison with the conventional paper-based medium is seldom investigated. We have addressed this issue in our evaluation. The experimental results show that the multimodal approach gives users more accurate information about the graphs.
