34 research outputs found

    Exploring computer-generated line graphs through virtual touch

    This paper describes the development and evaluation of a haptic interface designed to provide access to line graphs for blind or visually impaired people. Computer-generated line graphs can be felt by users through the sense of touch produced by a PHANToM force feedback device. Experiments have been conducted to test the effectiveness of this interface with both sighted and blind people. The results show that sighted and blind participants achieved approximately 89.95% and 86.83% correct answers, respectively.

    Comparing two haptic interfaces for multimodal graph rendering

    This paper describes the evaluation of two multimodal interfaces designed to provide visually impaired people with access to various types of graphs. The interfaces combine audio with haptics rendered on commercially available force feedback devices. This study compares the usability of two such devices, the SensAble PHANToM and the Logitech WingMan force feedback mouse, in representing graphical data. The graph type used in the experiment is the bar chart, tested under two experimental conditions: single-mode (haptics only) and multimodal. The results show that the PHANToM provides better performance in the haptic-only condition; however, no significant difference was found between the two devices in the multimodal condition. This confirms the advantages of a multimodal approach in our research and shows that low-cost haptic devices can be successful. This paper introduces our evaluation approach and discusses the findings of the experiment.

    Multimodal virtual reality versus printed medium in visualization for blind people

    In this paper, we describe a study comparing the strengths of a multimodal Virtual Reality (VR) interface against traditional tactile diagrams in conveying information to visually impaired and blind people. The multimodal VR interface consists of a force feedback device (SensAble PHANTOM), synthesized speech and non-speech audio. The potential advantages of VR technology are well known; however, its real usability in comparison with the conventional paper-based medium is seldom investigated. We have addressed this issue in our evaluation. The experimental results show benefits from the multimodal approach: users obtained more accurate information about the graphs.

    Design and Development of a Multimodal Vest for Virtual Immersion and Guidance

    This paper focuses on the development of a haptic vest that enhances immersion and realism in virtual environments through vibrotactile feedback. The first steps towards touch-based communication are presented in order to establish an actuation method based on vibration motors; the resulting vibrotactile patterns help users to move inside virtual reality (VR). The research investigates human torso resolution and the perception of vibration patterns, evaluating different kinds of actuators at different locations on the vest. Finally, determining an appropriate distribution of vibration patterns allowed the generation of sensations that, for instance, help guide the user through a mixed or virtual reality environment.
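    The abstract does not include implementation details; purely as an illustration of the general idea of directional vibrotactile guidance, the following minimal Python sketch maps a desired heading onto one of a few hypothetical torso-mounted motors. The motor layout, the drive_motor() stub and the intensity scaling are assumptions for illustration, not the authors' design.

# Illustrative sketch only: maps a desired walking direction to one of several
# hypothetical vibration motors arranged around the torso of a haptic vest.
# Motor names, angles, and the drive_motor() stub are assumptions, not the
# authors' implementation.
MOTOR_ANGLES = {"front": 0, "right": 90, "back": 180, "left": 270}  # degrees, clockwise from front

def drive_motor(name: str, intensity: float) -> None:
    """Stub for whatever hardware call actually pulses a motor."""
    print(f"pulse {name} at {intensity:.2f}")

def guide_towards(target_bearing_deg: float) -> None:
    """Pulse the motor closest to the target bearing, scaled by how well it lines up."""
    def angular_gap(a: float, b: float) -> float:
        # Smallest absolute difference between two bearings, in degrees.
        return abs((a - b + 180) % 360 - 180)

    name, angle = min(MOTOR_ANGLES.items(),
                      key=lambda kv: angular_gap(kv[1], target_bearing_deg))
    # Stronger pulse when the chosen motor lines up well with the target direction.
    intensity = 1.0 - angular_gap(angle, target_bearing_deg) / 180.0
    drive_motor(name, intensity)

guide_towards(75.0)   # would pulse the "right" motor fairly strongly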

    Visualization tools for blind people using multiple modalities

    Purpose: There are many problems when blind people need to access visualizations such as graphs and tables. Current speech or raised-paper technology does not provide a good solution. Our approach is to use non-speech sounds and haptics to allow a richer and more flexible form of access to graphs and tables. Method: Two experiments are reported that test designs for both sound and haptic graph solutions. In the audio case, a standard speech interface is compared to one with non-speech sounds added. The haptic experiment compares two different graph designs to see which is the most effective. Results: Our results for the sound graphs showed a significant decrease in subjective workload, reduced time taken to complete tasks and reduced errors compared to a standard speech interface. For the haptic graphs, we show reductions in workload and identify some of the problems that can occur when using such graphs. Conclusions: Using non-speech sound and haptics can significantly improve interaction with visualizations such as graphs. This multimodal approach makes the most of the senses our users have, providing access to information in more flexible ways.
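    The abstract does not specify how its sound graphs were built; one common and simple approach to non-speech audio graphs is to map data values onto pitch and play the points from left to right. The sketch below illustrates only that generic mapping, with made-up frequency ranges, and is not the design evaluated in the paper.

# Illustrative sketch only: render a data series as pitches for a non-speech
# audio graph. The mapping range and function name are assumptions, not the
# paper's design.
def values_to_frequencies(values, f_min=220.0, f_max=880.0):
    """Linearly map data values onto a frequency range (Hz): low values -> low pitch."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero on flat data
    return [f_min + (v - lo) / span * (f_max - f_min) for v in values]

# Example: a rising-then-falling series becomes a rising-then-falling melody.
series = [3, 5, 9, 12, 8, 4]
print([round(f, 1) for f in values_to_frequencies(series)])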

    Towards Real-Time Haptic Exploration using a Mobile Robot as Mediator

    ©2010 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. Presented at the 2010 IEEE Haptics Symposium, 25-26 March 2010, Waltham, MA. DOI: 10.1109/HAPTIC.2010.5444643.
    In this paper, we propose a new concept of haptic exploration using a mobile manipulation system, which combines the mobility and manipulability of the robot with haptic feedback for user interaction. The system integrates heterogeneous robotic sensor readings to create a real-time spatial model of the environment, which in turn is conveyed to the user so that they can explore the haptically represented environment and spatially perceive the world without direct contact. The sensors transform real-world values into an environmental model (an internal map), and this model is used to create feedback on the haptic device with which the user interacts in the haptically augmented space. Through this multi-scale convergence of dynamic sensor data and haptic interaction, our goal is to enable real-time exploration of the world through remote interfaces without the use of predefined world models. In this paper, the system algorithms and platform are discussed, along with preliminary results that show the capabilities of the system.
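    The abstract outlines a pipeline in which sensor readings build an internal map and the map then drives force feedback. As a rough, hypothetical illustration of the second step only, the sketch below computes a repulsive force that pushes a haptic cursor away from occupied grid cells; the grid representation, gains and distances are assumptions, not the authors' algorithm.

# Illustrative sketch only: a short-range repulsive force that pushes a haptic
# cursor away from occupied cells of an internal map. All parameters are
# assumptions for illustration.
import math

def repulsive_force(cursor_xy, occupied_cells, cell_size=0.1, gain=0.5, influence=0.5):
    """Sum of repulsive forces from occupied cells within the influence radius."""
    fx, fy = 0.0, 0.0
    cx, cy = cursor_xy
    for (i, j) in occupied_cells:              # occupied_cells: grid indices from the map
        ox, oy = i * cell_size, j * cell_size  # cell centre in workspace coordinates
        dx, dy = cx - ox, cy - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < influence:                # only nearby obstacles exert force
            mag = gain * (1.0 / d - 1.0 / influence)
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy

print(repulsive_force((0.35, 0.30), {(4, 3), (5, 3)}))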

    A model to design multimedia software for learners with visual disabilities

    Current interactive multimedia learning software cannot be accessed by learners with disabilities. This is the case for students with visual disabilities. Modeling techniques are necessary to map real-world experiences to virtual worlds by using 3D auditory representations of objects for blind people. In this paper we present a model to design multimedia software for blind learners. The model was validated with existing educational software systems. We describe the modeling of the real world, including cognitive usability testing tasks, by considering not only the representation of the real world but also the learner's knowledge of the virtual world. Finally, we analyze critical issues in designing software for learners with visual disabilities and propose some recommendations and guidelines. Education for the 21st Century - Impact of ICT and Digital Resources Conference. Red de Universidades con Carreras en Informática (RedUNCI).

    A methodological proposal on the use of virtual reality for people with visual impairment

    This article aims to reflect on the possibilities offered by virtual reality as a mediating element between the artistic object and audiences with visual impairment. Starting from the photogrammetric digitization and virtual processing of the painting O Grupo do Leão (1885) by Columbano Bordalo Pinheiro (a key work of Portuguese realism), we focus on mechano-haptic feedback as a resource for cognitive and socio-affective inclusion in the museum setting. The aim is to develop and promote a technological prototype (in progress) that makes the aesthetic encounter between visitors with visual impairment and the museum object as authentic as possible. Finally, we report some results already obtained during an international research stay: the capture and generation of the three-dimensional model, and its cleaning and processing with specialized 3D software, contributing to a new exploration of the senses within the field of visual immersion in museums. (Funding: UIDB/00417/2020, UIDP/00417/2020)