48 research outputs found

    Indoor/outdoor navigation system based on possibilistic traversable area segmentation for visually impaired people

    Autonomous collision avoidance for visually impaired people requires dedicated processing to accurately define the traversable area, and real-time processing of an image sequence for traversable area segmentation is essential. Low-cost systems imply the use of low-quality cameras, and a real-time low-cost camera suffers from great variability in the appearance of the traversable area in both indoor and outdoor environments. Given the ambiguity affecting object and traversable area appearance induced by reflections, illumination variations, occlusions, etc., accurate segmentation of the traversable area under such conditions remains a challenge. Moreover, indoor and outdoor environments add further variability to traversable areas. In this paper, we present a real-time approach for fast traversable area segmentation from an image sequence recorded by a low-cost monocular camera for a navigation system. To account for all kinds of variability in the image, we apply possibility theory to model information ambiguity. An efficient way of updating the traversable area model under each environment condition is to take traversable area samples from the same processed image to build its possibility maps; fusing these maps then yields a fair model of the traversable area. The performance of the proposed system was evaluated on public databases with indoor and outdoor environments. Experimental results show that the method is competitive, leading to higher segmentation rates.
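    The sampling-and-fusion idea can be illustrated with a minimal sketch (a hypothetical design, not the authors' implementation): a strip of the current frame assumed to be traversable provides reference samples, a possibility distribution is built per colour channel from their histogram, and the per-channel possibility maps are fused with a min t-norm before thresholding. The strip height, bin count and threshold below are illustrative assumptions.

        import numpy as np

        def possibility_map(channel, samples, bins=32):
            # Build a possibility distribution from reference samples: the histogram is
            # rescaled by its maximum so the most frequent value gets possibility 1.0.
            hist, edges = np.histogram(samples, bins=bins, range=(0, 256))
            poss = hist / max(hist.max(), 1)
            # Look up the possibility of every pixel of the channel.
            idx = np.clip(np.digitize(channel, edges) - 1, 0, bins - 1)
            return poss[idx]

        def segment_traversable(image, sample_rows=40, threshold=0.3):
            # Reference samples: a bottom strip of the frame assumed to be traversable.
            h = image.shape[0]
            strip = image[h - sample_rows:, :, :]
            # One possibility map per colour channel, fused conjunctively (min t-norm).
            maps = [possibility_map(image[..., c], strip[..., c]) for c in range(3)]
            fused = np.minimum.reduce(maps)
            return fused > threshold   # boolean traversable-area mask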

    State of the art review on walking support system for visually impaired people

    Technology for terrain detection and walking support for blind people has improved rapidly over the last couple of decades, although efforts to assist visually impaired people began long before that. A variety of portable or wearable navigation systems are currently available on the market to help blind people navigate in local or unfamiliar areas. The systems considered in this work can be grouped as electronic travel aids (ETAs), electronic orientation aids (EOAs) and position locator devices (PLDs); the focus here is mainly on ETAs. This paper presents a comparative survey of the various portable or wearable walking support systems (a subcategory of ETAs, or early-stage ETAs), together with an informative description of their working principles, advantages and disadvantages, so that researchers can readily grasp the current state of assistive technology for the blind along with the requirements for optimising the design of walking support systems for their users.

    Portable Robotic Navigation Aid for the Visually Impaired

    This dissertation aims to address the limitations of existing visual-inertial (VI) SLAM methods - a lack of the needed robustness and accuracy - for assistive navigation in a large indoor space. Several improvements are made to existing SLAM technology, and the improved methods are used to enable two robotic assistive devices, a robot cane and a robotic object manipulation aid, for assistive wayfinding and object detection/grasping by the visually impaired. First, depth measurements are incorporated into the optimization process for device pose estimation to improve the success rate of VI SLAM's initialization and reduce scale drift. The improved method, called depth-enhanced visual-inertial odometry (DVIO), initializes itself immediately because the environment's metric scale can be derived from the depth data. Second, a hybrid PnP (perspective-n-point) method is introduced for a more accurate estimation of the pose change between two camera frames by using the 3D data from both frames. Third, to implement DVIO on a smartphone with variable camera intrinsic parameters (CIP), a method called CIP-VMobile is devised to simultaneously estimate the intrinsic parameters and motion states of the camera. CIP-VMobile estimates in real time the CIP, which varies with the smartphone's pose due to the camera's optical image stabilization mechanism, resulting in more accurate device pose estimates. Various experiments are performed to validate the VI-SLAM methods with the two robotic assistive devices. Beyond these primary objectives, SM-SLAM is proposed as a potential extension of existing SLAM methods for dynamic environments. This forward-looking exploration is premised on the potential that incorporating dynamic object detection in the front-end could improve SLAM's overall accuracy and robustness. Various experiments have been conducted to validate the efficacy of this newly proposed method, using both public and self-collected datasets. The results substantiate the viability of this innovation, leaving a deeper investigation for future work.
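    One of the ideas above - deriving the metric scale directly from depth data so the estimator can initialize immediately - can be sketched as follows. This is an illustrative stand-in for the DVIO initialization step, not the dissertation's code; the robust median ratio and the depth validity range are assumptions.

        import numpy as np

        def metric_scale_from_depth(triangulated_z, measured_z, min_z=0.3, max_z=5.0):
            # triangulated_z: feature depths from up-to-scale visual triangulation
            # measured_z:     depths of the same features from the depth sensor (metres)
            triangulated_z = np.asarray(triangulated_z, dtype=float)
            measured_z = np.asarray(measured_z, dtype=float)
            # Keep only correspondences inside the depth sensor's reliable range.
            valid = (measured_z > min_z) & (measured_z < max_z) & (triangulated_z > 1e-6)
            if valid.sum() < 10:
                raise ValueError("not enough valid depth correspondences")
            # A median of per-feature ratios is robust to occlusions and sensor noise.
            return float(np.median(measured_z[valid] / triangulated_z[valid]))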

    Non-Linearity Analysis of Depth and Angular Indexes for Optimal Stereo SLAM

    In this article, we present a real-time 6DoF egomotion estimation system for indoor environments using a wide-angle stereo camera as the only sensor. The stereo camera is carried in hand by a person walking at normal speeds of 3–5 km/h. We present the basis for a vision-based system that would assist the navigation of the visually impaired by either providing information about their current position and orientation or guiding them to their destination through different sensing modalities. Our system combines two types of feature parametrization, inverse depth and 3D, in order to provide orientation and depth information at the same time. Natural landmarks are extracted from the image and stored as 3D or inverse depth points, depending on a depth threshold. This depth threshold is used to switch between the two parametrizations and is computed by means of a non-linearity analysis of the stereo sensor. The main steps of our approach are presented, together with an analysis of the optimal way to compute the depth threshold. When each landmark is initialized, the normal of the patch surface is computed using the information from the stereo pair. To improve long-term tracking, patch warping is performed using the normal vector information. Experimental results in indoor environments and conclusions are presented.
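    The role of the depth threshold can be sketched with standard stereo error propagation (the camera numbers below are assumptions, and the 10% relative-uncertainty bound stands in for the paper's non-linearity analysis): depth grows as Z = f·b/d, its uncertainty grows roughly with Z², and landmarks are stored as Euclidean 3D points only while that uncertainty stays small.

        # Assumed stereo parameters for illustration (not taken from the paper).
        FOCAL_PX = 400.0     # focal length in pixels
        BASELINE_M = 0.12    # stereo baseline in metres
        SIGMA_DISP = 0.5     # disparity noise, pixels

        def depth_from_disparity(d_px):
            # Pinhole stereo depth: Z = f * b / d
            return FOCAL_PX * BASELINE_M / d_px

        def depth_sigma(z_m):
            # First-order propagation of disparity noise: sigma_Z ~ Z^2 * sigma_d / (f * b)
            return (z_m ** 2) * SIGMA_DISP / (FOCAL_PX * BASELINE_M)

        def parametrization(z_m, max_rel_sigma=0.10):
            # Store the landmark as a Euclidean 3D point while its relative depth
            # uncertainty is small; switch to inverse depth beyond the threshold.
            return "3d" if depth_sigma(z_m) / z_m < max_rel_sigma else "inverse_depth"

        # With these values the switch happens near Z_t = max_rel_sigma * f * b / sigma_d, about 9.6 m.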

    Wearable obstacle avoidance electronic travel aids for blind and visually impaired individuals : a systematic review

    Background: Wearable obstacle avoidance electronic travel aids (ETAs) have been developed to assist the safe movement of blind and visually impaired individuals (BVIs) in indoor and outdoor spaces. This systematic review aimed to understand the strengths and weaknesses of existing ETAs in terms of hardware functionality, cost, and user experience. These elements may influence the usability of the ETAs and are valuable in guiding the development of superior ETAs in the future. Methods: Formally published studies designing and developing wearable obstacle avoidance ETAs were searched for in six databases from their inception to April 2023. The PRISMA 2020 and APISSER guidelines were followed. Results: Eighty-nine studies were included for analysis, 41 of which were judged to be of moderate to high quality. Most wearable obstacle avoidance ETAs depend mainly on camera- and ultrasonic-based techniques to perceive the environment. Acoustic feedback was the most common form of human-computer feedback used by the ETAs. In terms of user experience, the efficacy and safety of the device were usually the primary concerns. Conclusions: Although many conceptualised ETAs have been designed to facilitate BVIs' independent navigation, most of these devices suffer from shortcomings. This is due to the nature and limitations of the various processors, environment detection techniques and human-computer feedback with which these ETAs are equipped. Integrating multiple techniques and hardware into one ETA is a way to improve performance, but the discomfort of wearing the device and its high cost still need to be addressed. Developing an applicable systematic review guideline, along with a credible quality assessment tool for these types of studies, is also required.

    Ayuda técnica para la autonomía en el desplazamiento (Technical aid for autonomy in mobility)

    The project developed in this thesis involves the design, implementation and evaluation of a new technical aid intended to ease the mobility of people with visual impairments. The proposed system consists of a stereo-vision processor and a sound synthesizer; through them, users hear a sonification code transmitted by bone conduction which, after training, informs them of the position and distance of the various obstacles that may lie in their way, helping them avoid accidents. In this project, surveys were conducted with experts in the fields of rehabilitation, blindness and image and sound processing techniques, from which user requirements were defined and used as guidelines for the design. The thesis consists of three self-contained blocks: (i) image processing, where four stereo vision processing algorithms are proposed, (ii) sonification, which details the proposed transformation of visual information into sound, and (iii) a final chapter on the integration of the above, evaluated sequentially in two versions or implementation modes (software and hardware). Both versions have been tested with sighted and blind participants, obtaining qualitative and quantitative results that define future improvements to the project.
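    A toy version of such a sonification code might map each detected obstacle's direction and distance to a few sound parameters; the mapping below (pitch and beep rate from proximity, constant-power panning from azimuth) is purely illustrative and is not the protocol evaluated in the thesis.

        import math

        def sonify_obstacle(azimuth_deg, distance_m, max_range_m=5.0):
            # Clamp distance and turn it into a 0 (far) .. 1 (near) proximity value.
            d = min(max(distance_m, 0.2), max_range_m)
            proximity = 1.0 - d / max_range_m
            # Nearer obstacles sound higher-pitched and repeat faster.
            frequency_hz = 300.0 + 900.0 * proximity
            repetition_hz = 1.0 + 9.0 * proximity
            # Constant-power stereo panning as a simple stand-in for spatial cues.
            pan = max(-1.0, min(1.0, azimuth_deg / 45.0))   # -1 = full left, +1 = full right
            angle = (pan + 1.0) * math.pi / 4.0
            loudness = 0.2 + 0.8 * proximity
            return frequency_hz, math.cos(angle) * loudness, math.sin(angle) * loudness, repetition_hz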

    Design, modeling and analysis of object localization through acoustical signals for cognitive electronic travel aid for blind people

    The objective of this thesis is the study and analysis of the localization of objects in real environments by means of sounds, together with the subsequent integration and testing of a real device based on this technique and intended for people with visual impairment. In order to understand and analyse object localization, an in-depth state of the art has been carried out on the navigation systems developed over recent decades for people with different degrees of visual impairment. In this state of the art, existing navigation devices have been analysed and classified according to the components they use to acquire data from the environment. In this respect, three classes of navigation devices are known to date: 'obstacle detectors', based on ultrasonic devices and sensors installed in the electronic navigation device in order to detect objects appearing within the system's working area; 'environment sensors', whose objective is the detection of both the object and the user. Devices of this class are installed at bus stops, metro and train stations, pedestrian crossings, etc., so that when the user's sensor enters the range of the sensors installed at the station, they inform the user of its presence. The user's sensor also detects the means of transport equipped with the corresponding laser- or ultrasound-based device, providing the user with information such as the bus number, route, etc. The third class of electronic navigation systems are the 'navigation devices'. These are based on GPS, indicating to the user both their location and the route to follow to reach their destination. Dunai, L. (2010). Design, modeling and analysis of object localization through acoustical signals for cognitive electronic travel aid for blind people [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8441

    Compact Environment Modelling from Unconstrained Camera Platforms

    Mobile robotic systems need to perceive their surroundings in order to act independently. In this work, a perception framework is developed which interprets the data of a binocular camera in order to transform it into a compact, expressive model of the environment. This model enables a mobile system to move in a targeted way and interact with its surroundings. It is shown how the developed methods also provide a solid basis for technical assistive aids for visually impaired people.