
    Portable Robotic Navigation Aid for the Visually Impaired

    This dissertation addresses the limitations of existing visual-inertial (VI) SLAM methods, namely insufficient robustness and accuracy, for assistive navigation in large indoor spaces. Several improvements are made to existing SLAM technology, and the improved methods are used to enable two robotic assistive devices for the visually impaired, a robot cane and a robotic object manipulation aid, for assistive wayfinding and object detection/grasping. First, depth measurements are incorporated into the optimization process for device pose estimation to improve the success rate of VI SLAM's initialization and reduce scale drift. The improved method, called depth-enhanced visual-inertial odometry (DVIO), initializes immediately because the environment's metric scale can be derived from the depth data. Second, a hybrid PnP (perspective-n-point) method is introduced for more accurate estimation of the pose change between two camera frames by using the 3D data from both frames. Third, to implement DVIO on a smartphone with variable camera intrinsic parameters (CIP), a method called CIP-VMobile is devised to simultaneously estimate the intrinsic parameters and motion states of the camera. CIP-VMobile estimates in real time the CIP, which varies with the smartphone's pose due to the camera's optical image stabilization mechanism, resulting in more accurate device pose estimates. Various experiments are performed to validate the VI-SLAM methods with the two robotic assistive devices. Beyond these primary objectives, SM-SLAM is proposed as a potential extension of the existing SLAM methods in dynamic environments. This forward-looking exploration is premised on the potential that incorporating dynamic object detection capabilities in the front end could improve SLAM's overall accuracy and robustness. Various experiments have been conducted to validate the efficacy of this newly proposed method, using both public and self-collected datasets. The results substantiate the viability of this innovation, leaving a deeper investigation for future work.
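The core idea behind DVIO, recovering the environment's metric scale from depth measurements so the odometry can initialize immediately, can be illustrated with a minimal sketch. The function below is an illustrative assumption, not the dissertation's actual formulation (which optimizes scale jointly with pose): it simply takes a robust median of the ratios between metric sensor depths and up-to-scale visual-odometry depths.

```python
import statistics

def estimate_metric_scale(vo_depths, sensor_depths, min_pairs=3):
    """Estimate the scale factor aligning up-to-scale visual-odometry depths
    with metric depth-sensor readings, via a robust median of ratios.
    Illustrative sketch only, not the DVIO optimization itself."""
    ratios = [s / v for v, s in zip(vo_depths, sensor_depths) if v > 0 and s > 0]
    if len(ratios) < min_pairs:
        raise ValueError("not enough valid depth pairs to estimate scale")
    return statistics.median(ratios)

# Up-to-scale VO depths vs. metric depths from an RGB-D sensor:
vo = [1.0, 2.0, 4.0, 0.5]
rgbd = [2.1, 3.9, 8.0, 1.0]
print(estimate_metric_scale(vo, rgbd))  # 2.0
```

The median makes the estimate tolerant of a few outlier depth matches, which is why ratio-based scale recovery is commonly preferred over a single-pair estimate.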

    DRAGON: A Dialogue-Based Robot for Assistive Navigation with Visual Language Grounding

    Persons with visual impairments (PwVI) have difficulty understanding and navigating spaces around them. Current wayfinding technologies either focus solely on navigation or provide limited communication about the environment. Motivated by recent advances in visual-language grounding and semantic navigation, we propose DRAGON, a guiding robot powered by a dialogue system and the ability to associate the environment with natural language. By understanding the commands from the user, DRAGON is able to guide the user to the desired landmarks on the map, describe the environment, and answer questions from visual observations. Through effective utilization of dialogue, the robot can ground the user's free-form descriptions to landmarks in the environment, and give the user semantic information through spoken language. We conduct a user study with blindfolded participants in an everyday indoor environment. Our results demonstrate that DRAGON is able to communicate with the user smoothly, provide a good guiding experience, and connect users with their surrounding environment in an intuitive manner. Webpage and videos are at https://sites.google.com/view/dragon-wayfinding/hom
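Grounding a free-form user description to a known landmark, as the abstract describes, can be sketched as a fuzzy string match against the map's landmark names. The landmark set, coordinates, and similarity cutoff below are illustrative assumptions; DRAGON itself uses visual-language grounding, not this simple string matcher.

```python
import difflib

# Hypothetical landmark map: name -> (x, y) position. These entries are
# assumptions for illustration, not part of the DRAGON system.
LANDMARKS = {
    "water fountain": (3.2, 1.5),
    "elevator lobby": (10.0, 4.0),
    "vending machines": (7.5, 2.0),
}

def ground_description(utterance, landmarks=LANDMARKS, cutoff=0.5):
    """Map a free-form user utterance to the most similar landmark name,
    or None if nothing clears the similarity cutoff."""
    matches = difflib.get_close_matches(utterance.lower(), landmarks, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(ground_description("take me to the elevator loby"))  # elevator lobby
```

A real system would combine such lexical matching with visual and spatial context; the sketch only shows the interface shape of the grounding step.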

    Map data representation for indoor navigation - a design framework towards a construction of indoor map

    A map is a basic component of everyday navigation, helping people find information about locations, landmarks, and routes. With GPS and online map services such as Google Maps, navigating outdoors is easy. Inside buildings, however, navigation is not so easy because of the inherent characteristics and limitations of GPS, which has led to the creation of indoor navigation systems. Even though indoor navigation systems have been developed for a long time, there are still limitations in accuracy, reliability, and indoor spatial information. Navigating inside without indoor spatial information is a challenge for users. Regarding indoor spatial information, the research question is to find an appropriate framework for map data representation of indoor public spaces and buildings in order to promote indoor navigation for people, robots, and autonomous systems. This paper proposes a list of factors and components for the design framework for map data representation of indoor public spaces and buildings. The framework is presented as a multiple-layered model in which each layer is designed for a different purpose, with object and information classifications.
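The multiple-layered map model described above can be sketched as a small data structure in which each layer serves a distinct purpose and holds classified objects. The layer names, object fields, and example entries are assumptions for illustration, not the paper's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class MapObject:
    name: str
    category: str      # classification, e.g. "door", "stairs", "landmark"
    position: tuple    # (x, y) in building coordinates

@dataclass
class MapLayer:
    purpose: str       # what this layer is for, e.g. "structure", "navigation"
    objects: list = field(default_factory=list)

@dataclass
class IndoorMap:
    building: str
    floor: int
    layers: dict = field(default_factory=dict)

    def add(self, layer_name, obj):
        """Place an object on the named layer, creating the layer if needed."""
        self.layers.setdefault(layer_name, MapLayer(purpose=layer_name)).objects.append(obj)

m = IndoorMap(building="Library", floor=2)
m.add("structure", MapObject("Main door", "door", (0.0, 0.0)))
m.add("navigation", MapObject("Elevator", "landmark", (12.0, 3.5)))
print(len(m.layers))  # 2
```

Keeping each purpose in its own layer lets a renderer or route planner consume only the layers relevant to its task, which is the design motivation the paper's framework formalizes.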

    Haptic directional information for spatial exploration

    This paper investigates the efficacy of a tactile and haptic human-robot interface developed and trialled to aid navigation in poor visibility and audibility conditions, which occur, for example, in search and rescue. The newly developed interface generates haptic directional information to support human navigation when other senses are not, or only partially, accessible. The central question of this paper is whether humans are able to interpret haptic signals as denoting different spatial directions. The effectiveness of the haptic signals was measured in a novel experimental setup. Participants were given a stick (replicating the robot interface) and asked to reproduce the specific spatial information denoted by each of the haptic signals. Task performance was examined quantitatively, and the results show that the haptic signals can denote distinguishable spatial directions, supporting the hypothesis that tactile and haptic information can be used effectively to aid human navigation.

    SLAM for Visually Impaired People: A Survey

    In recent decades, several assistive technologies for visually impaired and blind (VIB) people have been developed to improve their ability to navigate independently and safely. At the same time, simultaneous localization and mapping (SLAM) techniques have become sufficiently robust and efficient to be adopted in the development of assistive technologies. In this paper, we first report the results of an anonymous survey conducted with VIB people to understand their experience and needs; we focus on digital assistive technologies that help them with indoor and outdoor navigation. Then, we present a literature review of assistive technologies based on SLAM. We discuss proposed approaches and indicate their pros and cons. We conclude by presenting future opportunities and challenges in this domain.

    The Application of Geographic Information Systems to Support Wayfinding for People with Visual Impairments or Blindness

    People with visual impairments or legal blindness rely on differing, comprehensive information for their individual mobility. Increasing the personal mobility of people with disabilities, and thereby enabling a self-determined life, is a major step toward a more inclusive society. Research and applications on the mobility of people with visual impairments or blindness mainly focus on technical applications or assistive orientation and navigation devices; less work covers individual needs, e.g., the information required for wayfinding. Moreover, active participation of people with disabilities in research and development is still limited. ways2see offers a new online application to support individual mobility in the context of pre-trip planning for people with visual impairments or blindness, based on a Geographic Information System (GIS). Obstacles, barriers, landmarks, orientation hints, and directions for wayfinding are generated from user profiles. The underlying network for GIS analysis is designed as a pedestrian network. This individually coded network approach integrates sidewalks and different types of crossings and implements various orientation and navigation attributes. ways2see integrates three research realms: first, implementing a participative and transdisciplinary research design; second, integrating personalized information aligned with individual user needs; and third, presenting the results of GIS analysis through an accessibly designed user interface.
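Routing over a pedestrian network whose edges carry accessibility attributes, in the spirit of the individually coded network described above, can be sketched with a weighted shortest-path search. The node names, edge kinds, penalty value, and graph below are illustrative assumptions, not the ways2see data model.

```python
import heapq

def shortest_path(graph, start, goal, crossing_penalty=50.0):
    """Dijkstra over edges (neighbor, length_m, kind); uncontrolled crossings
    receive an extra cost so routes prefer safer sidewalk segments."""
    def cost(length, kind):
        return length + (crossing_penalty if kind == "uncontrolled_crossing" else 0.0)

    queue, seen = [(0.0, start, [start])], set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, length, kind in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (dist + cost(length, kind), neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical pedestrian network: edges are (neighbor, length in meters, kind).
graph = {
    "A": [("B", 50.0, "sidewalk"), ("C", 20.0, "uncontrolled_crossing")],
    "B": [("D", 10.0, "sidewalk")],
    "C": [("D", 10.0, "sidewalk")],
}
print(shortest_path(graph, "A", "D"))  # (60.0, ['A', 'B', 'D'])
```

Without the penalty, the geometrically shorter route through the uncontrolled crossing (30 m via C) would win; with it, the planner prefers the longer sidewalk-only route, which mirrors how user-profile attributes can reshape a route for a person with a visual impairment.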