7 research outputs found

    An indoor navigation architecture using variable data sources for blind and visually impaired persons

    Contrary to outdoor positioning and navigation systems, there is no counterpart global solution for indoor environments. The deployment of an indoor positioning system must usually be adapted case by case, according to the infrastructure and the objective of the localization. A particularly delicate case concerns persons who are blind or visually impaired: a robust, easy-to-use indoor navigation solution would be extremely useful, but it is also particularly difficult to develop, given the special requirement that the system be more accurate and user-friendly than a general solution. This paper contributes to this subject by proposing a hybrid indoor positioning system that adapts to the surrounding indoor structure and handles different types of signals to increase accuracy. This would lower deployment costs, since deployment could proceed gradually, beginning with the likely existing Wi-Fi infrastructure for fair accuracy and adding visual tags and NFC tags, where necessary and possible, for high accuracy.

    Mental maps and the use of sensory information by blind and partially sighted people

    This article aims to fill an important gap in the literature by reporting on blind and partially sighted people's use of spatial representations (mental maps) from their perspective and when travelling on real routes. The results presented here were obtained from semi-structured interviews with 100 blind and partially sighted people in five different countries. They are intended to answer three questions: how blind and partially sighted people represent space, how these representations are used to support travel, and the implications for the design of travel aids and for orientation and mobility training. They show that blind and partially sighted people do have spatial representations and that a number of them explicitly use the term mental map. This article discusses the variety of approaches to spatial representations, including the sensory modalities used, the use of global or local representations, and the applications to support travel. The conclusions summarize the answers to the three questions and include a two-level preliminary classification of the spatial representations of blind and partially sighted people.

    Context-aware obstacle detection for navigation by visually impaired

    This paper presents a context-aware smartphone-based visual obstacle detection approach to aid visually impaired people in navigating indoor environments. The approach is based on processing two consecutive frames (images), computing optical flow, and tracking certain points to detect obstacles. The frame rate of the video stream is determined using a context-aware data fusion technique for the sensors on smartphones. Through an efficient and novel algorithm, a point dataset on each pair of consecutive frames is constructed and evaluated to check whether the points belong to an obstacle. In addition to determining the points based on the texture in each frame, our algorithm also considers the heading of user movement to find critical areas on the image plane. We validated the algorithm through experiments by comparing it against two comparable algorithms. The experiments were conducted in different indoor settings, and the results were compared and analyzed in terms of precision, recall, accuracy, and F-measure. The results show that, in comparison to the other two widely used algorithms for this process, our algorithm is more precise. We also considered the time-to-contact parameter for clustering the points and show that using this parameter improves clustering performance.
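The abstract's core idea can be sketched compactly: flag a tracked point as an obstacle candidate when its optical-flow displacement is large and it lies in the image region the user is heading toward. The function below is a hedged illustration of that combination; the strip geometry and magnitude threshold are my own assumptions, not the paper's algorithm.

```python
import numpy as np

def flag_obstacle_points(pts_prev, pts_curr, heading_x, img_w, thresh=5.0):
    """Flag tracked points as obstacle candidates when their flow
    magnitude exceeds `thresh` AND they fall in a vertical image strip
    centred on the user's heading. Geometry/threshold are illustrative."""
    flow = pts_curr - pts_prev                       # per-point displacement
    mag = np.linalg.norm(flow, axis=1)               # optical-flow magnitude
    strip_lo = heading_x - img_w / 4                 # heading-centred strip
    strip_hi = heading_x + img_w / 4
    in_strip = (pts_curr[:, 0] >= strip_lo) & (pts_curr[:, 0] <= strip_hi)
    return (mag > thresh) & in_strip
```

A fast-moving point off to the side is ignored, while the same flow inside the heading strip raises a flag, matching the abstract's emphasis on "critical areas on the image plane."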

    Visual-Inertial Sensor Fusion Models and Algorithms for Context-Aware Indoor Navigation

    Positioning in navigation systems is predominantly performed by Global Navigation Satellite Systems (GNSSs). However, while GNSS-enabled devices have become commonplace for outdoor navigation, their use for indoor navigation is hindered by GNSS signal degradation or blockage. Consequently, the development of alternative positioning approaches and techniques for navigation systems is an ongoing research topic. In this dissertation, I present a new approach and address three major navigational problems: indoor positioning, obstacle detection, and keyframe detection. The proposed approach utilizes inertial and visual sensors available on smartphones and is focused on developing: a framework for monocular visual-inertial odometry (VIO) to position a human or object using sensor fusion and deep learning in tandem; an unsupervised algorithm to detect obstacles using a sequence of visual data; and supervised context-aware keyframe detection. The underlying technique for monocular VIO is a recurrent convolutional neural network that computes six-degree-of-freedom (6DoF) pose in an end-to-end fashion, together with an extended Kalman filter module that fine-tunes the scale parameter based on inertial observations and manages errors. I compare the results of my featureless technique with the results of conventional feature-based VIO techniques and with manually scaled results. The comparison shows that, while the framework is more effective than the manually scaled featureless baseline and improves accuracy, feature-based methods still outperform the proposed approach in accuracy. The approach for obstacle detection is based on processing two consecutive images to detect obstacles. Experiments comparing my approach with two other widely used algorithms show that my algorithm performs better, achieving 82% precision compared with 69%.
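The EKF scale refinement described above reduces, in its simplest scalar form, to treating the inertially derived travelled distance as a noisy observation of (scale × up-to-scale VIO distance). The sketch below shows one such scalar Kalman update; it is my own minimal illustration of the idea, with an assumed measurement noise, not the dissertation's full filter.

```python
def update_scale(scale, var, vio_dist, imu_dist, imu_var=0.04):
    """One scalar Kalman update of the metric scale factor.
    Measurement model: imu_dist ~ scale * vio_dist + noise (var imu_var).
    `imu_var` is an assumed inertial-integration noise variance."""
    if vio_dist == 0:
        return scale, var                 # no motion, nothing to learn
    z = imu_dist / vio_dist               # observed scale for this segment
    r = imu_var / vio_dist ** 2           # measurement variance of z
    k = var / (var + r)                   # Kalman gain
    return scale + k * (z - scale), (1 - k) * var
```

Repeated over trajectory segments, the scale estimate converges while its variance shrinks, which is the role the abstract assigns to the EKF module alongside the end-to-end 6DoF network.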
In order to determine an appropriate frame-extraction rate from the video stream, I analyzed the movement patterns of the camera and inferred the context of the user to generate a model associating movement anomaly with an appropriate frame-extraction rate. The output of this model was utilized to determine the rate of keyframe extraction in visual odometry (VO). I defined and computed the effective frames for VO and used this approach for context-aware keyframe detection. The results show that using inertial data to infer the effective frames decreases the number of frames required.
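The mapping from movement context to keyframe rate can be illustrated with a simple rule: the steadier the device, the fewer frames VO needs. The thresholds and intervals below are illustrative assumptions standing in for the dissertation's learned model.

```python
def keyframe_interval(accel_std, base_interval=10, min_interval=2):
    """Map inertial movement variability (std of acceleration, m/s^2)
    to a keyframe extraction interval in frames. Thresholds are
    illustrative assumptions, not the paper's learned parameters."""
    if accel_std < 0.1:           # near-stationary: sample sparsely
        return base_interval
    if accel_std < 0.5:           # steady walking: moderate rate
        return base_interval // 2
    return min_interval           # erratic motion: sample densely
```

Tying the interval to inertial variability is what lets the pipeline drop redundant frames during steady motion while still catching rapid viewpoint changes, which is the frame-count reduction the abstract reports.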