
    Indoor pedestrian dead reckoning calibration by visual tracking and map information

    Currently, Pedestrian Dead Reckoning (PDR) systems are becoming increasingly attractive in the indoor positioning market, mainly because of the development of cheap and lightweight Micro Electro-Mechanical Systems (MEMS) in smartphones and the reduced need for additional infrastructure in indoor areas. However, PDR still suffers from drift accumulation and needs support from external positioning systems. Vision-aided inertial navigation, one possible solution to this problem, has become popular for indoor localization and delivers better performance than a standalone PDR system. Previous studies, however, use a fixed platform and feature-extraction-based methods for visual tracking. This paper instead contributes a distributed implementation of the positioning system and uses deep learning for visual tracking. In addition, since both inertial navigation and the optical system provide only relative positioning information, this paper contributes a method that integrates a digital map with real geographical coordinates to supply absolute locations. The hybrid system has been tested on the two common smartphone operating systems, iOS and Android, with corresponding data collection apps, in order to test the robustness of the method. It also uses two different calibration approaches: time synchronization of positions and heading calibration based on time steps. According to the results, localization from both operating systems is significantly improved after integration with the visual tracking data.
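    As a rough illustration of the two calibration strategies named in the abstract (time synchronization of positions and heading calibration based on time steps), the Python sketch below propagates a step-and-heading PDR track and corrects it with time-stamped visual-tracking fixes. All function and variable names are illustrative assumptions, not identifiers from the paper.

        import math

        def pdr_step(position, heading_rad, step_length):
            """Propagate a step-and-heading PDR model by one detected step."""
            x, y = position
            return (x + step_length * math.sin(heading_rad),
                    y + step_length * math.cos(heading_rad))

        def calibrate_positions(pdr_track, vision_track, tol=0.1):
            """Time synchronization of positions: for each vision fix, overwrite
            the PDR point whose timestamp lies within `tol` seconds of it."""
            calibrated = dict(pdr_track)                  # {timestamp: (x, y)}
            for t_v, xy in vision_track.items():
                t_near = min(calibrated, key=lambda t: abs(t - t_v))
                if abs(t_near - t_v) <= tol:
                    calibrated[t_near] = xy
            return calibrated

        def calibrate_heading(vision_track, t0, t1):
            """Heading calibration based on time steps: heading between two
            vision fixes, using the same sin/cos convention as pdr_step."""
            x0, y0 = vision_track[t0]
            x1, y1 = vision_track[t1]
            return math.atan2(x1 - x0, y1 - y0)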

    Vision-Aided Indoor Pedestrian Dead Reckoning

    Vision-aided inertial navigation has recently become a more popular method for indoor positioning. This popularity is largely due to the development of lightweight, low-cost Micro Electro-Mechanical Systems (MEMS) as well as the advancement and availability of CCD cameras in public indoor areas. While inertial sensors are limited by drift accumulation and cameras by object detection within the line of sight, integrating the two sensors can compensate for their respective drawbacks and provide more accurate positioning solutions. This study builds upon earlier research on a “Vision-Aided Indoor Pedestrian Tracking System” to address the challenges of indoor positioning with more accurate and seamless solutions. It improves the overall design and implementation of inertial sensor fusion for indoor applications. In this regard, genuine indoor maps and geographical information, i.e. digitized floor plans, are used for the visual tracking application in the pilot study. Both the inertial positioning and the visual tracking components can work stand-alone with additional location information from the maps. In addition, while the visual tracking component helps calibrate pedestrian dead reckoning and provides better accuracy, the inertial sensing module can take over positioning and tracking whenever the user leaves the camera's view, until the user is detected in the video again. The mean accuracy of this positioning system is 10.98% higher than that of uncalibrated inertial positioning in the experiments.
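    The fallback behaviour described above (rely on the camera while the pedestrian is detected, continue with inertial dead reckoning when the camera loses the target, and re-anchor once detection resumes) can be sketched as a simple switching rule. The class and parameter names below are assumptions for illustration only, not the paper's implementation.

        import math

        class HybridTracker:
            """Switches between visual tracking and inertial step-and-heading PDR."""

            def __init__(self, start_xy, start_heading_rad):
                self.xy = start_xy
                self.heading = start_heading_rad

            def update(self, vision_fix=None, step_length=None, heading_rad=None):
                if vision_fix is not None:
                    # The camera sees the user: adopt the visual fix, which also
                    # removes accumulated PDR drift.
                    self.xy = vision_fix
                elif step_length is not None and heading_rad is not None:
                    # The camera lost the user: propagate with inertial PDR.
                    self.heading = heading_rad
                    x, y = self.xy
                    self.xy = (x + step_length * math.sin(self.heading),
                               y + step_length * math.cos(self.heading))
                return self.xy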