16 research outputs found

    Vision-Aided Indoor Pedestrian Dead Reckoning

    Vision-aided inertial navigation has recently become a popular method for indoor positioning. This popularity is largely due to the development of lightweight, low-cost Micro Electro-Mechanical Systems (MEMS) as well as the advancement and availability of CCD cameras in public indoor areas. While inertial sensors are limited by drift accumulation and cameras by the need for line-of-sight object detection, integrating the two sensors can compensate for their respective drawbacks and provide a more accurate positioning solution. This study builds upon earlier research on a “Vision-Aided Indoor Pedestrian Tracking System” to address the challenges of indoor positioning with a more accurate and seamless solution. The study improves the overall design and implementation of inertial sensor fusion for indoor applications. In this regard, genuine indoor maps and geographical information, i.e. digitized floor plans, are used for the visual tracking application in the pilot study. Both the inertial positioning and the visual tracking component can work stand-alone with the additional location information from the maps. In addition, while the visual tracking component can calibrate pedestrian dead reckoning and provide better accuracy, the inertial sensing module can take over positioning and tracking whenever the user cannot be detected by the camera, until the user appears in the video again. The mean accuracy of this positioning system was 10.98% higher than that of uncalibrated inertial positioning during the experiments.
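
    As a minimal illustration of the fusion described above (a sketch under our own assumptions, not the paper's implementation), the following Python fragment advances a step-and-heading PDR estimate and pulls it back toward an absolute camera fix whenever the user is detected; the step length and blending weight are placeholder values.

        import math

        def pdr_step(x, y, heading_rad, step_length=0.7):
            """Advance the dead-reckoned position by one detected step."""
            return (x + step_length * math.cos(heading_rad),
                    y + step_length * math.sin(heading_rad))

        def fuse_camera_fix(pdr_pos, cam_pos, cam_weight=0.9):
            """Blend the drifting PDR estimate toward an absolute visual fix.
            A cam_weight near 1 trusts the camera while it sees the user."""
            px, py = pdr_pos
            cx, cy = cam_pos
            return (cam_weight * cx + (1 - cam_weight) * px,
                    cam_weight * cy + (1 - cam_weight) * py)

        # While the user is outside the camera's view only pdr_step() runs;
        # once the user is re-detected, fuse_camera_fix() resets the drift.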

    Visual Odometry by Multi-frame Feature Integration

    This paper presents a novel stereo-based visual odometry approach that provides state-of-the-art results in real time, both indoors and outdoors. Our proposed method follows the procedure of computing optical flow and stereo disparity to minimize the re-projection error of tracked feature points. However, instead of following the traditional approach of performing this task using only consecutive frames, we propose a novel and computationally inexpensive technique that uses the whole history of the tracked feature points to compute the motion of the camera. In our technique, which we call multi-frame feature integration, the features measured and tracked over all past frames are integrated into a single, improved estimate. An augmented feature set, composed of the improved estimates, is added to the optimization algorithm, improving the accuracy of the computed motion and reducing ego-motion drift. Experimental results show that the proposed approach reduces pose error by up to 65% with a negligible additional computational cost of 3.8%. Furthermore, our algorithm outperforms all other known methods on the KITTI Vision Benchmark data set.
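
    The following sketch shows one plausible reading of the multi-frame integration step (our own notation, not the authors' code): the previous 3D estimate of a feature is carried into the current frame with the estimated inter-frame motion and averaged with the new stereo measurement, so the whole track history sharpens the estimate that enters the re-projection-error minimization.

        import numpy as np

        def integrate_feature(p_prev, n_prev, R, t, p_meas):
            """p_prev: integrated 3D point in the previous camera frame.
            n_prev: number of observations fused into p_prev so far.
            R, t:   estimated motion from the previous to the current frame.
            p_meas: new 3D measurement (e.g. stereo triangulation)."""
            p_pred = R @ p_prev + t                 # carry history forward
            n = n_prev + 1
            p_new = p_pred + (p_meas - p_pred) / n  # incremental mean of track
            return p_new, n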

    Indoor pedestrian dead reckoning calibration by visual tracking and map information

    Pedestrian Dead Reckoning (PDR) systems are currently becoming more attractive in the indoor positioning market. This is mainly due to the development of cheap, lightweight Micro Electro-Mechanical Systems (MEMS) on smartphones and the reduced need for additional infrastructure in indoor areas. However, PDR still faces the problem of drift accumulation and needs support from external positioning systems. Vision-aided inertial navigation, one possible solution to that problem, has become very popular in indoor localization, with better performance than a stand-alone PDR system. In the literature, however, previous studies used a fixed platform, and the visual tracking relied on feature-extraction-based methods. This paper instead contributes a distributed implementation of the positioning system and uses deep learning for visual tracking. Meanwhile, as both inertial navigation and the optical system can only provide relative positioning information, this paper contributes a method to integrate a digital map carrying real geographical coordinates to supply absolute locations. This hybrid system has been tested on the two common smartphone operating systems, iOS and Android, with corresponding data collection apps, in order to test the robustness of the method. It also uses two different calibration approaches: time synchronization of positions, and heading calibration based on time steps, sketched below. According to the results, localization information collected on both operating systems improved significantly after integration with the visual tracking data.
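
    As a hedged sketch of the "heading calibration based on time steps" idea (names and interfaces are ours, not the paper's), two time-synchronized visual-tracking fixes give the true walking direction, and its difference from the PDR heading becomes the correction:

        import math

        def heading_correction(cam_fix_t0, cam_fix_t1, pdr_heading_rad):
            """Estimate the PDR heading error from two camera fixes."""
            dx = cam_fix_t1[0] - cam_fix_t0[0]
            dy = cam_fix_t1[1] - cam_fix_t0[1]
            visual_heading = math.atan2(dy, dx)   # direction seen by camera
            err = visual_heading - pdr_heading_rad
            # wrap to (-pi, pi] so the correction takes the short way around
            return math.atan2(math.sin(err), math.cos(err))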

    Mobile graphics: SIGGRAPH Asia 2017 course

    Peer reviewed. Postprint (published version).

    3D Passive-Vision-Aided Pedestrian Dead Reckoning for Indoor Positioning

    Vision-aided Pedestrian Dead Reckoning (PDR) systems have become increasingly popular thanks to ubiquitous mobile phones embedded with several sensors. This is particularly important for indoor use, where other indoor positioning technologies require additional installations or body-attached sensors. This paper proposes and develops a novel 3D passive vision-aided PDR system that uses multiple surveillance cameras together with smartphone-based PDR. The proposed system can continuously track users’ movement across different floors by integrating the results of inertial navigation with Faster R-CNN-based real-time pedestrian detection, while using known camera locations and embedded barometers to provide the floor/height information that places user positions in 3D space. This novel system offers a relatively low-cost and user-friendly solution that requires no modification of currently available mobile devices or of the indoor infrastructure already present in many public buildings. In a test of the prototype in a four-floor building, the system provided a horizontal accuracy of 0.16 m and a vertical accuracy of 0.5 m. This level of accuracy is better than the accuracy targets set by several emergency services and regulators, including the Federal Communications Commission (FCC). The system was developed for both Android- and iOS-running devices.
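
    The floor/height component described above can be illustrated with a small sketch (standard barometric formula; the per-floor height is a building-specific assumption, not a value from the paper):

        def pressure_to_height(p_hpa, p_ref_hpa):
            """Relative height (m) via the international barometric formula."""
            return 44330.0 * (1.0 - (p_hpa / p_ref_hpa) ** (1.0 / 5.255))

        def floor_index(height_m, floor_height_m=3.2):
            """Snap a height estimate to the nearest floor index."""
            return round(height_m / floor_height_m)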

    Navigation systems for mobile platforms using visual odometry (Sistemas de navegación en plataformas móviles mediante odometría visual)

    This document presents a way to detect and track features in image sequences (videos) recorded by a moving monocular camera placed on a mobile platform. The final goal of these detections is to estimate the trajectory travelled by the camera, and the mathematical calculations behind this trajectory estimation are also presented. A main program was developed as part of the project, able to track points throughout the sequences and register their trajectories; this program does not itself estimate the camera’s trajectory. Several methods used by different authors that are of interest for visual odometry are also discussed, and the tracking step is sketched below.
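
    The described detect-and-track pipeline can be sketched with standard OpenCV calls (Shi-Tomasi corners plus pyramidal Lucas-Kanade); this is only an illustration of the technique, not the project's program, and the input file name is a placeholder.

        import cv2

        cap = cv2.VideoCapture("sequence.mp4")      # assumed input video
        ok, frame = cap.read()
        prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=7)
        tracks = [[tuple(p.ravel())] for p in pts]  # one trajectory per point

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                          pts, None)
            for tr, p, s in zip(tracks, new_pts, status.ravel()):
                if s:               # extend only successfully tracked points
                    tr.append(tuple(p.ravel()))
            prev_gray, pts = gray, new_pts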

    Automatic Dense 3D Scene Mapping from Non-overlapping Passive Visual Sensors for Future Autonomous Systems

    The ever-increasing demand for higher levels of autonomy in robots and vehicles means there is an ever greater need for such systems to be aware of their surroundings. While solutions already exist for creating 3D scene maps, many are based on active scanning devices such as laser scanners and depth cameras that are expensive, unwieldy, or do not function well under certain environmental conditions. As a result, passive cameras are a favoured sensor due to their low cost, small size, and ability to work in a range of lighting conditions. In this work we address some of the remaining research challenges in 3D mapping around a moving platform. We build on prior work in dense stereo imaging and Stereo Visual Odometry (SVO) and extend Structure from Motion (SfM) to create a pipeline optimised for on-vehicle sensing. Using forward-facing stereo cameras, we use state-of-the-art SVO and dense stereo techniques to map the scene in front of the vehicle. Given the significant amount of prior research in dense stereo, we address the issue of selecting an appropriate method by creating a novel evaluation technique. Visual 3D mapping of dynamic scenes from a moving platform results in duplicated scene objects, so we extend the prior work on mapping by introducing a generalised dynamic-object removal process. Unlike other approaches that rely on computationally expensive segmentation or detection, our method uses existing data from the mapping stage together with the findings of our dense stereo evaluation. We introduce a new SfM approach that exploits the platform's motion to create a novel dense mapping process that exceeds the 3D data generation rate of state-of-the-art alternatives. Finally, we combine dense stereo, SVO, and our SfM approach to automatically align point clouds from non-overlapping views into a rotation- and scale-consistent global 3D model.
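
    As a compact sketch of the dense-stereo step underlying this pipeline (our illustration; the thesis's own method and evaluation are not reproduced here), a disparity map is back-projected into metric 3D points using placeholder calibration values:

        import numpy as np

        def disparity_to_points(disparity, fx, cx, cy, baseline_m):
            """Back-project a dense disparity map (pixels) into camera-frame
            3D points, assuming fx is roughly equal to fy."""
            h, w = disparity.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            valid = disparity > 0
            z = fx * baseline_m / disparity[valid]   # depth from disparity
            x = (u[valid] - cx) * z / fx
            y = (v[valid] - cy) * z / fx
            return np.stack([x, y, z], axis=1)       # N x 3 point cloud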

    Doctor of Philosophy

    The need for position and orientation information in a wide variety of applications has led to the development of equally varied methods for providing it. Among the alternatives, inertial navigation offers self-contained operation and provides angular rate, orientation, acceleration, velocity, and position information. Until recently, the size, cost, and weight of inertial sensors limited their use to vehicles with relatively large payload capacities and instrumentation budgets. However, the development of microelectromechanical system (MEMS) inertial sensors now offers the possibility of using inertial measurement in smaller, even human-scale, applications. Though much progress has been made toward this goal, many obstacles remain. While operating independently of any outside reference, inertial measurement suffers from unbounded errors that grow at rates up to cubic in time. Since the reduced size and cost of these new miniaturized sensors come at the expense of accuracy and stability, the problem of error accumulation becomes more acute. Nevertheless, researchers have demonstrated that useful results can be obtained in real-world applications. The research presented herein provides several contributions to the development of human-scale inertial navigation. A calibration technique has been developed that allows complex sensor models to be identified using inexpensive hardware and linear solution techniques; it is shown to provide significant improvements in the accuracy of the calibrated outputs of MEMS inertial sensors. Error correction algorithms based on easily identifiable characteristics of the sensor outputs have also been developed and are demonstrated in both one- and three-dimensional navigation. The results show significant improvements in the levels of accuracy that can be obtained with these inexpensive sensors. The algorithms also eliminate the empirical, application-specific simplifications and heuristics on which many existing techniques have depended, making inertial navigation a more viable solution for tracking the motion around us.
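
    The kind of linear calibration the dissertation discusses can be sketched as an ordinary least-squares fit (the static-pose scheme below is a common textbook choice, not necessarily the dissertation's sensor model):

        import numpy as np

        G = 9.80665  # m/s^2

        def calibrate_axis(readings, references):
            """Fit reading = scale * true + bias for one axis by least squares.
            readings:   raw outputs at each static pose
            references: known true accelerations (+G, -G, 0, ...)"""
            refs = np.asarray(references, dtype=float)
            A = np.column_stack([refs, np.ones_like(refs)])
            (scale, bias), *_ = np.linalg.lstsq(A, np.asarray(readings),
                                                rcond=None)
            return scale, bias

        def correct(raw, scale, bias):
            """Invert the identified model on a raw reading."""
            return (raw - bias) / scale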

    Optimal Image-Aided Inertial Navigation

    The utilization of cameras in integrated navigation systems is among the most recent areas of scientific research and high-tech industry development. The research is motivated by the need to calibrate off-the-shelf cameras and to fuse imaging and inertial sensors in poor GNSS environments. The three major contributions of this dissertation are:

    (1) A structureless camera auto-calibration and system calibration algorithm for a GNSS, IMU and stereo camera system. The auto-calibration bundle adjustment uses the scale restraint equation, which is free of object coordinates. The number of parameters to be estimated is significantly reduced in comparison with a self-calibrating bundle adjustment based on the collinearity equations, so the proposed method is computationally more efficient.

    (2) A loosely-coupled visual-odometry-aided inertial navigation algorithm. The fusion of the two sensors is usually performed using a Kalman filter. The pose changes are pairwise time-correlated, i.e. the measurement noise vector at the current epoch is correlated only with the one from the previous epoch. Time-correlated errors are usually modelled by a shaping filter. The shaping filter developed in this dissertation uses Cholesky factors, derived from the variance and covariance matrices of the measurement noise vectors, as its coefficients. Test results showed that the proposed algorithm performs better than existing ones and provides more realistic covariance estimates.

    (3) A tightly-coupled stereo multi-frame-aided inertial navigation algorithm for reducing position and orientation drift. Usually, image aiding based on visual odometry uses features tracked only across a pair of consecutive image frames. The proposed method integrates features tracked across multiple overlapping image frames to reduce the position and orientation drift. The measurement equation is derived from the SLAM measurement equation system, in which the landmark positions are algebraically eliminated by time-differencing. The derived measurements, however, are time-correlated; through a sequential de-correlation, the Kalman filter measurement update can be performed sequentially and optimally, as sketched below. The main advantages of the proposed algorithm are reduced computational requirements compared to SLAM and a seamless integration into an existing GNSS-aided IMU system.
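
    The sequential de-correlation named in the last contribution is a standard whitening step, sketched here as an illustration of the general technique rather than the dissertation's exact filter: the Cholesky factor of the measurement noise covariance transforms the measurement so that its components become independent and can be processed by scalar Kalman updates.

        import numpy as np

        def decorrelate(z, H, R):
            """Whiten z = H x + v, v ~ N(0, R), for sequential updates."""
            L = np.linalg.cholesky(R)    # R = L @ L.T
            z_w = np.linalg.solve(L, z)  # components now have unit variance
            H_w = np.linalg.solve(L, H)  # matching transformed design matrix
            return z_w, H_w              # R_w = I, so update row by row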