200 research outputs found

    Motion Tracklet Oriented 6-DoF Inertial Tracking Using Commodity Smartphones


    Finding Your Way Back: Comparing Path Odometry Algorithms for Assisted Return.

    We present a comparative analysis of inertial odometry algorithms for assisted return. An assisted return system facilitates backtracking of a previously taken path and can be particularly useful for blind pedestrians. We present a new algorithm for path matching and test it in simulated assisted return tasks with data from WeAllWalk, the only existing data set with inertial data recorded from blind walkers. We consider two odometry systems: one based on deep learning (RoNIN), and one based on robust turn detection and step counting. Our results show that the best path matching is obtained with the turns/steps odometry system.
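    The turns/steps formulation admits a compact implementation: count steps from acceleration-magnitude peaks, track heading by integrating the gyroscope yaw rate, and advance the position one stride per detected step. The sketch below illustrates that idea; the stride length, peak threshold, and function name are illustrative assumptions, not the paper's actual algorithm.

import numpy as np

def steps_turns_odometry(acc, gyro_z, dt, stride_m=0.7, peak_thresh=11.0):
    """Reconstruct a 2-D path from IMU data via step counting and turn tracking.

    acc         -- (N, 3) accelerometer samples in m/s^2
    gyro_z      -- (N,) yaw rate in rad/s (device orientation assumed fixed)
    dt          -- sampling period in seconds
    stride_m    -- assumed constant step length in metres
    peak_thresh -- acceleration-magnitude threshold for step peaks
    """
    mag = np.linalg.norm(acc, axis=1)
    heading = np.cumsum(gyro_z) * dt               # integrated yaw angle
    path = [np.zeros(2)]
    for i in range(1, len(mag) - 1):
        # A step: local maximum of acceleration magnitude above the threshold.
        if mag[i] > peak_thresh and mag[i] >= mag[i - 1] and mag[i] > mag[i + 1]:
            step = stride_m * np.array([np.cos(heading[i]), np.sin(heading[i])])
            path.append(path[-1] + step)
    return np.array(path)                          # (num_steps + 1, 2) positions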

    IONet: Learning to Cure the Curse of Drift in Inertial Odometry

    Inertial sensors play a pivotal role in indoor localization, which in turn lays the foundation for pervasive personal applications. However, low-cost inertial sensors, as commonly found in smartphones, are plagued by bias and noise, which lead to unbounded error growth when accelerations are double-integrated to obtain displacement. Small errors in state estimation propagate until the odometry becomes virtually unusable within seconds. We propose to break the cycle of continuous integration and instead segment inertial data into independent windows. The challenge becomes estimating the latent states of each window, such as velocity and orientation, as these are not directly observable from sensor data. We demonstrate how to formulate this as an optimization problem and show that deep recurrent neural networks can yield highly accurate trajectories, outperforming state-of-the-art shallow techniques on a wide range of tests and attachments. In particular, we demonstrate that IONet can generalize to estimate odometry for non-periodic motion, such as a shopping trolley or baby stroller, an extremely challenging task for existing techniques. Comment: To appear in AAAI18 (oral).
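    The core idea, regressing each window's motion instead of integrating raw accelerations, can be sketched compactly: a recurrent network maps a fixed-length IMU window to a polar displacement (travelled distance and heading change). The layer sizes and class name below are illustrative assumptions rather than the published architecture.

import torch
import torch.nn as nn

class IONetSketch(nn.Module):
    """Per-window polar displacement regression from raw IMU sequences."""

    def __init__(self, hidden=128):
        super().__init__()
        # Each frame carries 6 channels: 3-axis gyroscope + 3-axis accelerometer.
        self.lstm = nn.LSTM(input_size=6, hidden_size=hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        # Output per window: (delta_l, delta_psi), distance and heading change.
        self.head = nn.Linear(2 * hidden, 2)

    def forward(self, imu_window):                 # (batch, window_len, 6)
        features, _ = self.lstm(imu_window)
        return self.head(features[:, -1])          # (batch, 2)

# Chaining per-window outputs yields a trajectory without double integration:
# x_{k+1} = x_k + delta_l * [cos(psi_k + delta_psi), sin(psi_k + delta_psi)],
# so per-window errors no longer compound quadratically with time.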

    Benchmarking Visual-Inertial Deep Multimodal Fusion for Relative Pose Regression and Odometry-aided Absolute Pose Regression

    Visual-inertial localization is a key problem in computer vision and robotics applications such as virtual reality, self-driving cars, and aerial vehicles. The goal is to estimate an accurate pose of an object when either the environment or the dynamics are known. Recent methods directly regress the pose using convolutional and spatio-temporal networks. Absolute pose regression (APR) techniques predict the absolute camera pose from an image input in a known scene. Odometry methods perform relative pose regression (RPR), predicting the relative pose from known object dynamics (visual or inertial inputs). The localization task can be improved by combining information from both sources in a cross-modal setup, a challenging problem because the two tasks are contradictory. In this work, we conduct a benchmark to evaluate deep multimodal fusion based on pose graph optimization (PGO) and attention networks. Auxiliary and Bayesian learning are integrated for the APR task. We show accuracy improvements for the RPR-aided APR task and for the RPR-RPR task for aerial vehicles and hand-held devices. We conduct experiments on the EuRoC MAV and PennCOSYVIO datasets, and record a novel industry dataset. Comment: Under review.
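    One of the fusion strategies named above, soft attention over per-modality features, can be sketched as follows: encode each modality, score it, and reweight before a shared pose head. The feature dimensions, class name, and output parameterization are illustrative assumptions, not the benchmark's exact networks.

import torch
import torch.nn as nn

class AttentionFusionRPR(nn.Module):
    """Soft-attention fusion of visual and inertial features for pose regression."""

    def __init__(self, vis_dim=512, imu_dim=128, fused=256):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, fused)
        self.imu_proj = nn.Linear(imu_dim, fused)
        # Soft attention: score each modality's feature, normalize, reweight.
        self.score = nn.Linear(fused, 1)
        self.pose_head = nn.Linear(fused, 7)       # 3-D translation + quaternion

    def forward(self, vis_feat, imu_feat):         # (batch, vis_dim), (batch, imu_dim)
        feats = torch.stack([self.vis_proj(vis_feat),
                             self.imu_proj(imu_feat)], dim=1)   # (batch, 2, fused)
        weights = torch.softmax(self.score(feats), dim=1)       # (batch, 2, 1)
        fused = (weights * feats).sum(dim=1)                    # (batch, fused)
        return self.pose_head(fused)

    The attention weights let the network lean on the inertial branch when the visual input degrades (motion blur, low texture) and vice versa, which is the usual motivation for learned multimodal fusion over fixed-weight concatenation.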