3 research outputs found

    Combining Passive Visual Cameras and Active IMU Sensors to Track Cooperative People

    No full text
    We attack the problem of persistently tracking cooperative people, such as children, the elderly, or patients, by combining passive and active tracking techniques. Passive tracking uses visual signals from surveillance cameras, but vision-based people tracking becomes hard in challenging scenarios such as long-term or heavy occlusion, people changing their movement patterns during occlusion, or people temporarily moving out of the visual field. Active tracking uses signals from an Inertial Measurement Unit (IMU) carried by the target. IMU-based tracking is independent of visual signals, so it keeps working when people are visually occluded and offers clues about where the target could be, helping the visual tracker re-identify the target. Meanwhile, when visual signals are available, visual tracking can calibrate the IMU-based tracking to avoid sensor drift. Experimental results show that IMU and visual tracking are complementary, and their combination tracks cooperative people robustly in many challenging scenarios.
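    The complementary hand-off described in this abstract can be sketched as a simple switching estimator: use the camera fix when the target is visible (and re-anchor the IMU origin to cancel drift), otherwise dead-reckon from the IMU. The `imu_state` dictionary, `fuse_position` helper, and the toy displacement trace are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def fuse_position(visual_fix, imu_state):
    """Return the current position estimate.  When a camera fix is
    available, use it and re-anchor the IMU origin (drift correction);
    when the target is occluded, dead-reckon from the last anchor
    using the IMU's accumulated displacement."""
    if visual_fix is not None:
        imu_state["origin"] = np.asarray(visual_fix) - imu_state["displacement"]
        return np.asarray(visual_fix, dtype=float)
    return imu_state["origin"] + imu_state["displacement"]

# Toy trace: one visible frame, then two occluded frames
state = {"origin": np.zeros(2), "displacement": np.zeros(2)}
track = []
for visual_fix, imu_disp in [((1.0, 0.0), (0.0, 0.0)),
                             (None,       (0.4, 0.1)),
                             (None,       (0.9, 0.2))]:
    state["displacement"] = np.array(imu_disp)
    track.append(fuse_position(visual_fix, state))
```

    During the two occluded frames the estimate continues from the last visually calibrated anchor, which is exactly the complementarity the abstract claims.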

    Fusion of non-visual and visual sensors for human tracking

    Human tracking is an extensively researched yet still challenging area of Computer Vision, with a wide range of applications such as surveillance and healthcare. People may not be tracked successfully with visual information alone in challenging cases such as long-term occlusion. We therefore propose to combine information from other sensors with surveillance cameras to persistently localize and track humans, an approach made increasingly practical by the pervasiveness of mobile devices such as cellphones, smart watches, and smart glasses embedded with sensors including accelerometers, gyroscopes, magnetometers, GPS, and WiFi modules. In this thesis, we first investigate the application of the Inertial Measurement Unit (IMU) in mobile devices to human activity recognition and human tracking. We then develop novel persistent human tracking and indoor localization algorithms based on the fusion of non-visual and visual sensors, which not only overcomes the occlusion challenge in visual tracking but also alleviates the calibration and drift problems in IMU tracking --Abstract, page iii
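    IMU-based tracking of the kind this thesis builds on is typically pedestrian dead reckoning: detect a step with the accelerometer, then advance the position along the current gyroscope/magnetometer heading. A minimal sketch of one step update, assuming a hypothetical fixed step length `step_length_m`:

```python
import math

def pdr_step(pos, heading_rad, step_length_m=0.7):
    """One pedestrian-dead-reckoning update: advance the 2D position
    by one detected step along the current heading.  step_length_m is
    a hypothetical per-user calibration constant."""
    x, y = pos
    return (x + step_length_m * math.cos(heading_rad),
            y + step_length_m * math.sin(heading_rad))

pos = (0.0, 0.0)
for heading in [0.0, 0.0, math.pi / 2]:   # two steps east, one step north
    pos = pdr_step(pos, heading)
```

    Each update compounds heading and step-length error, which is why the thesis pairs this with visual fixes for calibration.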

    3D Passive-Vision-Aided Pedestrian Dead Reckoning for Indoor Positioning

    Vision-aided Pedestrian Dead Reckoning (PDR) systems have become increasingly popular thanks to the ubiquity of mobile phones embedded with multiple sensors. This is particularly important indoors, where other positioning technologies require additional installation or body-attached sensors. This paper proposes and develops a novel 3D passive-vision-aided PDR system that combines multiple surveillance cameras with smartphone-based PDR. The proposed system continuously tracks users' movement across different floors by integrating inertial navigation with Faster R-CNN-based real-time pedestrian detection, while using known camera locations and embedded barometers to provide floor/height information and thereby locate users in 3D space. The system offers a relatively low-cost and user-friendly solution that requires no modification to currently available mobile devices or to the indoor infrastructure already present in many public buildings. In a prototype test in a four-floor building, the system achieved a horizontal accuracy of 0.16 m and a vertical accuracy of 0.5 m, better than the accuracy targets set for emergency services by several bodies, including the Federal Communications Commission (FCC). The system is implemented for both Android and iOS devices.
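    The barometer-based floor identification mentioned in this abstract can be approximated with the pressure-altitude relation (roughly 8.3 m of altitude per hPa of pressure drop near sea level). The reference pressure and storey height below are hypothetical calibration values, not the paper's:

```python
def floor_from_pressure(p_hpa, p_ref_hpa, floor_height_m=3.5):
    """Estimate a floor index from barometer readings.  Uses the
    near-sea-level approximation of ~8.3 m of altitude per hPa of
    pressure drop; p_ref_hpa is the pressure measured at the ground
    floor and floor_height_m is an assumed storey height (both
    hypothetical calibration values)."""
    altitude_m = (p_ref_hpa - p_hpa) * 8.3
    return round(altitude_m / floor_height_m)

ground = 1013.25                                # hPa, ground-floor reference
floor = floor_from_pressure(1012.40, ground)    # ~7 m above reference -> floor 2
```

    In practice the reference pressure drifts with weather, so a deployed system would re-anchor it, e.g. whenever a camera fix places the user on a known floor.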