
    Joint Visual and Wireless Tracking System

    Object tracking is an important component of many applications, including surveillance, manufacturing, and inventory tracking. The most common approach combines a surveillance camera with an appearance-based visual tracking algorithm. While this approach can provide high tracking accuracy, the tracker can easily diverge in environments with heavy occlusion. In recent years, wireless tracking systems operating in different frequency ranges have become more popular. While systems using ultra-wideband frequencies suffer from problems similar to those of visual systems, some systems use frequencies as low as those in the AM band to circumvent the problem of obstacles, exploiting the near-field relationship between the electric and magnetic fields to achieve tracking accuracy down to about one meter. In this dissertation, I study the combination of a visual tracker and a low-frequency wireless tracker to improve visual tracking in highly occluded areas. The proposed system uses two homographies relating the world coordinates to the image coordinates of the head and the foot of the target person. Working in the world coordinate system, the proposed system combines the visual tracker and the wireless tracker in an Extended Kalman Filter framework for joint tracking. Extensive experiments have been conducted on both simulations and real videos to demonstrate the validity of the proposed scheme.
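The head and foot homographies map pixel coordinates onto the world ground plane, which is what allows the visual and wireless measurements to be fused in a common frame. A minimal sketch of that mapping, with a made-up homography matrix standing in for one obtained by camera calibration:

```python
import numpy as np

def image_to_world(H, pt):
    """Apply a 3x3 homography to a 2D image point (u, v) and dehomogenize."""
    u, v = pt
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# Illustrative foot homography (placeholder for a calibrated one):
# roughly scales pixels to meters and shifts the origin.
H_foot = np.array([[0.01, 0.0, -3.2],
                   [0.0, 0.01, -2.4],
                   [0.0, 0.0, 1.0]])

# Map the tracked person's foot pixel location to ground-plane coordinates.
wx, wy = image_to_world(H_foot, (640, 480))
```

In the same way, a second homography for the head plane would give a consistent world-frame estimate even when the feet are occluded.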

    Evaluating indoor positioning systems in a shopping mall : the lessons learned from the IPIN 2018 competition

    The Indoor Positioning and Indoor Navigation (IPIN) conference holds an annual competition in which indoor localization systems from different research groups worldwide are evaluated empirically. The objective of this competition is to establish a systematic evaluation methodology with rigorous metrics for both real-time (on-site) and post-processing (off-site) situations, in a realistic environment unfamiliar to the prototype developers. For the IPIN 2018 conference, this competition was held on September 22nd, 2018, in Atlantis, a large shopping mall in Nantes (France). Four competition tracks (two on-site and two off-site) were designed. They consisted of several 1 km routes traversing several floors of the mall. Along these paths, 180 points were topographically surveyed with 10 cm accuracy to serve as ground-truth landmarks, combining theodolite measurements, differential global navigation satellite system (GNSS) measurements, and 3D scanner systems. Thirty-four teams effectively competed. The accuracy score corresponds to the third quartile (75th percentile) of an error metric that combines the horizontal positioning error and the floor detection. The best results for the on-site tracks showed an accuracy score of 11.70 m (Track 1) and 5.50 m (Track 2), while the best results for the off-site tracks showed an accuracy score of 0.90 m (Track 3) and 1.30 m (Track 4). These results show that it is possible to obtain high-accuracy indoor positioning solutions in large, realistic environments using wearable lightweight sensors without deploying any beacons. This paper describes the organization of the tracks, analyzes the methodology used to quantify the results, reviews the lessons learned from the competition, and discusses its future.
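The scoring rule described above can be sketched as follows; the 15 m-per-floor penalty used here is an assumption for illustration, not necessarily the competition's exact value:

```python
import numpy as np

def accuracy_score(horizontal_errors, floor_errors, floor_penalty=15.0):
    """Third quartile of a combined error: horizontal error plus a
    fixed penalty (in meters) for each floor of floor-detection error."""
    combined = np.asarray(horizontal_errors) + floor_penalty * np.abs(floor_errors)
    return float(np.percentile(combined, 75))

# Four ground-truth points: one has a wrong-floor detection (floor error 1).
score = accuracy_score([1.2, 0.8, 3.5, 2.0], [0, 0, 1, 0])
```

Using the third quartile rather than the mean makes the score robust to a few catastrophic outliers while still rewarding consistently low error.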

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the First Person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field.

    Resilient Infrastructure and Building Security


    Improving Accuracy in Ultra-Wideband Indoor Position Tracking through Noise Modeling and Augmentation

    The goal of this research is to improve the precision of tracking in an ultra-wideband (UWB) based Local Positioning System (LPS). This work is motivated by the approach taken to improve accuracy in the Global Positioning System (GPS): noise modeling and augmentation. Since UWB indoor position tracking is accomplished using methods similar to those of GPS, the same two general approaches can be used to improve accuracy. Trilateration calculations are affected by errors in distance measurements from the set of fixed points to the object of interest. When these errors are systemic, each distinct set of fixed points can be said to exhibit a unique set noise. For UWB indoor position tracking, the set of fixed points is a set of sensors measuring the distance to a tracked tag. In this work we develop a noise model for this sensor set noise, along with a particle filter that uses our set-noise model. To the author's knowledge, this noise has not previously been identified and modeled for an LPS. We test our methods on a commercially available UWB system in a real-world setting. From the results we observe an approximately 15% improvement in accuracy over raw UWB measurements. The UWB system is an example of an aided sensor, since it requires a person to carry a device that continuously broadcasts its identity so that its location can be determined. The location of each user is therefore uniquely known even when multiple users are present. However, it suffers from limited precision compared to some unaided sensors, such as cameras, which are typically placed with line of sight (LOS) to the tracking area. An unaided system does not require active participation from people, and therefore has more difficulty uniquely identifying the location of each person when a large number of people are present in the tracking area.
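The trilateration step that the set-noise model corrects can be sketched by linearized least squares over the anchor-to-tag distance measurements; the anchor layout and distances here are illustrative, not from the tested UWB system:

```python
import numpy as np

def trilaterate(anchors, dists):
    """2D trilateration: linearize the circle equations by subtracting the
    first one, then solve the resulting linear system in least squares."""
    anchors = np.asarray(anchors, float)
    dists = np.asarray(dists, float)
    x1, y1 = anchors[0]
    d1 = dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol  # estimated (x, y)

# Three illustrative anchors and exact distances to a tag at (3, 4).
anchors = [(0, 0), (10, 0), (0, 10)]
true_pos = np.array([3.0, 4.0])
dists = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
est = trilaterate(anchors, dists)
```

With noisy distances the same solve yields a biased estimate; modeling that systemic bias per sensor set is the role of the set-noise model and particle filter described above.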
We therefore develop a generalized fusion framework that combines measurements from aided and unaided systems to improve the tracking precision of the aided system and to resolve data-association issues in the unaided system. The framework uses a Kalman filter to fuse measurements from multiple sensors. We test our approach on two unaided sensor systems: Light Detection And Ranging (LADAR) and a camera system. Our study investigates the impact of increasing the number of people in an indoor environment on the accuracy of the proposed fusion framework. From the results we observed that, depending on the type of unaided sensor system used for augmentation, the improvement in precision ranged from 6-25% for up to 3 people.
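A minimal sketch of the Kalman-filter fusion idea: a 2D position estimate sequentially incorporates an aided (UWB-like) and an unaided (camera-like) position measurement, each with its own noise covariance. All noise values here are made up for illustration:

```python
import numpy as np

def kalman_update(x, P, z, R):
    """Kalman measurement update with identity measurement model H = I:
    blend the prior estimate x (covariance P) with measurement z (noise R)."""
    K = P @ np.linalg.inv(P + R)            # Kalman gain
    x_new = x + K @ (z - x)
    P_new = (np.eye(len(x)) - K) @ P
    return x_new, P_new

x = np.zeros(2)                  # prior position estimate
P = np.eye(2) * 4.0              # prior covariance (very uncertain)
z_uwb = np.array([1.0, 1.0])     # aided-sensor measurement (noisier)
z_cam = np.array([0.8, 1.2])     # unaided-sensor measurement (more precise)

x, P = kalman_update(x, P, z_uwb, np.eye(2) * 1.0)
x, P = kalman_update(x, P, z_cam, np.eye(2) * 0.25)
```

Because the camera measurement carries a smaller covariance, the filter weights it more heavily, which is how the precise unaided sensor sharpens the aided estimate; conversely, the aided sensor's known identity resolves which camera track belongs to which person.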

    Indoor pedestrian dead reckoning calibration by visual tracking and map information

    Currently, Pedestrian Dead Reckoning (PDR) systems are becoming increasingly attractive in the indoor positioning market, mainly due to the development of cheap, lightweight Micro Electro-Mechanical Systems (MEMS) on smartphones and the low requirement for additional infrastructure in indoor areas. However, PDR still faces the problem of drift accumulation and needs support from external positioning systems. Vision-aided inertial navigation, one possible solution to this problem, has become very popular in indoor localization, offering better performance than a standalone PDR system. Previous studies in the literature, however, use a fixed platform, and their visual tracking relies on feature-extraction-based methods. This paper instead contributes a distributed implementation of the positioning system and uses deep learning for visual tracking. Meanwhile, since both inertial navigation and the optical system can only provide relative positioning information, this paper contributes a method to integrate a digital map with real geographical coordinates to supply absolute locations. This hybrid system has been tested on the two common smartphone operating systems, iOS and Android, with corresponding data-collection apps, in order to test the robustness of the method. It also uses two different calibration approaches: time synchronization of positions, and heading calibration based on time steps. According to the results, localization information collected from both operating systems was significantly improved after integration with the visual tracking data.
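The drift accumulation mentioned above comes from the basic dead-reckoning update, in which each detected step advances the position along the current heading, so any heading or step-length error compounds over time. A minimal sketch with illustrative step lengths and headings:

```python
import math

def pdr_update(x, y, step_length, heading_rad):
    """Advance the position by one detected step along the current heading."""
    return (x + step_length * math.cos(heading_rad),
            y + step_length * math.sin(heading_rad))

# Two steps east, then one step north (0.7 m steps, headings in radians).
x, y = 0.0, 0.0
for step, heading in [(0.7, 0.0), (0.7, 0.0), (0.7, math.pi / 2)]:
    x, y = pdr_update(x, y, step, heading)
```

An external fix, such as the visual-tracking position mapped into geographical coordinates, periodically resets this accumulated error, which is the calibration role the paper assigns to the camera and map.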