
    Multi sensor system for pedestrian tracking and activity recognition in indoor environments

    The widespread use of mobile devices and the rise of Global Navigation Satellite Systems (GNSS) have allowed mobile tracking applications to become very popular and valuable in outdoor environments. However, tracking pedestrians in indoor environments with Global Positioning System (GPS)-based schemes is still very challenging. Along with indoor tracking, the ability to recognize pedestrian behavior and activities can lead to considerable growth in location-based applications, including pervasive healthcare, leisure and guide services (such as hospitals, museums, and airports), and emergency services, among the most important ones. This paper presents a system for pedestrian tracking and activity recognition in indoor environments using exclusively common off-the-shelf sensors embedded in smartphones (accelerometer, gyroscope, magnetometer, and barometer). The proposed system combines knowledge of the biomechanical patterns of the human body while performing basic activities, such as walking or climbing stairs up and down, with the identifiable signatures that certain indoor locations (such as turns or elevators) introduce into the sensing data. The system was implemented and tested on Android-based mobile phones. The system detects and counts steps with an accuracy of 97% and 96.67% on flat floors and stairs, respectively; detects user changes of direction and altitude with 98.88% and 96.66% accuracy, respectively; and recognizes the proposed human activities with 95% accuracy. All modules combined lead to a total tracking accuracy of 91.06% in common indoor pedestrian displacements.
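The step counting and altitude-change detection described above can be approximated with simple peak detection on the accelerometer magnitude and a pressure-drift threshold on the barometer. The sketch below is a minimal illustration only, not the paper's algorithm; the sampling rate, peak threshold, and pressure threshold are assumptions.

```python
import numpy as np

def count_steps(accel_magnitude, fs=50.0, min_peak=1.5, min_gap_s=0.3):
    """Count steps from the magnitude of 3-axis accelerometer samples (m/s^2).

    A step is declared at a local maximum that exceeds `min_peak` after
    crude gravity removal and occurs at least `min_gap_s` seconds after the
    previous step. All thresholds are illustrative, not the paper's values.
    """
    sig = np.asarray(accel_magnitude, dtype=float)
    sig = sig - sig.mean()                 # crude gravity removal
    min_gap = int(min_gap_s * fs)
    steps, last = 0, -min_gap
    for i in range(1, len(sig) - 1):
        is_peak = sig[i] > sig[i - 1] and sig[i] >= sig[i + 1]
        if is_peak and sig[i] > min_peak and i - last >= min_gap:
            steps += 1
            last = i
    return steps

def altitude_change(pressure_hpa, threshold_hpa=0.35):
    """Flag a floor/altitude change when barometric pressure drifts by more
    than roughly one storey (~0.35 hPa); the threshold is an assumption."""
    return abs(pressure_hpa[-1] - pressure_hpa[0]) > threshold_hpa
```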

    Toward a unified PNT, Part 1: Complexity and context: Key challenges of multisensor positioning

    The next generation of navigation and positioning systems must provide greater accuracy and reliability in a range of challenging environments to meet the needs of a variety of mission-critical applications. No single navigation technology is robust enough to meet these requirements on its own, so a multi-sensor solution is required. Known environmental features, such as signs, buildings, terrain height variation, and magnetic anomalies, may or may not be available for positioning. The system could be stationary, carried by a pedestrian, or mounted on any type of land, sea, or air vehicle. Furthermore, for many applications, the environment and host behavior are subject to change. The expert-knowledge problem is compounded by the fact that different modules in an integrated navigation system are often supplied by different organizations, which may be reluctant to share the necessary design information if it is considered intellectual property that must be protected.
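A common building block for combining such heterogeneous positioning sources is inverse-variance (minimum-variance) weighting, which is also the scalar form of a Kalman measurement update. The sketch below is a generic 1-D illustration, not the architecture discussed in the article; the example sources and variances are placeholders.

```python
def fuse(estimates):
    """Fuse independent position estimates, each given as (value, variance).

    Returns the minimum-variance linear combination (inverse-variance
    weighting) and the variance of the fused estimate.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for (v, _), w in zip(estimates, weights)) / total
    return value, 1.0 / total

# Hypothetical example: a noisy GNSS fix fused with a map-matched estimate.
pos, var = fuse([(12.4, 9.0), (10.8, 1.0)])   # metres, metres^2
```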

    Evaluating indoor positioning systems in a shopping mall : the lessons learned from the IPIN 2018 competition

    The Indoor Positioning and Indoor Navigation (IPIN) conference holds an annual competition in which indoor localization systems from different research groups worldwide are evaluated empirically. The objective of this competition is to establish a systematic evaluation methodology with rigorous metrics, both for real-time (on-site) and post-processing (off-site) situations, in a realistic environment unfamiliar to the prototype developers. For the IPIN 2018 conference, this competition was held on September 22nd, 2018, in Atlantis, a large shopping mall in Nantes (France). Four competition tracks (two on-site and two off-site) were designed. They consisted of several 1 km routes traversing several floors of the mall. Along these paths, 180 points were topographically surveyed with 10 cm accuracy to serve as ground-truth landmarks, combining theodolite measurements, differential global navigation satellite system (GNSS) and 3D scanner systems. In total, 34 teams competed. The accuracy score corresponds to the third quartile (75th percentile) of an error metric that combines the horizontal positioning error and the floor detection error. The best results for the on-site tracks showed an accuracy score of 11.70 m (Track 1) and 5.50 m (Track 2), while the best results for the off-site tracks showed an accuracy score of 0.90 m (Track 3) and 1.30 m (Track 4). These results show that it is possible to obtain highly accurate indoor positioning solutions in large, realistic environments using wearable, light-weight sensors without deploying any beacons. This paper describes the organization of the tracks, analyzes the methodology used to quantify the results, reviews the lessons learned from the competition, and discusses its future.
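The accuracy score described above can be reproduced as the 75th percentile of a per-point error that adds a fixed penalty for each wrongly detected floor. The 15 m-per-floor penalty used below follows the usual EvAAL/IPIN convention, but it should be treated here as an assumption rather than the exact figure from this competition.

```python
import numpy as np

def ipin_score(horizontal_err_m, true_floor, est_floor, floor_penalty_m=15.0):
    """Third-quartile (75th percentile) accuracy score.

    horizontal_err_m, true_floor, est_floor: equal-length sequences, one
    entry per surveyed ground-truth point along the route.
    """
    err = np.asarray(horizontal_err_m, dtype=float)
    floor_err = np.abs(np.asarray(true_floor) - np.asarray(est_floor))
    combined = err + floor_penalty_m * floor_err
    return float(np.percentile(combined, 75))
```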

    Fusion of non-visual and visual sensors for human tracking

    Human tracking is an extensively researched yet still challenging area in the computer vision field, with a wide range of applications such as surveillance and healthcare. People may not be successfully tracked with visual information alone in challenging cases such as long-term occlusion. Thus, we propose to combine information from other sensors with surveillance cameras to persistently localize and track humans, which is becoming more promising with the pervasiveness of mobile devices such as cellphones, smart watches, and smart glasses embedded with all kinds of sensors, including accelerometers, gyroscopes, magnetometers, GPS, and WiFi modules. In this thesis, we first investigate the application of the Inertial Measurement Unit (IMU) in mobile devices to human activity recognition and human tracking; we then develop novel persistent human tracking and indoor localization algorithms based on the fusion of non-visual and visual sensors, which not only overcomes the occlusion challenge in visual tracking but also alleviates the calibration and drift problems of IMU tracking. --Abstract, page iii
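One simple form of the visual/non-visual fusion outlined in this abstract is to trust the camera detection when it is confident and fall back to IMU-based dead reckoning during occlusion. The snippet below is a hypothetical illustration of that switching logic, not the thesis' algorithm; the confidence threshold and the step-length/heading model are assumptions.

```python
import math

def update_position(prev_xy, visual_det, imu_step):
    """Advance a 2-D track by one frame.

    visual_det: (x, y, confidence) from a person detector, or None if occluded.
    imu_step:   (step_length_m, heading_rad) from pedestrian dead reckoning.
    """
    if visual_det is not None and visual_det[2] > 0.5:   # assumed threshold
        return visual_det[0], visual_det[1]              # trust the camera
    length, heading = imu_step                           # occluded: dead-reckon
    return (prev_xy[0] + length * math.cos(heading),
            prev_xy[1] + length * math.sin(heading))
```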

    RGBD Datasets: Past, Present and Future

    Since the launch of the Microsoft Kinect, scores of RGBD datasets have been released. These have propelled advances in areas from reconstruction to gesture recognition. In this paper we explore the field, reviewing datasets across eight categories: semantics, object pose estimation, camera tracking, scene reconstruction, object tracking, human actions, faces, and identification. By extracting relevant information in each category we help researchers to find appropriate data for their needs, and we consider which datasets have succeeded in driving computer vision forward and why. Finally, we examine the future of RGBD datasets. We identify key areas which are currently underexplored, and suggest that future directions may include synthetic data and dense reconstructions of static and dynamic scenes. (Comment: 8 pages excluding references, CVPR style)