13 research outputs found

    Vision-Aided Indoor Pedestrian Dead Reckoning

    Vision-aided inertial navigation has recently become a popular method for indoor positioning. This popularity is largely due to the development of lightweight, low-cost Micro Electro-Mechanical Systems (MEMS) as well as the advancement and availability of CCD cameras in public indoor areas. While inertial sensors suffer from drift accumulation and cameras can only detect objects within their line of sight, integrating the two sensors can compensate for these drawbacks and provide more accurate positioning solutions. This study builds upon earlier research on the “Vision-Aided Indoor Pedestrian Tracking System” to address the challenges of indoor positioning by providing more accurate and seamless solutions. The study improves the overall design and implementation of inertial sensor fusion for indoor applications. In this regard, genuine indoor maps and geographical information, i.e. digitized floor plans, are used for the visual tracking application in the pilot study. Both the inertial positioning and visual tracking components can work stand-alone with additional location information from the maps. In addition, while the visual tracking component can calibrate the pedestrian dead reckoning and provide better accuracy, the inertial sensing module can alternatively be used for positioning and tracking whenever the user cannot be detected by the camera, until the user appears in the video again. In our experiments, the mean accuracy of this positioning system was 10.98% higher than that of uncalibrated inertial positioning.
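
    The hand-over behaviour described above — the camera fix calibrating dead reckoning when the user is visible, and inertial tracking taking over when they are not — can be sketched as a simple per-step selector. This is an illustrative sketch under assumed interfaces, not the system's actual implementation:

```python
def fuse_position(visual_fix, pdr_delta, last_position):
    """Select the positioning source for the current time step.

    visual_fix:    (x, y) from the camera, or None if the user is not
                   currently detected in the video.
    pdr_delta:     (dx, dy) displacement from pedestrian dead reckoning
                   since the previous step.
    last_position: (x, y) position output at the previous step.

    When the camera sees the user, its fix also recalibrates the PDR
    origin; otherwise PDR propagates from the last known position.
    All names and interfaces here are hypothetical.
    """
    if visual_fix is not None:
        return visual_fix
    return (last_position[0] + pdr_delta[0],
            last_position[1] + pdr_delta[1])
```

    In a real system the visual fix would not simply replace the inertial estimate but would feed a filter that also re-zeros the accumulated drift; the selector above only captures the switching logic.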

    Overview of positioning technologies from fitness-to-purpose point of view

    Even though Location Based Services (LBSs) are becoming more and more widely used and show a promising future, there are still many challenges to deal with, such as privacy, reliability, accuracy, cost of service, power consumption and availability. There is still no single low-cost positioning technology that provides the position of its users seamlessly, indoors and outdoors, with an acceptable level of accuracy and low power consumption. For this reason, the fitness of a positioning service to the purpose of an LBS application is an important parameter to consider when choosing the most suitable positioning technology for an LBS. This assessment should be made for every LBS application, since each application may have different requirements. Some location-based applications, such as location-based advertisements or Location-Based Social Networking (LBSN), do not need very accurate positioning input data, while for others, e.g. navigation and tracking services, highly accurate positioning is essential. This paper evaluates different positioning technologies from a fitness-to-purpose point of view for two different applications: public transport information and family/friend tracking.

    A pedestrian navigation system based on low cost IMU

    © 2014 The Royal Institute of Navigation. For indoor pedestrian navigation with a shoe-mounted inertial measurement unit (IMU), the Zero Velocity Update (ZUPT) technique is implemented to constrain the sensors' error. ZUPT exploits the fact that the foot is at zero velocity during the stance phase of each step to correct IMU errors periodically. This paper introduces three main contributions built on ZUPT. Since correct stance phase detection is critical for the success of applying ZUPT, we have developed a new approach to detect the stance phase of different gait styles, including walking, running and stair climbing. As an extension of ZUPT, we have proposed a new concept called the Constant Velocity Update (CUPT) to correct IMU errors on a platform moving at constant velocity, such as an elevator or escalator, where ZUPT is infeasible. A closed-loop step-wise smoothing algorithm has also been developed to eliminate discontinuities in the trajectory caused by sharp corrections. Experimental results demonstrate the effectiveness of the proposed algorithms.
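
    The core ZUPT idea — flag samples where the foot is stationary, then zero out the integrated velocity there — can be sketched roughly as follows. This is a minimal illustration, not the paper's detector; the threshold values are assumed for the example, and a full implementation would feed v = 0 into a Kalman filter rather than hard-resetting:

```python
import math

def detect_stance(accel, gyro, acc_thresh=0.5, gyro_thresh=0.6):
    """Return a per-sample list of booleans flagging likely stance phases.

    accel, gyro: lists of (x, y, z) tuples; accel has gravity removed
    (m/s^2), gyro is angular rate (rad/s). A sample counts as stance
    when both magnitudes are small. Thresholds are illustrative, not
    the paper's tuned values.
    """
    stance = []
    for a, g in zip(accel, gyro):
        acc_mag = math.sqrt(sum(c * c for c in a))
        gyro_mag = math.sqrt(sum(c * c for c in g))
        stance.append(acc_mag < acc_thresh and gyro_mag < gyro_thresh)
    return stance

def zupt_correct(velocity, stance):
    """Zero the integrated velocity wherever a stance phase is detected.

    velocity: list of (vx, vy, vz) tuples from integrating the
    accelerometer. Hard-resetting is the simplest possible variant of
    the zero-velocity pseudo-measurement.
    """
    return [(0.0, 0.0, 0.0) if s else v for v, s in zip(velocity, stance)]
```

    Fixed thresholds are exactly what breaks down across gait styles (running leaves far larger residual accelerations during stance than walking), which is why the paper's gait-adaptive detector matters.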

    A Navigation and Augmented Reality System for Visually Impaired People

    In recent years, we have witnessed impressive advances in augmented reality systems and computer vision algorithms, based on image processing and artificial intelligence. Thanks to these technologies, mainstream smartphones are able to estimate their own motion in 3D space with high accuracy. In this paper, we exploit such technologies to support the autonomous mobility of people with visual disabilities, identifying pre-defined virtual paths and providing context information, reducing the distance between the digital and real worlds. In particular, we present ARIANNA+, an extension of ARIANNA, a system explicitly designed for visually impaired people for indoor and outdoor localization and navigation. While ARIANNA is based on the assumption that landmarks, such as QR codes, and physical paths (composed of colored tapes, painted lines, or tactile paving) are deployed in the environment and recognized by the camera of a common smartphone, ARIANNA+ eliminates the need for any physical support thanks to the ARKit library, which we exploit to build a completely virtual path. Moreover, ARIANNA+ adds the possibility for users to have enhanced interactions with the surrounding environment, through convolutional neural networks (CNNs) trained to recognize objects or buildings, enabling access to contents associated with them. By using a common smartphone as a mediation instrument with the environment, ARIANNA+ leverages augmented reality and machine learning to enhance physical accessibility. The proposed system allows visually impaired people to easily navigate indoor and outdoor scenarios simply by loading a previously recorded virtual path, and provides automatic guidance along the route through haptic, speech, and sound feedback.

    Indoor pedestrian dead reckoning calibration by visual tracking and map information

    Currently, Pedestrian Dead Reckoning (PDR) systems are becoming more attractive in the indoor positioning market. This is mainly due to the development of cheap, lightweight Micro Electro-Mechanical Systems (MEMS) on smartphones and the reduced need for additional infrastructure in indoor areas. However, PDR still faces the problem of drift accumulation and needs support from external positioning systems. Vision-aided inertial navigation, one possible solution to this problem, has become very popular in indoor localization, with better performance than a stand-alone PDR system. Previous studies in the literature, however, use a fixed platform and feature-extraction-based methods for visual tracking. This paper instead contributes a distributed implementation of the positioning system and uses deep learning for visual tracking. Meanwhile, as both inertial navigation and optical systems can only provide relative positioning information, this paper contributes a method to integrate a digital map with real geographical coordinates to supply absolute location. To test the robustness of the method, this hybrid system has been evaluated on the two most common smartphone operating systems, iOS and Android, with corresponding data collection apps. It also uses two different calibration approaches: time synchronization of positions, and heading calibration based on time steps. According to the results, localization information collected from both operating systems is significantly improved after integration with the visual tracking data.

    VISION-AIDED CONTEXT-AWARE FRAMEWORK FOR PERSONAL NAVIGATION SERVICES


    Indoor location based services challenges, requirements and usability of current solutions

    Indoor Location Based Services (LBS), such as indoor navigation and tracking, still have to deal with both technical and non-technical challenges. For this reason, they have not yet found a prominent position in people’s everyday lives. Reliability and availability of indoor positioning technologies, the availability of up-to-date indoor maps, and privacy concerns associated with location data are some of the biggest challenges to their development. If these challenges were solved, or at least minimized, there would be more penetration into the user market. This paper studies the requirements of LBS applications through a survey conducted by the authors, identifies the current challenges of indoor LBS, and reviews the available solutions that address the most important challenge: providing seamless indoor/outdoor positioning. The paper also looks at the potential of emerging solutions and the technologies that may help handle this challenge.

    Development of a Standalone Pedestrian Navigation System Utilizing Sensor Fusion Strategies

    Pedestrian inertial navigation systems yield the foundational information required for many possible indoor navigation and positioning services and applications, but current systems have difficulty providing accurate location information due to system instability. Adding a low-cost ultrasonic ranging device to a foot-mounted inertial navigation system grants the ability to detect surrounding obstacles, such as walls. Using these detected walls as a basis of correction, an intuitive algorithm that can be added to already established systems was developed, allowing a demonstrable reduction of final location errors. After a 160 m walk, final location errors were reduced from 8.9 m to 0.53 m, a reduction of 5.5% of the total distance walked. Furthermore, during a 400 m walk the peak error was reduced from 10.3 m to 1.43 m. With long-term system accuracy and stability being largely dependent on the ability of gyroscopes to accurately estimate changes in yaw angle, the proposed system helps correct these inaccuracies, making it well suited to obstacle-rich environments such as those found indoors.
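
    One common way such wall-based yaw correction works is heading snapping: if ranging indicates the pedestrian is moving parallel to a wall, the drifting yaw estimate can be snapped to the nearest dominant wall direction. The sketch below assumes a rectilinear building with walls along the four cardinal directions; the tolerance, the direction set, and the function itself are illustrative assumptions, not the paper's algorithm:

```python
import math

def snap_heading_to_wall(heading_rad, wall_directions=None, tol_deg=15.0):
    """Snap a drifting yaw estimate to the nearest dominant wall direction.

    Assumes walls run along 0, 90, 180 and 270 degrees (a rectilinear
    floor plan); tolerance and direction set are hypothetical example
    values. Returns the corrected heading in radians, or the input
    unchanged if no wall direction lies within tolerance.
    """
    if wall_directions is None:
        wall_directions = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]
    heading = heading_rad % (2 * math.pi)
    tol = math.radians(tol_deg)
    for wall in wall_directions:
        # Smallest signed angular difference, accounting for wraparound.
        diff = (heading - wall + math.pi) % (2 * math.pi) - math.pi
        if abs(diff) <= tol:
            return wall
    return heading_rad
```

    Because gyroscope drift accumulates slowly, each snap bounds the yaw error to the snap tolerance rather than letting it grow without limit between absolute fixes.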

    Fusion of non-visual and visual sensors for human tracking

    Human tracking is an extensively researched yet still challenging area in the Computer Vision field, with a wide range of applications such as surveillance and healthcare. People may not be successfully tracked with merely visual information in challenging cases such as long-term occlusion. Thus, we propose to combine information from other sensors with surveillance cameras to persistently localize and track humans, an approach that is becoming more promising with the pervasiveness of mobile devices such as cellphones, smart watches and smart glasses embedded with all kinds of sensors, including accelerometers, gyroscopes, magnetometers, GPS and WiFi modules. In this thesis, we first investigate the application of Inertial Measurement Units (IMUs) from mobile devices to human activity recognition and human tracking. We then develop novel persistent human tracking and indoor localization algorithms based on the fusion of non-visual and visual sensors, which not only overcomes the occlusion challenge in visual tracking, but also alleviates the calibration and drift problems in IMU tracking --Abstract, page iii