Evaluating indoor positioning systems in a shopping mall: the lessons learned from the IPIN 2018 competition
The Indoor Positioning and Indoor Navigation (IPIN) conference holds an annual competition in which indoor localization systems from different research groups worldwide are evaluated empirically. The objective of this competition is to establish a systematic evaluation methodology with rigorous metrics, both for real-time (on-site) and post-processing (off-site) situations, in a realistic environment unfamiliar to the prototype developers. For the IPIN 2018 conference, this competition was held on September 22nd, 2018, in Atlantis, a large shopping mall in Nantes (France). Four competition tracks (two on-site and two off-site) were designed. They consisted of several 1 km routes traversing several floors of the mall. Along these paths, 180 points were topographically surveyed with 10 cm accuracy to serve as ground-truth landmarks, combining theodolite measurements, differential global navigation satellite system (GNSS) and 3D scanner systems. In total, 34 teams competed. The accuracy score corresponds to the third quartile (75th percentile) of an error metric that combines the horizontal positioning error and the floor detection. The best results for the on-site tracks showed an accuracy score of 11.70 m (Track 1) and 5.50 m (Track 2), while the best results for the off-site tracks showed an accuracy score of 0.90 m (Track 3) and 1.30 m (Track 4). These results showed that it is possible to obtain high-accuracy indoor positioning solutions in large, realistic environments using wearable lightweight sensors without deploying any beacons. This paper describes the organization of the tracks, analyzes the methodology used to quantify the results, reviews the lessons learned from the competition, and discusses its future.
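The accuracy score described above can be sketched in a few lines. Note the per-floor penalty (15 m here) and the linear-interpolation percentile convention are illustrative assumptions, not values taken from the paper:

```python
def percentile75(values):
    """Linear-interpolated 75th percentile (third quartile)."""
    v = sorted(values)
    pos = 0.75 * (len(v) - 1)
    lo = int(pos)
    frac = pos - lo
    hi = min(lo + 1, len(v) - 1)
    return v[lo] + frac * (v[hi] - v[lo])

def accuracy_score(horiz_err_m, floor_est, floor_true, floor_penalty_m=15.0):
    """Third quartile of an error metric combining horizontal error
    with a fixed penalty per floor of floor-detection error.
    The 15 m penalty is an assumption for illustration."""
    combined = [h + floor_penalty_m * abs(fe - ft)
                for h, fe, ft in zip(horiz_err_m, floor_est, floor_true)]
    return percentile75(combined)
```

Scoring on the 75th percentile rather than the mean rewards systems that are consistently good while still tolerating a few outliers.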
Distributed and adaptive location identification system for mobile devices
Indoor location identification and navigation need to be as simple, seamless,
and ubiquitous as their outdoor GPS-based counterparts. It would be of great
convenience to mobile users to continue navigating seamlessly as they move
from a GPS-clear outdoor environment into an indoor environment or a
GPS-obstructed outdoor environment such as a tunnel or forest. Existing
infrastructure-based indoor localization systems lack this capability, and
they potentially face several critical technical challenges: increased
installation cost, centralization, lack of reliability, poor localization
accuracy, poor adaptation to the dynamics of the surrounding environment,
latency, system-level and computational complexity, repetitive
labor-intensive parameter tuning, and compromised user privacy. To this end, this paper
presents a novel mechanism with the potential to overcome most (if not all) of
the abovementioned challenges. The proposed mechanism is simple, distributed,
adaptive, collaborative, and cost-effective. Based on the proposed algorithm, a
mobile blind device can potentially utilize, as GPS-like reference nodes,
either in-range location-aware compatible mobile devices or preinstalled
low-cost infrastructure-less location-aware beacon nodes. The proposed approach
is model-based and calibration-free: it uses the received signal strength to
periodically and collaboratively measure and update the radio-frequency
characteristics of the operating environment and estimate the distances to the
reference nodes. The blind device then uses trilateration to identify its
own location, similar to the GPS-based system. Simulation and
empirical testing ascertained that the proposed approach can potentially serve
as the core of future localization systems for indoor and GPS-obstructed
environments.
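The two steps the abstract describes, RSSI-to-distance conversion followed by trilateration, can be sketched as below. The log-distance path-loss model is a standard choice, but the TX power and path-loss exponent values are illustrative assumptions, not parameters from the paper:

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    """Log-distance path-loss model: rssi = tx_power - 10*n*log10(d).
    tx_power_dbm is the RSSI at 1 m; both parameter values are
    illustrative, not taken from the paper."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(anchors, dists):
    """2-D position from three reference nodes via linearization:
    subtracting the first circle equation from the other two yields
    a 2x2 linear system in (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # non-zero iff anchors are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

In practice the paper's collaborative scheme would re-estimate the path-loss parameters online rather than fix them, which is what makes the approach calibration-free.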
Machine Learning for Indoor Localization Using Mobile Phone-Based Sensors
In this paper we investigate the problem of localizing a mobile device based
on readings from its embedded sensors utilizing machine learning methodologies.
We consider a real-world environment, collect a large dataset of 3110
datapoints, and examine the performance of a substantial number of machine
learning algorithms in localizing a mobile device. We found algorithms
that achieve a mean error as low as 0.76 meters, outperforming other indoor
localization systems reported in the literature. We also propose a hybrid
instance-based approach that yields a tenfold speed increase over standard
instance-based methods with no loss of accuracy in a live deployment,
allowing for fast and accurate localization. Further, we determine how
smaller, less densely collected datasets affect localization accuracy, which
is important for real-world use. Finally, we demonstrate that
these approaches are appropriate for real-world deployment by evaluating their
performance in an online, in-motion experiment.
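Instance-based localization of this kind is, at its core, a nearest-neighbour lookup over a database of signal-strength fingerprints. The following generic k-NN sketch illustrates the idea; it is not the paper's hybrid method:

```python
import math

def knn_localize(fingerprints, query, k=3):
    """Instance-based (k-nearest-neighbour) localization over a radio map.

    fingerprints: list of (rss_vector, (x, y)) pairs collected offline.
    query: RSS vector observed by the device at an unknown position.
    Returns the centroid of the k fingerprints closest in signal space.
    Generic sketch only; the paper's hybrid approach adds a speed-up
    on top of this basic scheme.
    """
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    nearest = sorted(fingerprints, key=lambda fp: dist(fp[0], query))[:k]
    xs = [pos[0] for _, pos in nearest]
    ys = [pos[1] for _, pos in nearest]
    return (sum(xs) / k, sum(ys) / k)
```

The reported tenfold speed-up plausibly comes from pruning the candidate set before the exact distance computation, since the naive lookup above scans every fingerprint per query.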
Multiverse: Mobility pattern understanding improves localization accuracy
This paper presents the design and implementation of Multiverse, a practical indoor localization system that can be deployed on top of existing WiFi infrastructure. Although existing WiFi-based positioning techniques achieve acceptable accuracy, we find that they are not practical for most buildings because they require installing sophisticated access point (AP) hardware or a special application on client devices to supply the system with extra information. Multiverse achieves sub-room precision while using only the received signal strength indication (RSSI) readings available in most of today's buildings through their installed APs, together with the assumption that most users walk at a normal speed. This level of simplicity would promote ubiquity of indoor localization in the era of smartphones.
RIDI: Robust IMU Double Integration
This paper proposes a novel data-driven approach for inertial navigation,
which learns to estimate trajectories of natural human motions just from an
inertial measurement unit (IMU) in every smartphone. The key observation is
that human motions are repetitive and consist of a few major modes (e.g.,
standing, walking, or turning). Our algorithm regresses a velocity vector from
the history of linear accelerations and angular velocities, then corrects
low-frequency bias in the linear accelerations, which are integrated twice to
estimate positions. We have acquired training data with ground-truth motions
across multiple human subjects and multiple phone placements (e.g., in a bag or
a hand). Qualitative and quantitative evaluations demonstrate that our
algorithm achieves results surprisingly comparable to full visual-inertial
navigation. To our knowledge, this paper is the first to integrate
sophisticated machine learning techniques with inertial navigation, potentially
opening up a new line of research in the domain of data-driven inertial
navigation. We will publicly share our code and data to facilitate further
research.
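The double-integration pipeline the abstract describes can be sketched in one dimension. The constant-bias correction below is an illustrative stand-in for RIDI's learned velocity regression, which supplies the target velocity:

```python
def correct_bias(accels, regressed_vel, dt):
    """Remove the constant acceleration bias that would make the
    integrated velocity disagree with a (regressed) target velocity
    over the window. RIDI corrects low-frequency bias using a velocity
    regressed from the IMU history; a constant bias is the simplest case."""
    integrated_vel = sum(accels) * dt
    bias = (integrated_vel - regressed_vel) / (len(accels) * dt)
    return [a - bias for a in accels]

def integrate_trajectory(accels, dt, vel0=0.0, pos0=0.0):
    """Double integration: acceleration -> velocity -> position.
    Without bias correction, position error grows quadratically in time,
    which is why the correction step above matters."""
    vel, pos = vel0, pos0
    positions = []
    for a in accels:
        vel += a * dt
        pos += vel * dt
        positions.append(pos)
    return positions
```

The point of the regression is that raw double integration drifts within seconds; anchoring the integrated velocity to a motion-prior estimate keeps the trajectory bounded.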