Map matching by using inertial sensors: literature review
This literature review aims to clarify what is known about map matching using
inertial sensors, and what the requirements are for map matching, inertial
sensors, sensor placement, and possible complementary positioning technology. The
target is to develop a wearable location system that can position itself
automatically within a complex construction environment with the aid of an
accurate building model. The wearable location system should run on a tablet
computer that hosts an augmented reality (AR) solution capable of tracking and
visualizing 3D CAD models in the real environment. The wearable location system
is needed to support the AR system in initializing the accurate camera pose
calculation and in automatically finding the right location in the 3D CAD model.
One type of sensor that does seem applicable to people tracking is the inertial
measurement unit (IMU). The IMUs used in aerospace applications, based on laser
gyroscopes, are large but provide very accurate position estimates with limited
drift. Small and light units such as those based on Micro-Electro-Mechanical
Systems (MEMS) sensors are becoming very popular, but they have a significant
bias and therefore suffer from large drifts, so they require a calibration
method such as map matching. The system requires very little fixed
infrastructure; the monetary cost is proportional to the number of users, rather
than to the coverage area as is the case for traditional absolute indoor
location systems.
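The drift problem described above can be made concrete with a minimal sketch (the bias value is a hypothetical figure typical of low-cost MEMS parts, not taken from the review): a constant gyroscope bias, once integrated, produces a heading error that grows linearly with time, which is exactly the kind of error map matching is asked to correct.

```python
# Illustration: a constant gyroscope bias integrates into unbounded heading drift.
# The bias value below is hypothetical, chosen to resemble a low-cost MEMS part.

def integrate_heading(rates, dt):
    """Dead-reckon heading (rad) by integrating angular-rate samples."""
    heading = 0.0
    for r in rates:
        heading += r * dt
    return heading

dt = 0.01                              # 100 Hz sampling
bias = 0.001                           # rad/s constant bias (~0.057 deg/s, hypothetical)
true_rate = 0.0                        # user walking straight ahead
samples = [true_rate + bias] * 60_000  # 10 minutes of data

drift = integrate_heading(samples, dt)
print(f"heading drift after 10 min: {drift:.2f} rad")
```

Ten minutes of a tiny, constant bias already yields roughly 0.6 rad (about 34 degrees) of heading error, which motivates periodic correction from an external reference such as a building model.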
Providing location everywhere
Anacleto R., Figueiredo L., Novais P., Almeida A., "Providing Location Everywhere", in Progress in Artificial Intelligence, Antunes L., Sofia Pinto H. (eds), Lecture Notes in Artificial Intelligence 7026, Springer-Verlag, ISBN 978-3-540-24768-2 (Proceedings of the 15th Portuguese Conference on Artificial Intelligence - EPIA 2011, Lisboa, Portugal), pp. 15-28, 2011.
The ability to locate an individual is an essential part of many applications, especially mobile ones. Obtaining this location in an open environment is relatively simple through GPS (Global Positioning System), but indoors, or even in dense environments, this type of location system does not provide good accuracy. There are already systems that try to overcome these limitations, but most of them require a structured environment to work. Since Inertial Navigation Systems (INS) avoid the need for a structured environment, we propose an INS based on Micro-Electro-Mechanical Systems (MEMS) that is capable of computing the position of an individual everywhere, in real time.
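A companion sketch for position (again with hypothetical numbers, not taken from the paper): in a strapdown INS, position comes from double integration of acceleration, so even a small residual accelerometer bias grows into a position error quadratically with time, which is why uncorrected MEMS dead reckoning degrades so quickly.

```python
# Double-integrating a constant accelerometer bias: the position error grows
# roughly as 0.5 * b * t^2. Values are hypothetical, typical of low-cost MEMS parts.

def integrate_position(accels, dt):
    """Naive 1-D strapdown integration: acceleration -> velocity -> position."""
    v = p = 0.0
    for a in accels:
        v += a * dt
        p += v * dt
    return p

dt = 0.01          # 100 Hz sampling
bias = 0.05        # m/s^2 residual accelerometer bias (hypothetical)
t = 60.0           # one minute of walking
n = int(t / dt)

p_err = integrate_position([bias] * n, dt)
print(f"position error after {t:.0f} s: {p_err:.1f} m")
```

After only one minute the error is already around 90 m, so an unaided MEMS INS needs frequent corrections from complementary information.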
It's the Human that Matters: Accurate User Orientation Estimation for Mobile Computing Applications
Ubiquity of Internet-connected and sensor-equipped portable devices sparked a
new set of mobile computing applications that leverage the proliferating
sensing capabilities of smart-phones. For many of these applications, accurate
estimation of the user heading, as compared to the phone heading, is of
paramount importance. This is of special importance for many crowd-sensing
applications, where the phone can be carried in arbitrary positions and
orientations relative to the user body. Current state-of-the-art systems focus
mainly on estimating the phone orientation, require the phone to be placed in a
particular position, require user intervention, and/or do not work accurately
indoors, which limits their ubiquitous usability in different applications. In
this paper we present Humaine, a novel system to reliably and accurately
estimate the user orientation relative to the Earth coordinate system.
Humaine requires no prior configuration or user intervention and works
accurately indoors and outdoors for arbitrary cell phone positions and
orientations relative to the user body. The system applies statistical analysis
techniques to the inertial sensors widely available on today's cell phones to
estimate both the phone and user orientation. Implementation of the system on
different Android devices with 170 experiments performed at different indoor
and outdoor testbeds shows that Humaine significantly outperforms the
state-of-the-art in diverse scenarios, with a median accuracy, averaged over a
wide variety of phone positions, that is better than the state of the art. The
accuracy is bounded by the error in the inertial sensor readings and can be
enhanced with more accurate sensors and sensor fusion.
Comment: Accepted for publication in the 11th International Conference on
Mobile and Ubiquitous Systems: Computing, Networking and Services
(MobiQuitous 2014).
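As background for the orientation-estimation problem, a standard tilt-compensated compass gives the phone's heading from the accelerometer and magnetometer. This is a textbook formulation sketched here for illustration; it is not the statistical method of the Humaine paper, and sign conventions vary by device frame.

```python
import math

def tilt_compensated_heading(acc, mag):
    """Generic tilt-compensated compass heading (rad, in [0, 2*pi)) from
    accelerometer and magnetometer triples in the device frame.
    Textbook formulation, NOT the method from the Humaine paper."""
    ax, ay, az = acc
    mx, my, mz = mag
    # Roll and pitch recovered from the gravity direction.
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    # Rotate the magnetic field into the horizontal plane.
    mxh = mx * math.cos(pitch) + mz * math.sin(pitch)
    myh = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    return math.atan2(-myh, mxh) % (2 * math.pi)

# Phone lying flat, horizontal field component along +x (north): heading ~ 0.
h = tilt_compensated_heading((0.0, 0.0, 9.81), (30.0, 0.0, -40.0))
print(f"heading: {h:.3f} rad")
```

The difficulty Humaine addresses is precisely that this gives the phone heading, not the user heading, and the two differ arbitrarily when the phone sits in a pocket or bag.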
Cooperative localization by dual foot-mounted inertial sensors and inter-agent ranging
The implementation challenges of cooperative localization by dual
foot-mounted inertial sensors and inter-agent ranging are discussed and work on
the subject is reviewed. System architecture and sensor fusion are identified
as key challenges. A partially decentralized system architecture based on
step-wise inertial navigation and step-wise dead reckoning is presented. This
architecture is argued to reduce the computational cost and required
communication bandwidth by around two orders of magnitude while only giving
negligible information loss in comparison with a naive centralized
implementation. This makes a joint global state estimation feasible for up to a
platoon-sized group of agents. Furthermore, robust and low-cost sensor fusion
for the considered setup, based on state space transformation and
marginalization, is presented. The transformation and marginalization are used
to give the necessary flexibility for presented sampling based updates for the
inter-agent ranging and the ranging-free fusion of the two feet of an individual
agent. Finally, characteristics of the suggested implementation are
demonstrated with simulations and a real-time system implementation.
Comment: 14 pages.
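The bandwidth argument behind step-wise dead reckoning can be sketched generically (a toy illustration under assumed values, not the paper's implementation): instead of exchanging raw inertial samples, each agent shares one small (step length, heading change) pair per detected step, and positions are accumulated from those summaries.

```python
import math

def dead_reckon(start_xy, start_heading, steps):
    """Accumulate per-step (length, heading_change) summaries into a 2-D track.
    Generic step-wise dead reckoning, not the paper's exact algorithm."""
    x, y = start_xy
    heading = start_heading
    track = [(x, y)]
    for length, dheading in steps:
        heading += dheading
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        track.append((x, y))
    return track

# Four straight steps east, a 90-degree left turn, then four steps north
# (hypothetical 0.75 m step length).
steps = [(0.75, 0.0)] * 4 + [(0.75, math.pi / 2)] + [(0.75, 0.0)] * 3
track = dead_reckon((0.0, 0.0), 0.0, steps)
print(track[-1])   # ends near (3.0, 3.0)
```

At roughly two numbers per step instead of hundreds of raw samples per second, the two-orders-of-magnitude reduction in communication claimed above becomes plausible.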
Evaluating indoor positioning systems in a shopping mall: the lessons learned from the IPIN 2018 competition
The Indoor Positioning and Indoor Navigation (IPIN) conference holds an annual competition in which indoor localization systems from different research groups worldwide are evaluated empirically. The objective of this competition is to establish a systematic evaluation methodology with rigorous metrics, both for real-time (on-site) and post-processing (off-site) situations, in a realistic environment unfamiliar to the prototype developers. For the IPIN 2018 conference, this competition was held on September 22nd, 2018, in Atlantis, a large shopping mall in Nantes (France). Four competition tracks (two on-site and two off-site) were designed. They consisted of several 1 km routes traversing several floors of the mall. Along these paths, 180 points were topographically surveyed with 10 cm accuracy to serve as ground-truth landmarks, combining theodolite measurements, differential global navigation satellite system (GNSS) and 3D scanner systems. 34 teams effectively competed. The accuracy score corresponds to the third quartile (75th percentile) of an error metric that combines the horizontal positioning error and the floor detection. The best results for the on-site tracks showed an accuracy score of 11.70 m (Track 1) and 5.50 m (Track 2), while the best results for the off-site tracks showed an accuracy score of 0.90 m (Track 3) and 1.30 m (Track 4). These results show that it is possible to obtain high-accuracy indoor positioning solutions in large, realistic environments using wearable lightweight sensors, without deploying any beacons. This paper describes the organization of the tracks, analyzes the methodology used to quantify the results, reviews the lessons learned from the competition, and discusses its future.
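The third-quartile scoring rule described above can be sketched as follows. The 15 m-per-wrong-floor penalty used here is an assumption (a common EvAAL/IPIN convention), not a figure stated in this abstract.

```python
# Sketch of the competition's accuracy score: the third quartile (75th
# percentile, nearest-rank method) of an error metric combining horizontal
# positioning error with a floor-detection penalty. The 15 m-per-floor
# penalty is an assumed convention, not taken from this abstract.

FLOOR_PENALTY_M = 15.0

def accuracy_score(samples):
    """samples: list of (horizontal_error_m, floor_difference) tuples."""
    errors = sorted(h + FLOOR_PENALTY_M * abs(df) for h, df in samples)
    k = -(-3 * len(errors) // 4) - 1   # nearest-rank index of the 75th percentile
    return errors[k]

# Three samples on the right floor, one a floor off (hypothetical values).
samples = [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 0)]
print(accuracy_score(samples))   # 4.0
```

Using the third quartile rather than the mean keeps occasional large failures (such as wrong-floor detections) from being averaged away, while still penalizing systems that fail often.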
Multimodal Sensing for Robust and Energy-Efficient Context Detection with Smart Mobile Devices
Adoption of smart mobile devices (smartphones, wearables, etc.) is rapidly growing. There are already over 2 billion smartphone users worldwide [1] and the percentage of smartphone users is expected to be over 50% in the next five years [2]. These devices feature rich sensing capabilities which allow inferences about mobile device user’s surroundings and behavior. Multiple and diverse sensors common on such mobile devices facilitate observing the environment from different perspectives, which helps to increase robustness of inferences and enables more complex context detection tasks. Though a larger number of sensing modalities can be beneficial for more accurate and wider mobile context detection, integrating these sensor streams is non-trivial.
This thesis presents how multimodal sensor data can be integrated to facilitate robust and energy-efficient mobile context detection, considering three important and challenging detection tasks: indoor localization, indoor-outdoor detection and human activity recognition. This thesis presents three methods for multimodal sensor integration, each applied to a different type of context detection task considered in this thesis. These are gradually decreasing in design complexity, starting with a solution based on an engineering approach that decomposes context detection into simpler tasks and integrates these with a particle filter for indoor localization. This is followed by manual extraction of features from different sensors and the use of an adaptive machine learning technique called semi-supervised learning for indoor-outdoor detection. Finally, a method using deep neural networks capable of extracting non-intuitive features directly from raw sensor data is used for human activity recognition; this method also provides a higher degree of generalization to other context detection tasks.
Energy efficiency is an important consideration in general for battery-powered mobile devices, and context detection is no exception. In the various context detection tasks and solutions presented in this thesis, particular attention is paid to this issue by relying largely on sensors that consume little energy and on lightweight computations. Overall, the solutions presented improve on the state of the art in terms of accuracy and robustness while keeping energy consumption low, making them practical for use on mobile devices.
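The particle-filter integration mentioned above can be illustrated with a minimal one-dimensional toy (all values hypothetical, not the thesis implementation): a step-based motion model is fused with a noisy absolute measurement, such as a Wi-Fi-derived position estimate.

```python
import math
import random

# Minimal 1-D particle filter: fuse a step-based motion model with a noisy
# absolute measurement. A toy illustration of multimodal integration, not
# the implementation from the thesis.

random.seed(0)

def pf_step(particles, step, measurement, meas_std=1.0, motion_std=0.2):
    # Predict: propagate each particle through the motion model.
    moved = [p + step + random.gauss(0.0, motion_std) for p in particles]
    # Weight: Gaussian likelihood of the measurement under each particle.
    weights = [math.exp(-0.5 * ((measurement - p) / meas_std) ** 2)
               for p in moved]
    # Resample proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

particles = [random.uniform(0.0, 10.0) for _ in range(500)]
true_pos = 2.0
for _ in range(20):
    true_pos += 0.5                          # user advances half a metre
    z = true_pos + random.gauss(0.0, 1.0)    # noisy absolute measurement
    particles = pf_step(particles, 0.5, z)

estimate = sum(particles) / len(particles)
print(f"true: {true_pos:.1f}  estimate: {estimate:.1f}")
```

The same structure extends naturally to more modalities: each sensor contributes a likelihood term in the weighting stage, which is what makes the particle filter a convenient integration point for heterogeneous sensor streams.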