2,828 research outputs found

    Evaluating indoor positioning systems in a shopping mall: the lessons learned from the IPIN 2018 competition

    The Indoor Positioning and Indoor Navigation (IPIN) conference holds an annual competition in which indoor localization systems from different research groups worldwide are evaluated empirically. The objective of this competition is to establish a systematic evaluation methodology with rigorous metrics both for real-time (on-site) and post-processing (off-site) situations, in a realistic environment unfamiliar to the prototype developers. For the IPIN 2018 conference, this competition was held on September 22nd, 2018, in Atlantis, a large shopping mall in Nantes (France). Four competition tracks (two on-site and two off-site) were designed. They consisted of several 1 km routes traversing several floors of the mall. Along these paths, 180 points were topographically surveyed with 10 cm accuracy, to serve as ground truth landmarks, combining theodolite measurements, differential global navigation satellite system (GNSS) and 3D scanner systems. In total, 34 teams competed. The accuracy score corresponds to the third quartile (75th percentile) of an error metric that combines the horizontal positioning error and the floor detection. The best results for the on-site tracks showed an accuracy score of 11.70 m (Track 1) and 5.50 m (Track 2), while the best results for the off-site tracks showed an accuracy score of 0.90 m (Track 3) and 1.30 m (Track 4). These results showed that it is possible to obtain high-accuracy indoor positioning solutions in large, realistic environments using wearable, light-weight sensors without deploying any beacons. This paper describes the organization of the tracks, analyzes the methodology used to quantify the results, reviews the lessons learned from the competition and discusses its future.
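
As a rough illustration of the scoring described above, the Python sketch below takes the third quartile of a combined error that adds a fixed per-floor penalty to the horizontal error; the 15 m penalty, the data layout, and the sample values are illustrative assumptions, not the exact constants defined by the IPIN organizers.

```python
import numpy as np

def accuracy_score(estimates, ground_truth, floor_penalty_m=15.0):
    """Third quartile (75th percentile) of a combined error metric.

    estimates / ground_truth: arrays of (x, y, floor) rows, coordinates in
    metres and integer floor indices. The 15 m per-floor penalty is an
    illustrative assumption, not the competition's exact constant.
    """
    est = np.asarray(estimates, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    horizontal = np.linalg.norm(est[:, :2] - gt[:, :2], axis=1)  # 2D error per landmark
    floor_error = np.abs(est[:, 2] - gt[:, 2])                   # floor mismatch per landmark
    combined = horizontal + floor_penalty_m * floor_error
    return np.percentile(combined, 75)

# Example: three surveyed landmarks, one with a floor mismatch
gt = [(0.0, 0.0, 0), (10.0, 5.0, 1), (20.0, 8.0, 1)]
est = [(1.2, 0.5, 0), (11.0, 5.5, 1), (21.0, 9.0, 2)]
print(accuracy_score(est, gt))
```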

    A Review of Hybrid Indoor Positioning Systems Employing WLAN Fingerprinting and Image Processing

    Location-based services (LBS) are a significant enabling technology. One of the main components in indoor LBS is the indoor positioning system (IPS). IPS utilizes many existing technologies, such as radio frequency, images, acoustic signals, as well as magnetic, thermal, optical, and other sensors that are usually installed in a mobile device. The radio frequency technologies used in IPS are WLAN, Bluetooth, ZigBee, RFID, frequency modulation, and ultra-wideband. This paper explores studies that have combined WLAN fingerprinting and image processing to build an IPS. The studies on combined WLAN fingerprinting and image processing techniques are divided based on the methods used. The first part explains the studies that have used WLAN fingerprinting to support image positioning. The second part examines works that have used image processing to support WLAN fingerprinting positioning. The third part then covers studies in which image processing and WLAN fingerprinting are used in combination to build an IPS. A new concept is proposed at the end for the future development of indoor positioning models based on WLAN fingerprinting and supported by image processing, to address the effect of the presence of people around the user and the user orientation problem.
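
To make the hybrid idea concrete, here is a minimal, hypothetical sketch in which WLAN fingerprinting (nearest neighbours over RSSI vectors) shortlists candidate locations and an image-matching step then refines the choice; the radio map, RSSI values, and the `refine_with_image` stub are placeholders rather than a method from any specific study in the review.

```python
import numpy as np

# Hypothetical offline radio map: location label -> mean RSSI per access point (dBm)
radio_map = {
    "corridor_A": np.array([-45.0, -70.0, -80.0]),
    "corridor_B": np.array([-60.0, -50.0, -75.0]),
    "atrium":     np.array([-75.0, -65.0, -48.0]),
}

def wlan_shortlist(rssi, k=2):
    """Return the k radio-map locations closest to the observed RSSI vector."""
    dists = {loc: np.linalg.norm(rssi - fp) for loc, fp in radio_map.items()}
    return sorted(dists, key=dists.get)[:k]

def refine_with_image(candidates, query_image):
    """Placeholder for an image-retrieval step restricted to the WLAN shortlist.

    A real system would match the query photo against reference images taken
    at each candidate location (e.g. with local feature descriptors) and
    return the best-scoring candidate.
    """
    return candidates[0]  # stub: assume the image step confirms the top candidate

observed = np.array([-47.0, -68.0, -82.0])
candidates = wlan_shortlist(observed)
print(refine_with_image(candidates, query_image=None))
```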

    Human Crowdsourcing Data for Indoor Location Applied to Ambient Assisted Living Scenarios

    In the last decades, the rise of life expectancy has accelerated the demand for new technological solutions to provide a longer life with improved quality. One of the major areas of Ambient Assisted Living aims to monitor the location of the elderly indoors. For this purpose, indoor positioning systems are valuable tools and can be classified according to their need for a supporting infrastructure. Infrastructure-based systems require investment in expensive equipment, while existing infrastructure-free systems, although they rely on the pervasively available characteristics of buildings, present limitations regarding the extensive process of acquiring and maintaining fingerprints, the maps that store the environmental characteristics used in the localisation phase. These problems hinder the deployment of indoor positioning systems in most scenarios. To overcome these limitations, an algorithm for the automatic construction of indoor floor plans and environmental fingerprints is proposed. Using crowdsourcing techniques, where the extensiveness of a task is reduced with the help of a large undefined group of users, the algorithm relies on the combination of multiple sources of information collected in a non-annotated way by common smartphones. The crowdsourced data comprises inertial sensor readings, responsible for estimating the users’ trajectories, together with Wi-Fi radio and magnetic field signals. Wi-Fi radio data is used to cluster the trajectories into smaller groups, each corresponding to a specific area of the building. Distance metrics applied to magnetic field signals are used to identify geomagnetic similarities between different users’ trajectories. The building’s floor plan is then automatically created, which results in fingerprints labelled with physical locations. Experimental results show that the proposed algorithm achieved floor plans and fingerprints comparable to those acquired manually, allowing the conclusion that it is possible to automate the setup process of infrastructure-free systems. With these results, this solution can be applied in any fingerprinting-based indoor positioning system.
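
The two signal-processing ideas above (grouping crowdsourced traces by Wi-Fi similarity and comparing magnetic sequences with a distance metric) can be sketched as follows; dynamic time warping and the Jaccard overlap are plausible choices used here for illustration only, since the abstract does not name the exact metrics, and the traces are placeholder data.

```python
import numpy as np

def wifi_similarity(aps_a, aps_b):
    """Jaccard overlap of the access points seen along two trajectories (assumed proxy)."""
    a, b = set(aps_a), set(aps_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two magnetic-magnitude sequences."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Two hypothetical crowdsourced traces: (visible APs, magnetic magnitudes in microtesla)
trace_1 = ({"ap1", "ap2", "ap3"}, [48.1, 47.5, 52.0, 55.3, 49.8])
trace_2 = ({"ap2", "ap3", "ap4"}, [48.0, 47.9, 51.5, 55.0, 50.1])

# Only compare magnetic signatures of traces that share enough Wi-Fi context
if wifi_similarity(trace_1[0], trace_2[0]) > 0.4:
    print("magnetic DTW distance:", dtw_distance(trace_1[1], trace_2[1]))
```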

    Dynamic spatial segmentation strategy based magnetic field indoor positioning system

    Mobile device users increasingly rely on the Global Positioning System (GPS) to track and navigate themselves. Such satellite-based positioning works as intended outdoors, where the device has unobstructed communication with GPS satellites. In indoor environments, however, the GPS signal fades away due to multi-path effects and an obstructed line-of-sight to the satellites. Numerous indoor localisation applications have therefore emerged on the market, geared towards finding a practical solution that satisfies the need for accuracy and efficiency. The case for the Indoor Positioning System (IPS) is strengthened by recent smart devices, which have evolved into multimedia devices with various sensors and optimised connectivity. By sensing the device’s surroundings and inferring its context, current IPS technology has proven its ability to provide stable and reliable indoor localisation information. However, such systems usually depend on a high density of infrastructure that requires expensive installations (e.g. Wi-Fi-based IPS). To trade off accuracy against cost, many researchers have paid considerable attention to infrastructure-free technologies, particularly those exploiting the earth’s magnetic field (EMF). EMF is a promising signal type that features ubiquitous availability, location specificity and long-term stability; in practice, an EMF-based IPS consists only of the mobile device and the EMF signal itself. To fully comprehend the conventional EMF-based IPS reported in the literature, a preliminary experimental study on indoor EMF characteristics was carried out at the beginning of this research. The results revealed that positioning performance decreased when the presence of magnetic disturbance sources was reduced to a minimum. In response to this finding, a new concept of spatial segmentation based on magnetic anomaly (MA) is devised in this research. This study therefore focuses on developing innovative techniques based on a spatial segmentation strategy and machine learning algorithms for effective indoor localisation using EMF. The proposed system comprises four closely correlated components: (i) a Kriging interpolation-based fingerprinting map; (ii) magnetic intensity-based spatial segmentation; (iii) weighted Naïve Bayes classification (WNBC); and (iv) a fused features-based k-Nearest-Neighbours (kNN) algorithm. The Kriging interpolation-based fingerprinting map reconstructs the originally observed EMF positioning database in the calibration phase by interpolating predicted points. The magnetic intensity-based spatial segmentation component then investigates the variation tendency of ambient EMF signals in the new database to analyse the distribution of magnetic disturbance sources and, accordingly, to segment the test site. WNBC blends the exclusive characteristics of indoor EMF into the original Naïve Bayes classification (NBC) to enable a more accurate and efficient segmentation approach. It is well known that the best IPS implementations often exploit multiple positioning sources in order to maximise accuracy; the fused features-based kNN component used in the positioning phase finally learns the various parameters collected in the calibration phase, continuously improving the positioning accuracy of the system.
The proposed system was evaluated at multiple indoor sites with diverse layouts. The results show that it outperforms state-of-the-art approaches, with the best methods proposed in this thesis achieving an average accuracy of 1-2 metres across most of the experimental environments. Such an accurate approach is expected to enable the future of infrastructure-free IPS technologies.
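
A minimal sketch of the final positioning step is given below: an inverse-distance-weighted kNN over magnetic fingerprints restricted to one spatial segment. The fingerprint layout, weighting scheme, and sample values are assumptions for illustration and not the exact formulation developed in the thesis.

```python
import numpy as np

# Hypothetical fingerprint database: (x, y, segment_id, magnetic feature vector)
fingerprints = [
    (0.0, 0.0, 0, np.array([48.2, -12.1, 30.5])),
    (1.0, 0.0, 0, np.array([49.0, -11.8, 31.0])),
    (2.0, 1.0, 1, np.array([55.4,  -9.5, 28.2])),
    (3.0, 1.0, 1, np.array([56.1,  -9.9, 27.8])),
]

def weighted_knn(query, segment_id, k=3, eps=1e-6):
    """Inverse-distance-weighted kNN restricted to one spatial segment."""
    cands = [(x, y, np.linalg.norm(query - f)) for x, y, seg, f in fingerprints
             if seg == segment_id]
    cands.sort(key=lambda c: c[2])          # closest fingerprints first
    nearest = cands[:k]
    weights = np.array([1.0 / (d + eps) for _, _, d in nearest])
    coords = np.array([(x, y) for x, y, _ in nearest])
    return (weights[:, None] * coords).sum(axis=0) / weights.sum()

# Query measured in the segment identified beforehand (e.g. by WNBC)
print(weighted_knn(np.array([55.8, -9.7, 28.0]), segment_id=1, k=2))
```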

    3D Registration of Aerial and Ground Robots for Disaster Response: An Evaluation of Features, Descriptors, and Transformation Estimation

    Global registration of heterogeneous ground and aerial mapping data is a challenging task. This is especially difficult in disaster response scenarios, when we have no prior information on the environment and cannot assume the regular order of man-made environments or meaningful semantic cues. In this work, we extensively evaluate different approaches to globally register UGV-generated 3D point-cloud data from LiDAR sensors with UAV-generated point-cloud maps from vision sensors. The approaches are realizations of different selections for: a) local features: key-points or segments; b) descriptors: FPFH, SHOT, or ESF; and c) transformation estimation: RANSAC or FGR. Additionally, we compare the results against standard approaches such as applying ICP after a good prior transformation has been given. The evaluation criteria include the distance that a UGV needs to travel to successfully localize, the registration error, and the computational cost. In this context, we report our findings on effectively performing the task on two new Search and Rescue datasets. Our results have the potential to help the community make informed decisions when registering point-cloud maps from ground robots to those from aerial robots. Comment: Awarded Best Paper at the 15th IEEE International Symposium on Safety, Security, and Rescue Robotics 2017 (SSRR 2017).
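
One of the evaluated pipeline families (FPFH descriptors, RANSAC-based feature matching, ICP refinement) can be sketched with the Open3D library roughly as below; the file names and voxel size are placeholders, and the function signatures follow the 0.1x `o3d.pipelines.registration` API, which may differ in other Open3D versions.

```python
import open3d as o3d

VOXEL = 0.5  # metres; placeholder downsampling resolution

def preprocess(pcd):
    """Downsample, estimate normals, and compute FPFH descriptors."""
    down = pcd.voxel_down_sample(VOXEL)
    down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * VOXEL, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * VOXEL, max_nn=100))
    return down, fpfh

# Placeholder file names for the UGV (LiDAR) and UAV (vision) maps
ugv = o3d.io.read_point_cloud("ugv_lidar_map.pcd")
uav = o3d.io.read_point_cloud("uav_vision_map.pcd")
ugv_down, ugv_fpfh = preprocess(ugv)
uav_down, uav_fpfh = preprocess(uav)

# Global registration: RANSAC over FPFH correspondences
ransac = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    ugv_down, uav_down, ugv_fpfh, uav_fpfh, True, 1.5 * VOXEL,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * VOXEL)],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Local refinement: point-to-point ICP seeded with the global estimate
icp = o3d.pipelines.registration.registration_icp(
    ugv_down, uav_down, 0.5 * VOXEL, ransac.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(icp.transformation)
```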

    On the use of smartphones as novel photogrammetric water gauging instruments: Developing tools for crowdsourcing water levels

    The term global climate change has been omnipresent since the beginning of the last decade. Changes in the global climate are associated with an increase in heavy rainfalls that can cause nearly unpredictable flash floods. Consequently, spatio-temporally high-resolution monitoring of rivers becomes increasingly important. Water gauging stations continuously and precisely measure water levels. However, they are rather expensive to purchase and maintain and are preferably installed at water bodies relevant to water management. Small-scale catchments often remain ungauged. In order to increase the data density of hydrometric monitoring networks and thus improve the prediction quality of flood events, new, flexible and cost-effective water level measurement technologies are required. They should be oriented towards the accuracy requirements of conventional measurement systems and facilitate the observation of water levels at virtually any time, even at the smallest rivers. A possible solution is the development of a photogrammetric smartphone application (app) for crowdsourcing water levels, which merely requires voluntary users to take pictures of a river section to determine the water level. Today’s smartphones integrate high-resolution cameras, a variety of sensors, powerful processors, and mass storage. However, they are designed for the mass market and use low-cost hardware that cannot match the quality of geodetic measurement technology. In order to investigate the potential for mobile measurement applications, research was conducted on the smartphone as a photogrammetric measurement instrument as part of the doctoral project. The studies deal with the geometric stability of smartphone cameras with regard to device-internal temperature changes and with the accuracy potential of rotation parameters measured with smartphone sensors. The results show a high, temperature-related variability of the interior orientation parameters, which is why the camera should be calibrated at the time of measurement. The results of the sensor investigations show considerable inaccuracies when measuring rotation parameters, especially the compass angle (errors of up to 90° were observed). The same applies to position parameters measured by global navigation satellite system (GNSS) receivers built into smartphones. According to the literature, positional accuracies of about 5 m are possible under the best conditions; otherwise, errors of several tens of metres are to be expected. As a result, direct georeferencing of image measurements using current smartphone technology should be discouraged. In consideration of these results, the water gauging app Open Water Levels (OWL) was developed, whose methodological development and implementation constituted the core of the thesis project. OWL enables the flexible measurement of water levels via crowdsourcing without requiring additional equipment or being limited to specific river sections. Data acquisition and processing take place directly in the field, so that the water level information is immediately available. In practice, the user captures a short time-lapse sequence of a river bank with OWL, which is used to calculate a spatio-temporal texture that enables the detection of the water line. In order to translate the image measurement into 3D object space, a synthetic, photo-realistic image of the situation is created from existing 3D data of the river section to be investigated.
Necessary approximations of the image orientation parameters are measured by smartphone sensors and GNSS. Matching the camera image to the synthetic image allows the interior and exterior orientation parameters to be determined by means of space resection and, finally, the image-measured 2D water line to be transferred into 3D object space to derive the prevailing water level in the reference system of the 3D data. In comparison with conventionally measured water levels, OWL reveals an accuracy potential of 2 cm on average, provided that the synthetic image and the camera image exhibit consistent image content and that the water line can be reliably detected. In the present dissertation, related geometric and radiometric problems are comprehensively discussed. Furthermore, possible solutions, based on advancing developments in smartphone technology and image processing as well as the increasing availability of 3D reference data, are presented in the synthesis of the work. The app Open Water Levels, which is currently available as a beta version and has been tested on selected devices, provides a basis which, with continuous further development, aims to achieve a final release for crowdsourcing water levels, towards the establishment of new and the expansion of existing monitoring networks.
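
The space resection step can be illustrated with OpenCV's PnP solver: given 2D-3D correspondences obtained by matching the smartphone photo against the synthetic rendering, the exterior orientation is estimated, after which the image-measured water line could be projected into object space. The correspondences and intrinsics below are placeholder values; this is a generic sketch, not the OWL implementation.

```python
import numpy as np
import cv2

# Placeholder 3D points (object space, metres) and their 2D image observations (pixels),
# assumed to come from matching the camera image against the synthetic rendering.
object_points = np.array([
    [0.0, 0.0, 0.0],
    [5.0, 0.0, 0.0],
    [5.0, 3.0, 0.0],
    [0.0, 3.0, 0.0],
    [2.5, 1.5, 1.0],
    [1.0, 2.0, 0.5],
], dtype=np.float64)
image_points = np.array([
    [310.0, 820.0],
    [1620.0, 840.0],
    [1590.0, 260.0],
    [345.0, 240.0],
    [960.0, 500.0],
    [620.0, 380.0],
], dtype=np.float64)

# Assumed pre-calibrated intrinsics of the smartphone camera (fx, fy, cx, cy)
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume lens distortion already corrected

# Space resection: exterior orientation from 2D-3D correspondences
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec  # projection centre in object-space coordinates
print(ok, camera_position.ravel())
```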

    Map-Based Localization for Unmanned Aerial Vehicle Navigation

    Unmanned Aerial Vehicles (UAVs) require precise pose estimation when navigating in indoor and GNSS-denied / GNSS-degraded outdoor environments. The possibility of crashing in these environments is high, as spaces are confined, with many moving obstacles. There are many solutions for localization in GNSS-denied environments, and many different technologies are used. Common solutions involve setting up or using existing infrastructure, such as beacons, Wi-Fi, or surveyed targets. These solutions were avoided because the cost should be proportional to the number of users, not the coverage area. Heavy and expensive sensors, for example a high-end IMU, were also avoided. Given these requirements, a camera-based localization solution was selected for the sensor pose estimation. Several camera-based localization approaches were investigated. Map-based localization methods were shown to be the most efficient because they close loops using a pre-existing map; thus, the amount of data and the time spent collecting it are reduced, as there is no need to re-observe the same areas multiple times. This dissertation proposes a solution to the task of fully localizing a monocular camera onboard a UAV with respect to a known environment (i.e., it is assumed that a 3D model of the environment is available) for the purpose of UAV navigation in structured environments. Incremental map-based localization involves tracking a map through an image sequence. When the map is a 3D model, this task is referred to as model-based tracking. A by-product of the tracker is the relative 3D pose (position and orientation) between the camera and the object being tracked. State-of-the-art solutions advocate that tracking geometry is more robust than tracking image texture because edges are more invariant to changes in object appearance and lighting. However, model-based trackers have been limited to tracking small, simple objects in small environments. An assessment was performed on tracking larger, more complex building models in larger environments. A state-of-the-art model-based tracker called ViSP (Visual Servoing Platform) was applied to tracking outdoor and indoor buildings using a UAV's low-cost camera. The assessment revealed weaknesses at large scales. Specifically, ViSP failed when tracking was lost and needed to be manually re-initialized. Failure occurred when there was a lack of model features in the camera's field of view, and because of rapid camera motion. Experiments revealed that ViSP achieved positional accuracies similar to single point positioning solutions obtained from single-frequency (L1) GPS observations, with standard deviations of around 10 metres. These errors were considered large, given that the geometric accuracy of the 3D model used in the experiments was 10 to 40 cm. The first contribution of this dissertation proposes to increase the performance of the localization system by combining ViSP with map-building incremental localization, also referred to as simultaneous localization and mapping (SLAM). Experimental results in both indoor and outdoor environments show that sub-metre positional accuracies were achieved, while reducing the number of tracking losses throughout the image sequence. It is shown that by integrating model-based tracking with SLAM, not only does SLAM improve model tracking performance, but the model-based tracker alleviates the computational expense of SLAM's loop-closing procedure to improve runtime performance.
Experiments also revealed that ViSP was unable to handle occlusions when a complete 3D building model was used, resulting in large errors in its pose estimates. The second contribution of this dissertation is a novel map-based incremental localization algorithm that improves tracking performance and increases pose estimation accuracies compared with ViSP. The novelty of this algorithm is the implementation of an efficient matching process that identifies corresponding linear features between the UAV's RGB image data and a large, complex, and untextured 3D model. The proposed model-based tracker improved positional accuracies from 10 m (obtained with ViSP) to 46 cm in outdoor environments, and from a result unattainable with ViSP to 2 cm positional accuracy in large indoor environments. The main disadvantage of any incremental algorithm is that it requires the camera pose of the first frame. Initialization is often a manual process. The third contribution of this dissertation is a map-based absolute localization algorithm that automatically estimates the camera pose when no prior pose information is available. The method benefits from vertical line matching to accomplish a registration procedure of the reference model views with a set of initial input images via geometric hashing. Results demonstrate that sub-metre positional accuracies were achieved and that a proposed enhancement of conventional geometric hashing produced more correct matches: 75% of the correct matches were identified, compared to 11%. Furthermore, the number of incorrect matches was reduced by 80%.
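
The vertical-line matching used for absolute localization can be sketched by first detecting near-vertical segments in a UAV frame, which would then be matched against the vertical edges of the 3D building model (the geometric-hashing registration itself is not shown). The thresholds and the synthetic test frame below are illustrative assumptions.

```python
import numpy as np
import cv2

def vertical_segments(gray, max_tilt_deg=10.0):
    """Detect near-vertical line segments as candidates for model-edge matching."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=5)
    keep = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            # Angle measured from the vertical axis; keep only nearly vertical segments
            angle = abs(np.degrees(np.arctan2(x2 - x1, y2 - y1)))
            angle = min(angle, 180.0 - angle)
            if angle <= max_tilt_deg:
                keep.append((x1, y1, x2, y2))
    return keep

# Synthetic test frame with one vertical and one slanted edge (stand-in for a UAV image)
frame = np.zeros((400, 400), dtype=np.uint8)
cv2.line(frame, (200, 50), (200, 350), 255, 3)   # vertical building edge
cv2.line(frame, (80, 100), (300, 300), 255, 3)   # slanted clutter, should be rejected
print(vertical_segments(frame))
```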
