
    Sky Segmentation of Fisheye Images for Identifying Non-Line-of-Sight Satellites

    GNSS (global navigation satellite system) receivers are often deployed in environments where some satellite signals are blocked by buildings and other obstructions. This non-line-of-sight situation is challenging for GNSS positioning because the signals can still be received via indirect paths, which introduces errors into the calculated position. Knowledge of the blocked satellites would help in mitigating these errors. A sky-pointing fisheye camera can be used to gather information about the surroundings of the receiver in order to detect non-line-of-sight situations. Using semantic segmentation, the image can be divided into line-of-sight and non-line-of-sight regions. By projecting the satellite locations onto the image, each satellite can be classified according to the segmentation. The objective of this thesis is to study the use of neural networks for segmenting the sky from fisheye images and for classifying the potentially visible satellites as line-of-sight or non-line-of-sight based on the segmentation. Several popular segmentation networks were trained and evaluated to compare their performance on the task. A small, manually labeled dataset was prepared, containing images from different weather conditions and environments, including tunnels. The results were validated on a larger test set using GNSS data. The study shows that neural networks can segment the sky from fisheye images very precisely, reaching almost 99% intersection over union and over 99% F1-score. The best-performing model was a U-Net with an EfficientNetB6 encoder, but there was little difference between the tested models. The satellite classification performed after the segmentation was also accurate and consistent with the observed signal strengths. It can be concluded from the study that fisheye sky segmentation with neural networks is an effective and useful method for line-of-sight detection.
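    To make the projection-and-classification step concrete, the sketch below shows how satellites could be labeled from a predicted sky mask. It is not the thesis code: the ideal equidistant fisheye model, the zenith-pointing camera with image-up aligned to north, and all function and variable names are assumptions introduced for illustration.

```python
# A minimal sketch: project satellite azimuth/elevation onto a fisheye image
# and label the satellite LOS if the pixel falls in the predicted sky region.
import numpy as np

def project_to_fisheye(az_deg, el_deg, cx, cy, radius):
    """Map azimuth/elevation (degrees) to pixel coordinates.

    radius is the image radius corresponding to the horizon (elevation 0 deg);
    an equidistant projection centered on the zenith is assumed.
    """
    zenith = np.radians(90.0 - el_deg)      # angular distance from zenith
    r = radius * zenith / (np.pi / 2.0)     # equidistant mapping
    az = np.radians(az_deg)
    u = cx + r * np.sin(az)                 # east -> +x
    v = cy - r * np.cos(az)                 # north -> image up
    return int(round(u)), int(round(v))

def classify_satellites(sky_mask, sats, cx, cy, radius):
    """sats: {prn: (azimuth_deg, elevation_deg)}; sky_mask: HxW bool array."""
    labels = {}
    h, w = sky_mask.shape
    for prn, (az, el) in sats.items():
        u, v = project_to_fisheye(az, el, cx, cy, radius)
        in_image = 0 <= u < w and 0 <= v < h
        labels[prn] = "LOS" if in_image and sky_mask[v, u] else "NLOS"
    return labels

# Example with a synthetic mask: the left half of the image is sky.
mask = np.zeros((1000, 1000), dtype=bool)
mask[:, :500] = True
print(classify_satellites(mask, {"G01": (270.0, 45.0), "G07": (90.0, 30.0)},
                          cx=500, cy=500, radius=500))
```

    In the actual system the camera model would come from calibration and the mask from the trained segmentation network, but the LOS/NLOS decision ultimately reduces to a mask lookup of this kind at the projected satellite position.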

    A Review of Environmental Context Detection for Navigation Based on Multiple Sensors

    Current navigation systems use multi-sensor data to improve localization accuracy, but often without certainty about the quality of those measurements in particular situations. Context detection would enable an adaptive navigation system that improves the precision and robustness of its localization solution by anticipating possible degradations in sensor signal quality (GNSS in urban canyons, for instance, or camera-based navigation in a texture-poor environment). That is why context detection is considered the future of navigation systems. It is therefore important, first, to define this concept of context for navigation and to find a way to extract it from the available information. This paper reviews existing GNSS-based and on-board vision-based solutions for environmental context detection. The review shows that most state-of-the-art research focuses on only one type of data, and it confirms that the main open direction for this problem is to combine different indicators from multiple sensors.
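    As a purely illustrative sketch of the combination the review argues for (not an algorithm from the paper), a coarse context label could be derived by jointly thresholding one GNSS indicator and one vision indicator; the thresholds and class names below are arbitrary assumptions.

```python
# Hypothetical fusion of a GNSS indicator (mean carrier-to-noise density, C/N0)
# with a camera indicator (visible-sky fraction) into a coarse context label.
def classify_context(mean_cn0_dbhz, sky_fraction):
    """Return a coarse environmental context label from two sensor indicators."""
    if mean_cn0_dbhz < 25 and sky_fraction < 0.05:
        return "indoor"
    if mean_cn0_dbhz < 38 or sky_fraction < 0.4:
        return "constrained (urban canyon / under canopy)"
    return "open sky"

print(classify_context(44.0, 0.8))   # open sky
print(classify_context(33.0, 0.2))   # constrained
```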

    Image Segmentation by Adaptive Color-Texture Combination and Pixel Classification (Applications to Characterizing the GNSS Signal Reception Environment)

    In image segmentation, color and texture are two of the most widely used sources of information. The first contribution of this thesis concerns their joint use: an adaptive, non-parametric color/texture combination method that combines one (or more) color gradient with one (or more) texture gradient to generate a structural gradient, which is then used as the potential image in a watershed-based region-growing algorithm. The originality of the method lies in studying the dispersion of the 3D point cloud formed by the color and texture descriptors, through a comparative analysis of the eigenvalues obtained by principal component analysis of the cloud's covariance matrix. The proposed color/texture combination is first tested and compared with well-known methods from the literature on two image databases: the generic BERKELEY color-image database and the VISTEX texture-image database.
    The applied part of the thesis falls within the ViLoc project (funded by the RFC regional council) and the CAPLOC project (funded by PREDIT). In this framework, the second contribution concerns the characterization of the GNSS signal reception environment, with the aim of improving the estimated position of a mobile receiver in urban areas by excluding NLOS satellites (whose signals are masked or received only after reflection on obstacles surrounding the antenna) from the position computation. Two image-processing-based characterization approaches are proposed. The first applies the proposed color/texture combination to images acquired in mobility with a fisheye camera mounted on the roof of the laboratory vehicle and oriented toward the sky, followed by a binary classification into the two classes of interest: sky (LOS signals) and non-sky (NLOS signals). To satisfy the real-time constraint required by the CAPLOC project, a second approach based on image simplification coupled with an adapted pixel-wise classification is proposed. Excluding NLOS satellites improves the precision of the estimated position, but only when the LOS satellites (whose signals are received directly) are geometrically well distributed in space. To take this knowledge of the satellite distribution into account and thereby improve localization accuracy, a new position estimation strategy is proposed, based on the exclusion of the NLOS satellites identified by image processing, conditioned on the DOP information contained in the GPS frames.
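    A minimal sketch of the DOP-conditioned exclusion strategy described above is given below. It is not the thesis implementation: the GDOP formulation from azimuth/elevation, the threshold value, and all names are assumptions made for illustration.

```python
# NLOS satellites flagged by the image-processing step are removed from the
# position fix only if the remaining LOS constellation keeps the geometric
# dilution of precision (GDOP) below a chosen threshold.
import numpy as np

def gdop(az_el_deg):
    """Geometric DOP from satellite azimuth/elevation pairs (degrees, ENU frame)."""
    az = np.radians([a for a, _ in az_el_deg])
    el = np.radians([e for _, e in az_el_deg])
    # Unit line-of-sight vectors plus the receiver-clock column.
    g = np.column_stack([np.cos(el) * np.sin(az),
                         np.cos(el) * np.cos(az),
                         np.sin(el),
                         np.ones_like(az)])
    return float(np.sqrt(np.trace(np.linalg.inv(g.T @ g))))

def select_satellites(sats, nlos_prns, max_gdop=6.0):
    """sats: {prn: (az_deg, el_deg)}; nlos_prns: set of PRNs flagged as NLOS."""
    los = {p: ae for p, ae in sats.items() if p not in nlos_prns}
    if len(los) >= 4 and gdop(list(los.values())) <= max_gdop:
        return los          # exclusion is safe: geometry is still acceptable
    return sats             # keep everything rather than degrade the geometry

sats = {"G02": (30, 70), "G05": (120, 20), "G12": (210, 15),
        "G19": (300, 25), "G24": (80, 10)}
# G24 is dropped here: the four remaining LOS satellites keep GDOP under the threshold.
print(select_satellites(sats, nlos_prns={"G24"}))
```

    The design point is the guard condition: image-based NLOS flags are only acted upon when the surviving LOS satellites remain geometrically well distributed; otherwise the full satellite set is retained.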

    On the use of smartphones as novel photogrammetric water gauging instruments: Developing tools for crowdsourcing water levels

    The term global climate change has been omnipresent since the beginning of the last decade. Changes in the global climate are associated with an increase in heavy rainfall events that can cause nearly unpredictable flash floods. Consequently, spatio-temporally high-resolution monitoring of rivers is becoming increasingly important. Water gauging stations continuously and precisely measure water levels, but they are expensive to purchase and maintain and are preferably installed at water bodies relevant for water management; small catchments often remain ungauged. In order to increase the data density of hydrometric monitoring networks and thus improve the prediction quality of flood events, new, flexible and cost-effective water level measurement technologies are required. They should be oriented towards the accuracy requirements of conventional measurement systems and facilitate the observation of water levels at virtually any time, even at the smallest rivers. A possible solution is a photogrammetric smartphone application (app) for crowdsourcing water levels, which merely requires voluntary users to take pictures of a river section to determine the water level. Today's smartphones integrate high-resolution cameras, a variety of sensors, powerful processors, and mass storage, but they are designed for the mass market and use low-cost hardware that cannot match the quality of geodetic measurement technology. In order to investigate the potential for mobile measurement applications, the smartphone was studied as a photogrammetric measurement instrument as part of the doctoral project. The studies deal with the geometric stability of smartphone cameras with respect to device-internal temperature changes and with the accuracy potential of rotation parameters measured with smartphone sensors. The results show a high, temperature-related variability of the interior orientation parameters, which is why the camera should be calibrated at the time of measurement. The sensor investigations reveal considerable inaccuracies when measuring rotation parameters, especially the compass angle (errors of up to 90° were observed). The same applies to position parameters measured by the global navigation satellite system (GNSS) receivers built into smartphones: according to the literature, positional accuracies of about 5 m are possible under the best conditions; otherwise, errors of several tens of meters are to be expected. As a result, direct georeferencing of image measurements using current smartphone technology should be discouraged. In consideration of these results, the water gauging app Open Water Levels (OWL) was developed, whose methodological development and implementation constituted the core of the thesis project. OWL enables the flexible measurement of water levels via crowdsourcing without requiring additional equipment or being limited to specific river sections. Data acquisition and processing take place directly in the field, so that the water level information is immediately available. In practice, the user captures a short time-lapse sequence of a river bank with OWL, which is used to calculate a spatio-temporal texture that enables the detection of the water line. In order to translate the image measurement into 3D object space, a synthetic, photo-realistic image of the scene is created from existing 3D data of the river section under investigation.
    Necessary approximations of the image orientation parameters are measured by smartphone sensors and GNSS. Matching the camera image to the synthetic image allows the interior and exterior orientation parameters to be determined by space resection, and finally the image-measured 2D water line to be transferred into 3D object space in order to derive the prevailing water level in the reference system of the 3D data. In comparison with conventionally measured water levels, OWL shows an accuracy potential of 2 cm on average, provided that the synthetic image and the camera image exhibit consistent image content and that the water line can be reliably detected. In the present dissertation, the related geometric and radiometric problems are comprehensively discussed, and possible solutions, based on advancing developments in smartphone technology and image processing as well as the increasing availability of 3D reference data, are presented in the synthesis of the work. The app Open Water Levels, which is currently available as a beta version and has been tested on selected devices, provides a basis that, with continued development, aims at a final release for crowdsourcing water levels and thus at the establishment of new, and the expansion of existing, monitoring networks.
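    The following sketch illustrates the spatio-temporal texture idea with a simple stand-in statistic (per-pixel temporal standard deviation over the time-lapse stack). OWL's actual texture measure, water-line extraction, and the subsequent space resection are more elaborate; all names and thresholds here are assumptions.

```python
# Separate the moving water surface from the static bank in a short time-lapse
# sequence and return an approximate water-line row per image column.
import numpy as np

def temporal_texture(frames):
    """frames: (T, H, W) grayscale stack; returns the per-pixel temporal std."""
    stack = np.asarray(frames, dtype=np.float32)
    return stack.std(axis=0)

def water_line_rows(texture, threshold):
    """For each column, return the topmost row whose temporal texture exceeds
    the threshold (the upper edge of the 'moving water' region), or -1."""
    moving = texture > threshold
    rows = np.full(texture.shape[1], -1, dtype=int)
    for col in range(texture.shape[1]):
        hits = np.flatnonzero(moving[:, col])
        if hits.size:
            rows[col] = hits[0]
    return rows

# Synthetic example: static bank in the upper half, flickering water below.
rng = np.random.default_rng(0)
frames = np.zeros((20, 100, 60), dtype=np.float32)
frames[:, 50:, :] = rng.normal(128, 20, size=(20, 50, 60))  # water: high variance
tex = temporal_texture(frames)
print(water_line_rows(tex, threshold=5.0)[:10])   # ~50 for the first columns
```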

    On Simultaneous Localization and Mapping inside the Human Body (Body-SLAM)

    Wireless capsule endoscopy (WCE) offers a patient-friendly, non-invasive and painless investigation of the entire small intestine, where conventional wired endoscopic instruments can barely reach. As a critical part of the capsule endoscopic examination, physicians need to know the precise position of the endoscopic capsule in order to locate intestinal disease once it has been detected in the video. Defining the position of the endoscopic capsule requires a map of the inside of the human body. However, since the shape of the small intestine is extremely complex and the RF signal propagates differently through non-homogeneous body tissues, accurate mapping and localization inside the small intestine are very challenging. In this dissertation, we present an in-body simultaneous localization and mapping technique (Body-SLAM) to enhance the positioning accuracy of the WCE inside the small intestine and to reconstruct the trajectory the capsule has traveled. In this way, the positions of intestinal diseases can be accurately located on the map of the inside of the human body, which facilitates follow-up therapeutic operations. The proposed approach takes advantage of data fusion from two sources that come with the WCE: image sequences captured by the WCE's embedded camera and the RF signal emitted by the capsule. It estimates the speed and orientation of the endoscopic capsule by analyzing the displacements of feature points between consecutive images, and then integrates this motion information with the RF measurements by employing a Kalman filter to smooth the localization results and generate the route that the WCE has traveled. The performance of the proposed motion tracking algorithm is validated using empirical data from patients, and this motion model is later imported into a virtual testbed to test the performance of alternative Body-SLAM algorithms. Experimental results show that the proposed Body-SLAM technique is able to provide accurate tracking of the WCE with an average error of less than 2.3 cm.
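    A minimal sketch of the fusion idea (not the dissertation's filter) is shown below: a one-dimensional Kalman filter in which the image-derived displacement drives the prediction step and the noisier RF position estimate is applied as the measurement update. The noise values, units and function names are assumptions.

```python
# Fuse vision-based displacements with RF position fixes along the trajectory.
import numpy as np

def kalman_fuse(image_displacements, rf_positions, q=0.05, r=4.0):
    """Return smoothed along-trajectory positions (cm).

    image_displacements: per-frame displacement from feature tracking (cm).
    rf_positions: per-frame position estimate from RF localization (cm).
    q, r: assumed process and measurement noise variances.
    """
    x, p = rf_positions[0], r              # initialise from the first RF fix
    track = []
    for d, z in zip(image_displacements, rf_positions):
        # Predict: advance the state by the vision-based displacement.
        x, p = x + d, p + q
        # Update: correct with the RF measurement.
        k = p / (p + r)
        x, p = x + k * (z - x), (1.0 - k) * p
        track.append(x)
    return np.array(track)

# Example: true motion of 1 cm per frame, RF fixes with ~2 cm noise.
rng = np.random.default_rng(1)
true_pos = np.arange(1, 51, dtype=float)
disp = np.ones(50) + rng.normal(0, 0.1, 50)
rf = true_pos + rng.normal(0, 2.0, 50)
print(np.abs(kalman_fuse(disp, rf) - true_pos).mean())  # typically well under 2 cm
```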

    Signals in the Soil: Subsurface Sensing

    In this chapter, novel subsurface soil sensing approaches are presented for monitoring and real-time decision support system applications. The methods, materials, and operational feasibility aspects of soil sensors are explored. The soil sensing techniques covered in this chapter include aerial sensing, in-situ sensing, proximal sensing, and remote sensing, and the underlying sensing mechanisms are examined as well. Sensor selection and calibration techniques are described in detail. The chapter concludes with a discussion of soil sensing challenges.

    UAVs for the Environmental Sciences

    This book gives an overview of the use of UAVs in the environmental sciences, covering technical basics, data acquisition with different sensors, and data processing schemes, and illustrating various examples of application.

    Remote Sensing

    This dual conception of remote sensing led us to the idea of preparing two different books: in addition to the first book, which presents recent advances in remote sensing applications, this book is devoted to new techniques for data processing, sensors, and platforms. We do not intend this book to cover all aspects of remote sensing techniques and platforms, since that would be an impossible task for a single volume. Instead, we have collected a number of high-quality, original, and representative contributions in those areas.