88 research outputs found

    PHROG: A Multimodal Feature for Place Recognition

    Long-term place recognition in outdoor environments remains a challenge due to strong appearance changes in the environment. The problem becomes even more difficult when the matching between two scenes has to be made with information coming from different visual sources, particularly with different spectral ranges. For instance, an infrared camera is helpful for night vision in combination with a visible camera. In this paper, we focus on testing common feature point extractors under both constraints: repeatability across spectral ranges and long-term appearance change. We develop a new feature extraction method designed to improve repeatability across spectral ranges. We evaluate feature robustness on long-term datasets coming from different imaging sources (optics, sensor sizes, and spectral ranges) with a Bag-of-Words approach. Our tests demonstrate that the method brings a significant improvement to image retrieval in a visual place recognition context, particularly when images from various spectral ranges must be associated: we have evaluated our approach using visible, Near InfraRed (NIR), Short Wavelength InfraRed (SWIR), and Long Wavelength InfraRed (LWIR) imagery.
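The Bag-of-Words retrieval step described in this abstract can be sketched as follows. This is an illustrative toy, not the paper's PHROG descriptor: the descriptors are synthetic vectors, and the vocabulary size, k-means settings, and histogram-intersection score are assumed choices.

```python
import numpy as np

def build_vocabulary(descriptors, k, iters=20, seed=0):
    """Tiny k-means to build a k-word visual vocabulary from local descriptors."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest visual word, then re-centre
        d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = descriptors[labels == j].mean(axis=0)
    return centers

def bow_histogram(descriptors, centers):
    """Quantise an image's descriptors into a normalised visual-word histogram."""
    d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(centers)).astype(float)
    return hist / hist.sum()

# two synthetic 'images': descriptor clouds around different prototypes
rng = np.random.default_rng(1)
img_a = rng.normal(0.0, 0.1, (50, 8))
img_b = rng.normal(1.0, 0.1, (50, 8))
vocab = build_vocabulary(np.vstack([img_a, img_b]), k=4)
h_a = bow_histogram(img_a, vocab)
h_b = bow_histogram(img_b, vocab)
# matching score: histogram intersection (higher = more similar)
score_aa = np.minimum(h_a, h_a).sum()
score_ab = np.minimum(h_a, h_b).sum()
```

In a real pipeline the descriptors would come from a feature extractor, and retrieval would rank database images by this score against the query histogram.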

    Spacecraft Position Estimation and Attitude Determination using Terrestrial Illumination Matching

    An algorithm for spacecraft position estimation and attitude determination via terrestrial illumination matching (TIM) is presented, consisting of a novel method that uses terrestrial lights as a surrogate for star fields. Although star sensors represent a highly accurate means of attitude determination with considerable spaceflight heritage, with the Global Positioning System (GPS) providing position, TIM provides a potentially viable alternative in the event of star sensor or GPS malfunction or performance degradation. The research defines a catalog of terrestrial light constellations, which are then implemented within the TIM algorithm for position acquisition of a generic spacecraft bus. Because the algorithm relies on terrestrial lights rather than the established standard of star fields, a series of sensitivity studies is presented to determine performance under specified operating constraints, including varying orbital altitude and cloud cover conditions. The pose is recovered from the matching techniques by solving the epipolar constraint equation using the Essential and Fundamental matrices, and by point-to-point projection using the Homography matrix. This yields the relative position change and the spacecraft's attitude when a measurement is available; when none is, both an extended and an unscented Kalman filter are applied to test continuous operation between measurements. The approach is operationally promising for use on each nighttime pass, but filtering alone is not enough to sustain orbit determination during daytime operations.
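The epipolar constraint mentioned above can be illustrated numerically. The rotation, translation, and 3D point below are made-up values, not data from the TIM algorithm; the sketch only verifies that corresponding normalised image points satisfy x2^T E x1 = 0 for the Essential matrix E = [t]_x R.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# hypothetical relative pose between two camera views (assumed values)
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.5, 0.0, 0.1])
E = skew(t) @ R  # Essential matrix

# a 3D point (e.g. a matched terrestrial light) seen in both views
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 3) + np.array([0.0, 0.0, 5.0])  # in front of camera 1
x1 = X / X[2]                 # normalised image coordinates, view 1
X2 = R @ X + t                # same point in camera-2 frame
x2 = X2 / X2[2]               # normalised image coordinates, view 2

residual = x2 @ E @ x1        # epipolar constraint: should vanish
```

Solving for E from several such correspondences (rather than checking it) is what the five/eight-point style methods referenced by the abstract do.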

    ASSESSMENT OF ELECTRO-OPTICAL IMAGING TECHNOLOGY FOR UNMANNED AERIAL SYSTEM NAVIGATION IN A GPS-DENIED ENVIRONMENT

    Navigation systems of unmanned aircraft systems (UAS) are heavily dependent on the availability of the Global Positioning System (GPS) or other Global Navigation Satellite Systems (GNSS). Although inertial navigation systems (INS) can provide the position and velocity of an aircraft based on acceleration measurements, the information degrades over time and reduces the capability of the system. In a GPS-denied environment, a UAS must rely on alternative sensor sources for navigation. This thesis presents preliminary evaluation results on the use of onboard down-looking electro-optical sensors and image-matching techniques to assist in GPS-free navigation of aerial platforms. Following a presentation of the fundamental mathematics behind the proposed concept, the thesis analyzes the key results from three flight campaign experiments that use different sets of sensors to collect data. Each flight experiment explores a different sensor setup, assesses a variety of image processing methods, covers different terrain environments, and reveals limitations of the proposed approach. In addition, an attempt to incorporate navigational-aid solutions into a navigation system using a Kalman filter is demonstrated. The thesis concludes with recommendations for future research on developing an integrated navigation system that relies on inertial measurement unit data complemented by positional fixes from the image-matching technique. Outstanding Thesis. Civilian, DSO National Laboratories, Singapore. Approved for public release. Distribution is unlimited.
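The fusion of drifting inertial dead reckoning with occasional image-matching position fixes can be sketched with a one-dimensional Kalman filter. All noise values, the fix schedule, and the constant-velocity model below are assumed for illustration; the thesis's actual filter design may differ.

```python
import numpy as np

# 1-D toy: constant-velocity aircraft; prediction stands in for dead reckoning,
# and an occasional noisy position fix (the image match) corrects the drift.
dt, q, r = 1.0, 0.05, 0.5                 # time step, process noise, fix noise
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition (position, velocity)
H = np.array([[1.0, 0.0]])                # we only measure position
Q = q * np.eye(2)
R = np.array([[r]])

x = np.array([0.0, 1.0])                  # initial state estimate
P = np.eye(2)                             # initial covariance
rng = np.random.default_rng(0)
true_pos = 0.0
for step in range(20):
    true_pos += 1.0 * dt                  # ground truth moves at 1 m/s
    # predict (dead reckoning)
    x = F @ x
    P = F @ P @ F.T + Q
    if step % 5 == 4:                     # an image-matching fix every 5 steps
        z = true_pos + rng.normal(0.0, np.sqrt(r))
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P

pos_error = abs(x[0] - true_pos)
```

Between fixes the covariance grows, mirroring the INS drift the abstract describes; each fix shrinks it again.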

    Advanced Location-Based Technologies and Services

    Since the publication of the first edition in 2004, advances in mobile devices, positioning sensors, WiFi fingerprinting, and wireless communications, among others, have paved the way for developing new and advanced location-based services (LBSs). This second edition provides up-to-date information on LBSs, including WiFi fingerprinting, mobile computing, geospatial clouds, geospatial data mining, location privacy, and location-based social networking. It also includes new chapters on application areas such as LBSs for public health, indoor navigation, and advertising. In addition, the chapter on remote sensing has been revised to address recent advancements.
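WiFi fingerprinting, one of the techniques the book covers, is commonly implemented as nearest-neighbour matching of a received-signal-strength (RSSI) vector against a pre-surveyed radio map. A minimal sketch with an invented radio map (the access points, locations, and k are all hypothetical):

```python
import numpy as np

# Toy radio map: survey location (x, y) -> RSSI from 3 access points, in dBm
radio_map = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-70, -40, -80],
    (0.0, 5.0): [-70, -80, -40],
    (5.0, 5.0): [-80, -60, -55],
}
locs = np.array(list(radio_map.keys()))
prints = np.array(list(radio_map.values()), dtype=float)

def locate(rssi, k=2):
    """Estimate position as the centroid of the k best-matching fingerprints."""
    d = np.linalg.norm(prints - np.asarray(rssi, dtype=float), axis=1)
    nearest = np.argsort(d)[:k]
    return locs[nearest].mean(axis=0)

# an observation that strongly hears access point 1
est = locate([-45, -68, -78])
```

Production systems use much denser radio maps and often probabilistic matching, but the nearest-neighbour lookup above is the core idea.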

    Digital Multispectral Map Reconstruction Using Aerial Imagery

    Advances in the field of computer vision have allowed for the establishment of faster and more accurate photogrammetry techniques. Structure from Motion (SfM) is a photogrammetric technique focused on the digital spatial reconstruction of objects from a sequence of images. Unmanned Aerial Vehicle (UAV) platforms make it possible to acquire high-fidelity imagery for environmental mapping, and have therefore become a widely adopted survey method. The combination of SfM with recent improvements in UAV platforms grants greater flexibility and applicability, opening the path for a new remote sensing technique intended to replace more traditional and laborious approaches often associated with high monetary costs. The continued development of digital reconstruction software and advances in computer processing allow for a more affordable and higher-resolution solution compared to traditional methods. The present work proposes a digital reconstruction algorithm based on images taken by a UAV platform, inspired by the work made available by the open-source project OpenDroneMap. The aerial images are fed into the computer vision program and several operations are applied to them, including detection and matching of features, point cloud reconstruction, meshing, and texturing, resulting in a final product that represents the surveyed site. Additionally, the study found that OpenDroneMap did not include an implementation for processing thermal images, so its code was extended to allow the reconstruction of thermal maps without sacrificing the resolution of the final model.
Standard methods for processing thermal images require a larger image footprint (the area of ground captured in a frame), because these images lack invariant features; increasing the footprint raises the number of features present in each frame, but this method of image capture lowers the resolution of the final product. The algorithm was developed using open-source libraries. To validate the obtained results, the model was compared against data obtained from commercial products such as Pix4D. Furthermore, due to circumstances brought about by the current pandemic, it was not possible to conduct a field study for the comparison and assessment of the results; the models were therefore validated by verifying that the geographic location of the model was correct and by visually assessing the generated maps.
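The feature detection-and-matching stage of an SfM pipeline like the one described can be sketched as nearest-neighbour descriptor matching with a Lowe-style ratio test. The descriptors below are synthetic stand-ins; OpenDroneMap's actual matcher and descriptors differ.

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbour matching with a ratio test: keep a match only if the
    best distance is clearly smaller than the second-best distance."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dist)
        best, second = order[0], order[1]
        if dist[best] < ratio * dist[second]:
            matches.append((i, best))
    return matches

# synthetic descriptors: image B contains image A's features slightly perturbed,
# plus one unrelated feature that the ratio test should never match
rng = np.random.default_rng(0)
desc_a = rng.uniform(0.0, 1.0, (5, 16))
desc_b = np.vstack([desc_a + rng.normal(0.0, 0.01, desc_a.shape),
                    rng.uniform(5.0, 6.0, (1, 16))])
matches = match_features(desc_a, desc_b)
```

In a full pipeline these matches feed triangulation and bundle adjustment; the ratio test is what keeps ambiguous correspondences (common in low-texture thermal imagery) out of the reconstruction.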

    PPMExplorer: Using Information Retrieval, Computer Vision and Transfer Learning Methods to Index and Explore Images of Pompeii

    In this dissertation, we present and analyze the technology used in the making of PPMExplorer: Search, Find, and Explore Pompeii. PPMExplorer is a software tool made with data extracted from the Pompei: Pitture e Mosaici (PPM) volumes. PPM is a valuable set of volumes containing 20,000 annotated historical images of the archaeological site of Pompeii, Italy, accompanied by extensive captions. We transformed the volumes from paper, to digital, to searchable. PPMExplorer enables archaeologists to formulate and check hypotheses on historical findings. We show that such a concept is possible by leveraging computer-generated correlations between artifacts using image data, text data, and a combination of both. The acquisition and interconnection of the data are proposed and executed using image processing, natural language processing, data mining, and machine learning methods.
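A minimal sketch of caption-based retrieval of the kind such a tool could build on, using TF-IDF weighting and cosine similarity. The captions and tokenisation below are invented for illustration; this is not the dissertation's actual indexing pipeline.

```python
import math
from collections import Counter

# Invented captions standing in for PPM's annotated image descriptions
captions = {
    "img1": "mosaic floor with geometric pattern",
    "img2": "fresco of a garden scene with birds",
    "img3": "geometric mosaic in the atrium floor",
}
docs = {k: v.split() for k, v in captions.items()}
df = Counter(w for words in docs.values() for w in set(words))  # document freq.
N = len(docs)

def tfidf(words):
    """Sparse TF-IDF vector as a word -> weight dict."""
    tf = Counter(words)
    return {w: (c / len(words)) * math.log(N / df.get(w, N)) for w, c in tf.items()}

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vecs = {k: tfidf(words) for k, words in docs.items()}
query = tfidf("geometric mosaic".split())
ranked = sorted(vecs, key=lambda k: cosine(query, vecs[k]), reverse=True)
```

Combining such text scores with image-feature similarity is the kind of multimodal correlation the abstract describes.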

    Robust and Accurate Camera Localisation at a Large Scale

    The task of camera-based localisation is to quickly and precisely pinpoint the location (and viewing direction) at which an image was taken, against a pre-stored large-scale map of the environment. This technique can be used in many 3D computer vision applications, e.g., AR/VR and autonomous driving. Mapping the world is the first step in enabling camera-based localisation, since a pre-stored map serves as the reference for a query image or sequence. In this thesis, we exploit three readily available sources: (i) satellite images; (ii) ground-view images; (iii) 3D point clouds. Based on these three sources, we propose solutions to localise a query camera both effectively and efficiently, i.e., accurately localising a query camera under a variety of lighting and viewing conditions within a small amount of time. The main contributions are summarised as follows. In chapter 3, we present minimal 4-point and 2-point solvers to estimate relative and absolute camera pose, respectively. The core idea is to exploit the vertical direction, from an IMU or a vanishing point, to derive closed-form solutions of a quartic equation for the relative pose and a quadratic equation for the absolute pose. In chapter 4, we localise a ground-view query image against a satellite map. Inspired by the insight that humans commonly use orientation information as an important cue for spatial localisation, we propose a method that endows deep neural networks with a 'commonsense' of orientation. We design a Siamese network that explicitly encodes each pixel's orientation in the ground-view and satellite images. Our method boosts the discriminative power of the learned deep features, outperforming all previous methods. In chapter 5, we localise a ground-view query image against a ground-view image database. We propose a representation learning method with higher location-discriminating power. The core idea is to learn discriminative image embeddings: similarities among intra-place images (viewing the same landmarks) are maximised while similarities among inter-place images (viewing different landmarks) are minimised. The method is easy to implement and pluggable into any CNN. Experiments show that our method outperforms all previous methods. In chapter 6, we localise a ground-view query image against a large-scale 3D point cloud with visual descriptors. To address the ambiguities in direct 2D-3D feature matching, we introduce a global matching method that harnesses global contextual information exhibited both within the query image and among all the 3D points in the map. The core idea is to find the optimal 2D-set-to-3D-set matching. Tests on standard benchmark datasets show the effectiveness of our method. In chapter 7, we localise a ground-view query image against a 3D point cloud with only coordinates, a problem also known as blind Perspective-n-Point. We propose a deep CNN model that simultaneously solves for both the 6-DoF absolute camera pose and the 2D-3D correspondences. The core idea is to extract point-wise 2D and 3D features from their coordinates and match them effectively in a global feature matching module. Extensive tests on both real and simulated data show that our method substantially outperforms existing approaches. Last, in chapter 8, we study the potential of using 3D lines. Specifically, we study the problem of aligning two partially overlapping 3D line reconstructions in Euclidean space. This technique can be used for localisation with respect to a 3D line database when query 3D line reconstructions are available (e.g., from stereo triangulation). We propose a neural network that takes Pluecker representations of lines as input, solves for line-to-line matches, and estimates a 6-DoF rigid transformation. Experiments on indoor and outdoor datasets show that our method's registration (rotation and translation) precision significantly outperforms the baselines.
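The intra-place/inter-place embedding objective of chapter 5 is in the spirit of a triplet margin loss, sketched below on toy 2-D embeddings. The margin, vectors, and distance metric are assumed for illustration and are not the thesis's actual formulation.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge on embedding distances: pull same-place pairs together and push
    different-place pairs at least `margin` further apart than same-place ones."""
    d_pos = np.linalg.norm(anchor - positive)   # same landmark, different view
    d_neg = np.linalg.norm(anchor - negative)   # different landmark
    return max(0.0, d_pos - d_neg + margin)

# unit-norm toy embeddings: two views of one landmark, one of another landmark
a = np.array([1.0, 0.0])
p = np.array([0.98, 0.199])
n = np.array([0.0, 1.0])
loss_good = triplet_loss(a, p, n)   # well-separated embedding space
loss_bad = triplet_loss(a, n, p)    # roles swapped: embedding is wrong
```

During training the loss is back-propagated through the CNN so that images of the same place cluster while images of different places separate; that is the location-discriminating property the abstract claims.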

    Unsupervised object candidate discovery for activity recognition

    The automatic interpretation of human movement from video is an important component of many computer vision applications, such as human-robot interaction, video surveillance, and content-based analysis of multimedia data. Unlike most approaches in this field, which mainly address the classification of simple actions such as standing up or walking, this work focuses on the recognition of human activities, i.e., complex action sequences that usually involve interactions between a person and objects. According to action identification theory, human activities derive their meaning not only from the motion patterns involved but above all from the general context in which they take place. This contextual information includes, among other things, the set of all previously performed actions, the location of the acting person, and the set of objects being manipulated. For example, based on motion patterns alone and without any object knowledge, it is impossible to decide whether a person raising a hand to the mouth is eating, drinking, smoking, or merely wiping their lips. Most work on computer-based action and activity recognition, however, ignores such context-dependent information and restricts itself to identifying human activities from the observed motion alone. Where object knowledge is incorporated into the classification, this is usually done with supervised detectors, which in turn require a considerable amount of training data to set up.
Because annotating this training data is so time-consuming, extending such systems, for example by adding new types of actions, becomes the actual bottleneck. A further drawback of relying on supervised object detectors is their error-proneness, even when state-of-the-art algorithms are used. Based on this observation, the goal of this work is to improve the performance of computer-based activity recognition by incorporating object knowledge that, in contrast to previous approaches, can be obtained without supervised training. Humans have the remarkable ability to selectively focus attention on particular regions of the visual field while blocking out irrelevant regions. This cognitive process allows us to unconsciously direct our limited conscious resources to content that the brain subsequently evaluates, for example to interpret visual patterns as objects of a certain type. The regions of the visual field that unconsciously attract our attention are called proto-objects. They are defined as indeterminate parts of the visual information spectrum that can later be perceived as actual objects once attention is directed to them. Put simply, proto-objects are candidates for objects, or parts of objects, that have been localized but not yet identified. Inspired by the human ability to reliably separate such visually salient regions from the background, many researchers have developed methods to localize proto-objects.
What all these algorithms have in common is that they presuppose as little statistical knowledge about actual objects as possible. Visual attention and object recognition are closely interlinked processes in the human visual system, which is why there is strong interest in computer vision in integrating both concepts to improve current image recognition systems. The methods developed in this work follow a similar direction: we demonstrate that localizing proto-objects yields object candidates that are suitable as an additional modality for motion-based recognition of human activities. The foundation of this work is a highly efficient algorithm that approximates visual saliency using quaternion-based DCT image signatures. To extract a set of suitable object candidates (i.e., proto-objects) from the resulting saliency maps, we developed a method that implements the cognitive mechanism of inhibition of return. We then use the object candidates obtained in this way, in combination with state-of-the-art bag-of-words methods for describing motion patterns, to classify complex activities of daily living. We evaluate the system on several widely used benchmark datasets and show experimentally that including proto-objects leads to a substantial performance gain for activity recognition compared to purely motion-based approaches. We also demonstrate that the presented system makes considerably fewer errors in recognizing human activities than a large number of state-of-the-art methods.
Surprisingly, our system even outperforms approaches that build on object knowledge obtained from supervised detectors or manually created annotations. Benchmark datasets are an important means of quantitatively comparing pattern recognition methods. After reviewing all publicly available, relevant benchmarks, however, we found that none was suitable for a detailed evaluation of methods for recognizing complex human activities. Part of this work therefore consisted of designing and recording such a dataset, the KIT Robo-kitchen benchmark. As the name suggests, we chose a kitchen scenario, since it captures a wide range of activities of daily living, many of which involve object manipulation. To obtain as broad a range of natural movements as possible, the participants were hardly constrained in how to perform the various activities during recording. We told the subjects only which activity to perform, where the required objects were located, and whether the task was to be carried out at the kitchen table or on the countertop. This clearly sets KIT Robo-kitchen apart from most existing datasets, which contain very unrealistically acted activities recorded under laboratory conditions. Since its publication, the resulting benchmark has been used repeatedly to evaluate algorithms aimed at recognizing long-duration, realistic, complex, and quasi-periodic human activities.
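The saliency and inhibition-of-return steps can be sketched in a simplified grayscale form. The thesis uses quaternion DCT signatures on colour video; this toy uses a plain 2-D DCT on a single grayscale image, and the image, suppression radius, and peak count are assumed.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def saliency_map(img):
    """Image-signature saliency: keep only the signs of the DCT coefficients,
    transform back, and square the reconstruction."""
    D = dct_matrix(img.shape[0])
    E = dct_matrix(img.shape[1])
    signature = np.sign(D @ img @ E.T)   # 2-D DCT, then sign
    recon = D.T @ signature @ E          # inverse 2-D DCT of the signature
    return recon ** 2

def proto_objects(sal, n=2, radius=4):
    """Inhibition of return: repeatedly take the saliency peak, then suppress
    its neighbourhood so the next peak comes from a different region."""
    sal = sal.copy()
    peaks = []
    for _ in range(n):
        y, x = np.unravel_index(sal.argmax(), sal.shape)
        peaks.append((y, x))
        sal[max(0, y - radius):y + radius + 1,
            max(0, x - radius):x + radius + 1] = 0.0
    return peaks

# a flat background with two small bright 'objects'
img = np.zeros((32, 32))
img[5:8, 5:8] = 1.0
img[20:23, 24:27] = 1.0
sal = saliency_map(img)
peaks = proto_objects(sal, n=2)
```

The returned peak locations are the proto-object candidates; in the thesis's pipeline they would be handed, as an extra modality, to the motion-based bag-of-words classifier.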

    Development of high-precision snow mapping tools for Arctic environments

    Snow is highly variable in time and space and thus many observation points are needed to describe the present state of the snowpack accurately. This description of the state of the snowpack is necessary to validate and improve snow modeling efforts and remote sensing applications. The traditional snowpit analysis delivers a highly detailed picture of the present state of the snow in a particular location but lacks the distribution in space and time as it is a time-consuming method.
On the opposite end of the spatial scale are orbital solutions covering the surface of the Earth at regular intervals, but at the cost of a much lower resolution. To improve the ability to collect spatial snow data efficiently during a field campaign, we developed a custom-made remotely piloted aircraft system (RPAS) that delivers snow depth maps over a few hundred square meters using Structure-from-Motion (SfM). The RPAS is capable of flying in extremely low temperatures where no commercial solutions are available. The system achieves a horizontal resolution of 6 cm with a snow depth RMSE of 39% without vegetation (48.5% with vegetation). As the SfM method does not distinguish between different snow layers, I developed an algorithm for a frequency-modulated continuous-wave (FMCW) radar that distinguishes between the two main snow layers regularly found in the Arctic: depth hoar and wind slab. The distinction is important, as the characteristics of these layers determine the amount of water stored in the snow that will be available to the ecosystem during the melt season. Depending on site conditions, the radar estimates the snow depth with an RMSE between 13% and 39%. Finally, I equipped the radar with a high-precision geolocation system; with this setup, the geolocation uncertainty of the radar is on average below 5 cm. From the radar measurement, the distance to the top and the bottom of the snowpack can be extracted. In addition to snow depth, the system also delivers data points from which to interpolate an elevation model of the underlying solid surface; I used the Triangular Irregular Network (TIN) method for all interpolation. The system can be mounted on an RPAS or a snowmobile and thus offers a lot of flexibility. These tools will assist snow modeling as they provide data from an area instead of a single point, and the data can be used to force or validate models. Improved models will help to predict the size, health, and movements of ungulate populations, whose survival depends on snow conditions (Langlois et al., 2017). Similar to the validation of snow models, the presented tools allow comparison and validation of other remote sensing data (e.g. satellite) and improve understanding of their limitations. Finally, the resulting maps can help ecologists better assess the state of an ecosystem, since they provide a more complete picture of the snow cover at a larger scale than could be achieved with traditional snowpits.
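The FMCW ranging that underlies such layer-depth estimates can be illustrated with the standard beat-frequency range equation. The bandwidth, sweep time, and beat frequencies below are illustrative values, not the thesis's actual radar parameters.

```python
# FMCW radar: a reflector at range R produces a beat frequency
#   f_b = 2 * R * B / (c * T),  so  R = f_b * c * T / (2 * B),
# where B is the sweep bandwidth and T the sweep duration.
c = 3.0e8          # speed of light, m/s
B = 1.0e9          # sweep bandwidth, Hz (assumed)
T = 1.0e-3         # sweep duration, s (assumed)

def beat_to_range(f_beat):
    """Convert a measured beat frequency (Hz) to a one-way range (m)."""
    return f_beat * c * T / (2.0 * B)

# beat frequencies of the snow-surface return and the ground return (assumed)
r_surface = beat_to_range(6.0e3)     # range to the snow surface
r_ground = beat_to_range(8.0e3)      # range to the underlying ground
snow_depth = r_ground - r_surface    # depth of the snowpack between them
```

Distinguishing the wind-slab/depth-hoar interface works the same way: each layer boundary produces its own return, and the range differences give the layer thicknesses (after correcting for the propagation speed in snow, which this sketch omits).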
    • 

    corecore