
    High-resolution optical and SAR image fusion for building database updating

    This paper addresses the issue of cartographic database (DB) creation or updating using high-resolution synthetic aperture radar and optical images. In cartographic applications, the objects of interest are mainly buildings and roads. This paper proposes a processing chain to create or update building DBs. The approach is composed of two steps. First, if a DB is available, the presence of each DB object is checked in the images. Then, we verify whether objects obtained from an image segmentation should be included in the DB. For both steps, relevant features are extracted from the images in the neighborhood of the considered object. The removal or inclusion of an object in the DB is based on a score obtained by fusing these features in the framework of Dempster–Shafer evidence theory.
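    As an illustration of the last step, the sketch below shows Dempster's rule of combination for two evidence sources over a simple building/not-building frame of discernment; the hypothesis names and the feature-derived mass values are made up for the example and are not taken from the paper.

        # Minimal sketch of Dempster's rule of combination for two evidence sources.
        # The frame of discernment and the mass values are illustrative only.
        from itertools import product

        def combine(m1, m2):
            """Combine two mass functions (dicts: frozenset hypothesis -> mass)."""
            combined, conflict = {}, 0.0
            for (a, ma), (b, mb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + ma * mb
                else:
                    conflict += ma * mb  # mass falling on the empty set
            # Normalise by the non-conflicting mass (Dempster's rule)
            return {h: m / (1.0 - conflict) for h, m in combined.items()}

        B, NB = frozenset({"building"}), frozenset({"not_building"})
        THETA = B | NB  # ignorance: either hypothesis

        # Hypothetical masses derived from two image features around one DB object
        m_optical = {B: 0.6, NB: 0.1, THETA: 0.3}
        m_sar = {B: 0.5, NB: 0.2, THETA: 0.3}
        print(combine(m_optical, m_sar))  # fused score used to accept or reject the object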

    Oil Spill Segmentation in Fused Synthetic Aperture Radar Images

    Synthetic Aperture Radar (SAR) satellite systems are very efficient for oil spill monitoring due to their capability to operate under all weather conditions. Systems such as Envisat and RADARSAT have been used independently in many studies to detect oil spills. This paper presents an automatic feature-based image registration and fusion algorithm for oil spill monitoring using SAR images. A range of metrics is used to evaluate the performance of the algorithm and to demonstrate the benefits of fusing SAR images of different modalities. The proposed framework has shown a 45% improvement in oil spill localization when compared with the individual images before fusion.
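    For readers unfamiliar with feature-based registration, the following sketch shows a generic OpenCV pipeline (ORB keypoints, Hamming matching, RANSAC homography, average fusion); it only illustrates the general approach, not the registration and fusion algorithm evaluated in the paper, and the file names are placeholders.

        # Generic feature-based registration of two SAR acquisitions with OpenCV,
        # followed by a simple average fusion. File paths are placeholders.
        import cv2
        import numpy as np

        img_a = cv2.imread("sar_envisat.png", cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread("sar_radarsat.png", cv2.IMREAD_GRAYSCALE)

        # Detect and describe salient features in both images
        orb = cv2.ORB_create(nfeatures=2000)
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)

        # Brute-force Hamming matching with cross-check to discard weak matches
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:500]
        src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

        # Robust homography estimation rejects residual mismatches, then warp and fuse
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        warped_b = cv2.warpPerspective(img_b, H, (img_a.shape[1], img_a.shape[0]))
        fused = cv2.addWeighted(img_a, 0.5, warped_b, 0.5, 0)  # pixel-level average fusion
        cv2.imwrite("fused.png", fused)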

    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, deep learning, as a major breakthrough in the field, has proven to be an extremely powerful tool in many areas. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources that make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. Comment: Accepted for publication in the IEEE Geoscience and Remote Sensing Magazine.

    Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    Long-range ground targets are difficult to detect in a noisy, cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate but also a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are strongly affected by weather conditions. This paper proposes a novel target detection method based on decision-level SAR and IR fusion using an AdaBoost-based machine learning scheme to achieve a high detection rate and a low false alarm rate. The proposed method consists of an individual detection, registration, and fusion architecture. It presents a single framework for SAR and IR target detection using modified Boolean map visual theory (modBMVT) and feature-selection-based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics; a method optimized for IR target detection tends to perform poorly on SAR target detection. This study examined the image characteristics and proposes a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets; the proposed modBMVT removes thermal and scatter noise with the MLAF and detects extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection-based sensor fusion using AdaBoost. The proposed method showed good SAR and IR target detection performance through feature-selection-based decision fusion on a synthetic database generated by OKTAL-SE.
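    The fusion stage can be illustrated, in very simplified form, with scikit-learn's AdaBoostClassifier applied to concatenated SAR and IR candidate features; the feature layout and the synthetic data below are assumptions for the sake of the example and do not reproduce the paper's OKTAL-SE setup.

        # Decision-level fusion sketch: AdaBoost over concatenated SAR and IR
        # candidate features. The feature layout and data are synthetic assumptions.
        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Hypothetical candidates: 4 SAR features + 4 IR features per detection
        n = 400
        X_target = rng.normal(loc=1.0, scale=1.0, size=(n, 8))
        X_clutter = rng.normal(loc=0.0, scale=1.0, size=(n, 8))
        X = np.vstack([X_target, X_clutter])
        y = np.array([1] * n + [0] * n)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        # Boosted decision stumps weight the most discriminative features,
        # which acts as an implicit feature selection across the two sensors
        clf = AdaBoostClassifier(n_estimators=100, random_state=0)
        clf.fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))
        print("per-feature importance:", clf.feature_importances_.round(3))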

    Enhanced phase congruency feature-based image registration for multimodal remote sensing imagery

    Multimodal image registration is an essential image processing task in remote sensing. Basically, multimodal image registration searches for the optimal alignment between images of the same scene captured by different sensors to provide better visualization and more informative images. Manual image registration is tedious and labor-intensive, hence developing automated image registration is crucial to provide a faster and more reliable solution. However, image registration faces many challenges from the nature of remote sensing images, the environment, and the technical shortcomings of current methods, which cause three issues: intensive processing requirements, local intensity variation, and rotational distortion. Since not all image details are significant, relying on salient features is more efficient in terms of processing power; thus, a feature-based registration method was adopted to avoid intensive processing. The proposed method resolves the rotational distortion issue using Oriented FAST and Rotated BRIEF (ORB) to produce rotation-invariant features. However, since ORB is not intensity invariant, it cannot support multimodal data on its own. To overcome the intensity variation issue, Phase Congruency (PC) was integrated with ORB to introduce ORB-PC feature extraction, generating features invariant to rotational distortion and local intensity variation. The solution is still incomplete, though, since the ORB-PC matching rate falls below expectation. An enhanced ORB-PC was therefore proposed to solve the matching issue by modifying the feature descriptor. While better feature matches were achieved, the high number of outliers from multimodal data makes common outlier removal methods unsuccessful. Therefore, Normalized Barycentric Coordinate System (NBCS) outlier removal was utilized to find precise matches even with a high number of outliers. Experiments were conducted to verify the registration qualitatively and quantitatively. The qualitative experiments show that the proposed method yields a broader and better feature distribution, while the quantitative evaluation indicates an 18% improvement in registration accuracy compared to related works.
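    The core ORB-PC idea, detecting and describing ORB features on a phase congruency map rather than on raw intensities, could be sketched as follows; the phase_congruency() helper is a hypothetical placeholder for any implementation (e.g. the phasepack package), and the descriptor modification and NBCS outlier removal steps are not shown.

        # ORB keypoints/descriptors computed on a phase congruency map instead of
        # raw intensities, so the descriptors reflect structure rather than
        # sensor-specific brightness. phase_congruency() is a hypothetical helper.
        import cv2
        import numpy as np

        def orb_pc_features(image, phase_congruency, nfeatures=3000):
            """`phase_congruency` must return a float map in [0, 1] shaped like `image`."""
            pc = phase_congruency(image)
            pc_u8 = cv2.normalize(pc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
            orb = cv2.ORB_create(nfeatures=nfeatures)
            return orb.detectAndCompute(pc_u8, None)

        # Usage (paths and the PC implementation are assumptions):
        # optical = cv2.imread("optical.png", cv2.IMREAD_GRAYSCALE)
        # sar = cv2.imread("sar.png", cv2.IMREAD_GRAYSCALE)
        # kp1, des1 = orb_pc_features(optical, my_phase_congruency)
        # kp2, des2 = orb_pc_features(sar, my_phase_congruency)
        # matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)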

    On the use of smartphones as novel photogrammetric water gauging instruments: Developing tools for crowdsourcing water levels

    The term global climate change has been omnipresent since the beginning of the last decade. Changes in the global climate are associated with an increase in heavy rainfall that can cause nearly unpredictable flash floods. Consequently, spatio-temporally high-resolution monitoring of rivers becomes increasingly important. Water gauging stations continuously and precisely measure water levels. However, they are rather expensive to purchase and maintain and are preferably installed at water bodies relevant for water management. Small-scale catchments often remain ungauged. In order to increase the data density of hydrometric monitoring networks and thus improve the prediction quality of flood events, new, flexible and cost-effective water level measurement technologies are required. They should be oriented towards the accuracy requirements of conventional measurement systems and facilitate the observation of water levels at virtually any time, even at the smallest rivers. A possible solution is the development of a photogrammetric smartphone application (app) for crowdsourcing water levels, which merely requires voluntary users to take pictures of a river section to determine the water level.

    Today's smartphones integrate high-resolution cameras, a variety of sensors, powerful processors, and mass storage. However, they are designed for the mass market and use low-cost hardware that cannot match the quality of geodetic measurement technology. In order to investigate the potential for mobile measurement applications, the smartphone was examined as a photogrammetric measurement instrument as part of the doctoral project. The studies deal with the geometric stability of smartphone cameras with respect to device-internal temperature changes and with the accuracy potential of rotation parameters measured with smartphone sensors. The results show a high, temperature-related variability of the interior orientation parameters, which is why the camera should be calibrated at the time of measurement. The sensor investigations reveal considerable inaccuracies when measuring rotation parameters, especially the compass angle (errors of up to 90° were observed). The same applies to position parameters measured by global navigation satellite system (GNSS) receivers built into smartphones. According to the literature, positional accuracies of about 5 m are possible under the best conditions; otherwise, errors of several tens of metres are to be expected. As a result, direct georeferencing of image measurements using current smartphone technology should be discouraged.

    In consideration of these results, the water gauging app Open Water Levels (OWL) was developed, whose methodological development and implementation constituted the core of the thesis project. OWL enables the flexible measurement of water levels via crowdsourcing without requiring additional equipment or being limited to specific river sections. Data acquisition and processing take place directly in the field, so that the water level information is immediately available. In practice, the user captures a short time-lapse sequence of a river bank with OWL, from which a spatio-temporal texture is calculated that enables the detection of the water line. In order to translate the image measurement into 3D object space, a synthetic, photo-realistic image of the scene is created from existing 3D data of the river section under investigation. Necessary approximations of the image orientation parameters are measured by smartphone sensors and GNSS. Matching the camera image against the synthetic image allows the interior and exterior orientation parameters to be determined by means of space resection, and finally the image-measured 2D water line to be transferred into 3D object space to derive the prevailing water level in the reference system of the 3D data. In comparison with conventionally measured water levels, OWL reveals an accuracy potential of 2 cm on average, provided that the synthetic image and the camera image exhibit consistent image content and that the water line can be reliably detected.

    The dissertation comprehensively discusses the related geometric and radiometric problems. Furthermore, possible solutions, based on continuing developments in smartphone technology and image processing as well as the increasing availability of 3D reference data, are presented in the synthesis of the work. The app Open Water Levels, which is currently available as a beta version and has been tested on selected devices, provides a basis which, with continuous further development, aims for a final release for crowdsourcing water levels towards the establishment of new and the expansion of existing monitoring networks.
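    The space resection step can be illustrated with OpenCV's solvePnP, which recovers the exterior orientation from 2D-3D correspondences; the point coordinates, intrinsics and distortion values below are placeholders and do not come from the OWL implementation.

        # Space resection with OpenCV: recover the camera pose from 2D-3D
        # correspondences. All numeric values below are placeholders.
        import cv2
        import numpy as np

        # Hypothetical image points matched against the synthetic image, and the
        # corresponding 3D points taken from the river-section reference data
        image_points = np.array([[512, 640], [1480, 655], [300, 980],
                                 [1720, 1010], [960, 1200], [640, 410]], dtype=np.float64)
        object_points = np.array([[10.2, 4.1, 98.3], [14.7, 4.0, 98.4], [9.5, 1.2, 97.9],
                                  [15.9, 1.0, 97.8], [12.4, 0.3, 97.6], [10.8, 5.5, 98.6]],
                                 dtype=np.float64)

        # Approximate interior orientation from a prior smartphone camera calibration
        fx = fy = 3000.0
        cx, cy = 2000.0, 1500.0
        K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float64)
        dist = np.zeros(5)  # assume lens distortion has already been corrected

        ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                                      flags=cv2.SOLVEPNP_ITERATIVE)
        if ok:
            R, _ = cv2.Rodrigues(rvec)
            camera_position = (-R.T @ tvec).ravel()  # projection centre in the 3D frame
            print("exterior orientation, camera position:", camera_position)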

    Geo-rectification and cloud-cover correction of multi-temporal Earth observation imagery

    Over the past decades, improvements in remote sensing technology have led to a mass proliferation of aerial imagery. This, in turn, has opened vast new possibilities in land cover classification, cartography, and related fields. As applications in these fields became increasingly complex, the amount of data required rose accordingly, and automated systems had to be developed to satisfy these new needs. Geometric distortions in raw imagery must be rectified, otherwise the high accuracy requirements of the newest applications cannot be met. This dissertation proposes an automated solution for the pre-processing stages of multi-spectral satellite imagery classification, focusing on geo-rectification based on the Fourier shift theorem and on multi-temporal cloud-cover correction. By automating the first stages of image processing, automatic classifiers can take advantage of a larger supply of image data, eventually allowing for the creation of semi-real-time mapping applications.
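    The Fourier shift theorem underlying the geo-rectification can be illustrated with a minimal phase correlation routine in NumPy, which estimates the integer translation between two overlapping acquisitions; this shows the principle only, not the dissertation's full pipeline.

        # Phase correlation via the Fourier shift theorem: a spatial translation
        # becomes a linear phase ramp in the frequency domain, so the normalised
        # cross-power spectrum peaks at the shift.
        import numpy as np

        def phase_correlation_shift(ref, moving):
            """Estimate the integer (row, col) shift that aligns `moving` to `ref`."""
            cross_power = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
            cross_power /= np.abs(cross_power) + 1e-12  # keep phase only
            correlation = np.fft.ifft2(cross_power).real
            peak = np.unravel_index(np.argmax(correlation), correlation.shape)
            # Wrap shifts larger than half the image size to negative values
            return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, ref.shape))

        # Synthetic check: displace an image by (12, -7) and recover the offset
        ref = np.random.default_rng(1).random((256, 256))
        moving = np.roll(ref, shift=(-12, 7), axis=(0, 1))
        print(phase_correlation_shift(ref, moving))  # expected: (12, -7)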
