
    Exploring the Potential of Conditional Adversarial Networks for Optical and SAR Image Matching

    Tasks such as the monitoring of natural disasters or the detection of change benefit greatly from complementary information about an area or a specific object of interest. The required information is provided by fusing highly accurate, co-registered and geo-referenced datasets. Aligned high-resolution optical and synthetic aperture radar (SAR) data additionally enable an improvement of the absolute geo-location accuracy of the optical images by extracting accurate and reliable ground control points (GCPs) from the SAR images. In this paper we investigate the applicability of a deep learning based matching concept for the generation of precise and accurate GCPs from SAR satellite images by matching optical and SAR images. To this end, conditional generative adversarial networks (cGANs) are trained to generate SAR-like image patches from optical images. For training and testing, optical and SAR image patches are extracted from TerraSAR-X and PRISM image pairs covering greater urban areas spread over Europe. The artificially generated patches are then used to improve the conditions for three known matching approaches based on normalized cross-correlation (NCC), SIFT and BRISK, which are normally not usable for the matching of optical and SAR images. The results validate that NCC-, SIFT- and BRISK-based matching benefits greatly, in terms of matching accuracy and precision, from the use of the artificial templates. The comparison with two state-of-the-art optical and SAR matching approaches shows the potential of the proposed method, but also reveals some challenges and the necessity for further developments.
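    The NCC stage of such a matching pipeline can be sketched in a few lines; in the paper's setting, the cGAN-generated SAR-like patch would take the role of the template searched for in the real SAR image. The following is a minimal numpy illustration, not the authors' implementation:

```python
import numpy as np

def ncc_match(search, template):
    """Locate `template` in `search` by maximizing normalized cross-correlation."""
    th, tw = template.shape
    sh, sw = search.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(sh - th + 1):
        for x in range(sw - tw + 1):
            w = search[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = t_norm * np.sqrt((wz ** 2).sum())
            if denom == 0:
                continue  # flat window: correlation undefined
            score = (t * wz).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

# toy example: cut the template directly out of a random "image"
rng = np.random.default_rng(0)
img = rng.random((40, 40))
tmpl = img[12:20, 25:33]
pos, score = ncc_match(img, tmpl)   # recovers offset (12, 25) with score ≈ 1.0
```

In practice the quadratic sliding-window loop would be replaced by an FFT-based correlation, but the similarity measure itself is exactly this normalized score.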

    SAR-to-Optical Image Translation Based on Conditional Generative Adversarial Networks - Optimization, Opportunities and Limits

    Due to its all-weather, day-and-night imaging capability, synthetic aperture radar (SAR) remote sensing plays an important role in Earth observation. The ability to interpret the data is limited, even for experts, as the human eye is not accustomed to the effects of distance-dependent imaging, signal intensities detected in the radar spectrum, and image characteristics related to speckle or post-processing steps. This paper is concerned with machine learning for SAR-to-optical image-to-image translation in order to support the interpretation and analysis of the original data. A conditional adversarial network is adopted and optimized in order to generate alternative SAR image representations, trained on pairs of SAR images (starting point) and optical images (reference). Following this strategy, the focus is set on the value of empirical knowledge for initialization, the impact of the results on follow-up applications, and the discussion of opportunities and drawbacks of this application of deep learning. Case study results are shown for high-resolution (SAR: TerraSAR-X, optical: ALOS PRISM) and low-resolution (Sentinel-1 and -2) data. The properties of the alternative image representation are evaluated based on feedback from experts in SAR remote sensing and on the impact on road extraction as an example of a follow-up application. The results provide the basis to explain fundamental limitations affecting the SAR-to-optical image translation idea, but also indicate benefits of alternative SAR image representations.
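    At the core of such conditional adversarial translators is a generator objective that combines an adversarial term (fool the discriminator) with a pixel-wise reconstruction term. A minimal numpy sketch of this combined loss follows; the λ = 100 weighting is taken from the original pix2pix formulation and is an assumption here, not necessarily the value used in the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def generator_loss(disc_logits_fake, fake_img, target_img, lam=100.0):
    """pix2pix-style generator objective: binary cross-entropy against the
    'real' label on the discriminator's output for the generated image,
    plus an L1 reconstruction term weighted by `lam`."""
    adv = -np.mean(np.log(sigmoid(disc_logits_fake) + 1e-12))  # BCE vs. label 1
    l1 = np.mean(np.abs(fake_img - target_img))
    return adv + lam * l1

rng = np.random.default_rng(1)
fake = rng.random((8, 8))
# generator fools the discriminator and reconstructs the target: tiny loss
loss_perfect = generator_loss(np.array([10.0]), fake, fake)
# discriminator rejects the image and the reconstruction is off: large loss
loss_bad = generator_loss(np.array([-10.0]), fake, fake + 0.5)
```

The L1 term is what keeps the translation tied to the input geometry; without it, the adversarial term alone only encourages "realistic looking" output.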

    Parking space inventory from above: Detection on aerial images and estimation for unobserved regions

    Parking is a vital component of today's transportation system, and descriptive data are therefore of great importance for urban planning and traffic management. However, data quality is often low: managed parking places may only be partially inventoried, and parking at the curbside or on private ground may be missing entirely. This paper presents a processing chain in which remote sensing data and statistical methods are combined to provide parking area estimates. First, parking spaces and other traffic areas are detected from aerial imagery using a convolutional neural network. Individual image segmentations are fused to increase completeness. Next, a Gamma hurdle model is estimated using the detected parking areas together with OpenStreetMap and land use data to predict the parking area adjacent to streets. We find a systematic relationship between the road length and type and the parking area obtained. We suggest that our results are informative to those needing information on parking in structurally similar regions.
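    The prediction step of a Gamma hurdle model combines a logistic part (is there any parking along this street at all?) with a Gamma part under a log link (how much, given there is some). The coefficients below are hypothetical and for illustration only, using road length as the single predictor:

```python
import numpy as np

def expected_parking_area(road_length, beta_zero, beta_pos):
    """Gamma hurdle prediction: a logistic part models whether any parking
    area exists along a street segment, a Gamma part with log link models
    the positive area; the unconditional expectation is their product."""
    x = np.array([1.0, road_length])                  # intercept + length [m]
    p_any = 1.0 / (1.0 + np.exp(-(x @ beta_zero)))    # P(area > 0)
    mu_pos = np.exp(x @ beta_pos)                     # E[area | area > 0]
    return p_any * mu_pos                             # E[area]

beta_zero = np.array([-1.0, 0.02])    # hypothetical logistic coefficients
beta_pos = np.array([1.5, 0.005])     # hypothetical Gamma (log-link) coefficients
area_short = expected_parking_area(50.0, beta_zero, beta_pos)
area_long = expected_parking_area(300.0, beta_zero, beta_pos)   # longer road, more area
```

Fitting the two parts (logistic regression on the zero/non-zero indicator, Gamma GLM on the positive observations) would be done with a statistics package; only the prediction arithmetic is shown here.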

    Automatic Object Segmentation To Support Crisis Management Of Large-scale Events

    The management of large-scale events with a widely distributed camping area is a special challenge for organisers and security forces and requires both comprehensive preparation and attentive monitoring to ensure the safety of the participants. Crucial to this is the availability of up-to-date situational information, e.g. from remote sensing data. In particular, information on the number and distribution of people is important in the event of a crisis in order to react quickly and to effectively manage the corresponding rescue and supply logistics. One way to estimate the number of persons, especially at night, is to classify the type and size of objects such as tents and vehicles on site and to distinguish between objects with and without a sleeping function. In order to make this information available in a timely manner, an automated situation assessment is required. In this work, we have prepared the first high-quality dataset addressing this challenge, containing aerial images of a large-scale festival acquired on different dates. We investigate the feasibility of the task using convolutional neural networks for instance-wise semantic segmentation, carry out several experiments using the Mask R-CNN algorithm and evaluate the results. The results are promising and indicate, as a proof of concept, the feasibility of function-based tent classification. The results and the accompanying discussion can pave the way for future developments and investigations.
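    Once objects are segmented and classified by function, a night-time occupancy estimate reduces to a weighted count over the detected classes. The class names and capacity values below are hypothetical assumptions for illustration, not figures from the paper:

```python
# Assumed persons per object class (hypothetical values, for illustration only).
SLEEPING_CAPACITY = {
    "small_tent": 2,
    "large_tent": 6,
    "caravan": 4,
    "car": 0,          # no sleeping function assumed
}

def estimate_occupancy(detections):
    """Sum assumed capacities over detected objects with a sleeping function."""
    return sum(SLEEPING_CAPACITY.get(cls, 0) for cls in detections)

# e.g. class labels taken from the instance segmentation output
detections = ["small_tent"] * 10 + ["large_tent"] * 3 + ["car"] * 5 + ["caravan"] * 2
occupancy = estimate_occupancy(detections)   # 10*2 + 3*6 + 5*0 + 2*4 = 46
```

The hard part, of course, is the instance segmentation and function classification feeding this count; the aggregation itself is trivial once those labels exist.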

    Generation of Reference Vehicle Trajectories in real-world Situations using Aerial Imagery from a Helicopter

    Highly accurate reference vehicle trajectories are required in the automotive domain, e.g. for testing mobile GNSS devices. Common methods used to determine reference trajectories are based on the same working principles as the device under test and suffer from the same underlying error problems. In this paper, a new method to generate reference vehicle trajectories in real-world situations using simultaneously acquired aerial imagery from a helicopter is presented. This method requires independent height information, which comes from a LIDAR DTM and the relative height of the GNSS device. The reference trajectory is then derived by forward intersection of the vehicle position in each image with the DTM. In this context, the influence of all relevant error sources was analysed: the error from the LIDAR DTM, from the sensor latency, from the semi-automatic matching of the vehicle marking, and from the image orientation. Results show that the presented method provides a tool for creating reference trajectories that is independent of the GNSS reception at the vehicle. Moreover, it can be demonstrated that the proposed method reaches an accuracy level of 10 cm, which is defined as necessary for certification and validation of automotive GNSS devices.
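    The forward intersection step can be sketched as intersecting the oriented image ray with a horizontal plane at the DTM height plus the device's relative height. Locally flat terrain and an already-oriented ray are assumed in this simplified numpy illustration; the paper's method iterates against the full DTM surface:

```python
import numpy as np

def forward_intersect(cam_center, ray_dir, ground_height):
    """Intersect a viewing ray from the oriented aerial image with a
    horizontal plane at `ground_height` (DTM height + device height).
    Assumes locally flat terrain and ray_dir[2] != 0."""
    t = (ground_height - cam_center[2]) / ray_dir[2]   # scale along the ray
    return cam_center + t * ray_dir

cam = np.array([0.0, 0.0, 500.0])     # helicopter camera centre [m]
ray = np.array([0.1, 0.05, -1.0])     # direction towards the vehicle marking
ray /= np.linalg.norm(ray)
ground_pos = forward_intersect(cam, ray, ground_height=100.0)
# ground_pos = (40.0, 20.0, 100.0): 400 m below the camera, offset by the ray slope
```

With a real DTM, one would iterate: intersect at an initial height, look up the DTM at the resulting (x, y), and repeat until the height converges.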

    Geo-localization Refinement of Optical Satellite Images by Embedding Synthetic Aperture Radar Data in Novel Deep Learning Frameworks

    Every year, the number of applications relying on information extracted from high-resolution satellite imagery increases. In particular, the combined use of different data sources is rising steadily, for example to create high-resolution maps, to detect changes over time or to conduct image classification. In order to correctly fuse information from multiple data sources, the utilized images have to be precisely geometrically registered and have to exhibit a high absolute geo-localization accuracy. Due to the image acquisition process, optical satellite images commonly have an absolute geo-localization accuracy in the order of meters or tens of meters only. On the other hand, images captured by the high-resolution synthetic aperture radar satellite TerraSAR-X can achieve an absolute geo-localization accuracy within a few decimeters and therefore represent a reliable source for absolute geo-localization accuracy improvement of optical data. The main objective of this thesis is to address the challenge of image matching between high resolution optical and synthetic aperture radar (SAR) satellite imagery in order to improve the absolute geo-localization accuracy of the optical images. The different imaging properties of optical and SAR data pose a substantial challenge for a precise and accurate image matching, in particular for the handcrafted feature extraction stage common for traditional optical and SAR image matching methods. Therefore, a concept is required which is carefully tailored to the characteristics of optical and SAR imagery and is able to learn the identification and extraction of relevant features. Inspired by recent breakthroughs in the training of neural networks through deep learning techniques and the subsequent developments for automatic feature extraction and matching methods of single sensor images, two novel optical and SAR image matching methods are developed. 
Both methods pursue the goal of generating accurate and precise tie points by matching optical and SAR image patches. The foundation of these frameworks is a semi-automatic matching area selection method that creates an optimal initialization for the matching approaches by limiting the geometric differences between optical and SAR image pairs. The idea of the first approach is to eliminate the radiometric differences between the images through an image-to-image translation with the help of generative adversarial networks and to realize the subsequent image matching through traditional algorithms. The second approach is an end-to-end method in which a Siamese neural network learns to automatically create tie points between image pairs through targeted training. The geo-localization accuracy improvement of optical images is ultimately achieved by adjusting the corresponding optical sensor model parameters through the generated set of tie points. The quality of the proposed methods is verified using an independent set of optical and SAR image pairs spread over Europe. Thereby, the focus is set on a quantitative and qualitative evaluation of the two tie point generation methods and their ability to generate reliable and accurate tie points. The results prove the potential of the developed concepts, but also reveal weaknesses such as the limited amount of training and test data, acquired with only one combination of optical and SAR sensor systems. Overall, the tie points generated by both deep learning-based concepts enable an absolute geo-localization improvement of optical images, outperforming state-of-the-art methods.
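Training a Siamese network for such patch matching typically relies on a loss that pulls embeddings of corresponding optical/SAR patches together and pushes non-corresponding ones apart. The contrastive loss below is a generic formulation given for illustration; the thesis's actual loss and architecture may differ:

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, is_match, margin=1.0):
    """Contrastive loss for one Siamese pair: matching pairs are penalized by
    their squared embedding distance, non-matching pairs only if they are
    closer than `margin`."""
    d = np.linalg.norm(emb_a - emb_b)
    if is_match:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

a = np.array([0.1, 0.2])            # embedding of an optical patch
b_close = np.array([0.1, 0.25])     # embedding of the corresponding SAR patch
b_far = np.array([2.0, 2.0])        # embedding of an unrelated SAR patch
loss_match = contrastive_loss(a, b_close, is_match=True)     # small: pair agrees
loss_nonmatch = contrastive_loss(a, b_far, is_match=False)   # zero: beyond margin
```

At inference time, tie points are then the patch pairs whose embedding distance falls below a decision threshold.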

    An Evolutionary Algorithm for fast intensity based image matching between optical and SAR imagery

    This paper presents a hybrid evolutionary algorithm for fast intensity-based matching between satellite imagery from SAR and very high-resolution (VHR) optical sensor systems. The precise and accurate co-registration of image time series and of images from different sensors is a key task in multi-sensor image processing scenarios. The necessary preprocessing step of image matching and tie-point detection is divided into a search problem and a similarity measurement. Within this paper we evaluate the use of an evolutionary search strategy for establishing the spatial correspondence between satellite imagery of optical and radar sensors. The aim of the proposed algorithm is to decrease the computational costs during the search process by formulating the search as an optimization problem. Based upon the canonical evolutionary algorithm, the proposed algorithm is adapted for intensity-based SAR/optical image matching. Extensions such as hybridization (e.g. local search) are applied to lower the number of objective function calls and to refine the result. The algorithm significantly decreases the computational costs whilst finding the optimal solution in a reliable way.
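    The search strategy can be illustrated with a minimal elitist evolutionary loop over candidate offsets. A simple quadratic stands in for the (negated) intensity-similarity surface, and the decaying mutation step as well as the omission of the paper's local-search hybridization are simplifications of this sketch:

```python
import random

def evolve(objective, bounds, pop_size=20, generations=60, sigma=2.0, seed=0):
    """Minimal elitist evolutionary search over 2-D offsets: keep the best
    half of the population, mutate it with Gaussian noise, and shrink the
    mutation step each generation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [(rng.uniform(lo, hi), rng.uniform(lo, hi)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        parents = pop[: pop_size // 2]           # elitist survivor selection
        children = [(p[0] + rng.gauss(0, sigma), p[1] + rng.gauss(0, sigma))
                    for p in parents]
        pop = parents + children
        sigma *= 0.95                            # anneal the mutation step
    return min(pop, key=objective)

# stand-in for a (negated) similarity surface with its optimum at offset (12, -7);
# in the matching setting, evaluating this function is the expensive step
def objective(offset):
    dx, dy = offset
    return (dx - 12.0) ** 2 + (dy + 7.0) ** 2

best = evolve(objective, bounds=(-50.0, 50.0))
```

The point of the evolutionary formulation is that only a few hundred objective evaluations are spent, instead of exhaustively scoring every offset in the search window.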

    Real-time Aerial Imagery for Crisis Management: Lessons Learned from a European Civil Protection Exercise

    Regular international civil protection exercises are an important part of the European Civil Protection Mechanism. One such exercise, called IRONORE2019, took place in September 2019 in Eisenerz, Austria, with the aim of training international cooperation of relief teams in case of an earthquake. In parallel to this exercise, the European project DRIVER+ conducted a Trial in order to test novel solutions for civil protection. The German Aerospace Center (DLR) provided aerial imagery as well as derived map products to the project and the exercise, which were also made available to the Bavarian Red Cross, among others, as exercise participants. In this way, products developed using the latest scientific methods could be used and tested in practice. The valuable experiences from this operational use, which are explained in this article, serve to enhance the processes and products and will be implemented in the future in order to further support disaster management.