228 research outputs found

    Real Time Airborne Monitoring for Disaster and Traffic Applications

    Remote sensing applications such as disaster or mass-event monitoring need the acquired data and extracted information within a very short time span. Airborne sensors can acquire the data quickly, and on-board processing combined with a data downlink is the fastest way to meet this requirement. For this purpose, a new low-cost airborne frame camera system, named the 3K-camera, has been developed at the German Aerospace Center (DLR). The pixel size ranges from 15 cm to 50 cm and the swath width from 2.5 km to 8 km. Within two minutes, an area of approximately 10 km x 8 km can be monitored. Image data are processed on board on five computers using data from a real-time GPS/IMU system, including direct georeferencing. The high-frequency image acquisition (3 images/second) enables the monitoring of moving objects such as vehicles and people, allowing detailed wide-area traffic monitoring.
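    A quick back-of-the-envelope sketch of the coverage rate implied by the figures quoted in the abstract (the numbers are taken from the abstract itself; the calculation is purely illustrative):

    ```python
    # Coverage-rate check for the 3K-camera figures quoted above.
    # Values come from the abstract; this is an illustrative calculation only.
    area_km2 = 10 * 8           # monitored area: approximately 10 km x 8 km
    time_min = 2                # acquisition time: two minutes
    rate = area_km2 / time_min  # coverage rate in km^2 per minute
    print(rate)                 # -> 40.0
    ```

    At roughly 40 km² per minute, even large disaster or event areas can be covered within a single short flight line, which is what makes the on-board processing and downlink the time-critical components.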

    Drogen, Krieg und Diplomatie. Der Opiumkrieg 1839–1842 und seine Auswirkungen auf die Beziehungen zwischen Großbritannien und China

    Drugs, War, and Diplomacy. The Opium War 1839–1842 and its Effects on Anglo-Chinese Relations. This paper examines the reasons for the outbreak of the so-called Opium War between Great Britain and China and its consequences. It will be shown that the conflict indeed brought about significant changes in China's view of, and relations with, Western powers. Considering short-term as well as long-term preconditions for the hostilities, Britain's growing annoyance with Chinese trade restrictions it considered outdated, and China's efforts to shut down the trafficking of opium, emerge as the war's major causes.

    EAGLE: Large-scale Vehicle Detection Dataset in Real-World Scenarios using Aerial Imagery

    Multi-class vehicle detection from airborne imagery with orientation estimation is an important task in the near and remote vision domains, with applications in traffic monitoring and disaster management. In the last decade, we have witnessed significant progress in object detection in ground imagery, but it is still in its infancy in airborne imagery, mostly due to the scarcity of diverse and large-scale datasets. Despite being useful tools for different applications, current airborne datasets only partially reflect the challenges of real-world scenarios. To address this issue, we introduce EAGLE (oriEnted vehicle detection using Aerial imaGery in real-worLd scEnarios), a large-scale dataset for multi-class vehicle detection with object orientation information in aerial imagery. It features high-resolution aerial images covering different real-world situations with a wide variety of camera sensors, resolutions, flight altitudes, weather conditions, illumination, haze, shadow, times, cities, countries, occlusions, and camera angles. The annotation was done by airborne imagery experts with small- and large-vehicle classes. EAGLE contains 215,986 instances annotated with oriented bounding boxes defined by four points and an orientation, making it by far the largest dataset to date for this task. It also supports research on haze and shadow removal as well as super-resolution and in-painting applications. We define three tasks: detection by (1) horizontal bounding boxes, (2) rotated bounding boxes, and (3) oriented bounding boxes. We carried out several experiments evaluating state-of-the-art object detection methods on our dataset to form a baseline. Experiments show that the EAGLE dataset accurately reflects real-world situations and correspondingly challenging applications. Comment: Accepted in ICPR 202
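    An oriented bounding box, as described above, can be parameterized either by its four corner points or by a centre, side lengths, and rotation angle. The following sketch (not the dataset's actual annotation code; the function name and parameterization are illustrative assumptions) converts the centre/size/angle form into the four-point form:

    ```python
    import math

    def obb_corners(cx, cy, w, h, theta):
        """Return the four corner points of an oriented bounding box.

        (cx, cy): box centre, (w, h): side lengths, theta: rotation in
        radians. Corners are listed counter-clockwise starting from the
        'top-right' corner of the unrotated box. Illustrative only; the
        EAGLE annotation format itself may differ.
        """
        c, s = math.cos(theta), math.sin(theta)
        half = [( w / 2,  h / 2), (-w / 2,  h / 2),
                (-w / 2, -h / 2), ( w / 2, -h / 2)]
        # rotate each half-extent offset, then translate to the centre
        return [(cx + x * c - y * s, cy + x * s + y * c) for x, y in half]

    # axis-aligned 4x2 box centred at the origin, no rotation
    print(obb_corners(0, 0, 4, 2, 0.0))
    # -> [(2.0, 1.0), (-2.0, 1.0), (-2.0, -1.0), (2.0, -1.0)]
    ```

    The horizontal-box task in the paper corresponds to theta fixed at zero, while the rotated and oriented tasks additionally estimate the angle (the latter disambiguating vehicle heading).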

    The TUM-DLR Multimodal Earth Observation Evaluation Benchmark

    We present a new dataset for the development, benchmarking, and evaluation of remote sensing and earth observation approaches, with a special focus on converging perspectives. In order to provide data with different modalities, we observed the same scene using satellites, airplanes, unmanned aerial vehicles (UAVs), and smartphones. The dataset is further complemented by ground-truth information and baseline results for different application scenarios. The provided data can be freely used by anybody interested in remote sensing and earth observation and will be continuously augmented and updated.

    Automatic Pole Detection in Aerial and Satellite Imagery for precise Image Registration with SAR Ground Control Points

    The worldwide absolute geographic positioning accuracy of optical satellite imagery is mostly within a few pixels of the image resolution; WorldView-3 images, for example, have a CE90 of about 4 m. The direct georeferencing of aerial imagery without ground control information lies in the same range of one to a few metres. These inaccuracies originate predominantly from uncertainties in the angular measurements of the sensor attitude: an angular error of only one arc-second for a satellite 750 km above ground results in an absolute error on the ground of 3.6 metres. Radar satellites like TerraSAR-X or TanDEM-X, on the other hand, do not measure angles but signal runtimes. So if we identify the same point in an optical image and in a radar image, we can compensate for the inaccurate angle measurements in the optical sensor model and georeference optical images worldwide to an absolute accuracy below one pixel. In this paper we present a method for identifying point objects which can be detected in both types of images: the footpoints of poles. If such a footpoint can be detected in both types of images simultaneously, the geoposition of the optical image can be corrected to the accuracy of the point measurement in the radar image. To achieve a high accuracy, a nearly perfect correction of all errors in the signal propagation times of the radar signals also has to be applied. We describe how the footpoints of poles are extracted in optical spaceborne or airborne imagery and how these footpoints are matched to the potential footpoints of poles detected in the radar imagery.
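    The arc-second example from the abstract can be verified with a few lines of arithmetic (a simple flat-geometry, nadir-looking approximation; the real geometry depends on the viewing angle):

    ```python
    import math

    # How far does a one-arc-second attitude error displace a ground point
    # seen from 750 km? Flat-ground, nadir-looking approximation.
    altitude_m = 750_000                          # satellite height above ground
    error_rad = math.radians(1 / 3600)            # one arc-second in radians
    ground_error_m = altitude_m * math.tan(error_rad)
    print(round(ground_error_m, 1))               # -> 3.6
    ```

    This matches the 3.6 m figure quoted above and illustrates why even excellent star-tracker attitudes leave meter-level geolocation errors that range-based radar measurements can help remove.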

    Road condition assessment from aerial imagery using deep learning

    Terrestrial sensors are commonly used to inspect and document the condition of roads at regular intervals and according to defined rules. In Germany, for example, extensive data and information is obtained, stored in the Federal Road Information System, and made available in particular for deriving necessary decisions. Transverse and longitudinal evenness, for instance, are recorded by vehicles using laser techniques. To detect damage to the road surface, images are captured and recorded using area or line scan cameras. All these methods provide very accurate information about the condition of the road, but are time-consuming and costly. Aerial imagery (e.g. multi- or hyperspectral, SAR) provides an additional possibility for acquiring the specific parameters describing the condition of roads, yet a direct transfer from objects extractable from aerial imagery to the required objects or parameters that determine the condition of the road is difficult and in some cases impossible. In this work, we investigate the transferability of objects commonly used for the terrestrial-based assessment of road surfaces to an aerial image-based assessment. In addition, we generated a suitable dataset and developed a deep learning based image segmentation method capable of extracting two relevant road condition parameters from high-resolution multispectral aerial imagery, namely cracks and working seams. The obtained results show that our models are able to extract these thin features from aerial images, indicating the possibility of using more automated approaches for road surface condition assessment in the future.

    Segment-and-count: Vehicle Counting in Aerial Imagery using Atrous Convolutional Neural Networks

    High-resolution aerial imagery can provide detailed and in some cases even real-time information about traffic-related objects. Vehicle localization and counting using aerial imagery play an important role in a broad range of applications. Recently, convolutional neural networks (CNNs) with atrous convolution layers have shown better performance for semantic segmentation compared to conventional convolutional approaches. In this work, we propose a joint vehicle segmentation and counting method based on atrous convolutional layers. This method uses a multi-task loss function to simultaneously reduce pixel-wise segmentation and vehicle counting errors. In addition, the rectangular shapes of vehicle segmentations are refined using morphological operations. In order to evaluate the proposed methodology, we apply it to the public "DLR 3K" benchmark dataset, which contains aerial images with a ground sampling distance of 13 cm. Results show that our proposed method reaches 81.58% mean intersection over union in vehicle segmentation and an accuracy of 91.12% in vehicle counting, outperforming the baselines.
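    A multi-task loss of the kind described combines a pixel-wise segmentation term with a counting term under a weighting factor. The following is a minimal sketch of that idea (not the authors' exact formulation; the binary cross-entropy/L1 combination and the weight `lam` are illustrative assumptions):

    ```python
    import numpy as np

    def multitask_loss(pred_mask, true_mask, pred_count, true_count, lam=0.1):
        """Toy joint segmentation/counting loss: pixel-wise binary
        cross-entropy on the mask plus a weighted absolute counting error.
        A sketch of the multi-task idea, not the paper's exact loss."""
        eps = 1e-7
        p = np.clip(pred_mask, eps, 1 - eps)  # avoid log(0)
        bce = -np.mean(true_mask * np.log(p) + (1 - true_mask) * np.log(1 - p))
        count_err = abs(pred_count - true_count)
        return bce + lam * count_err

    # perfect segmentation, but one vehicle miscounted
    mask = np.array([[1.0, 0.0], [0.0, 1.0]])
    loss = multitask_loss(mask, mask, pred_count=3, true_count=2)
    ```

    With a perfect mask the cross-entropy term is near zero, so the loss reduces to the weighted counting error; during training, both terms push the shared features toward segmentations whose connected components also yield correct counts.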

    Deep-Learning segmentation and 3D reconstruction of road markings using multi-view aerial imagery

    The 3D information of road infrastructure is gaining importance with the development of autonomous driving. In this context, the exact 2D position of road markings as well as their height information play an important role in, for example, the lane-accurate self-localization of autonomous vehicles. In this paper, the overall task is divided into an automatic segmentation followed by a refined 3D reconstruction. For the segmentation task, we apply a wavelet-enhanced fully convolutional network to multi-view high-resolution aerial imagery. Based on the resulting 2D segments in the original images, we propose a successive workflow for the 3D reconstruction of road markings based on a least-squares line fitting in multi-view imagery. The 3D reconstruction exploits the line character of road markings, optimizing the 3D line location by minimizing the distance from its back projection to the detected 2D line in all covering images. Results show an improved IoU of the automatic road marking segmentation by exploiting the multi-view character of the aerial images, and a more accurate 3D reconstruction of the road surface compared to the Semi-Global Matching (SGM) algorithm. Furthermore, the approach avoids the matching problem in non-textured image parts and is not limited to lines of finite length. The approach is presented and validated on several aerial image datasets covering different scenarios such as motorways and urban regions.
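    The per-image building block of such a workflow is a least-squares line fit to segmented marking pixels, together with a point-to-line distance that the 3D optimization minimizes across views. The sketch below shows these two ingredients in 2D (a simplification for illustration; the paper's method fits 3D lines and minimizes back-projection distances over multiple views):

    ```python
    import numpy as np

    def fit_line(points):
        """Total-least-squares line fit to 2D points. Returns a point on
        the line (the centroid) and a unit direction vector. Simplified
        stand-in for the per-image line-fitting step."""
        pts = np.asarray(points, dtype=float)
        centroid = pts.mean(axis=0)
        # principal direction of the centred points = best-fit line direction
        _, _, vt = np.linalg.svd(pts - centroid)
        return centroid, vt[0]

    def point_line_distance(p, origin, direction):
        """Perpendicular distance from point p to the fitted line."""
        d = np.asarray(p, dtype=float) - origin
        return np.linalg.norm(d - np.dot(d, direction) * direction)

    pts = [(0, 0), (1, 1), (2, 2), (3, 3)]          # pixels along a marking
    origin, direction = fit_line(pts)
    print(round(point_line_distance((1, 0), origin, direction), 3))  # -> 0.707
    ```

    In the multi-view setting, the quantity being minimized is the sum of such distances between each image's detected 2D line and the back projection of a candidate 3D line, which is what makes the reconstruction robust in texture-poor areas where dense matching like SGM struggles.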

    A 2D/3D multimodal data simulation approach with applications on urban semantic segmentation, building extraction and change detection

    Advances in remote sensing image processing techniques have further increased the demand for annotated datasets. However, preparing annotated multi-temporal 2D/3D multimodal data is especially challenging, both because of the increased costs of the annotation step and the lack of multimodal acquisitions available over the same area. We introduce the Simulated Multimodal Aerial Remote Sensing (SMARS) dataset, a synthetic dataset aimed at the tasks of urban semantic segmentation, change detection, and building extraction, along with a description of the generation pipeline and the parameters required to set up our rendering. Samples in the form of orthorectified photos, digital surface models, and ground truth for all tasks are provided. Unlike existing datasets, the orthorectified images and digital surface models are derived from synthetic images using photogrammetry, yielding more realistic simulations of the data. The increased size of SMARS, compared to available datasets of this kind, facilitates both traditional and deep learning algorithms. Reported experiments from state-of-the-art algorithms on SMARS scenes yield satisfactory results, in line with our expectations. Both the benefits of the SMARS dataset and the constraints imposed by its use are discussed. Specifically, building detection on the SMARS-real Potsdam cross-domain test demonstrates the quality and advantages of the proposed synthetic data generation workflow. SMARS is published as an ISPRS benchmark dataset and can be downloaded from https://www2.isprs.org/commissions/comm1/wg8/benchmark_smar

    Providentia - A Large-Scale Sensor System for the Assistance of Autonomous Vehicles and Its Evaluation

    The environmental perception of an autonomous vehicle is limited by its physical sensor ranges and algorithmic performance, as well as by occlusions that degrade its understanding of an ongoing traffic situation. This not only poses a significant threat to safety and limits driving speeds, but can also lead to inconvenient maneuvers. Intelligent Infrastructure Systems can help to alleviate these problems: such a system can fill in the gaps in a vehicle's perception and extend its field of view by providing additional detailed information about its surroundings in the form of a digital model of the current traffic situation, i.e. a digital twin. However, detailed descriptions of such systems and working prototypes demonstrating their feasibility are scarce. In this paper, we propose a hardware and software architecture that enables such a reliable Intelligent Infrastructure System to be built. We have implemented this system in the real world and demonstrate its ability to create an accurate digital twin of an extended highway stretch, thus enhancing an autonomous vehicle's perception beyond the limits of its on-board sensors. Furthermore, we evaluate the accuracy and reliability of the digital twin using aerial images and earth observation methods for generating ground truth data.