202 research outputs found

    Towards better classification of land cover and land use based on convolutional neural networks

    Land use and land cover are two important variables in remote sensing. Commonly, land use information is stored in geospatial databases. In order to update such databases, we present a new approach to determine the land cover and to classify land use objects using convolutional neural networks (CNN). High-resolution aerial images and derived data such as digital surface models serve as input. An encoder-decoder based CNN is used for land cover classification. We found a composite including the infrared band and height data to outperform RGB images in land cover classification. We also propose a CNN-based methodology for the prediction of land use labels for objects from geospatial databases, where we use masks representing object shape, the RGB images and the pixel-wise class scores of land cover as input. For this task, we developed a two-branch network, where the first branch considers the whole area of an image while the second branch focuses on a smaller, relevant area. We evaluated our methods on two test sites and achieved an overall accuracy of up to 89.6% and 81.7% for land cover and land use, respectively. We also tested our methods for land cover classification using the Vaihingen dataset of the ISPRS 2D semantic labelling challenge and achieved an overall accuracy of 90.7%. © Authors 2019
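The two-branch idea can be illustrated with a minimal, pure-Python sketch: one branch summarises pixel-wise class scores over the whole image, the other only over the pixels inside the object mask, and the two summaries are concatenated before classification. All names and the averaging scheme are illustrative, not the paper's actual architecture.

```python
# Minimal sketch of a two-branch summary: a global branch over the whole
# image and a local branch restricted to the object mask. Illustrative only.

def masked_average(scores, mask):
    """Average per-pixel class-score vectors over the masked region."""
    selected = [s for s, m in zip(scores, mask) if m]
    n = len(selected)
    k = len(selected[0])
    return [sum(s[c] for s in selected) / n for c in range(k)]

def two_branch_features(scores, mask):
    """Concatenate global-branch and object-branch summaries."""
    global_feat = masked_average(scores, [1] * len(scores))  # whole image
    local_feat = masked_average(scores, mask)                # object only
    return global_feat + local_feat

# Toy example: 4 pixels, 2 land cover classes, object covers pixels 0 and 1.
scores = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
mask = [1, 1, 0, 0]
feat = two_branch_features(scores, mask)
```

The resulting four-element vector juxtaposes the scene-wide class distribution with the distribution inside the object, which is the kind of complementary evidence the two branches are meant to provide.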

    Investigations on feature similarity and the impact of training data for land cover classification

    Fully convolutional neural networks (FCN) are successfully used for pixel-wise land cover classification - the task of identifying the physical material of the Earth's surface for every pixel in an image. The acquisition of large training datasets is challenging, especially in remote sensing, but necessary for a FCN to perform well. One way to circumvent manual labelling is the use of existing databases, which usually contain a certain amount of label noise when combined with another data source. As a first part of this work, we investigate the impact of training data on a FCN. We experiment with different amounts of training data, varying w.r.t. the covered area, the available acquisition dates and the amount of label noise. We conclude that the more data is used for training, the better the generalization performance of the model becomes, and the FCN is able to mitigate the effect of label noise to a high degree. Another challenge is the imbalanced class distribution in most real-world datasets, which can cause the classifier to focus on the majority classes, leading to poor classification performance for minority classes. To tackle this problem, in this paper, we use the cosine similarity loss to force feature vectors of the same class to be close to each other in feature space. Our experiments show that the cosine loss helps to obtain more similar feature vectors, but the similarity of the cluster centers also increases.
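The cosine similarity loss described above can be sketched in a few lines: features of the same class are compared to their per-class centre, and the loss decreases as they align. This is a generic formulation, not necessarily the exact loss used in the paper; all names are illustrative.

```python
import math

# Sketch of a cosine similarity loss: 1 minus the mean cosine similarity
# between each feature vector and its class centre. Illustrative only.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def cosine_loss(features, labels):
    """Loss is 0 when every feature points exactly at its class centre."""
    groups = {}
    for f, y in zip(features, labels):
        groups.setdefault(y, []).append(f)
    centres = {y: [sum(c) / len(fs) for c in zip(*fs)]
               for y, fs in groups.items()}
    sims = [cosine_similarity(f, centres[y]) for f, y in zip(features, labels)]
    return 1.0 - sum(sims) / len(sims)

# Two perfectly tight clusters -> loss 0; spread within a class raises it.
feats = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
labels = [0, 0, 1, 1]
loss = cosine_loss(feats, labels)
```

Note that this formulation only pulls features towards their own centre; it does nothing to push the centres of different classes apart, which is consistent with the observation above that the similarity of the cluster centres also increases.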

    Mounting calibration of a multi-view camera system on a UAV platform

    Multi-view camera systems are used more and more frequently for applications in close-range photogrammetry, engineering geodesy and autonomous navigation, since they can cover a large portion of the environment and are considerably cheaper than alternative sensors such as laser scanners. In many cases, the cameras do not have overlapping fields of view. In this paper, we report on the development of such a system mounted on a rigid aluminium platform, and focus on its geometric system calibration. We present an approach for estimating the exterior orientation of such a multi-camera system based on bundle adjustment. We use a static environment with ground control points, which are related to the platform via a laser tracker. In the experimental part, the precision, and in part the accuracy, that can be achieved in different scenarios are investigated. While we show that the accuracy potential of the platform is very high, the mounting calibration parameters are not necessarily precise enough to be used as constant values after calibration. However, this disadvantage can be mitigated by using those parameters as observations and refining them on-the-job.
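The geometry behind mounting calibration can be sketched as a pose composition: the world pose of each camera is the platform pose combined with fixed mounting parameters (a lever arm and a boresight rotation). The sketch below is 2D for brevity and all names are illustrative; it is not the paper's estimation procedure, only the underlying transform.

```python
import math

# Sketch: compose a platform pose with a fixed camera mounting offset.
# 2D rotations keep the algebra short; the 3D case is analogous.

def rot2d(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transform(R, t, p):
    """Map point p through rotation R followed by translation t."""
    return [R[0][0] * p[0] + R[0][1] * p[1] + t[0],
            R[1][0] * p[0] + R[1][1] * p[1] + t[1]]

def camera_pose(Rp, tp, Rm, tm):
    """Camera world pose: R = Rp @ Rm, t = Rp @ tm + tp."""
    return matmul(Rp, Rm), transform(Rp, tp, tm)

# Platform at (1, 0) rotated 90 degrees; camera mounted 0.1 units "forward".
Rc, tc = camera_pose(rot2d(math.pi / 2), [1.0, 0.0], rot2d(0.0), [0.1, 0.0])
```

Treating `Rm` and `tm` as observations with finite weight in a bundle adjustment, rather than as fixed constants, is the on-the-job refinement the abstract refers to.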

    Accurate matching and reconstruction of line features from ultra high resolution stereo aerial images

    In this study, a new reconstruction approach is proposed for line segments that are nearly aligned (<= 10 degrees) with the epipolar line. The method exploits the redundancy inherent in line pair-relations to generate artificial 3D point entities and utilizes those entities during the estimation process to improve the height values of the reconstructed line segments. The best point entities for the reconstruction are selected based on a newly proposed weight function. To test the performance of the proposed approach, we selected three test patches over a built-up area of the city of Vaihingen, Germany. Based on the results, the proposed approach produced highly promising reconstruction results for line segments that are nearly aligned with the epipolar line.
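The "nearly aligned" criterion can be sketched as a simple angle test between a segment and the epipolar direction. The horizontal epipolar direction below is an assumption (as in rectified imagery), and the function names are illustrative, not the paper's.

```python
import math

# Sketch: flag line segments nearly aligned (<= 10 degrees) with the
# epipolar line, the case the proposed reconstruction targets.

def alignment_angle_deg(p1, p2, epipolar_dir=(1.0, 0.0)):
    """Acute angle in degrees between segment p1-p2 and the epipolar direction."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    ex, ey = epipolar_dir
    dot = abs(dx * ex + dy * ey)
    norm = math.hypot(dx, dy) * math.hypot(ex, ey)
    return math.degrees(math.acos(min(1.0, dot / norm)))

def nearly_aligned(p1, p2, threshold_deg=10.0):
    return alignment_angle_deg(p1, p2) <= threshold_deg

print(nearly_aligned((0, 0), (10, 1)))  # about 5.7 degrees -> True
print(nearly_aligned((0, 0), (1, 1)))   # 45 degrees -> False
```

Segments failing the test can be triangulated reliably by standard line intersection; those passing it are the degenerate case for which the artificial point entities are needed.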

    Building Detection by Dempster-Shafer Fusion of LIDAR Data and Multispectral Aerial Imagery

    A method for the classification of land cover in urban areas by the fusion of first and last pulse LIDAR data and multi-spectral images is presented. Apart from buildings, the classes "tree", "grassland", and "bare soil" are also distinguished by a classification method based on the theory of Dempster-Shafer for data fusion. Examples are given for a test site in Germany.
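The core of such a fusion, Dempster's rule of combination, can be sketched for two mass functions. The classes follow the abstract, but the mass values and the split of evidence between LIDAR and multispectral cues below are purely illustrative.

```python
# Sketch of Dempster's rule of combination for two basic belief
# assignments, represented as dicts mapping frozensets of classes to mass.

def dempster_combine(m1, m2):
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to the empty set
    scale = 1.0 - conflict           # renormalise away the conflict
    return {k: v / scale for k, v in combined.items()}

# Illustrative evidence: height says "elevated object" (building or tree),
# a vegetation index says "tree"; the rest is ignorance over all classes.
tree = frozenset(["tree"])
high = frozenset(["building", "tree"])
theta = frozenset(["building", "tree", "grassland", "bare soil"])
m_lidar = {high: 0.8, theta: 0.2}
m_spectral = {tree: 0.7, theta: 0.3}
m = dempster_combine(m_lidar, m_spectral)
```

Combining the two sources concentrates most of the mass on "tree": the height cue narrows the hypothesis to elevated objects and the spectral cue resolves the remaining ambiguity, which is exactly the complementarity the fusion exploits.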

    Marked point processes for the automatic detection of bomb craters in aerial wartime images

    Many countries were the target of air strikes during the Second World War. The aftermath of such attacks is still felt today, as numerous unexploded bombs or duds still exist in the ground. Typically, such areas are documented in so-called impact maps, which are based on detected bomb craters. This paper proposes a stochastic approach to automatically detect bomb craters in aerial wartime images that were taken during World War II. In this work, one aspect we investigate is the type of object model for the crater: we compare circles with ellipses. The respective models are embedded in the probabilistic framework of marked point processes. By means of stochastic sampling, the most likely configuration of objects within the scene is determined. Each configuration is evaluated using an energy function which describes the conformity with a predefined model. High gradient magnitudes along the border of the object are favoured and overlapping objects are penalized. In addition, a term that requires the grey values inside the object to be homogeneous is investigated. Reversible Jump Markov Chain Monte Carlo sampling in combination with simulated annealing provides the global optimum of the energy function. Afterwards, a probability map is generated from the automatic detections via kernel density estimation. By setting a threshold, areas around the detections are classified as contaminated or uncontaminated sites, respectively, which results in an impact map. Our results, based on 22 aerial wartime images, show the general potential of the method for the automated detection of bomb craters and the subsequent automatic generation of an impact map. © Authors 2019
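The final step of the pipeline, turning detections into an impact map, can be sketched with a Gaussian kernel density estimate followed by a threshold. The bandwidth, threshold and coordinates below are illustrative values, not those of the paper.

```python
import math

# Sketch: probability surface over crater detections via an isotropic
# Gaussian KDE, thresholded into contaminated / uncontaminated areas.

def kde(point, detections, bandwidth=10.0):
    """Kernel density at `point` given a list of (x, y) crater centres."""
    total = 0.0
    for d in detections:
        dist2 = (point[0] - d[0]) ** 2 + (point[1] - d[1]) ** 2
        total += math.exp(-dist2 / (2.0 * bandwidth ** 2))
    return total / (2.0 * math.pi * bandwidth ** 2 * len(detections))

def contaminated(point, detections, threshold=1e-4):
    """Classify a location by comparing its density to a fixed threshold."""
    return kde(point, detections) >= threshold

craters = [(10.0, 10.0), (12.0, 11.0), (90.0, 90.0)]
print(contaminated((11.0, 10.0), craters))  # near a cluster -> True
print(contaminated((50.0, 50.0), craters))  # far from all craters -> False
```

Evaluating the density on a regular grid instead of single points yields the probability map; the threshold then carves it into the binary impact map.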

    Investigations on skip-connections with an additional cosine similarity loss for land cover classification

    Pixel-based land cover classification of aerial images is a standard task in remote sensing, whose goal is to identify the physical material of the earth's surface. Recently, most of the well-performing methods rely on convolutional neural networks (CNN) with an encoder-decoder structure. In the encoder part, many successive convolution and pooling operations are applied to obtain features at a lower spatial resolution, and in the decoder part these features are up-sampled gradually and layer by layer, in order to make predictions in the original spatial resolution. However, the loss of spatial resolution caused by pooling affects the final classification performance negatively, which is compensated by skip-connections between corresponding features in the encoder and the decoder. The most popular ways to combine features are element-wise addition of feature maps and 1x1 convolution. In this work, we investigate skip-connections. We argue that not all skip-connections are equally important. Therefore, we conducted experiments designed to find out which skip-connections are important. Moreover, we propose a new cosine similarity loss function to utilize the relationship of the features of the pixels belonging to the same category inside one mini-batch, i.e. these features should be close in feature space. Our experiments show that the new cosine similarity loss does help the classification. We evaluated our methods using the Vaihingen and Potsdam datasets of the ISPRS 2D semantic labelling challenge and achieved an overall accuracy of 91.1% for both test sites. © Authors 2020. All rights reserved.
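The two feature-fusion variants named above can be contrasted at a single pixel: element-wise addition requires equal channel counts, while a 1x1 convolution is a learned linear map over the concatenated channels. The sketch is pure Python and illustrative; it is not the paper's implementation.

```python
# Sketch of the two skip-connection fusion variants: element-wise addition
# of encoder and decoder features, and a 1x1 convolution over their
# concatenation. Shown per pixel; a real layer applies this at every pixel.

def fuse_add(enc, dec):
    """Element-wise addition of two feature vectors at one pixel."""
    return [e + d for e, d in zip(enc, dec)]

def fuse_conv1x1(enc, dec, weights, bias):
    """1x1 convolution: a linear map over the concatenated channels."""
    x = enc + dec  # channel-wise concatenation at this pixel
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

enc, dec = [1.0, 2.0], [0.5, -1.0]
print(fuse_add(enc, dec))  # [1.5, 1.0]
# With identity-like weights the 1x1 convolution reproduces plain addition:
w = [[1, 0, 1, 0], [0, 1, 0, 1]]
print(fuse_conv1x1(enc, dec, w, [0.0, 0.0]))  # [1.5, 1.0]
```

The 1x1 variant strictly generalises addition (addition is one particular weight setting), at the cost of extra parameters; that trade-off is one reason the choice of fusion per skip-connection is worth investigating.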

    Contextual classification of point cloud data by exploiting individual 3D neighbourhoods

    The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components in the processing workflow have been investigated extensively, but separately, in recent years, the respective connection by sharing the results of crucial tasks across all components has not yet been addressed. This connection not only encapsulates the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (i) individually optimized 3D neighborhoods for (ii) the extraction of distinctive geometric features and (iii) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process and that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.
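Distinctive geometric features of the kind mentioned in (ii) are commonly derived from the eigenvalues of a neighbourhood's 3D covariance matrix; the sketch below shows the standard linearity/planarity/sphericity triple from the literature. The paper's exact feature set may differ, and the example eigenvalues are made up.

```python
# Sketch of common covariance-eigenvalue features for a 3D neighbourhood.
# Expects eigenvalues sorted l1 >= l2 >= l3 > 0.

def dimensionality_features(l1, l2, l3):
    linearity = (l1 - l2) / l1    # dominant single direction (edges, cables)
    planarity = (l2 - l3) / l1    # two strong directions (roofs, ground)
    sphericity = l3 / l1          # all directions similar (vegetation)
    return linearity, planarity, sphericity

# A flat, plane-like neighbourhood: two large eigenvalues, one tiny.
print(dimensionality_features(1.0, 0.9, 0.01))
# A linear, cable-like neighbourhood: one dominant eigenvalue.
print(dimensionality_features(1.0, 0.05, 0.02))
```

Because these ratios change with the neighbourhood radius or point count, an individually optimized neighbourhood per point, as proposed above, directly changes which of the three behaviours dominates and hence how distinctive the features are.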

    An iterative inference procedure applying conditional random fields for simultaneous classification of land cover and land use

    Land cover and land use exhibit strong contextual dependencies. We propose a novel approach for the simultaneous classification of land cover and land use, where semantic and spatial context is considered. The image sites for land cover and land use classification form a hierarchy consisting of two layers: a land cover layer and a land use layer. We apply Conditional Random Fields (CRF) at both layers. The layers differ with respect to the image entities corresponding to the nodes, the employed features and the classes to be distinguished. In the land cover layer, the nodes represent super-pixels; in the land use layer, the nodes correspond to objects from a geospatial database. Both CRFs model spatial dependencies between neighbouring image sites. The complex semantic relations between land cover and land use are integrated in the classification process by using contextual features. We propose a new iterative inference procedure for the simultaneous classification of land cover and land use, in which the two classification tasks mutually influence each other. This helps to improve the classification accuracy for certain classes. The main idea of this approach is that semantic context helps to refine the class predictions, which, in turn, leads to more expressive context information. Thus, potentially wrong decisions can be reversed at later stages. The approach is designed for input data based on aerial images. Experiments are carried out on a test site to evaluate the performance of the proposed method. We show the effectiveness of the iterative inference procedure and demonstrate that a smaller size of the super-pixels has a positive influence on the classification result.
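The iterative mutual refinement can be caricatured as a toy loop: a land use label is derived from the current land cover labels of an object's super-pixels, and weakly supported land cover labels are then revised towards the object's label. The classifiers below are crude stand-ins for the CRF inference, and all names, scores and the margin are illustrative.

```python
from collections import Counter

# Toy sketch of alternating land cover / land use refinement. Not the
# paper's CRF inference; a stand-in to show how the two layers interact.

def iterative_inference(cover_scores, objects, n_iter=3, margin=0.15):
    """cover_scores: per-superpixel {class: score};
    objects: lists of super-pixel indices belonging to one database object."""
    cover = [max(s, key=s.get) for s in cover_scores]
    use = []
    for _ in range(n_iter):
        # Land use step: the dominant land cover label inside each object.
        use = [Counter(cover[i] for i in obj).most_common(1)[0][0]
               for obj in objects]
        # Land cover step: revise only weakly supported super-pixels towards
        # the label of their object (a crude proxy for contextual features).
        for obj, u in zip(objects, use):
            for i in obj:
                ranked = sorted(cover_scores[i].values(), reverse=True)
                if ranked[0] - ranked[1] < margin:
                    cover[i] = u
    return cover, use

scores = [{"sealed": 0.9, "grass": 0.1},
          {"sealed": 0.8, "grass": 0.2},
          {"sealed": 0.45, "grass": 0.55}]  # ambiguous super-pixel
cover, use = iterative_inference(scores, objects=[[0, 1, 2]])
```

In this toy run the ambiguous super-pixel adopts the object's dominant label, illustrating how a potentially wrong decision is reversed in a later iteration once the higher-level context is available.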

    Supervised detection of bomb craters in historical aerial images using convolutional neural networks

    The aftermath of the air strikes during World War II is still present today. Numerous bombs dropped by planes did not explode, still exist in the ground and pose a considerable explosion hazard. Tracking down these duds can be tackled by detecting bomb craters. The existence of a dud can be inferred from the existence of a crater. This work proposes a method for the automatic detection of bomb craters in aerial wartime images. First of all, crater candidates are extracted from an image using a blob detector. Based on given crater references, for every candidate it is checked whether it in fact represents a crater or not. Candidates from various aerial images are used to train, validate and test Convolutional Neural Networks (CNNs) in the context of a two-class classification problem. A loss function (controlling what the CNNs are learning) is adapted to the given task. The trained CNNs are then used for the classification of crater candidates. Our work focuses on the classification of crater candidates and we investigate if combining data from related domains is beneficial for the classification. We achieve an F1-score of up to 65.4% when classifying crater candidates with a realistic class distribution. © Authors 2019. CC BY 4.0 License.
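The abstract states that the loss function is adapted to the task without giving details. A common adaptation for imbalanced two-class problems such as rare craters among many candidates is class weighting; the weighted binary cross-entropy below is an illustrative sketch of that idea, not necessarily the paper's exact loss, and the weights are made up.

```python
import math

# Sketch of a class-weighted binary cross-entropy: misses on the rare
# positive class (true craters) are penalised more than false alarms.

def weighted_bce(y_true, p_pred, w_pos=5.0, w_neg=1.0):
    """Mean weighted binary cross-entropy over a batch."""
    eps = 1e-12
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += -(w_pos * y * math.log(p)
                   + w_neg * (1 - y) * math.log(1 - p))
    return total / len(y_true)

# Missing a crater (y=1, low p) now costs more than a comparable false alarm:
print(weighted_bce([1], [0.1]) > weighted_bce([0], [0.9]))  # True
```

With a realistic, skewed class distribution, such weighting shifts the operating point of the classifier towards recall, which directly trades against the F1-score reported above.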