
    Combining visibility analysis and deep learning for refinement of semantic 3D building models by conflict classification

    Semantic 3D building models are widely available and used in numerous applications. Such 3D building models display rich semantics but no façade openings, chiefly owing to their aerial acquisition techniques. Hence, refining the models’ façades using dense, street-level, terrestrial point clouds is a promising strategy. In this paper, we propose a method that combines visibility analysis and neural networks to enrich 3D models with window and door features. In the method, occupancy voxels are fused with classified point clouds, which provides semantics to the voxels. The voxels are also used to identify conflicts between laser observations and the 3D models. The semantic voxels and conflicts are combined in a Bayesian network to classify and delineate façade openings, which are reconstructed using a 3D model library. Unaffected building semantics are preserved while updated semantics are added, thereby upgrading the building model to LoD3. Moreover, the Bayesian network results are back-projected onto the point clouds to improve the points’ classification accuracy. We tested our method on a municipal CityGML LoD2 repository and the open point cloud datasets TUM-MLS-2016 and TUM-FAÇADE. Validation results revealed that the method improves the accuracy of point cloud semantic segmentation and upgrades buildings with façade elements. The method can be applied to enhance the accuracy of urban simulations and to facilitate the development of semantic segmentation algorithms.
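    The conflict-identification step described above can be pictured with a minimal sketch (hypothetical, not taken from the paper): each voxel carries an occupancy state from ray casting through the laser observations, which is compared against the model surface; a model voxel that the laser rays traverse as empty is a conflict and thus a candidate façade opening.

```python
def classify_voxel(model_occupied: bool, laser_state: str) -> str:
    """Label one voxel by comparing model occupancy with laser evidence.

    laser_state is 'occupied', 'empty', or 'unobserved' (from ray casting).
    The state names and categories here are illustrative assumptions.
    """
    if laser_state == "unobserved":
        return "unknown"                 # no rays reached this voxel
    if model_occupied and laser_state == "empty":
        return "conflict"                # laser saw through the modelled wall
    if model_occupied and laser_state == "occupied":
        return "confirmed"               # wall confirmed by measurements
    return "clutter" if laser_state == "occupied" else "unknown"
```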

    EXTRACTION OF SOLAR CELLS FROM UAV-BASED THERMAL IMAGE SEQUENCES

    This paper discusses the automatic generation of thermal infrared ortho image mosaics and the extraction of solar cells from these mosaics. Image sequences are recorded by a thermal infrared (TIR) camera mounted on a remotely piloted aerial system (RPAS). The image block is relatively oriented using a bundle block adjustment and transferred to a local coordinate system using ground control points. The resulting ortho image mosaic is then searched for solar cells. A library of templates of solar cells from thermal images is used to learn an implicit shape model. Single solar cells are extracted by estimating corners and centre points of cells with these shape models in a Markov-Chain-Monte-Carlo algorithm that combines four corners and a centre point. Owing to the limited geometric resolution and radiometric contrast, most of the cells are not directly detected. An iterative process based on the knowledge of the regular grid structure of a solar cell installation is used to predict further cells and to verify their existence by repeating the corner extraction and grammar combination. Results show that this workflow is able to detect most of the solar cells, provided that the cells have a more or less common radiometric behaviour and no reflections, e.g. from the sun, occur. The cells need a rectangular shape and the same orientation so that the grammar model is applicable to them.
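    The iterative prediction of further cells from the regular grid structure might be sketched as follows (an illustrative simplification; the grid origin, spacing vectors and tolerance are assumed inputs, and the verification step via repeated corner extraction is omitted):

```python
def predict_missing_cells(origin, dx, dy, rows, cols, detected, tol=0.5):
    """Predict undetected cell centres on a regular grid.

    origin: (x, y) of the first cell centre; dx, dy: grid spacing vectors;
    detected: list of (x, y) centres already found by the shape model.
    Returns grid positions with no detected cell within the tolerance.
    """
    missing = []
    for i in range(rows):
        for j in range(cols):
            gx = origin[0] + i * dx[0] + j * dy[0]
            gy = origin[1] + i * dx[1] + j * dy[1]
            # keep the position only if no detection lies within tol
            if not any((gx - px) ** 2 + (gy - py) ** 2 <= tol ** 2
                       for px, py in detected):
                missing.append((gx, gy))
    return missing
```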

    REGISTRATION OF UAV DATA AND ALS DATA USING POINT TO DEM DISTANCES FOR BATHYMETRIC CHANGE DETECTION

    This paper presents a method to register point clouds derived from images of UAV-mounted cameras as well as airborne laser scanner data. The focus is a general technique that relies neither on linear or planar structures nor on the point cloud density. The proposed approach is therefore also suitable for rural areas and water bodies captured with different sensor configurations. It is based on a regular 2.5D grid generated from the segmented ground points of the 3D point cloud. It is assumed that initial values for the registration are already estimated, e.g. from exterior orientation parameters measured with the UAV-mounted GNSS and IMU. These initial parameters are fine-tuned by iteratively minimizing the distances between the 3D points of a target point cloud and the generated grid of the source point cloud. To eliminate outliers (e.g., vegetation points), a threshold on the distances is defined dynamically at each iteration step, which filters ground points during the registration. The achieved registration accuracy is up to 0.4 m in translation and up to 0.3 degrees in rotation, using a DEM raster size of 2 m. Considering the ground sampling distance of the airborne data, which is up to 0.4 m between the scan lines, this result is comparable to that achieved by an ICP algorithm; however, the proposed approach does not rely on point densities and is therefore able to solve registrations where ICP has difficulties.
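    A heavily simplified, one-degree-of-freedom sketch of the point-to-DEM minimization (hypothetical: the paper estimates full translation and rotation, whereas this estimates only a vertical shift, with the dynamic threshold taken as k standard deviations around the mean residual):

```python
import statistics

def refine_vertical_shift(points, dem, cell, iterations=10, k=2.0):
    """points: (x, y, z) tuples of the target cloud; dem: dict
    (col, row) -> ground height for a grid of the given cell size.
    Returns the vertical shift that minimizes point-to-grid distances."""
    tz = 0.0
    for _ in range(iterations):
        residuals = []
        for x, y, z in points:
            key = (int(x // cell), int(y // cell))
            if key in dem:
                residuals.append(z + tz - dem[key])
        if not residuals:
            break
        mean = statistics.fmean(residuals)
        spread = statistics.pstdev(residuals)
        # dynamic threshold: keep residuals within k standard deviations
        inliers = [r for r in residuals if abs(r - mean) <= k * spread] or residuals
        tz -= statistics.fmean(inliers)   # step towards zero mean distance
    return tz
```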

    Supervised detection of bomb craters in historical aerial images using convolutional neural networks

    The aftermath of the air strikes during World War II is still present today. Numerous bombs dropped by planes did not explode, still exist in the ground, and pose a considerable explosion hazard. Tracking down these duds can be tackled by detecting bomb craters, as the existence of a dud can be inferred from the existence of a crater. This work proposes a method for the automatic detection of bomb craters in aerial wartime images. First, crater candidates are extracted from an image using a blob detector. Based on given crater references, each candidate is then checked as to whether it in fact represents a crater. Candidates from various aerial images are used to train, validate and test Convolutional Neural Networks (CNNs) in the context of a two-class classification problem. A loss function (controlling what the CNNs are learning) is adapted to the given task. The trained CNNs are then used for the classification of crater candidates. Our work focuses on the classification of crater candidates, and we investigate whether combining data from related domains is beneficial for the classification. We achieve an F1-score of up to 65.4% when classifying crater candidates with a realistic class distribution. © Authors 2019. CC BY 4.0 License.
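    One common way to adapt a loss to a two-class problem with few positives is a class-weighted binary cross-entropy that up-weights the rare crater class; the sketch below is an illustrative assumption, not the paper's actual loss.

```python
import math

def weighted_bce(y_true, p_pred, w_pos=5.0, w_neg=1.0):
    """Class-weighted binary cross-entropy over a batch.

    y_true: 0/1 labels (1 = crater); p_pred: predicted crater probabilities.
    w_pos up-weights the rare positive class; the weights are illustrative.
    """
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, 1e-7), 1.0 - 1e-7)   # clamp for numerical stability
        total += -(w_pos * y * math.log(p) + w_neg * (1 - y) * math.log(1 - p))
    return total / len(y_true)
```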

    FUSION OF 3D POINT CLOUDS WITH TIR IMAGES FOR INDOOR SCENE RECONSTRUCTION

    Obtaining accurate 3D descriptions in the thermal infrared (TIR) is a challenging task due to the low geometric resolution of TIR cameras and the low number of strong features in TIR images. Combining the radiometric information of the thermal infrared with 3D data from another sensor can overcome most of the limitations in 3D geometric accuracy. In the case of dynamic scenes with moving objects or a moving sensor system, a combination with RGB cameras and profile laser scanners is suitable. As a laser scanner is an active sensor in the visible red or near infrared (NIR) and the thermal infrared camera captures the radiation emitted by the objects in the observed scene, the combination of these two sensors for close-range applications is independent of external illumination or textures in the scene. This contribution focusses on the fusion of point clouds from terrestrial laser scanners and RGB cameras with thermal infrared images, with all sensors mounted together on a robot for indoor 3D reconstruction. The system is geometrically calibrated, including the lever arm between the different sensors. As the fields of view of the sensors differ, the sensors do not record the same scene points at exactly the same time. Thus, the 3D scene points of the laser scanner and the photogrammetric point cloud from the RGB camera have to be synchronized before fusing the point clouds and adding the thermal channel to the 3D points.
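    Adding the thermal channel to the synchronized 3D points amounts to projecting each point into the TIR image and sampling the temperature there; a minimal pinhole sketch (hypothetical intrinsics, points assumed already transformed into the TIR camera frame via the calibrated lever arm):

```python
def project_point(point, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame point (x, y, z), z > 0, to pixels."""
    x, y, z = point
    return fx * x / z + cx, fy * y / z + cy

def add_thermal_channel(points_cam, fx, fy, cx, cy, tir, width, height):
    """tir: dict (u, v) -> temperature at integer pixel positions.
    Returns (x, y, z, T) tuples; T is None for points outside the image."""
    result = []
    for p in points_cam:
        u, v = project_point(p, fx, fy, cx, cy)
        ui, vi = int(round(u)), int(round(v))
        temp = tir.get((ui, vi)) if 0 <= ui < width and 0 <= vi < height else None
        result.append((*p, temp))
    return result
```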

    BUILDING FACADE OBJECT DETECTION FROM TERRESTRIAL THERMAL INFRARED IMAGE SEQUENCES COMBINING DIFFERENT VIEWS

    This paper discusses the automatic texturing of building facades from thermal infrared image sequences. A fully automatic method is presented that refines GPS-based positions by estimating the relative orientations of the image sequences, including a given building model, in a bundle adjustment process. The resulting refined orientation parameters are used to extract partial facade textures from all images and all sequences. The partial textures of every sequence are combined to obtain complete facade textures in the thermal infrared domain. Textures from different image sequences are combined for object detection and extraction. These sequences are acquired either at different times, to capture the different radiometric thermal behaviour of facade objects, or with different viewing directions, for objects located in front of or behind the facade plane.

    AUTOMATIC MAPPING OF GLACIER BASED ON SAR IMAGERY BY BENEFITS OF FREELY OPTICAL AND THERMAL DATA

    For many research applications, such as water resources evaluation, the determination of glacier-specific changes, and the calculation of the past and future contribution of glaciers to sea-level change, parameters describing the size and spatial distribution of glaciers are crucial. In this paper, an automatic method for determining glacier surface area from single-track high-resolution TerraSAR-X imagery, supported by freely available low-resolution optical and thermal data, is presented. Based on the normalized difference snow index (NDSI) and the land surface temperature (LST) map generated from the optical and thermal data, combined with surface slope data, a low-resolution binary mask is derived and used for the supervised classification of glaciers in the SAR imagery. A set of suitable features is then derived from the SAR intensity image, such as texture information generated from the gray level co-occurrence matrix (GLCM) and the intensity values. With these features, the glacier surface is discriminated from the background by the Random Forests (RF) method.
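    The low-resolution prior mask might be derived per pixel roughly as follows (a sketch with illustrative thresholds; the band names, units — LST in °C, slope in degrees — and cut-off values are assumptions, not taken from the paper):

```python
def glacier_prior_mask(green, swir, lst, slope,
                       ndsi_min=0.4, lst_max=2.0, slope_max=30.0):
    """Per-pixel binary glacier prior from NDSI, LST and slope.

    green, swir: reflectance bands; lst: land surface temperature (degC);
    slope: terrain slope (deg). All inputs are flat, equal-length sequences.
    """
    mask = []
    for g, s, t, sl in zip(green, swir, lst, slope):
        # NDSI = (green - SWIR) / (green + SWIR), guarded against zero sums
        ndsi = (g - s) / (g + s) if (g + s) != 0 else 0.0
        mask.append(ndsi >= ndsi_min and t <= lst_max and sl <= slope_max)
    return mask
```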

    TOWARDS THE INFLUENCE OF A CAR WINDSHIELD ON DEPTH CALCULATION WITH A STEREO CAMERA SYSTEM

    Stereo camera systems in cars are often used to estimate the distance of other road users from the car. This information is important for improving road safety. Such camera systems are typically mounted behind the windshield of the car. In this contribution, the influence of the windshield on the estimated distance values is analyzed. An offline stereo camera calibration is performed with a moving planar calibration target. The relative orientation of the cameras is estimated in a standard bundle adjustment procedure. The calibration is performed for the identical stereo camera system with and without a windshield in between. The base lengths derived from the relative orientation in both cases are compared, and distance values are calculated and analyzed. It can be shown that the difference between the base length values in the two cases is highly significant, with resulting effects on the distance calculation of up to half a meter.
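    The sensitivity of the estimated distance to a base length change follows directly from the standard stereo relation Z = f·B/d (f in pixels, B the base length, d the disparity); a small sketch with hypothetical numbers:

```python
def stereo_depth(f_px, base_m, disparity_px):
    """Depth from the standard stereo relation Z = f * B / d."""
    return f_px * base_m / disparity_px

def depth_error_from_base(f_px, base_m, d_base_m, disparity_px):
    """Depth change caused by a base length change (e.g. windshield-induced)."""
    return (stereo_depth(f_px, base_m + d_base_m, disparity_px)
            - stereo_depth(f_px, base_m, disparity_px))
```

With the assumed values f = 1000 px, B = 0.30 m and a disparity of 10 px (Z = 30 m), a base length change of only 5 mm already shifts the estimated distance by 0.5 m, which illustrates how a millimetre-scale baseline difference can produce distance effects of the half-metre order reported above.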

    QUALITY ASSESSMENT OF MAPPING BUILDING TEXTURES FROM INFRARED IMAGE SEQUENCES

    Generation and texturing of building models is a fast-developing field of research. Several techniques have been developed to extract building geometry and textures from multiple images and image sequences. In this paper, these techniques are discussed and extended to automatically add new textures from infrared (IR) image sequences to existing building models. In contrast to existing work, geometry and textures are not generated together from the same dataset; instead, the textures are extracted from the image sequence and matched to an existing geo-referenced 3D building model. The texture generation is divided into two main parts. The first part deals with the estimation and refinement of the exterior camera orientation. Feature points are extracted in the images and used as tie points in the sequence. A recorded exterior orientation of the camera is added to these homologous points, and a bundle adjustment is performed, starting on image pairs and then combining the whole sequence. A given 3D model of the observed building is additionally introduced to provide further constraints, acting as ground control points in the bundle adjustment. The second part includes the extraction of textures from the images and the combination of textures from different images of the sequence. Using the reconstructed exterior camera orientation for every image of the sequence, the visible facades are projected into the image and the texture is extracted. These textures normally contain only parts of the facade. The partial textures extracted from all images are combined into one facade texture. This texture is stored with a 3D reference to the corresponding facade, which allows searching for features in textures and localising those features in 3D space. It will be shown that the proposed strategy allows texture extraction and mapping even for large building complexes with restricted viewing possibilities and for images with low optical resolution.
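    Combining the partial textures into one facade texture can be sketched as a per-texel merge (a hypothetical representation: each partial texture as a sparse dict of texel coordinates, keeping the first observation per texel; the paper's actual blending strategy may differ):

```python
def merge_partial_textures(partials):
    """Merge per-image partial facade textures into one complete texture.

    Each partial is a dict (u, v) -> texel value covering only the facade
    part visible in that image; the first observation per texel is kept.
    """
    merged = {}
    for tex in partials:
        for uv, val in tex.items():
            merged.setdefault(uv, val)   # keep the first observation per texel
    return merged
```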