
    Image Interpolation on the CPU and GPU using Line Run Sequences

    This paper describes an efficient implementation of an image interpolation algorithm based on inverse distance weighting (IDW). The time-consuming search for support pixels bordering the voids to be filled is facilitated through gapless sweeps in different directions over the image. The scanlines needed for the sweeps are constructed from a path prototype per orientation whose regular substructures are reused and shifted to produce aligned duplicates covering the entire input bitmap. The line set is followed concurrently to detect existing samples around nodata patches and to compute the distance to the pixels to be newly set. Since the algorithm relies on integer line rasterization only and needs no auxiliary data structures beyond the output image and a weight aggregation bitmap for intensity normalization, it runs on multi-core central and graphics processing units (CPUs and GPUs). Furthermore, occluded support pixels of non-convex void patches are ignored, and over- or undersampling of close-by and distant valid neighbors is compensated for. Runtime and accuracy relative to generated IDW ground truth are evaluated for the CPU and GPU implementations of the algorithm on single-channel and multispectral bitmaps of various filling degrees.
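
    For context, the plain IDW formula that the line run sequences accelerate can be sketched in a few lines of C++. This brute-force version with an explicit support list is only illustrative; the paper replaces the per-pixel neighbor search with the directional sweeps described above, and all names and the power parameter here are assumptions.

        // Brute-force inverse distance weighting for one void pixel; the paper's
        // line-run sweeps avoid this per-pixel search over all support samples.
        // Expects at least one support sample.
        #include <cmath>
        #include <vector>

        struct Sample { double x, y, value; };

        double idw(double x, double y, const std::vector<Sample>& support,
                   double power = 2.0) {
            double num = 0.0, den = 0.0;          // weighted intensities and weights
            for (const Sample& s : support) {
                double d = std::hypot(s.x - x, s.y - y);
                if (d == 0.0) return s.value;     // exact hit: keep the sample value
                double w = 1.0 / std::pow(d, power);
                num += w * s.value;
                den += w;                         // plays the role of the weight bitmap
            }
            return num / den;                     // normalized interpolated intensity
        }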

    Lock-free multithreaded semi-global matching with an arbitrary number of path directions

    This paper describes an efficient implementation of the semi-global matching (SGM) algorithm on multi-core processors that allows a nearly arbitrary number of path directions in the cost aggregation stage. The scanlines for each orientation are discretized iteratively once, and the regular substructures of the obtained template are reused and shifted to concurrently sum up the path costs in at most two sweeps per direction over the disparity space image. Since path overlaps never occur, no expensive thread synchronization is needed. To further reduce the runtime for high counts of path directions, pixel-wise disparity gating is applied, and both the cost function and the disparity loop of SGM are optimized using current single instruction multiple data (SIMD) intrinsics for two major CPU architectures. Performance evaluation of the proposed implementation on synthetic ground truth reveals a reduced height error when the number of aggregation directions is significantly increased or when the paths start with an angular offset. Overall runtime shows a speedup that is nearly linear in the number of available processors.
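
    For reference, the aggregation the sweeps carry out follows the standard SGM path cost recurrence from the literature (Hirschmüller), where C is the matching cost, r the path direction and P1, P2 the smoothness penalties; this is the textbook formulation, not code taken from the paper:

        L_r(\mathbf{p}, d) = C(\mathbf{p}, d)
            + \min\bigl( L_r(\mathbf{p}-\mathbf{r}, d),\;
                         L_r(\mathbf{p}-\mathbf{r}, d-1) + P_1,\;
                         L_r(\mathbf{p}-\mathbf{r}, d+1) + P_1,\;
                         \min_k L_r(\mathbf{p}-\mathbf{r}, k) + P_2 \bigr)
            - \min_k L_r(\mathbf{p}-\mathbf{r}, k)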

    High Dynamic Range Image Compression On Commodity Hardware For Real-Time Mapping Applications

    This paper describes a lossy compression scheme for high dynamic range graylevel and color imagery for data transmission purposes in real-time mapping scenarios. The five stages of the implemented non-standard transform coder are written in portable C++ code and do not require specialized hardware to run. The storage space occupied by the bitmaps is reduced via a color space change, a 2D integer discrete cosine transform (DCT) approximation, coefficient quantization, two-size run-length encoding and dictionary matching hinged on the LZ4 algorithm. Quantization matrices to eliminate insignificant DCT coefficients are derived from a representative image set through genetic optimization. The underlying fitness function incorporates the obtained output size, classic image quality metrics and the unique color count. Together with a zone-based adaptation mechanism, this allows target bitrates to be specified instead of percentage values or abstract quality factors, so the reduction rate can be directly matched to the available communication channel capacities. Results on a camera control unit of a fixed-wing unmanned aircraft system built around entry-level PC hardware revealed single-thread compression and decompression throughputs of several hundred mebibytes per second for full-swing 16 and 32 bit RGB imagery at medium compression ratios. A degradation in image quality compared to popular compression libraries could be identified, however at statistically and visually acceptable levels.
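
    As an illustration of the quantization stage, the C++ sketch below divides each DCT coefficient by the matching entry of a quantization matrix and rounds, then reverses the step lossily on decode. The 8x8 block size and the matrix contents are assumptions made for the example; the paper derives its actual matrices through genetic optimization.

        // Quantization and dequantization of one transform block; the rounding
        // is where information is discarded. Matrix entries must be nonzero.
        #include <cmath>
        #include <cstdint>

        constexpr int N = 8;  // illustrative block size

        void quantize(const double dct[N][N], const uint16_t quant[N][N],
                      int32_t out[N][N]) {
            for (int u = 0; u < N; ++u)
                for (int v = 0; v < N; ++v)
                    out[u][v] = static_cast<int32_t>(
                        std::lround(dct[u][v] / quant[u][v]));
        }

        void dequantize(const int32_t in[N][N], const uint16_t quant[N][N],
                        double dct[N][N]) {
            for (int u = 0; u < N; ++u)
                for (int v = 0; v < N; ++v)
                    dct[u][v] = static_cast<double>(in[u][v]) * quant[u][v];
        }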

    Using Passive Multi-Modal Sensor Data for Thermal Simulation of Urban Surfaces

    This paper showcases an integrated workflow hinged on passive airborne multi-modal sensor data for the simulation of the thermal behavior of built-up areas with a focus on urban heat islands. The geometry of the underlying parametrized model, or digital twin, is derived from high-resolution nadir and oblique RGB, near-infrared and thermal infrared imagery. The captured bitmaps are photogrammetrically processed into comprehensive surface models, terrain, dense 3D point clouds and true-ortho mosaics. Building geometries are reconstructed from the projected point sets with procedures comprising outlining, analysis of roof and façade details, triangulation, and texture mapping. For thermal simulation, the composition of the ground is determined using supervised machine learning based on a modified multi-modal DeepLab v3+ architecture. Vegetation is retrieved as individual trees and larger tree regions to be added to the meshed terrain. Building materials are assigned from the available visual, infrared and surface planarity information as well as publicly available references. With actual weather data, surface temperatures can be calculated for any period of time by evaluating conductive, convective, radiative and emissive energy fluxes for triangular layers congruent to the faces of the modeled scene. Results on a sample dataset of the Moabit district in Berlin, Germany, showed the ability of the simulator to output surface temperatures of relatively large datasets efficiently. Compared to the thermal infrared images, several insufficiencies in terms of data and model caused occasional deviations between measured and simulated temperatures. For some of these shortcomings, improvement suggestions are presented as future work.
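
    The per-face flux evaluation can be summarized by a generic surface energy balance of the kind the abstract names; this formulation is an assumption for illustration, since the abstract does not give the simulator's exact discretization. Here T_s is the surface temperature, δ the layer thickness, α the shortwave absorptance, G the incident irradiance, h the convective coefficient, ε the emissivity, σ the Stefan-Boltzmann constant and q_cond the conduction into the layer below:

        \rho\, c_p\, \delta\, \frac{\partial T_s}{\partial t}
            = \alpha G + h\,(T_{air} - T_s) - \varepsilon \sigma T_s^4 + q_{cond}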

    Building Tomograph – From Remote Sensing Data of Existing Buildings to Building Energy Simulation Input

    Existing buildings often have low energy efficiency standards. For the preparation of retrofits, reliable high-quality data about the status quo is required. However, state-of-the-art analysis methods mainly rely on on-site inspections by experts and hence tend to be cost-intensive. In addition, some of the necessary devices need to be installed inside the buildings. As a consequence, owners hesitate to obtain sufficient information about potential refurbishment measures for their houses and underestimate possible savings. Remote sensing measurement technologies have the potential to provide an easy-to-use and automatable way to energetically analyze existing buildings objectively. To prepare an energetic simulation of the status quo and of possible retrofit scenarios, remote sensing data from different sources have to be merged and combined with additional knowledge about the building. This contribution presents the current state of a project on the development of new data acquisition methods, and the optimization of conventional ones, for the energetic analysis of existing buildings based solely on contactless measurements, general information about the building, and data that residents can obtain with little effort. For the example of a single-family house in Morschenich, Germany, geometric, semantic, and physical information is derived from photogrammetry and quantitative infrared measurements. Both are performed with the help of unmanned aerial vehicles (UAVs) and are compared to conventional methods for energy efficiency analysis regarding the accuracy of, and the effort necessary to obtain, input data for building energy simulation. The concept of an object-oriented building model for measurement data processing is presented. Furthermore, an outlook is given on the parts of the project involving advanced remote sensing techniques such as ultrasound and microwave radar for the measurement of additional energetic building parameters.

    A Synthetic 3D Scene for the Validation of Photogrammetric Algorithms

    This paper describes the construction and composition of a synthetic test world for the validation of photogrammetric algorithms. Since its 3D objects are entirely generated by software, the geometric accuracy of the scene does not suffer from the measurement errors that existing real-world ground truth is inherently afflicted with. The resulting data set covers an area of 13188 by 6144 length units and exposes positional residuals as small as the machine epsilon of the double-precision floating point numbers used exclusively for the coordinates. It is colored with high-resolution textures to accommodate the simulation of virtual flight campaigns with large optical sensors and laser scanners in both aerial and close-range scenarios. To specifically support the derivation of image samples and point clouds, the synthetic scene is stored in the human-readable Alias/Wavefront OBJ and POV-Ray data formats. While conventional rasterization remains possible, using the open-source ray tracer as a render tool facilitates the creation of ideal pinhole bitmaps, consistent digital surface models (DSMs), true ortho-mosaics (TOMs) and orientation metadata without programming knowledge. To demonstrate the application of the constructed 3D scene, example validation recipes are discussed in detail for a state-of-the-art implementation of semi-global matching and a perspective-correct multi-source texture mapper. For the latter, beyond the visual assessment, a statistical evaluation of the achieved texture quality is given.

    Reconstructing Buildings with Discontinuities and Roof Overhangs from Oblique Aerial Imagery

    This paper proposes a two-stage method for the reconstruction of city buildings with discontinuities and roof overhangs from oriented nadir and oblique aerial images. To model the structures, the input data is transformed into a dense point cloud, segmented, and filtered with a modified marching cubes algorithm to reduce the positional noise. Assuming a monolithic building, the remaining vertices are initially projected onto a 2D grid and passed to RANSAC-based regression and topology analysis to geometrically determine finite wall, ground and roof planes. If this fails due to the presence of discontinuities, the regression is repeated on a 3D level by traversing voxels within the regularly subdivided bounding box of the building point set. For each cube, a planar piece of the current surface is approximated and expanded. The resulting segments are mutually intersected, yielding both topological and geometrical nodes and edges. These entities are eliminated if their distance-based affiliation to the defining point sets is violated, leaving a consistent building hull including its structural breaks. To add the roof overhangs, the computed polygonal meshes are projected onto the digital surface model derived from the point cloud. Their shapes are offset equally along the edge normals with subpixel accuracy by detecting the zero-crossings of the second-order directional derivative in the gradient direction of the height bitmap, and are translated back into world space to become a component of the building. Once the reconstructed objects are finished, the aerial images are further used to generate a compact texture atlas for visualization purposes. An optimized atlas bitmap is generated that allows perspective-correct multi-source texture mapping without prior rectification, involving a partially parallel placement algorithm. Moreover, the texture atlases undergo object-based image analysis (OBIA) to detect window areas, which are reintegrated into the building models. To evaluate the performance of the proposed method, a proof-of-concept test on sample structures obtained from real-world data of Heligoland, Germany, has been conducted. It revealed good reconstruction accuracy in comparison to the cadastral map, a speed-up in texture atlas optimization, and visually attractive render results.
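
    A minimal C++ sketch of RANSAC plane regression of the kind used to recover the wall, ground and roof planes is given below. The tolerance, iteration count and fixed seed are illustrative assumptions, and the paper's 2D grid projection and voxel traversal are omitted entirely.

        // Fit a plane (n, d) with n·p + d = 0 to the largest consensus set.
        // Expects at least three non-collinear points.
        #include <cmath>
        #include <cstddef>
        #include <random>
        #include <utility>
        #include <vector>

        struct Vec3 { double x, y, z; };

        static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        static Vec3 cross(Vec3 a, Vec3 b) {
            return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
        }

        std::pair<Vec3, double> ransacPlane(const std::vector<Vec3>& pts,
                                            double tol = 0.05, int iters = 500) {
            std::mt19937 rng(42);  // fixed seed for a reproducible sketch
            std::uniform_int_distribution<std::size_t> pick(0, pts.size() - 1);
            std::pair<Vec3, double> best{{0.0, 0.0, 1.0}, 0.0};
            std::size_t bestInliers = 0;
            for (int i = 0; i < iters; ++i) {
                Vec3 a = pts[pick(rng)], b = pts[pick(rng)], c = pts[pick(rng)];
                Vec3 n = cross(sub(b, a), sub(c, a));          // candidate normal
                double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
                if (len < 1e-12) continue;                     // degenerate sample
                n = {n.x / len, n.y / len, n.z / len};
                double d = -(n.x * a.x + n.y * a.y + n.z * a.z);
                std::size_t inliers = 0;
                for (const Vec3& p : pts)                      // count points near plane
                    if (std::fabs(n.x * p.x + n.y * p.y + n.z * p.z + d) < tol)
                        ++inliers;
                if (inliers > bestInliers) { bestInliers = inliers; best = {n, d}; }
            }
            return best;
        }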

    Registration of very high resolution SAR and optical images

    The combined use of high resolution SAR and optical images is of great interest, especially in complex scenes such as urban areas. This requires accurate registration of images with large radiometric and geometric differences. This paper shows the accuracy that can be achieved by taking advantage of the geolocation accuracy of current sensors instead of using more complex feature- or intensity-based methods. An issue that can arise in layover areas is identified and a solution is shown. The approach is illustrated using very high resolution SAR and optical images.

    Radar and Optical Image Fusion using Airborne Sensor Data from the Heligoland Island

    An accurate geometric alignment of remote sensing data is the basis for higher-level image processing techniques used to extract information. Fusing radar image data with other sensor data sources constitutes a special case, because the coordinate system is based on the measured range, which causes ambiguous regions due to layover effects. An accurate 3D representation of the scene is essential to find a fitting geometric transformation between the respective sensor image spaces. This paper applies a method that accurately maps detailed 3D information of the German island of Heligoland to the slant-range-based coordinate system of radar images acquired by DLR's airborne F-SAR sensor. The highly accurate 3D information, along with optical imagery, has been acquired by DLR's airborne optical sensor system MACS.
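
    For context, the mapping from a 3D scene point P to slant-range image geometry is typically governed by the standard range-Doppler equations, with S(t) the sensor trajectory; this is the generic zero-Doppler formulation from SAR geometry, not a detail taken from the paper. The azimuth time t_0 satisfies the zero-Doppler condition and the range r is the sensor-to-point distance:

        (P - S(t_0)) \cdot \dot{S}(t_0) = 0, \qquad r = \lVert P - S(t_0) \rVert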