    Assessment of boreal forest height from WorldView-2 satellite stereo images

    WorldView-2 (WV2) satellite stereo images were used to derive a digital surface model, which, together with a high-resolution digital terrain model from airborne laser scanning (ALS), was used to estimate forest height. Lorey's mean height (H_L) could be estimated with a root mean square error of 1.5 m (8.3%) and 1.4 m (10.4%), using linear regression, at the two Swedish test sites Remningstorp (Lat. 58°30′N, Long. 13°40′E) and Krycklan (Lat. 64°16′N, Long. 19°46′E), which contain hemi-boreal and boreal forest, respectively. The corresponding correlation coefficients were r = 0.94 and r = 0.91. The analysis used 175 sample plots (10 m) in Remningstorp and 282 in Krycklan. It was furthermore found that WV2 data are sometimes unstable for canopy top height estimation (ALS height percentile 100, p100) and that the reconstructed heights generally lie below the actual top height. WV2 p60 correlated best with ALS p70 in Remningstorp, while WV2 p95 correlated best with ALS p70 in Krycklan; WV2 p95 also reached the highest correlation for all other estimated variables at both test sites. It was concluded that WV2 p95 heights overall correspond approximately to the ALS p70 forest height. The high correlation coefficients above 0.90 at both test sites, with their different forest conditions, indicate that stereo matching of WV2 satellite images is suitable for forest height mapping.
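
    As a minimal illustration of the plot-level evaluation described above, the following sketch fits a linear regression between a WV2-derived height percentile and the ALS reference height and reports RMSE, relative RMSE and the correlation coefficient. It assumes the percentiles have already been extracted per sample plot; the variable names and the use of NumPy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def evaluate_height_model(wv2_p95, als_p70):
    """Illustrative plot-level evaluation (not the paper's exact model):
    fit a linear regression between a WV2-derived height percentile and the
    ALS reference height, then report RMSE, relative RMSE and Pearson r."""
    wv2_p95 = np.asarray(wv2_p95, dtype=float)
    als_p70 = np.asarray(als_p70, dtype=float)

    # Ordinary least squares: als_p70 ~ a * wv2_p95 + b
    a, b = np.polyfit(wv2_p95, als_p70, deg=1)
    predicted = a * wv2_p95 + b

    residuals = als_p70 - predicted
    rmse = np.sqrt(np.mean(residuals ** 2))
    rel_rmse = 100.0 * rmse / np.mean(als_p70)   # RMSE in percent of mean height
    r = np.corrcoef(wv2_p95, als_p70)[0, 1]      # Pearson correlation coefficient

    return {"slope": a, "intercept": b, "rmse_m": rmse,
            "rmse_percent": rel_rmse, "r": r}
```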

    Efficient implementation of higher order image interpolation

    This work presents a new method for fast cubic and higher-order image interpolation. The evaluation of the piecewise n-th order polynomial kernels is accelerated by transforming the polynomials onto the interval [0, 1], which has the advantage that some terms of the polynomials disappear and that several coefficients can be precalculated, as is proven in the paper. The results are exactly the same as with standard n-th order interpolation, but the computational complexity is reduced: calculating the interpolation weights for cubic convolution needs only about 60% of the time of the classical method optimized by Horner's rule. This allows an efficient new implementation of image interpolation.
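
    The key idea above is that, once the piecewise kernels are expressed over the fractional offset t in [0, 1], their coefficients are constants that can be precalculated. The sketch below illustrates that principle for the widely used cubic convolution kernel with parameter a = -0.5; it is only an illustration, not the authors' derivation, and it does not reproduce the reported ~60% timing figure.

```python
import numpy as np

# Cubic convolution (Keys kernel, a = -0.5). For a sampling position between
# grid points, the fractional offset t lies in [0, 1] and the four neighbour
# weights are cubic polynomials in t. Their coefficients are constants and can
# be precalculated once, so each weight needs only a few multiply-adds
# (Horner scheme on the precomputed coefficients).
A = -0.5
# Rows: neighbours at offsets -1, 0, +1, +2; columns: t^3, t^2, t, 1.
COEFFS = np.array([
    [A,          -2.0 * A,        A,   0.0],  # w(-1) = a*t^3 - 2a*t^2 + a*t
    [A + 2.0,    -(A + 3.0),      0.0, 1.0],  # w( 0) = (a+2)*t^3 - (a+3)*t^2 + 1
    [-(A + 2.0), 2.0 * A + 3.0,  -A,   0.0],  # w(+1) = -(a+2)*t^3 + (2a+3)*t^2 - a*t
    [-A,         A,               0.0, 0.0],  # w(+2) = -a*t^3 + a*t^2
])

def cubic_weights(t):
    """Return the four interpolation weights for fractional offset t in [0, 1]."""
    w = np.empty(4)
    for i, (c3, c2, c1, c0) in enumerate(COEFFS):
        w[i] = ((c3 * t + c2) * t + c1) * t + c0   # Horner evaluation
    return w
```

    The weights sum to 1 for any t, so interpolating a 1D signal reduces to a dot product of cubic_weights(t) with the four neighbouring samples.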

    Mapping with Pléiades—End-to-End Workflow

    In this work, we introduce an end-to-end workflow for very high-resolution satellite-based mapping, building the basis for important 3D mapping products: (1) digital surface model, (2) digital terrain model, (3) normalized digital surface model and (4) ortho-rectified image mosaic. In particular, we describe all underlying principles of satellite-based 3D mapping and propose methods that extract these products from multi-view stereo satellite imagery. Our workflow is demonstrated for the Pléiades satellite constellation; however, the building blocks are more general and thus also applicable to different setups. Besides introducing the overall end-to-end workflow, we also address the individual building blocks: optimization of sensor models represented by rational polynomials, epipolar rectification, image matching, spatial point intersection, data fusion, digital terrain model derivation, ortho-rectification and ortho-mosaicking. For each of these steps, extensions to the state of the art are proposed and discussed in detail. In addition, a novel approach for terrain model generation is introduced. The second aim of the study is a detailed assessment of the resulting output products. To this end, a variety of data sets covering different acquisition scenarios were gathered, altogether comprising 24 Pléiades images. First, the accuracies of the 2D and 3D geo-location are analyzed. Second, surface and terrain models are evaluated, including a critical look at the underlying error metrics and a discussion of the differences between single-stereo, tri-stereo and multi-view data sets. Overall, 3D accuracies in the range of 0.2 to 0.3 m in planimetry and 0.2 to 0.4 m in height are achieved w.r.t. ground control points. Retrieved surface models show normalized median absolute deviations of around 0.9 m in comparison to reference LiDAR data. Multi-view stereo outperforms single stereo in terms of accuracy and completeness of the resulting surface models.
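
    One of the reported quality measures is the normalized median absolute deviation (NMAD) between the photogrammetric surface model and reference LiDAR heights. Below is a minimal sketch of this robust error statistic, assuming co-registered height grids are already available; the function name and NumPy usage are illustrative, not the paper's evaluation code.

```python
import numpy as np

def nmad(dsm_heights, lidar_heights):
    """Normalized median absolute deviation of the height differences.

    NMAD = 1.4826 * median(|d_i - median(d)|), a robust counterpart of the
    standard deviation that is less sensitive to matching blunders.
    """
    d = np.asarray(dsm_heights, dtype=float) - np.asarray(lidar_heights, dtype=float)
    d = d[np.isfinite(d)]                      # ignore no-data cells
    return 1.4826 * np.median(np.abs(d - np.median(d)))
```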

    Geometrical accuracy of Bayer pattern images

    Modern digital still cameras sample the color spectrum using a color filter array placed over the CCD array, such that each pixel samples only one color channel. The result is a mosaic of color samples from which the full color image is reconstructed using information from the neighboring pixels. This process is called demosaicking. While the standard literature evaluates the performance of these reconstruction algorithms by comparing a ground-truth image with the image reconstructed from the Bayer pattern in terms of gray-value differences, this work presents an evaluation concept to assess the geometrical accuracy of the resulting color images. Only if no geometrical distortions are introduced during the demosaicking process may such images be used for metric calculations, e.g. 3D reconstruction or arbitrary metric photogrammetric processing.
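
    For context, the sketch below shows the simplest demosaicking baseline, bilinear interpolation of an RGGB Bayer mosaic. It is a generic illustration of the kind of reconstruction whose geometrical fidelity the paper evaluates, not an algorithm from the paper; the RGGB layout and the SciPy usage are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic_rggb(mosaic):
    """Bilinear demosaicking of an RGGB Bayer mosaic (illustrative baseline).

    Each channel is interpolated independently from its sparse samples;
    more sophisticated methods additionally exploit inter-channel correlation.
    """
    h, w = mosaic.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    # Bilinear interpolation expressed as convolution of the sparse channels.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0

    out = np.zeros((h, w, 3), float)
    out[..., 0] = convolve(mosaic * r_mask, k_rb, mode='mirror')
    out[..., 1] = convolve(mosaic * g_mask, k_g,  mode='mirror')
    out[..., 2] = convolve(mosaic * b_mask, k_rb, mode='mirror')
    return out
```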

    Critical Aspects of Person Counting and Density Estimation

    Many scientific studies deal with person counting and density estimation from single images. Recently, convolutional neural networks (CNNs) have been applied to these tasks. Even though better results are often reported, it is frequently unclear where the improvements come from and whether the proposed approaches would generalize. Thus, the main goal of this paper is to identify the critical aspects of these tasks and to show how they limit state-of-the-art approaches. Based on these findings, we show how to mitigate these limitations. To this end, we implemented a CNN-based baseline approach, which we extended to deal with the identified problems: bias in the reference data sets, ambiguity in ground-truth generation, and a mismatch between the evaluation metrics and the training loss function. The experimental results show that our modifications significantly outperform the baseline in terms of the accuracy of person counts and density estimation. In this way, we gain a deeper understanding of CNN-based person density estimation beyond the network architecture. Furthermore, our insights can help advance the field of person density estimation in general by highlighting current limitations in the evaluation protocols.
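
    Two of the issues named above, ambiguity in ground-truth generation and the mismatch between training loss and evaluation metric, can be made concrete with a short sketch: a density map is commonly built by placing a unit-mass Gaussian on every head annotation, while evaluation is done on integrated counts (MAE/RMSE) rather than the pixel-wise density loss used for training. The kernel width, point convention and function names below are illustrative assumptions, not the paper's protocol.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map_from_points(points, shape, sigma=4.0):
    """Ground-truth density map from head annotations (one common convention).

    Each annotated person contributes a unit-mass Gaussian, so the map
    integrates to the person count. The choice of sigma (fixed here, adaptive
    in some data sets) is exactly the kind of ambiguity discussed above."""
    density = np.zeros(shape, dtype=float)
    for x, y in points:                                   # points as (col, row)
        r = min(max(int(round(y)), 0), shape[0] - 1)      # clamp to image bounds
        c = min(max(int(round(x)), 0), shape[1] - 1)
        density[r, c] += 1.0
    return gaussian_filter(density, sigma=sigma)

def count_metrics(pred_maps, gt_counts):
    """Evaluation on integrated counts (MAE / RMSE), in contrast to the
    pixel-wise MSE on density maps typically used as the training loss."""
    pred_counts = np.array([m.sum() for m in pred_maps])
    gt_counts = np.asarray(gt_counts, dtype=float)
    err = pred_counts - gt_counts
    return {"mae": np.abs(err).mean(), "rmse": np.sqrt((err ** 2).mean())}
```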
