
    Automated Co-Registration of Intra-Epoch and Inter-Epoch Series of Multispectral UAV Images for Crop Monitoring

    The application of UAV-based aerial imagery has grown rapidly over the past two decades, driven by the operational flexibility of UAVs, their ultra-high spatial resolution, their low cost, and continuing improvements in UAV-mounted sensors. Nonetheless, multitemporal series of multispectral UAV imagery still suffer from significant misregistration errors, which is a concern for applications such as precision agriculture. Direct image georeferencing and co-registration are commonly done using ground control points, which is usually costly and time-consuming. This research proposes a novel approach for automatic co-registration of multitemporal UAV imagery using intensity-based keypoints. The Speeded Up Robust Features (SURF), Binary Robust Invariant Scalable Keypoints (BRISK), Maximally Stable Extremal Regions (MSER) and KAZE algorithms were tested and their parameters optimized. The image matching performance of these algorithms informed the decision to pursue further experiments with only SURF and KAZE. Optimally parametrized SURF and KAZE obtained co-registration accuracies of 0.1 and 0.3 pixels for intra-epoch and inter-epoch images, respectively. For better intra-epoch co-registration accuracy, collective band processing is advised, whereas a one-to-one matching strategy is recommended for inter-epoch co-registration. The results were tested on a maize crop monitoring case, and the spectral response of vegetation was compared between two UAV sensors, the Parrot Sequoia and the Micro MCA. Because the Micro MCA lacks an irradiance sensor, spectral and radiometric calibration of its imagery proved key to achieving an optimal response. The two cameras also differ in their specifications and hence in the quality of their respective photogrammetric outputs.
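    The general pipeline described here (intensity-based keypoint detection, descriptor matching, and transform estimation) can be sketched with OpenCV's KAZE implementation; note that SURF is patent-encumbered and ships only in opencv-contrib builds. This is a minimal illustration, not the study's actual code: the file names, the 0.7 ratio-test threshold, and the 3-pixel RANSAC tolerance are assumptions for the example.

```python
import cv2
import numpy as np

# Hypothetical inputs: a reference band and a band to be co-registered.
ref = cv2.imread("epoch1_nir.tif", cv2.IMREAD_GRAYSCALE)
mov = cv2.imread("epoch2_nir.tif", cv2.IMREAD_GRAYSCALE)

# Detect KAZE keypoints and compute descriptors in both images.
kaze = cv2.KAZE_create()
kp_ref, des_ref = kaze.detectAndCompute(ref, None)
kp_mov, des_mov = kaze.detectAndCompute(mov, None)

# Match float descriptors with brute force and filter via Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des_mov, des_ref, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

# Estimate a homography with RANSAC and warp the moving image onto the reference grid.
src = np.float32([kp_mov[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
registered = cv2.warpPerspective(mov, H, (ref.shape[1], ref.shape[0]))
cv2.imwrite("epoch2_nir_registered.tif", registered)
```

    Residual misregistration can then be quantified by the reprojection error of the RANSAC inliers, which is the kind of pixel-level accuracy figure the abstract reports.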

    Exploring the potentials of UAV photogrammetric point clouds in facade detection and 3D reconstruction of buildings

    The use of Airborne Laser Scanner (ALS) point clouds has dominated research on 3D building reconstruction, leaving photogrammetric point clouds with comparatively little attention. Point cloud density, occlusion and vegetation cover are among the concerns that make it necessary to question the completeness and correctness of UAV photogrammetric point clouds for 3D building reconstruction. This research explores the potential of modelling 3D buildings from nadir and oblique UAV image data vis-à-vis airborne laser data. Optimal parameter settings for dense matching and reconstruction are analysed for both UAV image-based and ALS point clouds. The research employs an automatic, data-driven model approach to 3D building reconstruction: a proper segmentation into planar roof faces is crucial, followed by façade detection to capture the real extent of the buildings' roof overhangs. An analysis of point density and point noise in relation to the parameter settings indicates that, with a minimum of 50 points/m², most planar surfaces are reconstructed reliably, but for roof features smaller than dormers a point cloud denser than 80 points/m² is needed. 3D building models from UAV point clouds can be improved by refining roof boundaries with edge information from the images, and by merging the image-derived building outlines, the point cloud roof boundaries and the wall outlines to extract the real extent of the building.
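    The segmentation into planar roof faces that the abstract calls crucial can be sketched with Open3D's RANSAC plane fitting, applied iteratively so each extracted plane is removed before searching for the next. The input file name, the 0.05 m distance threshold, and the minimum-inlier count are illustrative assumptions, not parameters from the study.

```python
import open3d as o3d

# Hypothetical input: a building point cloud cropped from the UAV dense-matching output.
pcd = o3d.io.read_point_cloud("building_roof.ply")

planes = []
rest = pcd
min_points = 200  # illustrative minimum inliers per plane, not from the study
while len(rest.points) > min_points:
    # Fit one dominant plane with RANSAC (3 points per hypothesis, 1000 iterations).
    model, inliers = rest.segment_plane(distance_threshold=0.05,
                                        ransac_n=3,
                                        num_iterations=1000)
    if len(inliers) < min_points:
        break
    # Keep the planar face and continue on the remaining points.
    planes.append(rest.select_by_index(inliers))
    rest = rest.select_by_index(inliers, invert=True)

print(f"Segmented {len(planes)} planar roof faces")
```

    The distance threshold plays the role of the noise tolerance discussed in the abstract: a sparser or noisier photogrammetric cloud forces a larger threshold, which in turn merges or misses small roof features such as dormers.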