
    Shallow Water Bathymetry Mapping from UAV Imagery based on Machine Learning

    The determination of accurate bathymetric information is a key element for near-offshore activities and hydrological studies such as coastal engineering applications, sedimentary processes, hydrographic surveying, as well as archaeological mapping and biological research. UAV imagery processed with Structure from Motion (SfM) and Multi-View Stereo (MVS) techniques can provide a low-cost alternative to established shallow-seabed mapping techniques while also offering important visual information. Nevertheless, water refraction poses significant challenges for depth determination. Until now, this problem has been addressed through customized image-based refraction correction algorithms or by modifying the collinearity equation. In this paper, in order to overcome water refraction errors, we employ machine learning tools that are able to learn the systematic underestimation of the estimated depths. In the proposed approach, based on known depth observations from bathymetric LiDAR surveys, an SVR model was developed that estimates more accurately the real depths of point clouds derived from SfM-MVS procedures. Experimental results over two test sites, along with the performed quantitative validation, indicate the high potential of the developed approach. Comment: 8 pages, 9 figures
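    The correction step described above can be sketched with a support vector regression on synthetic data: an SVR learns the mapping from refraction-biased SfM-MVS depths to reference (LiDAR-like) depths. The refraction factor, noise level, and hyperparameters below are illustrative assumptions, not the authors' configuration.

```python
# Sketch: learn the systematic underestimation of image-derived depths
# caused by water refraction, using reference depths as training targets.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# True seabed depths (metres) and their refraction-biased SfM estimates:
# apparent depth ~ true depth / n_water (n ~ 1.34), plus measurement noise.
true_depth = rng.uniform(0.5, 10.0, 500)
apparent_depth = true_depth / 1.34 + rng.normal(0, 0.05, 500)

# Fit an RBF-kernel SVR mapping apparent depth -> reference depth.
model = SVR(kernel="rbf", C=10.0, epsilon=0.05)
model.fit(apparent_depth.reshape(-1, 1), true_depth)

corrected = model.predict(apparent_depth.reshape(-1, 1))
rmse_raw = np.sqrt(np.mean((apparent_depth - true_depth) ** 2))
rmse_corr = np.sqrt(np.mean((corrected - true_depth) ** 2))
print(rmse_raw, rmse_corr)  # the learned correction should reduce the RMSE
```

    In practice the training targets would come from co-located bathymetric LiDAR soundings rather than a known refraction factor.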

    Ultra-high resolution sampling with UAVs for optimising fractional woody cover characterisations in dryland savannahs

    Dryland savannahs are crucial for understanding carbon cycling and storage and for their provision of ecosystem services. Globally, the accurate mapping of the woody savannah component and its characteristics is especially important as it provides input to carbon emissions models. Moreover, in the southern African region, the encroachment of unpalatable woody species over large expanses of palatable grasses has received much attention, as it directly affects the livelihoods of local populations. At these scales, Earth observation technologies are seen as the only viable means for mapping and monitoring the characteristics of woody vegetation. However, the commonly applied sampling and validation approach, which incorporates point woody samples identified over aerial photography or very high resolution data (e.g. via Google Earth), is problematic because the satellite data used for the mapping, with a pixel size of 10–30 m, rarely consist of pure woody vegetation. To bridge this spatial gap between what is identified in the point-based samples and what is included in the 10–30 m pixel, we employ a UAV-based 2D and 3D sampling strategy. We combine point samples collected from Google Earth over a 400 km2 area of the Northwest Province of South Africa with UAV-collected RGB and 3D mosaics in order to optimise the mapping of fractional woody cover. We test the approach using both Landsat-8 and Sentinel-2 data in order to assess its applicability at both the 10 m and 30 m scales. We also test the accuracy of two different machine learning classification approaches: random forests and support vector machines. Our 2D/3D UAV-based sampling approach yields higher fractional woody cover classification accuracies than simply incorporating the 'traditional' point samples from aerial photography or Google Earth. Our suggested methodology can provide much-needed assistance to fractional woody vegetation monitoring efforts in Southern African savannahs, where the process is partly related to bush encroachment and land degradation.
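    The two classifiers the abstract compares can be illustrated on synthetic per-pixel features. The band count, spectral rule, and class labels below are hypothetical stand-ins for Landsat-8 / Sentinel-2 reflectances, not the study's data.

```python
# Illustrative comparison of random forests vs. support vector machines
# for per-pixel woody-cover classification on synthetic reflectances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_bands = 6  # e.g. visible + NIR + SWIR reflectances
X = rng.random((600, n_bands))
# Hypothetical labelling rule: "woody" pixels have NIR (band 3) clearly
# above red (band 2) reflectance; 1 = woody, 0 = grass/bare.
y = (X[:, 3] - X[:, 2] > 0.1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("RF accuracy: ", rf.score(X_te, y_te))
print("SVM accuracy:", svm.score(X_te, y_te))
```

    In the study itself, the training labels would be the fractional woody cover classes derived from the UAV 2D/3D mosaics rather than a spectral rule.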

    Multitemporal Very High Resolution from Space: Outcome of the 2016 IEEE GRSS Data Fusion Contest

    In this paper, the scientific outcomes of the 2016 Data Fusion Contest organized by the Image Analysis and Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society are discussed. The 2016 Contest was an open-topic competition based on a multitemporal and multimodal dataset, which included a temporal pair of very high resolution panchromatic and multispectral Deimos-2 images and a video captured by the Iris camera on board the International Space Station. The problems addressed and the techniques proposed by the participants in the Contest spanned a rather broad range of topics, mixing ideas and methodologies from remote sensing, video processing, and computer vision. In particular, the winning team developed a deep learning method to jointly address spatial scene labeling and temporal activity modeling using the available image and video data. The second-place team proposed a random field model to simultaneously perform coregistration of multitemporal data, semantic segmentation, and change detection. The methodological key ideas of both these approaches and the main results of the corresponding experimental validation are discussed in this paper.

    Recognition-Driven Two-Dimensional Competing Priors Toward Automatic and Accurate Building Detection


    Higher order polynomials, free form deformations and optical flow estimation


    PRECISION VITICULTURE FROM MULTITEMPORAL, MULTISPECTRAL VERY HIGH RESOLUTION SATELLITE DATA

    In order to exploit very high resolution multispectral satellite data efficiently for precision agriculture applications, validated methodologies should be established that link the observed reflectance spectra with certain crop/plant/fruit biophysical and biochemical quality parameters. To this end, based on concurrent satellite and field campaigns during the veraison period, satellite and in-situ data were collected, along with several grape samples taken at specific locations during the harvesting period. These data were collected over a period of three years in two viticultural areas in Northern Greece. After the required data pre-processing, canopy reflectance observations, through the combination of several vegetation indices, were correlated with the quantitative results of the grape/must analysis of the grape samples. The results appear quite promising, indicating that certain key quality parameters (such as Brix levels, total phenolic content, Brix to total acidity, and anthocyanin levels), which describe the oenological potential, phenolic composition, and chromatic characteristics, can be efficiently estimated from the satellite data.
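    The correlation step described above can be sketched as relating a vegetation index computed from multispectral reflectance (NDVI here) to a measured quality parameter (Brix, as one example). The reflectance values, the linear relationship, and the noise level below are synthetic illustrations, not the study's data or results.

```python
# Sketch: correlate a satellite-derived vegetation index with an
# in-situ grape quality measurement via a least-squares fit.
import numpy as np

rng = np.random.default_rng(2)
red = rng.uniform(0.03, 0.10, 120)   # red-band reflectance per vine block
nir = rng.uniform(0.30, 0.60, 120)   # near-infrared reflectance
ndvi = (nir - red) / (nir + red)     # normalized difference vegetation index

# Hypothetical field measurement: Brix loosely tied to canopy vigour.
brix = 18.0 + 8.0 * ndvi + rng.normal(0, 0.3, 120)

# Least-squares fit and Pearson correlation between index and quality.
slope, intercept = np.polyfit(ndvi, brix, 1)
r = np.corrcoef(ndvi, brix)[0, 1]
print(f"brix ~ {slope:.2f} * NDVI + {intercept:.2f}, r = {r:.2f}")
```

    In the study, several vegetation indices were combined rather than NDVI alone, and the quality parameters came from laboratory grape/must analysis.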