
    Bigearthnet: A Large-Scale Benchmark Archive for Remote Sensing Image Understanding

    This paper presents BigEarthNet, a new large-scale multi-label Sentinel-2 benchmark archive. BigEarthNet consists of 590,326 Sentinel-2 image patches, each of which is a section of i) 120×120 pixels for the 10 m bands; ii) 60×60 pixels for the 20 m bands; and iii) 20×20 pixels for the 60 m bands. Unlike most existing archives, each image patch is annotated with multiple land-cover classes (i.e., multi-labels) provided by the CORINE Land Cover database of the year 2018 (CLC 2018). BigEarthNet is significantly larger than the existing archives in remote sensing (RS) and is thus much more suitable as a training source in the context of deep learning. The paper first addresses the limitations of the existing archives and then describes the properties of BigEarthNet. Experimental results obtained in the framework of RS image scene classification problems show that a shallow Convolutional Neural Network (CNN) architecture trained on BigEarthNet provides much higher accuracy than a state-of-the-art CNN model pre-trained on ImageNet (a very popular large-scale benchmark archive in computer vision). BigEarthNet opens up promising directions to advance operational RS applications and research on massive Sentinel-2 image archives.
    Funding: EC/H2020/759764/EU/Accurate and Scalable Processing of Big Data in Earth Observation/BigEarth; BMBF, 01IS14013A, Verbundprojekt: BBDC - Berliner Kompetenzzentrum für Big Dat
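    The three patch sizes correspond to the three Sentinel-2 ground resolutions covering the same area. A minimal sketch (using NumPy and synthetic arrays, not the actual archive) of how the coarser bands might be upsampled onto the common 120×120 grid of the 10 m bands before feeding a CNN:

```python
import numpy as np

# Hypothetical band arrays mirroring the three BigEarthNet patch resolutions.
rng = np.random.default_rng(0)
band_10m = rng.random((120, 120))  # e.g. a 10 m band (120x120 pixels)
band_20m = rng.random((60, 60))    # e.g. a 20 m band (60x60 pixels)
band_60m = rng.random((20, 20))    # e.g. a 60 m band (20x20 pixels)

def upsample_to_10m(band: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbour upsampling onto the 120x120 10 m grid."""
    return np.repeat(np.repeat(band, factor, axis=0), factor, axis=1)

# Stack all bands on a common 120x120 grid, as a CNN input would expect.
patch = np.stack([
    band_10m,
    upsample_to_10m(band_20m, 2),   # 60 -> 120
    upsample_to_10m(band_60m, 6),   # 20 -> 120
])
print(patch.shape)  # (3, 120, 120)
```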

    Predicting Crop Yield With Machine Learning: An Extensive Analysis of Input Modalities and Models on a Field and Sub-Field Level

    We introduce a simple yet effective early fusion method for crop yield prediction that handles multiple input modalities with different temporal and spatial resolutions. We use high-resolution crop yield maps as ground-truth data to train crop- and machine-learning-model-agnostic methods at the sub-field level. We use Sentinel-2 satellite imagery as the primary input modality, with other complementary modalities including weather, soil, and DEM data. The proposed method uses input modalities available with global coverage, making the framework globally scalable. We explicitly highlight the importance of input modalities for crop yield prediction and emphasize that the best-performing combination of input modalities depends on the region, crop, and chosen model.
    Comment: 4 pages, 1 figure, 3 tables, IEEE IGARSS 202
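    Early fusion here means resampling every modality onto a common spatial grid and concatenating them channel-wise before any model sees the data. A minimal sketch under assumed, illustrative shapes (the array sizes and modality channels below are not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical sub-field grid: 64x64 Sentinel-2 pixels, 4 spectral features.
s2 = rng.random((64, 64, 4))
# Coarser complementary modalities (shapes are illustrative only).
weather = rng.random((1, 1, 3))   # field-level weather summary
soil = rng.random((8, 8, 2))      # coarse soil map
dem = rng.random((64, 64, 1))     # elevation, already at target resolution

def to_grid(x: np.ndarray, h: int = 64, w: int = 64) -> np.ndarray:
    """Nearest-neighbour resampling of a coarse modality onto the S2 grid."""
    fh, fw = h // x.shape[0], w // x.shape[1]
    return np.repeat(np.repeat(x, fh, axis=0), fw, axis=1)

# Early fusion: concatenate all modalities channel-wise per pixel; any
# downstream model (crop- and model-agnostic) then trains on `fused`.
fused = np.concatenate([s2, to_grid(weather), to_grid(soil), dem], axis=-1)
print(fused.shape)  # (64, 64, 10)
```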

    Quality Control of Automatic Labelling Using HMM-Based Synthesis

    This paper presents a measure to verify the quality of automatically aligned phone labels. The measure is based on a similarity cost between automatically generated phonetic segments and phonetic segments generated by an HMM-based synthesiser. We investigate the effectiveness of the measure for identifying problems of three types: alignment errors, phone identity problems, and noise insertion. Our experiments show that the measure is best at finding noise errors, followed by phone identity mismatches and serious misalignments.
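    A toy stand-in for such a similarity cost (not the paper's actual measure) compares two segmentations by boundary deviation and penalises phone-identity mismatches; the function and penalty below are illustrative assumptions:

```python
def segment_cost(auto, synth, mismatch_penalty=0.1):
    """Toy similarity cost between two phone segmentations.

    auto/synth: equal-length lists of (phone, start_s, end_s) tuples.
    Returns the mean boundary deviation plus identity-mismatch penalties.
    """
    assert len(auto) == len(synth)
    cost = 0.0
    for (p_a, s_a, e_a), (p_s, s_s, e_s) in zip(auto, synth):
        cost += abs(s_a - s_s) + abs(e_a - e_s)  # boundary deviation (s)
        if p_a != p_s:                            # phone identity mismatch
            cost += mismatch_penalty
    return cost / len(auto)

# Automatic alignment vs. HMM-synthesiser segmentation of the same utterance.
auto = [("sil", 0.00, 0.10), ("k", 0.10, 0.18), ("ae", 0.18, 0.30)]
synth = [("sil", 0.00, 0.12), ("k", 0.12, 0.19), ("ae", 0.19, 0.30)]
print(round(segment_cost(auto, synth), 4))  # 0.02
```

A high cost flags a segment for manual inspection; in the paper's setting this is most reliable for noise insertions.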