
    AI4Boundaries: an open AI-ready dataset to map field boundaries with Sentinel-2 and aerial photography

    Field boundaries are at the core of many agricultural applications and are a key enabler for the operational monitoring of agricultural production to support food security. Recent scientific progress in deep learning methods has highlighted their capacity to extract field boundaries from satellite and aerial images, with a clear improvement over object-based image analysis (e.g. multiresolution segmentation) or conventional filters (e.g. Sobel filters). However, these methods need labels to be trained on. So far, no standard dataset exists to easily and robustly benchmark models and progress the state of the art. The absence of such a benchmark further impedes proper comparison against existing methods. In addition, there is no consensus on which evaluation metrics should be reported (both at the pixel and field levels). As a result, it is currently impossible to compare and benchmark new and existing methods. To fill these gaps, we introduce AI4Boundaries, a dataset of images and labels readily usable to train and compare models on field boundary detection. AI4Boundaries includes two specific datasets: (i) a 10 m Sentinel-2 monthly composite dataset for large-scale retrospective analyses and (ii) a 1 m orthophoto dataset for regional-scale analyses, such as the automatic extraction of Geospatial Aid Application (GSAA) data. All labels have been sourced from GSAA data that have been made openly available (Austria, Catalonia, France, Luxembourg, the Netherlands, Slovenia, and Sweden) for 2019, representing 14.8 M parcels covering 376 K km2. Data were selected following a stratified random sampling based on two landscape fragmentation metrics, the perimeter/area ratio and the area covered by parcels, thus capturing the diversity of the agricultural landscapes. The resulting "AI4Boundaries" dataset consists of 7831 samples of 256 by 256 pixels for the 10 m Sentinel-2 dataset and of 512 by 512 pixels for the 1 m aerial orthophoto.
Both datasets are provided with the corresponding vector ground-truth parcel delineation (2.5 M parcels covering 47 105 km2), and with a raster version already pre-processed and ready to use. Besides providing this open dataset to foster computer vision developments of parcel delineation methods, we discuss the perspectives and limitations of the dataset for various types of applications in the agriculture domain and consider possible further improvements. The data are available on the JRC Open Data Catalogue: http://data.europa.eu/89h/0e79ce5d-e4c8-4721-8773-59a4acf2c9c9 (European Commission, Joint Research Centre, 2022).
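The sampling scheme described above can be illustrated with a minimal sketch: compute the two fragmentation metrics for each tile, then draw a stratified random sample across metric bins. This is a hypothetical illustration of the general technique, not the authors' actual code; the function names, binning scheme, and inputs (per-parcel perimeters and areas per tile) are assumptions.

```python
import numpy as np

def fragmentation_metrics(perimeters, areas, tile_area):
    """The two landscape fragmentation metrics used for stratification:
    mean perimeter/area ratio of the parcels in a tile, and the
    fraction of the tile's area covered by parcels."""
    pa_ratio = np.mean(np.asarray(perimeters) / np.asarray(areas))
    coverage = np.sum(areas) / tile_area
    return pa_ratio, coverage

def stratified_sample(metric_values, n_bins, n_per_bin, rng):
    """Stratified random sampling of tile indices: bin tiles by a
    metric (quantile bins) and sample uniformly within each bin."""
    edges = np.quantile(metric_values, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.digitize(metric_values, edges[1:-1]), 0, n_bins - 1)
    picked = []
    for b in range(n_bins):
        idx = np.flatnonzero(bins == b)
        k = min(n_per_bin, idx.size)
        picked.extend(rng.choice(idx, size=k, replace=False))
    return np.array(picked)
```

In a real pipeline the sampling would be drawn jointly over both metrics (and per country), but the one-dimensional version above conveys the idea.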

    Improving automatic delineation for head and neck organs at risk by Deep Learning Contouring

    INTRODUCTION: Adequate head and neck (HN) organ-at-risk (OAR) delineation is crucial for HN radiotherapy and for investigating the relationships between radiation dose to OARs and radiation-induced side effects. The automatic contouring algorithms currently in clinical use, such as atlas-based contouring (ABAS), leave room for improvement. The aim of this study was to use a comprehensive evaluation methodology to investigate the performance of HN OAR auto-contouring using deep learning contouring (DLC), compared to ABAS. METHODS: The DLC neural network was trained on 589 HN cancer patients. DLC was compared to ABAS by providing each method with an independent validation cohort of 104 patients, which had also been manually contoured. For each of the 22 OAR contours (glandular, upper digestive tract, and central nervous system (CNS)-related structures), the Dice similarity coefficient (DICE) and the absolute mean and max dose differences (|Δmean-dose| and |Δmax-dose|) were obtained as performance measures. For a subset of 7 OARs, an evaluation of contouring time, inter-observer variation, and subjective judgement was performed. RESULTS: DLC resulted in equal or significantly improved quantitative performance measures for 19 out of 22 OARs, compared to ABAS (DICE/|Δmean-dose|/|Δmax-dose|: 0.59/4.2/4.1 Gy (ABAS); 0.74/1.1/0.8 Gy (DLC)). The improvements were mainly for the glandular and upper digestive tract OARs. DLC significantly reduced the delineation time for the inexperienced observer. The subjective evaluation showed that DLC contours were more often preferred over the ABAS contours, were considered more precise, and were more often confused with manual contours. Manual contours still outperformed both DLC and ABAS; however, DLC results were within or bordering the inter-observer variability of the manually edited contours in this cohort.
CONCLUSION: The DLC, trained on a large HN cancer patient cohort, outperformed ABAS for the majority of HN OARs.
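The Dice similarity coefficient used as the primary overlap measure above has a standard definition, 2|A∩B| / (|A| + |B|); a minimal sketch for binary NumPy masks (the empty-mask convention of returning 1.0 is an assumption, not taken from the paper):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation
    masks: 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

For example, two masks that each cover two of four voxels and overlap in one have a Dice score of 2·1 / (2 + 2) = 0.5.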

    Cloud Mask Intercomparison eXercise (CMIX): An evaluation of cloud masking algorithms for Landsat 8 and Sentinel-2

    Cloud cover is a major limiting factor in exploiting time-series data acquired by optical spaceborne remote sensing sensors. Many cloud masking algorithms have been developed for optical sensors, yet very few studies have carried out a quantitative intercomparison of state-of-the-art methods in this domain. This paper summarizes the results of the first Cloud Masking Intercomparison eXercise (CMIX), conducted within the Committee on Earth Observation Satellites (CEOS) Working Group on Calibration & Validation (WGCV). CEOS is the forum for space agency coordination and cooperation on Earth observations, with activities organized under working groups. CMIX, as one such activity, is an international collaborative effort aimed at intercomparing cloud detection algorithms for moderate-spatial-resolution (10–30 m) spaceborne optical sensors. The focus of CMIX is on open and free imagery acquired by the Landsat 8 (NASA/USGS) and Sentinel-2 (ESA) missions. Ten algorithms developed by nine teams from fourteen different organizations (universities, research centers, and industry, as well as the space agencies CNES, ESA, DLR, and NASA) were evaluated within CMIX. These algorithms vary in their approaches and concepts, which are based on various spectral properties, spatial and temporal features, and machine learning methods. Algorithm outputs were evaluated against existing reference cloud mask datasets. These datasets vary in sampling methods, geographical distribution, sample unit (points, polygons, full image labels), and generation approaches (experts, machine learning, sky images). Overall, the performance of the algorithms varied depending on the reference dataset, which can be attributed to differences in how the reference datasets were produced.
The algorithms agreed well on the detection of thick clouds, which are opaque and can be identified with lower uncertainty, in contrast to thin/semi-transparent clouds. CMIX not only allowed identification of the strengths and weaknesses of existing algorithms and potential areas of improvement, but also of the problems associated with the existing reference datasets. The paper concludes with recommendations on generating new reference datasets, metrics, and an analysis framework to be further exploited, and on additional input datasets to be considered by future CMIX activities.
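Evaluating a cloud mask against reference labels, as done in CMIX, reduces to pixel-wise confusion-matrix statistics. A hedged sketch follows, assuming binary masks (True = cloud); the function name and the choice of reported scores (overall accuracy, omission and commission rates) are illustrative assumptions, not the exercise's exact metric set.

```python
import numpy as np

def cloud_mask_scores(predicted, reference):
    """Pixel-wise agreement of a binary cloud mask with reference
    labels: overall accuracy, omission rate (clouds missed), and
    commission rate (clear pixels flagged as cloud)."""
    p = np.asarray(predicted, dtype=bool).ravel()
    r = np.asarray(reference, dtype=bool).ravel()
    tp = np.sum(p & r)    # cloud correctly detected
    tn = np.sum(~p & ~r)  # clear correctly detected
    fp = np.sum(p & ~r)   # false alarm
    fn = np.sum(~p & r)   # missed cloud
    accuracy = (tp + tn) / p.size
    omission = fn / (tp + fn) if (tp + fn) else 0.0
    commission = fp / (tp + fp) if (tp + fp) else 0.0
    return accuracy, omission, commission
```

Because reference datasets differ in sample unit (points, polygons, full image labels), such scores are only comparable within a single reference dataset, which is consistent with the variability reported above.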

    Registration of multiview echocardiography sequences using a subspace error metric


    Overview of atlas-based segmentation approaches

    Overview of approaches to atlas-based segmentation. Left: single-atlas only; center-left: multi-atlas fusion; center-right: single-atlas selection; right: multi-atlas selection and fusion.
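The multi-atlas fusion branch of this taxonomy can be sketched in its simplest form, per-voxel majority voting over label maps from several registered atlases. This is a generic illustration under the assumption of integer label maps already resampled to a common grid; it is not tied to any specific method from the overview.

```python
import numpy as np

def majority_vote_fusion(atlas_labels):
    """Fuse integer label maps from several registered atlases by
    per-voxel majority vote (the simplest multi-atlas fusion)."""
    stacked = np.stack(atlas_labels)  # shape: (n_atlases, *image_shape)
    n_labels = int(stacked.max()) + 1
    # Count, for each label value, how many atlases vote for it at
    # each voxel, then keep the most frequent label.
    votes = np.stack([(stacked == l).sum(axis=0) for l in range(n_labels)])
    return votes.argmax(axis=0)
```

Weighted variants replace the flat vote with per-atlas weights derived from image similarity, which is where the "selection" branches of the taxonomy come in.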

    Image-based real-time motion gating of 3D cardiac ultrasound images
