
    Random Ferns for Semantic Segmentation of PolSAR Images

    Random Ferns -- as a less known example of Ensemble Learning -- have been successfully applied in many Computer Vision applications ranging from keypoint matching to object detection. This paper extends the Random Fern framework to the semantic segmentation of polarimetric synthetic aperture radar (PolSAR) images. By using internal projections defined over the space of Hermitian matrices, the proposed classifier can be applied directly to the polarimetric covariance matrices without the need to explicitly compute predefined image features. Furthermore, two distinct optimization strategies are proposed: the first is based on pre-selection and grouping of internal binary features before the creation of the classifier; the second is based on iteratively improving the properties of a given Random Fern. Both strategies boost performance by filtering features that are either redundant or carry little information, and by grouping correlated features to best fulfill the independence assumptions made by the Random Fern classifier. Experiments show that results can be achieved that are similar to those of a more complex Random Forest model and competitive with a deep learning baseline.
    Comment: This is the author's version of the article as accepted for publication in IEEE Transactions on Geoscience and Remote Sensing, 2021. Link to original: https://ieeexplore.ieee.org/document/962798
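
    The abstract gives no code, but the basic Random Fern mechanics it builds on can be sketched compactly. The snippet below is a minimal sketch, assuming simple binary tests of the form Re(tr(A_k C)) > t_k with random Hermitian projections A_k and thresholds t_k; the class name, these projections, and all parameter values are illustrative assumptions, and the paper's feature pre-selection, grouping, and iterative refinement strategies are not reproduced here.

```python
# Minimal Random Fern sketch for covariance-matrix inputs (illustrative only).
# Binary tests Re(tr(A @ C)) > t with random Hermitian A are an assumption,
# not the paper's actual internal projections.
import numpy as np

class RandomFerns:
    def __init__(self, n_ferns=30, fern_depth=8, n_classes=5, dim=3, rng=None):
        self.rng = np.random.default_rng(rng)
        self.n_ferns, self.depth, self.n_classes = n_ferns, fern_depth, n_classes
        # Random Hermitian projection matrices: A = (B + B^H) / 2
        B = self.rng.standard_normal((n_ferns, fern_depth, dim, dim)) \
            + 1j * self.rng.standard_normal((n_ferns, fern_depth, dim, dim))
        self.A = 0.5 * (B + np.conj(np.swapaxes(B, -1, -2)))
        self.t = self.rng.standard_normal((n_ferns, fern_depth))
        # Per-fern class-conditional bin histograms (Laplace-smoothed)
        self.hist = np.ones((n_ferns, 2 ** fern_depth, n_classes))

    def _bin_index(self, C):
        # Evaluate all binary features and pack them into one integer per fern.
        feats = np.real(np.einsum('fdij,ji->fd', self.A, C)) > self.t  # (n_ferns, depth)
        return feats.dot(1 << np.arange(self.depth))                   # (n_ferns,)

    def fit(self, covariances, labels):
        for C, y in zip(covariances, labels):
            idx = self._bin_index(C)
            self.hist[np.arange(self.n_ferns), idx, y] += 1

    def predict(self, C):
        idx = self._bin_index(C)
        post = self.hist[np.arange(self.n_ferns), idx]        # (n_ferns, n_classes)
        post /= post.sum(axis=1, keepdims=True)
        return int(np.argmax(np.log(post).sum(axis=0)))        # semi-naive Bayes vote
```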

    Automated image analysis for trajectory determination of single drop collisions

    The fundamental analysis of drop coalescence probability in liquid/liquid systems is necessary to reliably predict drop size distributions in technical applications. For this investigation, two colliding oil drops in a continuous water phase were recorded with different high-speed camera set-ups under varying conditions. To analyze the large number of recorded image sequences with varying resolutions and qualities, a robust automated image analysis was developed. This analysis determines the trajectories of the two colliding drops as well as the key events of drop detachment from the cannulas and of their collision. With this information, the drop velocity in each sequence is calculated, and mean values over multiple drop collisions are determined for serial examinations of single drop collisions. Using the developed automated image analysis for drop trajectory and velocity calculation, approximately 1-2 recorded high-speed image sequences can be evaluated per minute. (C) 2016 Elsevier Ltd. All rights reserved.
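
    As a sketch of the kind of trajectory extraction described above, the snippet below tracks the centroids of the two largest dark blobs per frame and converts their frame-to-frame displacement into a velocity. The thresholding rule, the pixel scale `px_per_mm`, the frame rate `fps`, and the function names are illustrative assumptions; matching drop identities across frames and detecting detachment and collision events are omitted.

```python
# Minimal centroid-tracking sketch for two-drop image sequences (illustrative only).
import numpy as np
from scipy import ndimage

def drop_centroids(frame, threshold=0.5):
    """Return the centroids (row, col) of the two largest dark blobs in a frame."""
    mask = frame < threshold * frame.max()          # assumes dark drops on a bright background
    labels, n = ndimage.label(mask)
    if n < 2:
        return None
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = np.argsort(sizes)[-2:] + 1               # labels of the two largest blobs
    return np.array(ndimage.center_of_mass(mask, labels, keep))

def velocities(frames, px_per_mm=50.0, fps=10000.0):
    """Per-frame speed (mm/s) of each drop from successive centroid positions."""
    # Note: consistent drop identity across frames is not enforced in this sketch.
    tracks = [c for c in (drop_centroids(f) for f in frames) if c is not None]
    tracks = np.stack(tracks)                       # (n_frames, 2 drops, 2 coords)
    disp = np.diff(tracks, axis=0)                  # pixel displacement per frame
    return np.linalg.norm(disp, axis=-1) / px_per_mm * fps
```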

    AI4SmallFarms: A data set for crop field delineation in Southeast Asian smallholder farms

    Agricultural field polygons within smallholder farming systems are essential to facilitate the collection of geo-spatial data useful for farmers, managers, and policymakers. However, the limited availability of training labels poses a challenge in developing supervised methods to accurately delineate field boundaries using Earth observation (EO) data. This letter introduces an open dataset for training and benchmarking machine learning methods to delineate agricultural field boundaries in polygon format. The large-scale dataset consists of 439 001 field polygons divided into 62 tiles of approximately 5 × 5 km distributed across Vietnam and Cambodia, covering a range of fields and diverse landscape types. The field polygons have been meticulously digitized from satellite images, following a rigorous multistep quality control process and topological consistency checks. Multitemporal composites of Sentinel-2 (S2) images are provided to ensure cloud-free data. We conducted an experimental analysis testing a state-of-the-art deep learning (DL) workflow based on fully convolutional networks (FCNs), contour closing, and polygonization. We anticipate that this large-scale dataset will enable researchers to further enhance the delineation of agricultural fields in smallholder farms and to support the achievement of the Sustainable Development Goals (SDGs). The dataset can be downloaded from https://doi.org/10.17026/dans-xy6-ngg6.
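
    The mask-to-polygon end of the workflow mentioned above (prediction, contour closing, polygonization) can be sketched in a few lines. The snippet below assumes an FCN has already produced a per-pixel field probability map for one tile; the 0.5 threshold, the use of morphological closing as a stand-in for the contour-closing step, the placeholder pixel-grid transform, and the function name are assumptions for illustration, not the paper's actual settings.

```python
# Minimal sketch: threshold an FCN probability map, close small gaps, vectorize fields.
import numpy as np
from scipy import ndimage
from rasterio import features, transform
from shapely.geometry import shape

def mask_to_field_polygons(prob, threshold=0.5, closing_iterations=2, min_area=50.0):
    """Threshold the probability map, close small boundary gaps, and vectorize fields."""
    mask = prob > threshold
    mask = ndimage.binary_closing(mask, iterations=closing_iterations)
    # Placeholder geotransform (unit pixels); a real tile would carry its own georeferencing.
    geo = transform.from_origin(0.0, 0.0, 1.0, 1.0)
    polygons = [
        shape(geom)
        for geom, value in features.shapes(mask.astype(np.uint8), transform=geo)
        if value == 1                                 # keep field regions, drop background
    ]
    return [p for p in polygons if p.area >= min_area]   # area in pixel units here
```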

    Global-scale mapping of periglacial landforms on Earth and Mars

    We are developing a machine learning system based on high-resolution images of Earth and Mars for classifying periglacial landscape features, detecting their temporal changes, and assessing their global distribution as well as their potential as indicators of climate conditions and changes. On Earth, periglacial landscape phenomena such as ice wedge polygons are closely linked to repeated freeze-thaw cycles and to the presence of water and ice in the subsurface. Ice wedge polygons, which are widespread in Arctic lowlands, constitute an important indicator of ground ice content. Ground ice makes permafrost vulnerable to thaw and subsidence, thus leading to massive changes in topography, hydrology, and biogeochemical processes [1]. Moreover, variations in permafrost extent due to climate warming in Earth’s polar regions cause changes in the aforementioned periglacial features.

    On Mars, similar young landforms such as ice-wedge polygons and debris flows are found [2]. Large volumes of excess ice are known to exist in the shallow subsurface of mid-latitude regions [3]. A major debate focuses on whether similar freeze-thaw cycles thawed this excess ice in the geologically recent past. If true, this would conflict with the current Martian environment, which ostensibly prevents the generation of liquid water, and would therefore have implications for the recent hydrologic past of Mars. With liquid water also intrinsically linked to the climate evolution and the potential habitability of Mars, the investigation of the aforementioned landforms becomes essential. Moreover, the present-day surface of Mars experiences changes linked to H2O and CO2 ice, which are unlikely to be the result of aqueous processes [4, 5]. Detecting the magnitude and timing of these changes would enable the estimation of the related process rates [6, 7] and the testing of hypotheses regarding the formation mechanism.

    Quantification of periglacial features at regional to global scales has not yet been done for either planet. These features, as well as their changes, can be tracked with high-resolution remote sensing across large regions using big-data approaches to image processing, classification, and feature detection. For Earth, the detection and tracking of periglacial features would provide invaluable insights into periglacial and permafrost dynamics as well as into the vulnerability of permafrost to thaw in a warming world. For Mars, determining the distribution of such landforms, as well as their spatial relationships to each other and to external parameters such as topography, would provide clues to their formation and thus elucidate the role of liquid water in the recent past.

    The key task is to map selected features across large regions using large high-resolution datasets, which makes automated methods for the detection and classification of landforms essential. We are developing a powerful machine learning system that will identify the most appropriate image features for each landform. It will be trained on images of different cases of periglacial landforms. Ice wedge polygons are morphologically very similar on both planets, which, besides offering the opportunity to conduct analogue studies, also makes it possible to combine training datasets from both planets. Our system will be validated by manual identification of periglacial features in images, as well as against the existing validation datasets for both planets.

    Our project is exploring the potential of a machine learning system to detect periglacial phenomena by exploiting big datasets covering large regions of Earth and Mars. The resulting global-scale mapping of ice wedge polygons will provide insights into the vulnerability of permafrost to changing climates on Earth, as well as into the recent role of liquid water on Mars. The former is linked to life and biogeochemical processes on Earth, while the latter is linked to the evolution of climate and the potential habitability of Mars.
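
    Because the project hinges on applying a trained classifier across very large scenes, a minimal tiled-inference sketch is given below. The `classify_patch` callable stands in for whatever trained model (e.g. a CNN) is eventually used, and the patch size and stride are placeholders; overlap handling, georeferencing, and change detection are omitted.

```python
# Minimal sketch of tiled (sliding-window) inference over a large scene.
import numpy as np

def map_landforms(scene, classify_patch, patch=256, stride=256):
    """Slide a window over a large scene and record one class label per tile."""
    rows = range(0, scene.shape[0] - patch + 1, stride)
    cols = range(0, scene.shape[1] - patch + 1, stride)
    labels = np.zeros((len(rows), len(cols)), dtype=int)
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            labels[i, j] = classify_patch(scene[r:r + patch, c:c + patch])
    return labels   # coarse landform map, one label per tile
```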

    Advanced Multi-Sensor Optical Remote Sensing for Urban Land Use and Land Cover Classification: Outcome of the 2018 IEEE GRSS Data Fusion Contest

    This paper presents the scientific outcomes of the 2018 Data Fusion Contest organized by the Image Analysis and Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society. The 2018 Contest addressed the problem of urban observation and monitoring with advanced multi-source optical remote sensing (multispectral LiDAR, hyperspectral imaging, and very high-resolution imagery). The competition was based on urban land use and land cover classification, aiming to distinguish between very diverse and detailed classes of urban objects, materials, and vegetation. Besides data fusion, it also quantified the respective assets of the novel sensors used to collect the data. Participants proposed elaborate approaches rooted in remote sensing, machine learning, and computer vision to make the most of the available data. The winning approaches combined convolutional neural networks with the domain expertise of Earth-observation data scientists.
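
    As a generic illustration of the multi-source fusion at the heart of the contest (not any particular winning entry), the sketch below resamples co-registered rasters to a common grid and stacks them along the channel axis as input for a CNN classifier; the parameter names and the bilinear resampling choice are assumptions for illustration.

```python
# Generic early (channel-stacking) fusion sketch for multi-source rasters.
import numpy as np
from scipy import ndimage

def stack_sources(hyperspectral, lidar_layers, vhr_rgb, target_shape):
    """Resample each (H, W[, C]) source to target_shape and stack along the channel axis."""
    def resample(arr):
        arr = arr if arr.ndim == 3 else arr[..., None]      # ensure (H, W, C)
        zoom = (target_shape[0] / arr.shape[0], target_shape[1] / arr.shape[1], 1.0)
        return ndimage.zoom(arr, zoom, order=1)             # bilinear resampling
    layers = [resample(a) for a in (hyperspectral, lidar_layers, vhr_rgb)]
    return np.concatenate(layers, axis=-1)                  # fused input cube for a CNN
```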