398 research outputs found

    Machine Learning for Informed Representation Learning

    The way we view reality and reason about the processes surrounding us is intimately connected to our perception and the representations we form about our observations and experiences. The popularity of machine learning and deep learning techniques in that regard stems from their ability to form useful representations by learning from large sets of observations. Typical application examples include image recognition and language processing, for which artificial neural networks are powerful tools to extract regularity patterns or relevant statistics. In this thesis, we leverage and further develop this representation learning capability to address relevant but challenging real-world problems in geoscience and chemistry, to learn representations in an informed manner relevant to the task at hand, and to reason about representation learning in neural networks in general. Firstly, we develop an approach for efficient and scalable semantic segmentation of degraded soil in alpine grasslands in remotely sensed images based on convolutional neural networks. To this end, we consider different grassland erosion phenomena in several Swiss valleys. We find that we are able to monitor soil degradation in a manner consistent with state-of-the-art methods in geoscience and can improve the detection of affected areas. Furthermore, our approach provides a scalable method for large-scale analysis that is infeasible with established methods. Secondly, we address the question of how to identify suitable latent representations to enable the generation of novel objects with selected properties. For this, we introduce a new deep generative model in the context of manifold learning and disentanglement. Our model improves targeted generation of novel objects by making use of property cycle consistency in property-relevant and property-invariant latent subspaces. We demonstrate the improvements on the generation of molecules with desired physical or chemical properties. Furthermore, we show that our model facilitates interpretability and exploration of the latent representation. Thirdly, in the context of recent advances in deep learning theory and the neural tangent kernel, we empirically investigate the learning of feature representations in standard convolutional neural networks and in the corresponding random feature models given by the linearisation of the networks. We find that performance differences between standard and linearised networks generally increase with the difficulty of the task but decrease with the width, or over-parametrisation, of these networks. Our results suggest interesting implications for feature learning and random feature models, as well as for the generalisation performance of highly over-parametrised neural networks. In summary, we employ and study feature learning in neural networks and show how informed representation learning can be used for challenging tasks.
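    The comparison between a standard network and its linearisation can be made concrete with a toy example. The sketch below (an illustrative numpy snippet, not the thesis code; all names are invented) builds a one-hidden-layer network and its first-order Taylor expansion around initialisation, i.e. the linearised "random feature" model underlying neural tangent kernel analyses, and checks that for a small parameter step the two outputs agree up to second-order terms.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Tiny one-hidden-layer network: f(x; W, v) = v @ tanh(W @ x)
    def f(x, W, v):
        return v @ np.tanh(W @ x)

    def grads(x, W, v):
        # Gradients of the scalar output w.r.t. W and v.
        h = np.tanh(W @ x)
        dv = h
        dW = np.outer(v * (1 - h**2), x)
        return dW, dv

    d, m = 5, 50
    x = rng.normal(size=d)
    W0 = rng.normal(size=(m, d)) / np.sqrt(d)
    v0 = rng.normal(size=m) / np.sqrt(m)

    # Linearisation around initialisation (first-order Taylor expansion):
    # f_lin(W, v) = f(x; W0, v0) + <df/dW, W - W0> + <df/dv, v - v0>
    dW0, dv0 = grads(x, W0, v0)
    def f_lin(W, v):
        return f(x, W0, v0) + np.sum(dW0 * (W - W0)) + dv0 @ (v - v0)

    # For a small parameter step, the linearised model tracks the
    # full network closely (the gap is of second order in the step size).
    eps = 1e-3
    W1 = W0 + eps * rng.normal(size=W0.shape)
    v1 = v0 + eps * rng.normal(size=v0.shape)
    print(abs(f(x, W1, v1) - f_lin(W1, v1)))
    ```

    Training the linearised model amounts to a random feature / kernel method; the thesis' empirical question is how far this approximation holds for practically sized convolutional networks.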

    A survey of image-based computational learning techniques for frost detection in plants

    Frost damage is one of the major concerns for crop growers as it can impact the growth of the plants and, hence, yields. Early detection of frost can help farmers mitigate its impact. In the past, frost detection was a manual or visual process. Image-based techniques are increasingly being used to understand frost development in plants and to automate the assessment of frost damage. This research presents a comprehensive survey of the state-of-the-art methods applied to detect and analyse frost stress in plants. We identify three broad computational learning approaches applied to images to detect and analyse frost in plants: statistical, traditional machine learning, and deep learning. We propose a novel taxonomy to classify the existing studies based on several attributes, developed to capture the major characteristics of a significant body of published research. In this survey, we profile 80 relevant papers based on the proposed taxonomy. We thoroughly analyse and discuss the techniques used in the various approaches, i.e., data acquisition, data preparation, feature extraction, computational learning, and evaluation. We summarise the current challenges and discuss opportunities for future research and development in this area, including in-field advanced artificial intelligence systems for real-time frost monitoring.

    ELULC-10, a 10 m European land use and land cover map using Sentinel and landsat data in Google Earth Engine

    Land Use/Land Cover (LULC) maps can be produced effectively from cost-effective and frequent satellite observations. Cloud computing platforms are increasingly used to exploit the large volumes of freely accessible remotely sensed data for LULC mapping over large regions. This study proposes a workflow to generate a 10 m LULC map of Europe with nine classes, ELULC-10, using European Sentinel-1/-2 and Landsat-8 images, as well as the LUCAS reference samples. More than 200,000 in situ surveys and 300,000 images were employed as inputs in the Google Earth Engine (GEE) cloud computing platform to perform classification with an object-based segmentation algorithm and an Artificial Neural Network (ANN). A novel ANN-based data preparation step was also presented to remove noisy reference samples from the LUCAS dataset. Additionally, the map was improved using several rule-based post-processing steps. The overall accuracy and kappa coefficient of the 2021 ELULC-10 map were 95.38% and 0.94, respectively. A detailed report of the classification accuracies was also provided, demonstrating accurate classification of different classes, such as Woodland and Cropland. Furthermore, rule-based post-processing improved LULC class identification when compared with current studies. The workflow can also supply seasonal, yearly, and change maps through the proposed integration of complex machine learning algorithms with large satellite and survey datasets.
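    The idea behind cleaning noisy reference samples can be illustrated schematically. The sketch below is a simplified stand-in for the paper's ANN-based data preparation: instead of an ANN it uses a nearest-centroid consistency check on made-up data, but the principle is the same, i.e. drop reference samples whose features clearly disagree with their assigned class. All names and data here are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def filter_noisy_labels(X, y, ratio=0.5):
        """Keep a sample only if its own class centroid is markedly closer
        than any other class centroid (a crude stand-in for the paper's
        ANN-based confidence filtering of the LUCAS reference samples)."""
        classes = np.unique(y)
        centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
        keep = np.zeros(len(y), dtype=bool)
        for i, (xi, yi) in enumerate(zip(X, y)):
            d = np.linalg.norm(centroids - xi, axis=1)
            own_idx = np.searchsorted(classes, yi)
            own = d[own_idx]
            nearest_other = np.delete(d, own_idx).min()
            keep[i] = own <= ratio * nearest_other
        return keep

    # Two well-separated synthetic "spectral" classes plus injected label noise.
    X = np.vstack([rng.normal(0, 0.3, (50, 4)), rng.normal(3, 0.3, (50, 4))])
    y = np.array([0] * 50 + [1] * 50)
    y[:3] = 1  # three mislabeled reference samples
    keep = filter_noisy_labels(X, y)
    print(int(keep.sum()), "of", len(keep), "samples kept")
    ```

    In the paper's workflow the retained samples would then train the ANN classifier on the Sentinel/Landsat features in GEE; here the filter merely demonstrates the consistency principle.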

    Identifying and mapping individual plants in a highly diverse high-elevation ecosystem using UAV imagery and deep learning

    The identification and counting of individual plants is essential for environmental monitoring. UAV-based imagery offers ultra-fine spatial resolution and flexibility in data acquisition, and so provides a great opportunity to enhance current plant and in-situ field surveying. However, accurate mapping of individual plants from UAV imagery remains challenging, given the great variation in the sizes, geometries, and distribution of individual plants. This is true even for deep learning-based semantic segmentation and classification methods. In this research, a novel Scale Sequence Residual U-Net (SS Res U-Net) deep learning method is proposed, which integrates a set of Residual U-Nets with a sequence of input scales that can be derived automatically. The SS Res U-Net classifies individual plants by continuously increasing the patch scale, with features learned at small scales passed gradually to larger scales, thus achieving multi-scale information fusion while retaining the fine spatial details of interest. The SS Res U-Net was tested on identifying and mapping frailejones (all plant species of the subtribe Espeletiinae), the dominant plants in one of the world’s most biodiverse high-elevation ecosystems (the páramos), from UAV imagery. Results demonstrate that the SS Res U-Net can self-adapt to variation in objects and consistently achieved the highest classification accuracy (91.67% on average) compared with four state-of-the-art benchmark approaches. In addition, the SS Res U-Net produced the best performance in terms of both robustness to training sample size reduction and computational efficiency. Thus, the SS Res U-Net shows great promise for remotely sensed semantic segmentation and classification tasks, and for machine intelligence more generally. The prospective application of this method to identify and map frailejones in the páramos will greatly benefit the monitoring of their populations for conservation assessment and management, among many other applications.
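    The data side of a scale-sequence input can be sketched concretely. The numpy example below (illustrative only; the function name and parameters are invented, and the paper's networks are not reproduced) extracts co-centred patches at a doubling sequence of scales and resamples them to a common grid, the kind of multi-scale input a sequence of Residual U-Nets would consume, so that the same location is seen with progressively larger spatial context.

    ```python
    import numpy as np

    def scale_sequence_patches(image, center, base=32, n_scales=3, out=32):
        """Extract co-centred patches at a doubling sequence of scales and
        resize each to a common (out, out) grid using nearest-neighbour
        sampling (chosen here to stay dependency-free)."""
        r0, c0 = center
        patches = []
        for s in range(n_scales):
            half = base * (2 ** s) // 2
            patch = image[r0 - half:r0 + half, c0 - half:c0 + half]
            # nearest-neighbour resize to the common output grid
            rows = np.arange(out) * patch.shape[0] // out
            cols = np.arange(out) * patch.shape[1] // out
            patches.append(patch[np.ix_(rows, cols)])
        return np.stack(patches)

    # A synthetic 256x256 "image"; real inputs would be UAV orthomosaics.
    img = np.arange(256 * 256, dtype=float).reshape(256, 256)
    seq = scale_sequence_patches(img, center=(128, 128))
    print(seq.shape)  # three scales stacked on a common grid
    ```

    Each element of the resulting stack covers twice the ground area of the previous one, which is what lets features learned at small scales be passed on to larger-scale stages.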

    A deep semantic vegetation health monitoring platform for citizen science imaging data

    Automated monitoring of vegetation health in a landscape is often based on calculating various vegetation indices over a period of time. However, such approaches can estimate vegetational change inaccurately because index values rely heavily on the vegetation’s colour attributes and on the availability of multi-spectral bands. One common observation is the sensitivity of colour attributes to seasonal variations and imaging devices, leading to false and inaccurate change detection and monitoring. Moreover, these requirements amount to very strong assumptions in a citizen science project. In this article, we build upon our previous work on a Semantic Vegetation Index (SVI) and expand it into a semantic vegetation health monitoring platform for large landscapes. Unlike our previous work, we use RGB images of the Australian landscape in a quarterly series over six years (2015–2020). The SVI is based on deep semantic segmentation and is integrated with a citizen science project (Fluker Post) for automated environmental monitoring. The project has collected thousands of vegetation images shared by visitors at around 168 points located across Australian regions over six years. This paper first uses a deep learning-based semantic segmentation model to classify vegetation in repeated photographs. The SVI is then calculated and plotted as a time series to reflect seasonal variations and environmental impacts. The results show variational trends of vegetation cover for each year, and the semantic segmentation model performed well in calculating vegetation cover from semantic pixels (overall accuracy = 97.7%). This work addresses a number of problems related to changes in viewpoint, scale, zoom, and season in order to normalise RGB image data collected from different imaging devices.
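    The core of a segmentation-based index of this kind is simple to state: it is the fraction of pixels a segmentation model labels as vegetation, tracked over time. The sketch below (a minimal numpy illustration with synthetic masks, not the authors' platform; the class encoding is an assumption) shows why such an index is independent of camera colour calibration, since it operates on class labels rather than raw colours.

    ```python
    import numpy as np

    def semantic_vegetation_index(mask):
        """Fraction of pixels semantically labelled as vegetation.

        `mask` is a 2-D integer array of per-pixel class labels produced by
        a segmentation model; class 1 is assumed to mean 'vegetation'."""
        return float((mask == 1).mean())

    # Synthetic quarterly masks over two years (8 time steps) with a
    # seasonal vegetation-cover cycle baked in.
    rng = np.random.default_rng(2)
    series = []
    for cover in [0.6, 0.7, 0.5, 0.4, 0.62, 0.72, 0.52, 0.42]:
        mask = (rng.random((64, 64)) < cover).astype(int)
        series.append(semantic_vegetation_index(mask))

    # The seasonal cycle appears directly in the index time series.
    print([round(v, 2) for v in series])
    ```

    Plotting such a series per monitoring point is essentially what the platform does at scale, with the masks coming from the trained segmentation model instead of synthetic data.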

    Remote Sensing of the Aquatic Environments

    The book highlights recent research efforts in the monitoring of aquatic districts with remote sensing observations and proximal sensing technology integrated with laboratory measurements. Optical satellite imagery gathered at spatial resolutions down to a few meters has been used for quantitative estimation of harmful algal bloom extent and for Chl-a mapping, while winds and currents have been retrieved from SAR acquisitions. The knowledge and understanding gained from this book can be used for the sustainable management of bodies of water across our planet.