
    Imaging and Classification Techniques for Seagrass Mapping and Monitoring: A Comprehensive Survey

    Monitoring underwater habitats is a vital part of observing the condition of the environment. The detection and mapping of underwater vegetation, especially seagrass, has drawn the attention of the research community since as early as the nineteen-eighties. Initially, this monitoring relied on in situ observation by experts. Later, advances in remote-sensing technology, satellite-monitoring techniques, and digital photo- and video-based techniques opened a window to quicker, cheaper and potentially more accurate seagrass-monitoring methods. So far, digital images from airborne cameras, spectral images from satellites, acoustic image data from underwater sonar, and digital underwater photo and video images have been used to map seagrass meadows and monitor their condition. In this article, we review recent approaches to seagrass detection and mapping in order to understand the gaps in present approaches and determine the scope of further research towards easier monitoring of ocean health. We identify four classes of approach to seagrass mapping and assessment: techniques based on still images, video data, acoustic images and spectral images. We critically analyse the surveyed approaches and identify several research gaps: the need for quick, cheap and effective imaging techniques that are robust to depth, turbidity, location and weather conditions; fully automated seagrass detectors that can work in real time; accurate techniques for estimating seagrass density; and the need for high-performance computing facilities for processing large-scale data. To address these gaps, future research should focus on developing cheaper image and video data collection techniques, deep-learning-based automatic annotation and classification, and real-time percentage-cover calculation.
    Comment: 36 pages, 14 figures, 8 tables
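    The real-time percentage-cover calculation identified above as a research goal is, at its core, a pixel-counting operation over a segmentation mask. The following minimal Python sketch (a hypothetical helper, assuming a per-pixel seagrass detector already produces a binary mask; not code from any surveyed paper) illustrates the calculation.

```python
# Hypothetical sketch: estimating seagrass percentage cover from a binary
# segmentation mask in which each pixel is labelled seagrass (1) or background (0).
import numpy as np

def percentage_cover(mask: np.ndarray) -> float:
    """Return the fraction of pixels labelled as seagrass, as a percentage."""
    mask = np.asarray(mask, dtype=bool)
    return 100.0 * mask.sum() / mask.size

# Example: a 4x4 frame in which 6 of 16 pixels are seagrass -> 37.5 % cover.
frame = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]])
print(percentage_cover(frame))  # 37.5
```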

    A Review of Visual Descriptors and Classification Techniques Used in Leaf Species Identification

    Plants are fundamentally important to life. Key research areas in plant science include plant species identification, weed classification using hyperspectral images, monitoring plant health and tracing leaf growth, and the semantic interpretation of leaf information. Botanists identify plant species by discriminating between the shape of the leaf, its tip, base, margin and veins, as well as the texture of the leaf and the arrangement of leaflets in compound leaves. Because of the increasing demand for expert identification and growing calls for biodiversity conservation, there is a need for intelligent systems that recognize and characterize leaves so as to scrutinize a particular species, the diseases that affect it, the pattern of leaf growth, and so on. We review several image processing methods for the feature extraction of leaves, given that feature extraction is a crucial technique in computer vision. Since computers cannot comprehend images directly, images must be converted into features by analysing their shapes, colours, textures and moments. Images that look the same may still deviate in terms of geometric and photometric variations. In our study, we also discuss certain machine learning classifiers for the analysis of different leaf species.
    Comment: 44 pages, 7 figures; for the final published version, see https://link.springer.com/article/10.1007/s11831-018-9266-3
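    As a concrete illustration of the kind of shape features such reviews cover, the hedged Python sketch below extracts contour-based descriptors and Hu moment invariants from a grayscale leaf image using OpenCV. The function name and the thresholding choices are assumptions for illustration, not a method from the reviewed literature.

```python
# Minimal sketch of leaf shape-descriptor extraction: contour-based geometric
# features plus the seven Hu moment invariants. Illustrative only.
import cv2
import numpy as np

def leaf_shape_features(gray: np.ndarray) -> np.ndarray:
    """Binarise a grayscale leaf image and return simple shape descriptors."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    leaf = max(contours, key=cv2.contourArea)          # assume the largest blob is the leaf
    area = cv2.contourArea(leaf)
    perimeter = cv2.arcLength(leaf, True)
    circularity = 4 * np.pi * area / (perimeter ** 2 + 1e-9)
    hu = cv2.HuMoments(cv2.moments(leaf)).flatten()    # 7 rotation-invariant moments
    return np.concatenate([[area, perimeter, circularity], hu])
```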

    Embedding Visual Hierarchy with Deep Networks for Large-Scale Visual Recognition

    In this paper, a level-wise mixture model (LMM) is developed by embedding a visual hierarchy with deep networks to support large-scale visual recognition (i.e., recognizing thousands or even tens of thousands of object classes), and a Bayesian approach is used to adapt a pre-trained visual hierarchy automatically to improvements of the deep features (used for image and object-class representation) as more representative deep networks are learned over time. Our LMM model provides an end-to-end approach for jointly learning: (a) the deep networks, to extract more discriminative deep features for image and object-class representation; (b) the tree classifier, for recognizing large numbers of object classes hierarchically; and (c) the visual hierarchy adaptation, for achieving more accurate indexing of large numbers of object classes hierarchically. By supporting joint learning of the tree classifier, the deep networks and the visual hierarchy adaptation, our LMM algorithm provides an effective way to control inter-level error propagation and thus achieves better accuracy on large-scale visual recognition. Our experiments are carried out on the ImageNet1K and ImageNet10K image sets, and our LMM algorithm achieves very competitive results in both accuracy and computational efficiency compared with the baseline methods.
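    A minimal sketch of the coarse-to-fine idea behind hierarchical recognition may help make the inter-level error-propagation point concrete. The PyTorch snippet below gates per-superclass class probabilities by the superclass posterior; it is an illustrative simplification under assumed layer sizes, not the authors' level-wise mixture model.

```python
# Two-level coarse-to-fine classifier: predict superclasses, then object
# classes within each superclass, and combine into a joint leaf distribution.
import torch
import torch.nn as nn

class TwoLevelClassifier(nn.Module):
    def __init__(self, feat_dim, n_super, classes_per_super):
        super().__init__()
        self.coarse = nn.Linear(feat_dim, n_super)                 # superclass head
        self.fine = nn.ModuleList(
            [nn.Linear(feat_dim, k) for k in classes_per_super])   # one head per superclass

    def forward(self, feats):
        p_super = torch.softmax(self.coarse(feats), dim=1)         # P(superclass | x)
        fine_parts = [torch.softmax(h(feats), dim=1) * p_super[:, i:i + 1]
                      for i, h in enumerate(self.fine)]            # P(class, superclass | x)
        return torch.cat(fine_parts, dim=1)                        # distribution over leaf classes

# feats would come from a deep backbone; here they are random placeholders.
model = TwoLevelClassifier(feat_dim=512, n_super=3, classes_per_super=[4, 5, 6])
probs = model(torch.randn(2, 512))                                 # shape (2, 15), rows sum to 1
```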

    MMF: Multi-Task Multi-Structure Fusion for Hierarchical Image Classification

    Hierarchical classification is significant for complex tasks because it provides multi-granular predictions and encourages models to make better mistakes. Since the label structure determines performance, many existing approaches attempt to construct an excellent label structure to promote the classification results. In this paper, we argue that different label structures provide a variety of prior knowledge for category recognition, so fusing them helps to achieve better hierarchical classification results. We therefore propose a multi-task multi-structure fusion model to integrate different label structures. It contains two kinds of branches: one is the traditional classification branch, which classifies the common subclasses; the other is responsible for identifying the heterogeneous superclasses defined by the different label structures. Besides the effect of multiple label structures, we also explore the architecture of the deep model for better hierarchical classification and adjust the hierarchical evaluation metrics for multiple label structures. Experimental results on CIFAR100 and Car196 show that our method obtains significantly better results than using a flat classifier or a hierarchical classifier with any single label structure.
    Comment: Accepted by ICANN 202
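    The two kinds of branches described above can be sketched as a shared feature extractor with one subclass head and one superclass head per label structure, trained with a summed cross-entropy loss. The PyTorch code below is a hedged illustration with placeholder layer sizes, not the MMF architecture itself.

```python
# Multi-task sketch: shared backbone, one subclass head, one head per label structure.
import torch
import torch.nn as nn

class MultiStructureNet(nn.Module):
    def __init__(self, feat_dim, n_subclasses, superclass_sizes):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(2048, feat_dim), nn.ReLU())  # stand-in backbone
        self.subclass_head = nn.Linear(feat_dim, n_subclasses)
        self.super_heads = nn.ModuleList(
            [nn.Linear(feat_dim, n) for n in superclass_sizes])  # one per label structure

    def forward(self, x):
        f = self.backbone(x)
        return self.subclass_head(f), [head(f) for head in self.super_heads]

def multi_structure_loss(sub_logits, super_logits, sub_y, super_ys):
    """Sum of cross-entropies over the subclass task and every superclass task."""
    loss = nn.functional.cross_entropy(sub_logits, sub_y)
    for logits, y in zip(super_logits, super_ys):
        loss = loss + nn.functional.cross_entropy(logits, y)
    return loss
```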

    Hierarchical Classification of Scientific Taxonomies with Autonomous Underwater Vehicles

    Autonomous Underwater Vehicles (AUVs) have catalysed a significant shift in the way marine habitats are studied. It is now possible to deploy an AUV from a ship and capture tens of thousands of georeferenced images in a matter of hours. There is a growing body of research investigating ways to automatically apply semantic labels to this data, with two goals. First, manually labelling a large number of images is time-consuming and error-prone. Second, there is the potential to change AUV surveys from being geographically defined (based on a pre-planned route) to permitting the AUV to adapt its mission plan in response to semantic observations. This thesis focusses on frameworks that permit a unified machine learning approach applicable to a wide range of geographic areas and to the diverse areas of interest of marine scientists. This can be addressed through hierarchical classification, in which machine learning algorithms are trained to predict not just a binary or multi-class outcome but a hierarchy of related output labels which are not mutually exclusive, such as a scientific taxonomy. In order to investigate classification on larger hierarchies with greater geographic diversity, the BENTHOZ-2015 data set was assembled as part of a collaboration between five Australian research groups. Existing labelled data was re-mapped to the CATAMI hierarchy: in total more than 400,000 point labels, conforming to a hierarchy of around 150 classes. The common hierarchical classification approach of building a network of binary classifiers was applied to the BENTHOZ-2015 data set, and a novel application of Bayesian network theory and probability calibration was used as a theoretical foundation for the approach, resulting in improved classifier performance. This was extended to a more complex hidden-node Bayesian network structure, which permits the inclusion of additional sensor modalities and tuning for better performance in particular geographic regions.
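    The network-of-binary-classifiers approach with probability calibration can be sketched briefly: each hierarchy node gets a calibrated binary classifier, and a child's probability is multiplied by its parent's, top-down. The snippet below uses scikit-learn; the tiny example hierarchy and helper names are invented for illustration and do not reproduce the thesis implementation.

```python
# Per-node calibrated binary classifiers chained down a label hierarchy,
# so that P(node | x) = P(node | parent, x) * P(parent | x).
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression

hierarchy = {"biota": None, "seagrass": "biota", "algae": "biota"}   # child -> parent (toy example)

def train_node(X, y):
    """Train a calibrated binary classifier for one node of the hierarchy."""
    return CalibratedClassifierCV(LogisticRegression(max_iter=1000), cv=3).fit(X, y)

def hierarchical_proba(models, X):
    """Compute each node's probability top-down, conditioning on its parent."""
    probs = {}
    for node, parent in hierarchy.items():            # parents listed before children
        p_node = models[node].predict_proba(X)[:, 1]
        probs[node] = p_node if parent is None else p_node * probs[parent]
    return probs
```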

    Identification of Tree Species in Japanese Forests based on Aerial Photography and Deep Learning

    Natural forests are complex ecosystems whose tree species distribution and ecosystem functions are still not well understood. Sustainable management of these forests is of high importance because of their significant role in climate regulation, biodiversity, soil erosion and disaster prevention, among the many other ecosystem services they provide. In Japan in particular, natural forests are mainly located in steep mountains, so aerial imagery combined with computer vision is an important modern tool that can be applied to forest research. This study therefore constitutes preliminary research in this field, aiming to classify tree species in Japanese mixed forests using UAV images and deep learning in two different mixed forest types: a black pine (Pinus thunbergii)-black locust (Robinia pseudoacacia) forest and a larch (Larix kaempferi)-oak (Quercus mongolica) forest. Our results indicate that it is possible to identify black locust trees with 62.6% True Positives (TP) and 98.1% True Negatives (TN), while a lower true-positive rate was reached for larch trees (37.4% TP and 97.7% TN).
    Comment: Proc. of EnviroInfo 2020, Nicosia, Cyprus, September 2020
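    For readers unfamiliar with the reported metrics, the true-positive and true-negative rates quoted above can be computed per species from binary predictions as in the short sketch below; the sample arrays are made up for illustration.

```python
# Per-species TP and TN rates from binary predictions (target species vs. rest).
import numpy as np

def tp_tn_rates(y_true: np.ndarray, y_pred: np.ndarray):
    """Return (TP rate, TN rate) for one target species versus everything else."""
    tp = np.sum((y_true == 1) & (y_pred == 1)) / max(np.sum(y_true == 1), 1)
    tn = np.sum((y_true == 0) & (y_pred == 0)) / max(np.sum(y_true == 0), 1)
    return tp, tn

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 0, 0, 0, 0, 0, 1])
print(tp_tn_rates(y_true, y_pred))   # (0.666..., 0.8)
```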

    Watershed Monitoring in Galicia from UAV Multispectral Imagery Using Advanced Texture Methods

    Watershed management is the study of the relevant characteristics of a watershed aimed at the use and sustainable management of forests, land, and water. Watersheds can be threatened by deforestation, uncontrolled logging, changes in farming systems, overgrazing, road and track construction, pollution, and invasion of exotic plants. This article describes a procedure to automatically monitor the river basins of Galicia, Spain, using five-band multispectral images taken by an unmanned aerial vehicle and several image processing algorithms. The objective is to determine the state of the vegetation, especially the identification of areas occupied by invasive species, as well as the detection of man-made structures that occupy the river basin, using multispectral images. Since the territory to be studied occupies extensive areas and the resulting images are large, techniques and algorithms have been selected for fast execution and efficient use of computational resources. These techniques include superpixel segmentation and the use of advanced texture methods. For each of the stages of the method (segmentation, texture codebook generation, feature extraction, and classification), different algorithms have been evaluated in terms of speed and accuracy for the identification of vegetation and natural and artificial structures in the Galician riversides. The experimental results show that the proposed approach can achieve this goal with speed and precision.
    This work was supported in part by the Civil Program UAVs Initiative, promoted by the Xunta de Galicia and developed in partnership with the Babcock company to promote the use of unmanned technologies in civil services. We also acknowledge the support of the Ministerio de Ciencia e Innovación, Government of Spain (grant number PID2019-104834GB-I00), and the Consellería de Educación, Universidade e Formación Profesional (grant number ED431C 2018/19, and accreditation 2019–2022 ED431G-2019/04), all co-funded by the European Regional Development Fund (ERDF).
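    The segmentation and texture-codebook stages described above can be sketched as SLIC superpixels followed by k-means clustering of per-superpixel band statistics. The Python snippet below is an assumed, simplified pipeline using scikit-image and scikit-learn; the parameter values and feature choices are not those selected in the article.

```python
# Superpixel segmentation of a multispectral image and a texture codebook
# built by clustering per-superpixel band statistics.
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans

def superpixel_texture_codebook(image, n_segments=500, n_words=32):
    """Segment a (H, W, bands) image and cluster superpixel features into a codebook."""
    labels = slic(image, n_segments=n_segments, compactness=10, channel_axis=-1)
    feats = []
    for sp in np.unique(labels):
        pixels = image[labels == sp]                       # (n_pixels, n_bands)
        feats.append(np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)]))
    codebook = KMeans(n_clusters=n_words, n_init=10).fit(np.array(feats))
    return labels, codebook
```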

    Tree species classification from AVIRIS-NG hyperspectral imagery using convolutional neural networks

    This study focuses on the automatic classification of tree species using a three-dimensional convolutional neural network (CNN) based on field-sampled ground reference data, a LiDAR point cloud, and AVIRIS-NG airborne hyperspectral remote sensing imagery with 2 m spatial resolution acquired on 14 June 2021. I created a tree species map for my 10.4 km2 study area, which is located in the Jurapark Aargau, a Swiss regional park of national interest. I collected ground reference data for six major tree species present in the study area (Quercus robur, Fagus sylvatica, Fraxinus excelsior, Pinus sylvestris, Tilia platyphyllos; total n = 331). To match the sampled ground reference to the 425-band AVIRIS-NG hyperspectral imagery, I delineated individual tree crowns (ITCs) from a canopy height model (CHM) derived from the LiDAR point cloud. After matching the ground reference data to the hyperspectral imagery, I split the extracted image patches into training, validation, and testing subsets. The amount of training, validation and testing data was increased by applying image augmentation through rotating, flipping, and changing the brightness of the original input data. The classifier is a CNN trained on the first 32 principal components (PCs) extracted from the AVIRIS-NG data. The CNN uses image patches of 5 × 5 pixels and consists of two convolutional layers and two fully connected layers, the latter of which produces the final classification using a softmax activation function. The results show that the CNN classifier outperforms comparable conventional classification methods. The CNN model predicts the correct tree species with an overall accuracy of 70% and an average F1-score of 0.67. A random forest classifier reached an overall accuracy of 67% and an average F1-score of 0.61, while a support-vector machine classified the tree species with an overall accuracy of 66% and an average F1-score of 0.62. This work highlights that CNNs based on imaging spectroscopy data can produce highly accurate, high-resolution tree species distribution maps from a relatively small set of training data, thanks to the high dimensionality of hyperspectral images and the ability of CNNs to exploit the spatial and spectral features of the data. These maps provide valuable input for modelling the distributions of other plant and animal species and of ecosystem services. In addition, this work illustrates the importance of direct collaboration with environmental practitioners to ensure that user needs are met. This aspect will be evaluated further in future work by assessing how these products are used by environmental practitioners and as input for modelling purposes.
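    The patch classifier described above (5 × 5 pixel patches over the first 32 principal components, two convolutional layers, two fully connected layers, softmax output) can be sketched in PyTorch as follows. Kernel sizes and channel widths are assumptions, since they are not reported in the abstract.

```python
# Sketch of a small 3D CNN for spectral-spatial patch classification:
# input is a (1, 32, 5, 5) block of PCA components over a 5 x 5 pixel window.
import torch
import torch.nn as nn

class TreeSpeciesCNN(nn.Module):
    def __init__(self, n_components=32, n_species=6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * n_components * 5 * 5, 128), nn.ReLU(),
            nn.Linear(128, n_species),
        )

    def forward(self, x):                                   # x: (batch, 1, 32, 5, 5)
        return torch.softmax(self.fc(self.conv(x)), dim=1)  # class probabilities

probs = TreeSpeciesCNN()(torch.randn(8, 1, 32, 5, 5))       # shape (8, 6)
```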

    Improving the accuracy of weed species detection for robotic weed control in complex real-time environments

    Alex Olsen applied deep learning and machine vision to improve the accuracy of weed species detection in complex real-time environments. His robotic weed control prototype, AutoWeed, presents a new, efficient tool for weed management in crops and pastures and has launched a startup agricultural technology company.

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of using multimodal datasets jointly to further improve the performance of the processing approaches for the application at hand. Multisource data fusion has therefore received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the temporal information can be integrated with the spatial and/or spectral/backscattering information of the remotely sensed data, moving from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as new challenges for information extraction algorithms. There is a huge number of research works dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have evolved along different paths within each research community. This paper brings together the advances in multisource and multitemporal data fusion approaches across different research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) who wish to conduct novel investigations of this challenging topic, by supplying sufficient detail and references.
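    As a toy illustration of the 4D data structures mentioned above, co-registered multisource scenes acquired at several dates can be stacked into a (time, band, height, width) array; the sizes and band counts below are arbitrary placeholders.

```python
# Stacking co-registered multitemporal, multisource imagery into a 4D array.
import numpy as np

n_times, height, width = 6, 256, 256
optical = np.random.rand(n_times, 4, height, width)    # e.g. 4 optical bands per date
sar = np.random.rand(n_times, 2, height, width)        # e.g. 2 backscatter channels per date

cube = np.concatenate([optical, sar], axis=1)          # (time, band, height, width)
print(cube.shape)                                       # (6, 6, 256, 256)
```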