
    Data augmentation by combining feature selection and color features for image classification

    Image classification is an essential task in computer vision, with applications such as biomedicine and industrial inspection. In some cases, a large amount of training data is required to obtain a better model, yet fully labelled data are costly to obtain. Many basic pre-processing methods generate new images by translation, rotation, flipping, cropping, and adding noise; these transformations can, however, degrade performance. In this paper, we propose a method for data augmentation based on color feature information combined with feature selection. This combination improves classification accuracy. The proposed approach is evaluated on several texture datasets using local binary pattern features.
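The abstract above evaluates with local binary pattern (LBP) features. As a rough illustration of the kind of color texture descriptor involved (a minimal sketch, not the authors' exact pipeline), an 8-neighbour LBP histogram can be computed per color channel and concatenated:

```python
import numpy as np

def lbp_8(img):
    """Basic 8-neighbour local binary pattern for a 2-D uint8 array."""
    center = img[1:-1, 1:-1]
    code = np.zeros(center.shape, dtype=np.int32)
    # Neighbour offsets, clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        # Set bit `bit` wherever the neighbour is >= the center pixel.
        code |= (neigh >= center).astype(np.int32) << bit
    return code

def color_lbp_features(rgb):
    """Concatenate normalised LBP histograms of each color channel."""
    feats = []
    for ch in range(rgb.shape[2]):
        hist, _ = np.histogram(lbp_8(rgb[:, :, ch]), bins=256, range=(0, 256))
        feats.append(hist / hist.sum())
    return np.concatenate(feats)  # 768-dim descriptor for an RGB image
```

A feature-selection step, as in the paper, would then keep only the most discriminative histogram bins before classification.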

    Impact of Colour Variation on Robustness of Deep Neural Networks

    Deep neural networks (DNNs) have shown state-of-the-art performance in computer vision applications such as image classification, segmentation, and object detection. However, recent work has shown their vulnerability to manually crafted digital perturbations of the input data, known as adversarial attacks. The accuracy of a network is significantly affected by the data distribution of its training dataset. Distortions or perturbations of the color space of input images generate out-of-distribution data, which networks are more likely to misclassify. In this work, we propose a color-variation dataset built by distorting the RGB colors of a subset of ImageNet with 27 different combinations. The aim of our work is to study the impact of color variation on the performance of DNNs. We perform experiments with several state-of-the-art DNN architectures on the proposed dataset, and the results show a significant correlation between color variation and loss of accuracy. Furthermore, based on the ResNet50 architecture, we evaluate the performance of recently proposed robust training techniques and strategies, such as AugMix, revisit, and free normalizer, on our proposed dataset. Experimental results indicate that these robust training techniques can improve the robustness of deep networks to color variation.
    (arXiv admin note: substantial text overlap with arXiv:2209.0213)
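The abstract does not spell out the 27 combinations, but 27 = 3³ suggests three distortion levels per RGB channel. A hedged sketch of such a generator (the shift values below are illustrative assumptions, not the paper's):

```python
import itertools
import numpy as np

# Illustrative per-channel shifts; the paper's exact distortion values are not given here.
SHIFTS = (-64, 0, 64)

def color_variants(img):
    """Yield one copy of `img` per RGB shift combination: 3^3 = 27 variants."""
    for dr, dg, db in itertools.product(SHIFTS, repeat=3):
        # Work in int16 so shifts can go negative before clipping back to [0, 255].
        shifted = img.astype(np.int16) + np.array([dr, dg, db], dtype=np.int16)
        yield np.clip(shifted, 0, 255).astype(np.uint8)
```

Running each variant through a trained classifier and comparing accuracy against the unshifted images would reproduce the kind of robustness study described above.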

    Crown-of-Thorns Starfish Detection by state-of-the-art YOLOv5

    Crown-of-Thorns Starfish (COTS) outbreaks have, for decades, threatened the overall health of the coral reefs in Australia’s Great Barrier Reef. This directly impacts reef-associated marine organisms and severely damages the biological diversity and resilience of the habitat structure. To date, COTS surveillance has been carried out entirely by human effort, which is inefficient and prone to errors. There is therefore an urgent need to apply recent advances in technology and deploy unmanned underwater vehicles that can detect the target object and take suitable action. Existing challenges include, but are not limited to, the scarcity of qualified underwater images and the lack of detection algorithms that satisfy major criteria such as light weight, high accuracy, and fast detection. Few papers address this specific area of research, and none fully meets these expectations. In this thesis, we propose a deep-learning-based model to automatically detect COTS in order to prevent outbreaks and minimize coral mortality in the Reef. We use the CSIRO COTS Dataset of underwater images from the Swain Reefs region to train our model. Our goal is to recognize as many starfish as possible while keeping the accuracy high enough to ensure the reliability of the solution. We provide a comprehensive background of the problem and an intensive literature review of this area of research. In addition, to better align with our task, we use the F2 score as the main evaluation metric in our MS COCO-based evaluation scheme: an average F2 is computed from the results obtained at different IoU thresholds, from 0.3 to 0.8 with a step size of 0.05. In our implementation, we experiment with model architecture selection, online image augmentation, confidence score threshold calibration, and hyperparameter tuning to improve testing performance in the model inference stage. Finally, we present our novel COTS detector as a promising solution to the stated challenge.
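The evaluation scheme described above (average F2 over IoU thresholds 0.3 to 0.8, step 0.05) can be sketched as follows. The per-threshold TP/FP/FN counts are assumed to come from a COCO-style box matcher, which is not reproduced here:

```python
def f_beta(tp, fp, fn, beta=2.0):
    """F-beta score; beta=2 weights recall more heavily than precision."""
    denom = (1 + beta**2) * tp + beta**2 * fn + fp
    return (1 + beta**2) * tp / denom if denom else 0.0

# IoU thresholds from 0.3 to 0.8 with a step of 0.05 (11 thresholds).
THRESHOLDS = [0.3 + 0.05 * i for i in range(11)]

def mean_f2(counts_at):
    """Average F2 over thresholds; `counts_at(t)` returns (tp, fp, fn) at IoU t."""
    scores = [f_beta(*counts_at(t)) for t in THRESHOLDS]
    return sum(scores) / len(scores)
```

The emphasis on recall in F2 matches the thesis goal of finding as many starfish as possible while keeping precision acceptable.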

    Automating the Boring Stuff: A Deep Learning and Computer Vision Workflow for Coral Reef Habitat Mapping

    High-resolution underwater imagery provides a detailed view of coral reefs and facilitates insight into important ecological metrics concerning their health. In recent years, anthropogenic stressors, including those related to climate change, have altered the community composition of coral reef habitats around the world. Currently, the most common method of quantifying the composition of these communities is through benthic quadrat surveys and image analysis. This requires manual annotation of images, a time-consuming task that does not scale well to large studies. Patch-based image classification using Convolutional Neural Networks (CNNs) can automate this task and provide sparse labels, but it remains computationally inefficient. This work extends the idea of automatic image annotation by using Fully Convolutional Networks (FCNs) to provide dense labels through semantic segmentation. Presented here is an improved version of Multilevel Superpixel Segmentation (MSS), an existing algorithm that repurposes the sparse labels provided for an image by automatically converting them into the dense labels necessary for training an FCN. This improved implementation, Fast-MSS, is demonstrated to perform considerably faster than the original without sacrificing accuracy. To showcase its applicability to benthic ecology, the algorithm was independently validated by converting the sparse labels provided with the Moorea Labeled Coral (MLC) dataset into dense labels using Fast-MSS. FCNs were then trained and evaluated by comparing their predictions on the test images with the corresponding ground-truth sparse labels, setting baseline scores for the task of semantic segmentation. Lastly, this study outlines a workflow that combines the methods described above with Structure-from-Motion (SfM) photogrammetry to classify the individual elements of a 3-D reconstructed model into their respective semantic groups. The contributions of this thesis help move the field of benthic ecology towards more efficient monitoring of coral reefs through entirely automated processes, making it easier to compute changes in community composition from 2-D benthic habitat images and 3-D models.
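The core sparse-to-dense conversion idea can be illustrated with a toy stand-in. MSS/Fast-MSS propagates sparse point labels through multilevel superpixels; the sketch below instead gives each pixel the label of its nearest annotation, which is a much cruder rule but shows the same input/output shape of the problem:

```python
import numpy as np

def densify(shape, points):
    """Give every pixel the label of its nearest sparse annotation.

    `points` is a list of (row, col, label) tuples. Real MSS/Fast-MSS
    propagates labels through superpixels; nearest-point assignment is
    only a simple stand-in for the sparse-to-dense conversion idea.
    """
    rows, cols = np.indices(shape)
    coords = np.array([(r, c) for r, c, _ in points])   # (N, 2)
    labels = np.array([lab for _, _, lab in points])    # (N,)
    # Squared distance from every pixel to every annotated point: (H, W, N).
    d2 = ((rows[..., None] - coords[:, 0]) ** 2
          + (cols[..., None] - coords[:, 1]) ** 2)
    return labels[d2.argmin(axis=-1)]                   # dense (H, W) label map
```

A dense map produced this way (or, in the thesis, by Fast-MSS) is what makes pixel-wise FCN training possible from point annotations.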

    Deep learning for Plankton and Coral Classification

    Oceans are the essential lifeblood of the Earth: they produce over 70% of the oxygen and hold over 97% of the water. Plankton and corals are two of the most fundamental components of ocean ecosystems, the former because of their function at many levels of the ocean food chain, the latter because they provide spawning and nursery grounds to many fish populations. Studying and monitoring plankton distribution and coral reefs is vital for environmental protection. In recent years there has been a massive proliferation of digital imagery for the monitoring of underwater ecosystems, and much research is concentrated on the automated recognition of plankton and corals. In this paper, we present a study of an automated system for monitoring underwater ecosystems. The proposed system is based on the fusion of different deep learning methods: we study how to create an ensemble of different CNN models, fine-tuned on several datasets with the aim of exploiting their diversity. The aim of our study is to explore the possibility of fine-tuning pretrained CNNs for underwater imagery analysis, the opportunity of using different datasets for pretraining models, and the possibility of designing an ensemble using the same architecture with small variations in the training procedure. The experimental results are very encouraging: our experiments on five well-known datasets (three plankton and two coral datasets) show that fusing such different CNN models in a heterogeneous ensemble yields a substantial performance improvement with respect to other state-of-the-art approaches on all tested problems. One of the main contributions of this work is a wide experimental evaluation of well-known CNN architectures, reporting the performance of both single CNNs and ensembles of CNNs on different problems. Moreover, we show how to create an ensemble that improves on the performance of the best single model.
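CNN fusion of the kind described above is commonly done with a sum (average) rule over the models' class-probability outputs. A minimal sketch, assuming each model already yields softmax probabilities of shape (n_samples, n_classes):

```python
import numpy as np

def ensemble_predict(prob_list):
    """Fuse an ensemble by averaging per-model class probabilities
    (the sum rule), then take the argmax class per sample."""
    fused = np.mean(np.stack(prob_list), axis=0)  # (n_samples, n_classes)
    return fused.argmax(axis=-1)
```

The averaging step is where model diversity pays off: errors that individual CNNs make on different samples tend to be washed out in the fused probabilities.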

    A survey on automatic habitat mapping

    Habitat mapping can help assess the health of an ecosystem, but the task is not always straightforward: depending on the environment to be mapped, the data types can be very different. Marine habitats, for example, may be mapped from sonar images, whereas land habitats may use satellite pictures. In this survey we explore works that use machine learning models to perform habitat mapping.

    A Structural Based Feature Extraction for Detecting the Relation of Hidden Substructures in Coral Reef Images

    In this paper, we present an efficient approach to extracting local structural color texture features for classifying coral reef images. Two local texture descriptors are derived from this approach. The first, based on the Median Robust Extended Local Binary Pattern (MRELBP), is called Color MRELBP (CMRELBP). CMRELBP is very accurate and can capture structural information from color texture images. To reduce the dimensionality of the feature vector, a second descriptor, co-occurrence CMRELBP (CCMRELBP), is introduced. It is constructed by applying the Integrative Co-occurrence Matrix (ICM) to the Color MRELBP images, so that the relative relations between structural texture patterns can be detected and extracted. Moreover, we propose a multiscale LBP-based approach with these two schemes to capture both microstructure and macrostructure texture information. Experimental results on coral reef datasets (EILAT, EILAT2, RSMAS, and MLC) and four well-known texture datasets (OUTEX, KTH-TIPS, CURET, and UIUCTEX) show that the proposed scheme is quite effective for designing an accurate texture classification system that is robust to noise and invariant to rotation and illumination. Moreover, it achieves an acceptable tradeoff between accuracy and the number of features.
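The co-occurrence step can be illustrated with a plain grey-level co-occurrence matrix (GLCM) for a single displacement. The Integrative Co-occurrence Matrix used in the paper extends this idea across color channels, which is not reproduced in this sketch:

```python
import numpy as np

def glcm(img, dy=0, dx=1, levels=256):
    """Co-occurrence matrix of pixel pairs separated by (dy, dx).

    m[i, j] counts how often value i occurs at (r, c) while value j
    occurs at (r + dy, c + dx). Assumes non-negative offsets and
    pixel values in [0, levels).
    """
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]  # reference pixels
    b = img[dy:, dx:]                                # displaced pixels
    m = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(m, (a.ravel(), b.ravel()), 1)          # unbuffered scatter-add
    return m
```

Applying such a matrix to LBP code images, as CCMRELBP does with the ICM, captures how texture patterns co-occur spatially rather than just how often each pattern appears.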

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. While remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL systems.
    (Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.)