Self-Supervised Learning for Improved Synthetic Aperture Sonar Target Recognition
This study explores the application of self-supervised learning (SSL) for
improved target recognition in synthetic aperture sonar (SAS) imagery. The
unique challenges of underwater environments make traditional computer vision
techniques, which rely heavily on optical camera imagery, less effective. SAS,
with its ability to generate high-resolution imagery, emerges as a preferred
choice for underwater imaging. However, the voluminous high-resolution SAS data
presents a significant challenge for labeling, a crucial step for training deep
neural networks (DNNs).
SSL, which enables models to learn features in data without the need for
labels, is proposed as a potential solution to the data labeling challenge in
SAS. The study evaluates the performance of two prominent SSL algorithms,
MoCov2 and BYOL, against the well-regarded supervised learning model, ResNet18,
for binary image classification tasks. The findings suggest that while both SSL
models can outperform a fully supervised model in a few-shot scenario, where
only a small number of labels is available, they do not exceed it when all
labels are used.
The results underscore the potential of SSL as a viable alternative to
traditional supervised learning, capable of maintaining task performance while
reducing the time and costs associated with data labeling. The study also
contributes to the growing body of evidence supporting the use of SSL in remote
sensing and could stimulate further research in this area.
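The contrastive objective behind MoCo-style methods such as MoCov2 can be sketched with a toy InfoNCE loss. The function below is a minimal numpy illustration, not code from the study (the name `info_nce_loss` and the queue layout are my assumptions): it scores one positive pair of augmented-view embeddings against a queue of negatives.

```python
import numpy as np

def info_nce_loss(query, key, negatives, temperature=0.07):
    """Toy InfoNCE loss in the style of MoCo-family contrastive SSL.

    query, key: (d,) L2-normalised embeddings of two augmented views
    of the same image; negatives: (n, d) embeddings from the queue.
    """
    pos = np.dot(query, key) / temperature          # positive-pair logit
    neg = negatives @ query / temperature           # negative logits
    logits = np.concatenate(([pos], neg))
    logits = logits - logits.max()                  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    # cross-entropy with the positive pair as the target class
    return -np.log(probs[0])
```

Minimising this loss pulls the two views of an image together while pushing them away from other images, which is how features are learned without any labels.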
Self-supervised Learning for Sonar Image Classification
Self-supervised learning has proved to be a powerful approach to learning image
representations without the need for large labeled datasets. For underwater
robotics, it is of great interest to design computer vision algorithms to
improve perception capabilities such as sonar image classification. Due to the
confidential nature of sonar imaging and the difficulty of interpreting sonar
images, it is challenging to create public large labeled sonar datasets to
train supervised learning algorithms. In this work, we investigate the
potential of three self-supervised learning methods (RotNet, Denoising
Autoencoders, and Jigsaw) to learn high-quality sonar image representations
without the need for human labels. We present pre-training and transfer learning
results on real-life sonar image datasets. Our results indicate that
self-supervised pre-training yields classification performance comparable to
supervised pre-training in a few-shot transfer learning setup across all three
methods. Code and self-supervised pre-trained models are available at
https://github.com/agrija9/ssl-sonar-images
Comment: 8 pages, 10 figures, with supplementary. LatinX in CV Workshop @ CVPR 2022 Camera Ready
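Of the three pretext tasks, RotNet is the simplest to sketch: each image is rotated by a random multiple of 90 degrees and the network is trained to predict the rotation index, so the labels come for free. Below is a minimal numpy version of the batch construction (the helper name `make_rotnet_batch` is hypothetical, and square images are assumed so the batch stacks cleanly):

```python
import numpy as np

def make_rotnet_batch(images, rng):
    """RotNet pretext task: rotate each (square) image by a random
    multiple of 90 degrees and use the rotation index 0..3 as a
    free, self-supervised classification label."""
    xs, ys = [], []
    for img in images:
        k = int(rng.integers(0, 4))     # 0, 90, 180 or 270 degrees
        xs.append(np.rot90(img, k))
        ys.append(k)
    return np.stack(xs), np.array(ys)
```

A classifier trained on these (rotated image, rotation index) pairs must learn object orientation and shape cues, and its backbone can then be transferred to the downstream sonar classification task.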
Novel deep learning architectures for marine and aquaculture applications
Alzayat Saleh's research applied artificial intelligence and machine learning to autonomously recognise fish and their morphological features from digital images. He created new deep learning architectures that solve various computer vision problems specific to the marine and aquaculture context, and found that these techniques can facilitate aquaculture management and environmental protection. Fisheries and conservation agencies can use his results to design better monitoring strategies and sustainable fishing practices.
Survey on deep learning based computer vision for sonar imagery
Research on the automatic analysis of sonar images long focused on classical, i.e. non-deep-learning-based, approaches. Over the past 15 years, however, the application of deep learning in this research field has grown steadily. This paper gives a broad overview of past and current research involving deep learning for feature extraction, classification, detection and segmentation of sidescan and synthetic aperture sonar imagery. Most research in this field has been directed towards the investigation of convolutional neural networks (CNNs) for feature extraction and classification tasks, with the result that even small CNNs with up to four layers outperform conventional methods. The purpose of this work is twofold. On the one hand, given the quick development of deep learning, it serves as an introduction for researchers, whether just starting out in this specific field or having worked on classical methods for years, and helps them learn about recent achievements. On the other hand, our main goal is to guide further research in this field by identifying the main research gaps to bridge. We propose to advance research in this field by combining available data into an open-source dataset and by carrying out comparative studies of the deep learning methods developed so far.
Article number 10515711
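The survey's observation that even small CNNs with up to four layers can outperform conventional methods is easy to illustrate with a bare forward pass. The sketch below is a hypothetical pure-numpy illustration (the names `conv2d` and `tiny_sonar_cnn` are mine, not from any surveyed paper): two convolutions with ReLUs, global average pooling, and a linear classifier over single-channel sonar patches.

```python
import numpy as np

def conv2d(x, w):
    """Valid-mode 2-D cross-correlation of a single-channel image."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def tiny_sonar_cnn(img, w1, w2, w_fc):
    """Hypothetical small network: conv -> ReLU -> conv -> ReLU ->
    global average pool -> linear logits (one per class)."""
    h = np.maximum(conv2d(img, w1), 0)
    h = np.maximum(conv2d(h, w2), 0)
    feat = h.mean()                 # global average pooling to a scalar
    return w_fc * feat              # class logits
```

Even a network this small already composes local filtering with nonlinearity, which is the core ingredient behind the performance gap the survey reports.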
Deep Learning Approaches for Seagrass Detection in Multispectral Imagery
Seagrass forms the basis of critically important marine ecosystems. It is an important factor in balancing marine ecological systems, and there is great interest in monitoring its distribution in different parts of the world. Remote sensing imagery is considered an effective data modality with which seagrass monitoring and quantification can be performed remotely. Traditionally, researchers mapped seagrass manually from multispectral satellite images. Automatic machine learning techniques, especially deep learning algorithms, have recently achieved state-of-the-art performance in many computer vision applications. This dissertation presents a set of deep learning models for seagrass detection in multispectral satellite images. It also introduces novel domain adaptation approaches to adapt the models to new locations and to temporal image series. In Chapter 3, I compare a deep capsule network (DCN) with a deep convolutional neural network (DCNN) for seagrass detection in high-resolution multispectral satellite images. Both methods are tested on three satellite images of Florida coastal areas and obtain comparable performance. In addition, I propose a few-shot deep learning strategy to transfer knowledge learned by the DCN from one location to others for seagrass detection. In Chapter 4, I develop a semi-supervised domain adaptation method to generalize a trained DCNN model to multiple locations for seagrass detection. First, the method uses a generative adversarial network (GAN) to align the marginal distribution of data in the source domain to that in the target domain using unlabeled data from both domains. Second, it uses a few labeled samples from the target domain to align class-specific data distributions between the two. The model achieves the best results in 28 out of 36 scenarios compared to other state-of-the-art domain adaptation methods.
In Chapter 5, I develop a semantic segmentation method for seagrass detection in multispectral time-series images. First, I train a state-of-the-art image segmentation method using an active learning approach with the DCNN classifier in the loop. Then, I develop an unsupervised domain adaptation (UDA) algorithm to detect seagrass across temporal images, and I extend this unsupervised domain adaptation work to seagrass detection across locations. In Chapter 6, I present an automated bathymetry estimation model based on multispectral satellite images. Bathymetry refers to the depth of the ocean floor and plays a predominant role in identifying marine species in seawater. Accurate bathymetry information for coastal areas will facilitate seagrass detection by reducing false positives, because seagrass usually does not grow beyond a certain depth. However, bathymetry information for most parts of the world is obsolete or missing, and traditional bathymetry measurement systems require extensive labor. I utilize an ensemble machine-learning-based approach to estimate bathymetry from a few in-situ sonar measurements and evaluate the proposed model at three coastal locations in Florida.
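The final chapter's idea, estimating bathymetry from sparse in-situ sonar measurements with an ensemble, can be sketched in a few lines. The code below is a hypothetical illustration, not the dissertation's actual model: it averages linear fits over bootstrap resamples of the sonar depth measurements, a simple bagging-style ensemble (the name `ensemble_bathymetry` and the choice of linear base learners are my assumptions).

```python
import numpy as np

def ensemble_bathymetry(train_x, train_y, query_x, n_models=10, seed=0):
    """Bagging-style ensemble regressor for depth estimation: fit a
    linear model on each bootstrap resample of the sparse in-situ
    sonar measurements, then average the predictions."""
    rng = np.random.default_rng(seed)
    n = len(train_x)
    X = np.column_stack([train_x, np.ones(n)])            # add bias column
    Q = np.column_stack([query_x, np.ones(len(query_x))])
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, n, n)                       # bootstrap resample
        coef, *_ = np.linalg.lstsq(X[idx], train_y[idx], rcond=None)
        preds.append(Q @ coef)
    return np.mean(preds, axis=0)                         # ensemble average
```

Averaging over resamples reduces the variance of each individual fit, which matters precisely when, as here, only a few in-situ measurements are available.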
Sea-Surface Object Detection Based on Electro-Optical Sensors: A Review
Sea-surface object detection is critical for the navigation safety of autonomous ships. Electro-optical (EO) sensors, such as video cameras, complement on-board radar in detecting small sea-surface obstacles. Traditionally, researchers have used horizon detection, background subtraction, and foreground segmentation techniques to detect sea-surface objects. Recently, deep learning-based object detection technologies have been gradually applied to sea-surface object detection. This article presents a comprehensive overview of sea-surface object-detection approaches, comparing the advantages and drawbacks of each technique across four essential aspects: EO sensors and image types, traditional object-detection methods, deep learning methods, and maritime dataset collection. In particular, sea-surface object detection based on deep learning methods is thoroughly analyzed, and highly influential public datasets are introduced as benchmarks to verify the effectiveness of these approaches.