17 research outputs found

    Generating Synthetic Sidescan Sonar Snippets Using Transfer-Learning in Generative Adversarial Networks

    The training of a deep learning model requires a large amount of data. In the case of sidescan sonar images, the number of snippets showing objects of interest is limited. Generative adversarial networks (GANs) have been shown to generate photo-realistic images. Hence, we use a GAN to augment a baseline sidescan image dataset with synthetic snippets. Although training a GAN with few data samples is likely to cause mode collapse, a combination of pre-training on simple simulated images and fine-tuning on real data reduces this problem. However, for sonar data, we show that this transfer-learning approach is sensitive to the pre-training step: vanishing gradients in the GAN's discriminator become a critical problem. Here, we demonstrate how to overcome this problem, and thus how to apply transfer learning to GANs for generating synthetic sidescan snippets in a more robust way. Additionally, to further investigate the GAN's ability to augment a sidescan image dataset, the generated images are analyzed in the image and the frequency domain. This work helps other researchers in the field of sonar image processing to augment their datasets with additional synthetic samples.
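The abstract above mentions analyzing generated images in the frequency domain. One common way to do this, sketched below, is to compare azimuthally averaged power spectra of real and synthetic snippets; this is a minimal numpy illustration under my own assumptions, not the authors' actual analysis, and the function names and random stand-in images are hypothetical.

```python
import numpy as np

def radial_power_spectrum(img):
    """Azimuthally averaged power spectrum of a 2-D image.

    GAN-generated images often deviate from real ones in the high
    frequencies, which this 1-D profile makes easy to compare.
    """
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    # Average the power over rings of (roughly) equal spatial frequency.
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

rng = np.random.default_rng(0)
real = rng.normal(size=(64, 64))       # stand-in for a real snippet
synthetic = rng.normal(size=(64, 64))  # stand-in for a GAN output
# Mean log-spectral gap over the non-DC frequency rings.
gap = np.abs(np.log(radial_power_spectrum(real)[1:32])
             - np.log(radial_power_spectrum(synthetic)[1:32])).mean()
```

A small `gap` would indicate that the synthetic snippets match the spectral statistics of the real data; the DC ring is skipped because the mean is subtracted before the FFT.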

    Survey on deep learning based computer vision for sonar imagery

    Research on the automatic analysis of sonar images focused on classical, i.e. non-deep-learning-based, approaches for a long time. Over the past 15 years, however, the application of deep learning in this research field has grown steadily. This paper gives a broad overview of past and current research involving deep learning for feature extraction, classification, detection and segmentation of sidescan and synthetic aperture sonar imagery. Most research in this field has been directed towards the investigation of convolutional neural networks (CNNs) for feature extraction and classification tasks, with the result that even small CNNs with up to four layers outperform conventional methods. The purpose of this work is twofold. On the one hand, given the rapid development of deep learning, it serves as an introduction for researchers who are either just starting their work in this specific field or have worked on classical methods for the past years, helping them learn about recent achievements. On the other hand, our main goal is to guide further research in this field by identifying the main research gaps to bridge. We propose to advance research in this field by combining available data into an open-source dataset as well as by carrying out comparative studies on developed deep learning methods.
    Article number 10515711

    Look ATME: The Discriminator Mean Entropy Needs Attention

    Generative adversarial networks (GANs) are successfully used for image synthesis but are known to face instability during training. In contrast, probabilistic diffusion models (DMs) are stable and generate high-quality images, at the cost of an expensive sampling procedure. In this paper, we introduce a simple method to allow GANs to stably converge to their theoretical optimum, while bringing in the denoising machinery from DMs. These models are combined into a simpler model (ATME) that only requires a forward pass during inference, making predictions cheaper and more accurate than DMs and popular GANs. ATME breaks an information asymmetry existing in most GAN models in which the discriminator has spatial knowledge of where the generator is failing. To restore the information symmetry, the generator is endowed with knowledge of the entropic state of the discriminator, which is leveraged to allow the adversarial game to converge towards equilibrium. We demonstrate the power of our method in several image-to-image translation tasks, showing superior performance to state-of-the-art methods at a lower cost. Code is available at https://github.com/DLR-MI/atme
    Comment: Accepted for the CVPR 2023 Workshop on Generative Models for Computer Vision, https://generative-vision.github.io/workshop-CVPR-23
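The "entropic state of the discriminator" referenced above can be made concrete with the standard binary-entropy formula applied to a patch discriminator's outputs. The sketch below is only an illustration of that quantity under my own assumptions; the function name and example logits are hypothetical and this is not the ATME model itself.

```python
import numpy as np

def discriminator_entropy_map(logits):
    """Binary entropy of a patch discriminator's sigmoid outputs.

    Where the discriminator is confident (real or fake), entropy is
    near 0; where it is fooled, entropy approaches its maximum log(2).
    """
    p = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    p = np.clip(p, 1e-7, 1 - 1e-7)  # guard against log(0)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

# A 4x4 grid of patch logits: large magnitudes = confident patches.
logits = np.array([[ 8.0, -8.0, 0.1, 0.0],
                   [ 5.0, -0.2, 6.0, -7.0],
                   [ 0.0,  0.0, 9.0, -9.0],
                   [ 4.0, -3.0, 0.3, 2.0]])
H = discriminator_entropy_map(logits)
mean_entropy = H.mean()  # a scalar the generator could be conditioned on
```

High-entropy patches are exactly the spatial locations where the discriminator cannot tell real from fake, which is the signal the paper proposes to share with the generator.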

    Proceedings of the MARESEC 2022

    The second European Workshop on Maritime Systems Resilience and Security (MARESEC) was dedicated to research on resilience, security, technology and related Ethical, Legal, and Social Aspects (ELSA) in the context of maritime systems, including but not restricted to (offshore/onshore) infrastructures, navigation and shipping, and autonomous systems. The event, organized by the Institute for the Protection of Maritime Infrastructures of the German Aerospace Center (DLR), was held in hybrid form on June 20th, 2022. It drew 79 participants, online and on site at the Fischbahnhof, Bremerhaven, Germany. Out of all submitted extended abstracts, 24 submissions were selected for presentation. Additionally, two works by undergraduate and graduate students were presented (the final schedule can be found in the appendix). The authors are affiliated with institutions from Canada, Egypt, Finland, Germany, Greece, Norway, Poland, Switzerland, the United Kingdom, and the United States.

    The 2nd Workshop on Maritime Computer Vision (MaCVi) 2024

    The 2nd Workshop on Maritime Computer Vision (MaCVi) 2024 addresses maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicles (USV). Three challenge categories are considered: (i) UAV-based Maritime Object Tracking with Re-identification, (ii) USV-based Maritime Obstacle Segmentation and Detection, and (iii) USV-based Maritime Boat Tracking. The USV-based Maritime Obstacle Segmentation and Detection category features three sub-challenges, including a new embedded challenge addressing efficient inference on real-world embedded devices. This report offers a comprehensive overview of the findings from the challenges. We provide both statistical and qualitative analyses, evaluating trends from over 195 submissions. All datasets, evaluation code, and the leaderboard are available to the public at https://macvi.org/workshop/macvi24
    Comment: Part of 2nd Workshop on Maritime Computer Vision (MaCVi) 2024 IEEE Xplore submission as part of WACV 202

    Improving the Classification Performance of Deep Learning Models by Reducing the Complexity of Sidescan Sonar Images

    With increasingly autonomous surveying of the seabed, e.g. by autonomous underwater vehicles equipped with imaging sonar systems, the demand for automatic analysis of the acquired data grows as well. In recent years, deep learning methods have proven to be an efficient tool for the detection and classification of objects in the sonar domain, too. The underlying models, however, must be trained with a large amount of data, and acquiring large and varied sonar image datasets involves considerable effort. It is therefore essential to make the training of deep learning models succeed even with small datasets. In this work, reducing the complexity of the input images and explicitly using manually extracted image features is presented as a suitable approach. More precisely, the size information of different objects is removed by scaling the sonar images; in addition, this information is provided to a convolutional neural network (CNN) as a separate input. CNNs trained in this way show an improvement in classification performance of 13 percentage points compared to ordinary training.
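The preprocessing described above, i.e. removing size information by rescaling the snippet while retaining that information as a separate network input, can be sketched as follows. This is a minimal numpy illustration under my own assumptions; the function name, the nearest-neighbour resize, and the pixel size are hypothetical, not the authors' implementation.

```python
import numpy as np

def normalize_snippet(img, pixel_size_m, target=32):
    """Rescale a sonar snippet to a fixed grid and keep the removed
    size information as a separate scalar feature.

    Returns (resized image, physical extent in metres) so a network
    can receive the size through a second input branch instead of
    implicitly through the snippet's resolution.
    """
    h, w = img.shape
    extent_m = max(h, w) * pixel_size_m  # physical snippet extent
    # Nearest-neighbour resize to a target x target grid.
    rows = np.arange(target) * h // target
    cols = np.arange(target) * w // target
    resized = img[np.ix_(rows, cols)]
    return resized, extent_m

# Example: a 48x80-pixel snippet at 5 cm per pixel.
snippet = np.arange(48 * 80, dtype=float).reshape(48, 80)
img_fixed, size_feat = normalize_snippet(snippet, pixel_size_m=0.05)
```

After this step every snippet has the same shape regardless of object size, and `size_feat` (here 4.0 m) is what would be fed to the CNN's separate input.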