
    BIIGLE 2.0 - Browsing and Annotating Large Marine Image Collections

    Combining state-of-the-art digital imaging technology with different kinds of marine exploration techniques such as modern AUVs (autonomous underwater vehicles), ROVs (remotely operated vehicles) or other monitoring platforms enables marine imaging on new spatial and/or temporal scales. A comprehensive interpretation of such image collections requires the detection, classification and quantification of objects of interest in the images, usually performed by domain experts. However, the data volume and the rich content of the images make support by software tools indispensable. We define requirements for marine image annotation and present our new online tool BIIGLE 2.0. It was developed with a special focus on annotating benthic fauna in marine image collections, with tools customized to increase the efficiency and effectiveness of the manual annotation process. We describe the software architecture of the system, illustrate the special features of BIIGLE 2.0 with different use cases, and discuss future developments.

    Quantification of the fine-scale distribution of Mn-nodules: insights from AUV multi-beam and optical imagery data fusion

    Autonomous underwater vehicles (AUVs) offer unique possibilities for exploring the deep seafloor in high resolution over large areas. We highlight results from AUV-based multibeam echosounder (MBES) bathymetry/backscatter and digital optical imagery from the DISCOL area acquired during research cruise SO242 in 2015. AUV bathymetry reveals a morphologically complex seafloor with rough terrain in seamount areas and low-relief variations in sedimentary abyssal plains which are covered in Mn-nodules. Backscatter provides valuable information about the seafloor type and particularly about the influence of Mn-nodules on the response of the transmitted acoustic signal. Mn-nodule abundances were determined primarily by means of automated nodule detection on AUV seafloor imagery, and nodule metrics such as nodules per m² were calculated automatically for each image, allowing further spatial analysis within a GIS in conjunction with the acoustic data. AUV-based backscatter was clustered using both raw data and corrected backscatter mosaics. In total, two unsupervised methods and one machine learning approach were utilized for backscatter classification and Mn-nodule predictive mapping. Bayesian statistical analysis was applied to the raw backscatter values, resulting in six acoustic classes. In addition, Iterative Self-Organizing Data Analysis (ISODATA) clustering was applied to the backscatter mosaic and its statistics (mean, mode, 10th, and 90th quantiles), likewise suggesting an optimum of six clusters. Part of the nodule metrics data was combined with bathymetry, bathymetric derivatives and backscatter statistics for predictive mapping of the Mn-nodule density using a Random Forest classifier. Results indicate that the acoustic classes, the predictions from the Random Forest model and the image-based nodule metrics show very similar spatial distribution patterns, with the acoustic classes hence capturing most of the fine-scale Mn-nodule variability.
    Backscatter classes reflect areas of homogeneous nodule density. A strong influence of mean backscatter, fine-scale BPI and concavity of the bathymetry on the nodule prediction is seen. These observations imply that nodule densities are generally affected by local micro-bathymetry in a way that is not yet fully understood. However, it can be concluded that the spatial occurrence of Mn-nodule-covered areas can be sufficiently analysed by means of acoustic classification and multivariate predictive mapping, allowing the spatial nodule density to be determined in a much more robust way than previously possible.
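    The per-image nodule metric described above (nodules per m²) follows from the detected nodule count and the image's seafloor footprint. A minimal sketch, assuming a hypothetical helper and example footprint values not taken from the paper:

```python
def nodules_per_m2(nodule_count, footprint_width_cm, footprint_height_cm):
    """Convert a per-image nodule count into a density in nodules per m^2.

    The footprint is the patch of seafloor the image covers, e.g. derived
    from a pixel-to-centimetre ratio and the image dimensions.
    """
    area_m2 = (footprint_width_cm / 100.0) * (footprint_height_cm / 100.0)
    return nodule_count / area_m2

# Hypothetical example: 38 detected nodules in an image covering 2.4 m x 1.8 m
density = nodules_per_m2(38, 240, 180)  # ~8.8 nodules per m^2
```

    Densities computed this way per image can then be rasterized in a GIS and joined with the acoustic layers for the predictive mapping step.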

    An acquisition, curation and management workflow for sustainable, terabyte-scale marine image analysis

    Optical imaging is a common technique in ocean research. Diving robots, towed cameras, drop-cameras and TV-guided sampling gear all produce image data of the underwater environment. Technological advances like 4K cameras, autonomous robots, high-capacity batteries and LED lighting now allow systematic optical monitoring at large spatial scales and shorter time intervals, but with increased data volume and velocity. Volume and velocity are further increased by growing fleets and emerging swarms of autonomous vehicles creating big data sets in parallel. This generates a need for automated data processing to harvest maximum information. Systematic data analysis benefits from calibrated, geo-referenced data with clear metadata descriptions, particularly for machine vision and machine learning. Hence, the expensive data acquisition must be documented, and data should be curated as soon as possible, backed up and made publicly available. Here, we present a workflow towards sustainable marine image analysis. We describe guidelines for data acquisition, curation and management and apply them to the use case of a multi-terabyte deep-sea data set acquired by an autonomous underwater vehicle.

    RecoMIA - Recommendations for Marine Image Annotation: Lessons Learned and Future Directions

    Marine imaging is transforming into a sensor technology applied for high-throughput sampling. In the context of habitat mapping, imaging thereby establishes an important bridge technology, regarding spatial resolution and information content, between physical sampling gear (e.g., box corer, multi corer) at one end and hydro-acoustic sensors at the other end of the spectrum of sampling methods. In contrast to other scientific imaging domains, such as digital pathology, there are no protocols and reports available that guide users (often referred to as observers) in the non-trivial process of assigning semantic categories to whole images, regions, or objects of interest (OOI), a process referred to as annotation. Such protocols are crucial to establish image analysis as a robust scientific method. In this article we review past observations in manual Marine Image Annotation (MIA) and provide (a) a guideline for collecting manual annotations, (b) definitions of annotation quality, and (c) a statistical framework to analyze the performance of human expert annotations and to compare it to computational approaches.
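    A statistical framework for annotation quality typically starts from inter-observer agreement. As a minimal sketch (Cohen's kappa is a standard chance-corrected agreement measure; the example labels are hypothetical and the article's actual framework may use other statistics):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators' label sequences."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # observed agreement: fraction of items both annotators labelled identically
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # expected agreement under independent labelling with each annotator's
    # own class frequencies
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1.0 - expected)

# two observers annotating the same six OOIs (hypothetical classes)
obs1 = ["coral", "sponge", "coral", "rock", "coral", "sponge"]
obs2 = ["coral", "sponge", "rock",  "rock", "coral", "coral"]
kappa = cohens_kappa(obs1, obs2)
```

    Values near 1 indicate agreement well above chance; values near 0 indicate agreement no better than chance, flagging categories or observers that need re-training.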

    Compact-morphology-based poly-metallic nodule delineation

    Poly-metallic nodules are a marine resource considered for deep-sea mining. Assessing nodule abundance is of interest both to mining companies and for monitoring potential environmental impact. Optical seafloor imaging allows quantifying poly-metallic nodule abundance at spatial scales from centimetres to square kilometres. Towed cameras and diving robots acquire high-resolution imagery that allows detecting individual nodules and measuring their sizes. Spatial abundance statistics can be computed from these size measurements, providing e.g. seafloor coverage in percent and the nodule size distribution. Detecting nodules requires segmenting nodule pixels from pixels showing sediment background. Semi-supervised pattern recognition has been proposed to automate this task. Existing nodule segmentation algorithms employ machine learning that trains a classifier to segment the nodules in a high-dimensional feature space. Here, a rapid nodule segmentation algorithm is presented. It omits computation-intensive feature-based classification and employs image processing only, exploiting a nodule compactness heuristic to delineate individual nodules. Complex machine learning methods are avoided to keep the algorithm simple and fast. The algorithm has been applied successfully to different image data sets, acquired by different cameras and camera platforms and under varying illumination conditions. Their successful analysis shows the broad applicability of the proposed method.
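    The pipeline described above (threshold dark nodule pixels, group them into blobs, keep only compact ones) can be sketched in pure Python. This is not the paper's implementation: the threshold, the 4-connectivity, and the bounding-box fill ratio used as a compactness stand-in are all assumptions for illustration.

```python
from collections import deque

def delineate_nodules(img, thresh=100, min_fill=0.5):
    """Segment dark (nodule) pixels from lighter sediment and return the
    connected components that are 'compact'.

    img: 2D list of grey values. A component counts as compact when it
    fills at least `min_fill` of its bounding box (a simple stand-in for
    the compactness heuristic described in the abstract).
    """
    h, w = len(img), len(img[0])
    mask = [[img[y][x] < thresh for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    nodules = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x] or seen[y][x]:
                continue
            # 4-connected flood fill collecting one blob of dark pixels
            blob, queue = [], deque([(y, x)])
            seen[y][x] = True
            while queue:
                cy, cx = queue.popleft()
                blob.append((cy, cx))
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            ys = [p[0] for p in blob]
            xs = [p[1] for p in blob]
            bbox_area = (max(ys) - min(ys) + 1) * (max(xs) - min(xs) + 1)
            if len(blob) / bbox_area >= min_fill:
                nodules.append(blob)
    return nodules

# toy 5x5 grey image: one dark 2x2 nodule on light sediment
img = [[255] * 5 for _ in range(5)]
for y, x in ((1, 1), (1, 2), (2, 1), (2, 2)):
    img[y][x] = 60
blobs = delineate_nodules(img)
```

    Blob sizes can then feed the abundance statistics mentioned above, e.g. percent seafloor coverage as the summed blob areas divided by the image area.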

    Automated Activity Estimation of the Cold-Water Coral Lophelia pertusa by Multispectral Imaging and Computational Pixel Classification

    The cold-water coral Lophelia pertusa builds up bioherms that sustain high biodiversity in the deep ocean worldwide. Photographic monitoring of polyp activity represents a helpful tool to characterize the health status of the corals and to assess anthropogenic impacts on the microhabitat. Discriminating active polyps from skeletons of white Lophelia pertusa is usually time-consuming and error-prone due to their similarity in color in common RGB camera footage. Acquiring more finely resolved spectral information may increase the contrast between polyp and skeleton segments, and could therefore support automated classification and accurate activity estimation of polyps. Underwater multispectral imaging systems can record the needed footage, but they are often expensive and bulky. Here we present results of a new, light-weight, compact and low-cost deep-sea tunable LED-based underwater multispectral imaging system (TuLUMIS) with eight spectral channels. A branch of healthy white Lophelia pertusa was observed under controlled conditions in a laboratory tank. Spectral reflectance signatures were extracted from pixels of polyps and skeletons of the observed coral. Results showed that the polyps can be better distinguished from the skeleton by analysis of the eight-dimensional spectral reflectance signatures than with three-channel RGB data. During a 72-hour monitoring of the coral in the lab with half-hour temporal resolution, the polyp activity was estimated based on the results of the multispectral pixel classification using a support vector machine (SVM) approach. The computationally estimated polyp activity was consistent with the manual annotation, yielding a correlation coefficient of 0.957.
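    The classification step works on eight-dimensional reflectance signatures per pixel. As a minimal sketch of that idea (a nearest-centroid classifier stands in for the paper's SVM, and the signatures and class names below are hypothetical):

```python
def train_centroids(samples):
    """samples: {class_name: list of equal-length spectral signatures}.
    Returns the mean signature (centroid) per class."""
    centroids = {}
    for cls, sigs in samples.items():
        dim = len(sigs[0])
        centroids[cls] = tuple(sum(s[i] for s in sigs) / len(sigs) for i in range(dim))
    return centroids

def classify(signature, centroids):
    """Assign the class whose centroid is closest in squared Euclidean distance."""
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(signature, centroids[c])))

def polyp_activity(signatures, centroids):
    """Estimate activity as the fraction of pixels classified as 'polyp'."""
    labels = [classify(s, centroids) for s in signatures]
    return labels.count("polyp") / len(labels)

# hypothetical 8-channel reflectance signatures for the two classes
centroids = train_centroids({
    "polyp":    [(0.9,) * 8, (0.8,) * 8],
    "skeleton": [(0.3,) * 8, (0.2,) * 8],
})
activity = polyp_activity([(0.9,) * 8, (0.2,) * 8, (0.8,) * 8], centroids)
```

    Repeating the activity estimate for each half-hourly frame yields the time series that was compared against the manual annotation.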

    Integrated Digital Marine Image Analysis and Management - new solutions to handle large image collections in environmental monitoring and exploration

    Steinbrink B, Schoening T, Brün D, Nattkemper TW. Integrated Digital Marine Image Analysis and Management - new solutions to handle large image collections in environmental monitoring and exploration. Presented at GEOHAB, Salvador, Brazil.

    DELPHI - fast and adaptive computational laser point detection and visual footprint quantification for arbitrary underwater image collections

    Schoening T, Kuhn T, Bergmann M, Nattkemper TW. DELPHI - fast and adaptive computational laser point detection and visual footprint quantification for arbitrary underwater image collections. Frontiers in Marine Science. 2015;2:20.
    Marine researchers continue to create large quantities of benthic images, e.g. using AUVs (Autonomous Underwater Vehicles). In order to quantify the size of sessile objects in the images, a pixel-to-centimetre ratio is required for each image, often provided indirectly through a geometric laser point (LP) pattern projected onto the seafloor. Manual annotation of these LPs in all images is too time-consuming and thus infeasible for today's data volumes. Because of the technical evolution of camera rigs, the LPs' geometrical layout and colour features vary between expeditions and projects. This makes applying a single algorithm, tuned to one strictly defined LP pattern, ineffective. Here we present the web tool DELPHI, which efficiently learns the LP layout for one image transect/collection from just a small number of hand-labelled LPs and applies this layout model to the rest of the data. This efficiency in adapting to new data allows computing the LPs and the pixel-to-centimetre ratio fully automatically and with high accuracy. DELPHI is applied to two real-world examples and shows clear improvements, both reducing the tuning effort for new LP patterns and increasing detection performance.
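    Once the LPs are detected, the pixel-to-centimetre ratio follows from the known spacing of the projected pattern. A minimal sketch, assuming a simple collinear pattern and hypothetical coordinates (DELPHI's actual detection and layout model are more involved):

```python
import math

def pixel_to_cm_ratio(lp_pixels, spacing_cm):
    """Estimate cm per pixel from detected laser-point centres.

    lp_pixels: consecutive LP centres in pixel coordinates, from a rig
    whose lasers are `spacing_cm` apart on the seafloor. The mean pixel
    distance between consecutive points maps to that known spacing.
    """
    dists = [math.dist(a, b) for a, b in zip(lp_pixels, lp_pixels[1:])]
    mean_px = sum(dists) / len(dists)
    return spacing_cm / mean_px

# hypothetical: three collinear LPs detected 200 px apart, lasers 20 cm apart
ratio = pixel_to_cm_ratio([(100, 500), (300, 500), (500, 500)], 20.0)  # 0.1 cm/px
```

    With this ratio, object sizes and the visual footprint of each image can be reported in real-world units, which is what makes the abundance metrics in the other studies above comparable across images.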