
    SHiPCC - A Sea-going High-Performance Compute Cluster for Image Analysis

    Marine image analysis faces a multitude of challenges: data set sizes easily reach the terabyte scale; the underwater visual signal is often impaired to the point where its information content becomes negligible; and human interpreters are scarce and, given the annotation effort involved, can only focus on subsets of the available data. Solutions to speed up the analysis, in the form of semi-automation with artificial intelligence methods such as machine learning, have been presented in the literature. But the algorithms employed to automate the analysis commonly rely on large-scale compute infrastructure, which so far has only been available on shore. Here, a mobile compute cluster is presented to bring big image data analysis capabilities out to sea. The Sea-going High-Performance Compute Cluster (SHiPCC) units are mobile, robustly designed to operate with electrically impure ship-based power supplies, and based on off-the-shelf computer hardware. Each unit comprises up to eight compute nodes with graphics processing units for efficient image analysis and internal storage to manage the big image data sets. The first SHiPCC unit has been successfully deployed at sea. It allowed us to extract semantic and quantitative information from a terabyte-sized image data set within 1.5 h (a relative speedup of 97 compared to a single four-core CPU computer). Such compute capability out at sea makes it possible to feed image-derived information back into the cruise research plan, for example by determining promising sampling locations. The SHiPCC units are envisioned to generally improve the relevance and importance of optical imagery for marine science.

    Automated detection in benthic images for megafauna classification and marine resource exploration: supervised and unsupervised methods for classification and regression tasks in benthic images with efficient integration of expert knowledge

    Schoening T. Automated detection in benthic images for megafauna classification and marine resource exploration: supervised and unsupervised methods for classification and regression tasks in benthic images with efficient integration of expert knowledge. Bielefeld: Universitätsbibliothek Bielefeld; 2015.
    Image acquisition of deep-sea floors offers a glimpse into an extraordinary environment. Exploring the rarely known geology and biology of the deep sea regularly challenges the scientific understanding of the conditions, processes and changes occurring there. Increasing sampling efforts, through both more frequent image acquisition and widespread monitoring of large areas, are currently refining the scientific models of this environment. These sampling efforts bring novel challenges for image-based marine research: growing data volume, growing data variety and an increased velocity at which data are acquired. Beyond the technical challenges involved, the fundamental problem is to add semantics to the acquired data in order to extract further meaning and derive knowledge. Manual analysis of the data in the form of image annotation (e.g. annotating occurring species to gain knowledge about species interactions) is an intricate task and has become infeasible due to the huge data volumes. The combination of data and interpretation challenges calls for automated approaches based on pattern recognition and, especially, computer vision methods. These methods have been applied in other fields to add meaning to visual data, but have rarely been applied to the peculiar case of marine imaging. First, the physical factors of the environment constitute a unique computer vision challenge and require special attention when adapting the methods. Second, the impossibility of creating a reliable reference gold standard from multiple field expert annotations complicates the development and evaluation of automated, pattern-recognition-based approaches.
    In this thesis, novel automated methods to add semantics to benthic images are presented, based on common pattern recognition techniques. Three major benthic computer vision scenarios are addressed: the detection of laser points for scale quantification, the detection and classification of benthic megafauna for habitat composition assessments, and the detection and quantity estimation of benthic mineral resources for deep-sea mining. All approaches are fitted to the peculiarities of the marine environment. The primary paradigm guiding the development of all methods was to design systems that can be operated by field experts without knowledge of the applied pattern recognition methods. The systems therefore have to be generally applicable to arbitrary image-based detection scenarios, which in turn makes them applicable in computer vision fields outside the marine environment as well. By tuning system parameters automatically from field expert annotations, and by applying methods that cope with errors in those annotations, the limitations of inaccurate gold standards can be bypassed. This allows the developed systems to be used to further refine scientific models based on automated image analysis.


    The raw images (available on request) were captured using a Canon 8-15 mm fisheye lens and therefore have a wide field of view, which results in a dark image boundary, as the lights did not illuminate the outer sectors well. The images in this dataset have been undistorted to the virtual images that an ideal perspective camera with a 90-degree horizontal field of view would have seen from the same position. To achieve this, the color of each pixel in the ideal image is obtained by:
    - computing the ray in space associated with this virtual pixel (using rectilinear un-projection),
    - projecting this ray into the original fisheye image (using equidistant projection), yielding a sub-pixel position,
    - interpolating the colors of the neighboring pixels.
    Technically, the undistortion has been performed using the tool https://svn.geomar.de/dsm-general/trunk/src/BIAS/Tools/biasproject.cpp (at revision 418, and earlier, compatible revisions). Manual image annotation is available here: https://annotate.geomar.de/volumes/24
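    The two projection steps above can be sketched as follows. This is an illustrative assumption, not code from the biasproject.cpp tool: the function name and camera parameters are made up, and the final bilinear interpolation of neighboring colors at the returned sub-pixel position is omitted for brevity.

```python
import numpy as np

def fisheye_position(u, v, w, h, f_fisheye):
    """Map a pixel (u, v) of an ideal 90-degree-FOV perspective image
    to a sub-pixel position in the original equidistant fisheye image.
    All camera parameters here are illustrative placeholders."""
    # Focal length of a perspective camera with 90 deg horizontal FOV:
    # tan(45 deg) = (w/2) / f  =>  f = w/2
    f_persp = w / 2.0
    # Rectilinear un-projection: ray direction for the virtual pixel
    x, y, z = u - w / 2.0, v - h / 2.0, f_persp
    # Angle between the ray and the optical axis
    theta = np.arctan2(np.hypot(x, y), z)
    # Equidistant fisheye projection: radius proportional to theta
    r = f_fisheye * theta
    phi = np.arctan2(y, x)
    # Sub-pixel position in the fisheye image (principal point at centre)
    return w / 2.0 + r * np.cos(phi), h / 2.0 + r * np.sin(phi)
```

    The colors of the pixels neighboring this sub-pixel position would then be interpolated to fill the virtual pixel.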

    BIIGLE 2.0 - Browsing and Annotating Large Marine Image Collections

    Combining state-of-the-art digital imaging technology with different kinds of marine exploration techniques, such as modern AUVs (autonomous underwater vehicles), ROVs (remotely operated vehicles) or other monitoring platforms, enables marine imaging on new spatial and/or temporal scales. A comprehensive interpretation of such image collections requires the detection, classification and quantification of objects of interest in the images, usually performed by domain experts. However, the data volume and the rich content of the images make support by software tools inevitable. We define requirements for marine image annotation and present our new online tool BIIGLE 2.0. It is developed with a special focus on annotating benthic fauna in marine image collections, with tools customized to increase efficiency and effectiveness in the manual annotation process. The software architecture of the system is described, the special features of BIIGLE 2.0 are illustrated with different use cases, and future developments are discussed.

    DELPHI - fast and adaptive computational laser point detection and visual footprint quantification for arbitrary underwater image collections

    Marine researchers continue to create large quantities of benthic images, e.g., using AUVs (autonomous underwater vehicles). In order to quantify the size of sessile objects in the images, a pixel-to-centimeter ratio is required for each image, often provided indirectly through a geometric laser point (LP) pattern projected onto the seafloor. Manual annotation of these LPs in all images is too time-consuming and thus infeasible for today's data volumes. Because of the technical evolution of camera rigs, the geometrical layout and color features of the LPs vary between expeditions and projects. This also makes the application of a single algorithm, tuned to a strictly defined LP pattern, ineffective. Here we present the web tool DELPHI, which efficiently learns the LP layout for one image transect/collection from just a small number of hand-labeled LPs and applies this layout model to the rest of the data. The efficiency in adapting to new data makes it possible to compute the LPs and the pixel-to-centimeter ratio fully automatically and with high accuracy. DELPHI is applied to two real-world examples and shows clear improvements, both in reducing the tuning effort for new LP patterns and in increasing detection performance.
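    The adaptive idea, learning an LP appearance model from a few hand-labeled positions and then applying it to the remaining images, can be sketched as follows. This is not DELPHI's actual algorithm: the simple color-distance segmentation, the function names and the parameters are illustrative assumptions.

```python
import numpy as np

def learn_lp_color(image, labeled_points, radius=2):
    """Learn a mean laser-point colour from a few hand-labelled
    (row, col) positions; a stand-in for a learned LP appearance model."""
    samples = []
    for (row, col) in labeled_points:
        patch = image[row - radius:row + radius + 1,
                      col - radius:col + radius + 1]
        samples.append(patch.reshape(-1, 3))
    return np.concatenate(samples).mean(axis=0)

def detect_lps(image, lp_color, threshold=30.0):
    """Return pixel coordinates whose colour is close to the learned
    laser-point colour (simple colour-distance segmentation)."""
    dist = np.linalg.norm(image.astype(float) - lp_color, axis=2)
    rows, cols = np.nonzero(dist < threshold)
    return list(zip(rows.tolist(), cols.tolist()))
```

    A geometric layout model (e.g., the known LP pattern) would then be used to reject false positives among the detected pixels.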

    Quantification of the fine-scale distribution of Mn-nodules: insights from AUV multi-beam and optical imagery data fusion

    Autonomous underwater vehicles (AUVs) offer unique possibilities for exploring the deep seafloor in high resolution over large areas. We highlight results from AUV-based multibeam echosounder (MBES) bathymetry/backscatter and digital optical imagery from the DISCOL area, acquired during research cruise SO242 in 2015. AUV bathymetry reveals a morphologically complex seafloor with rough terrain in seamount areas and low-relief variations in sedimentary abyssal plains that are covered in Mn nodules. Backscatter provides valuable information about the seafloor type, and particularly about the influence of Mn nodules on the response of the transmitted acoustic signal. Mn-nodule abundances were primarily determined by means of automated nodule detection on AUV seafloor imagery; nodule metrics such as nodules per m² were calculated automatically for each image, allowing further spatial analysis within a GIS in conjunction with the acoustic data. AUV-based backscatter was clustered using both raw data and corrected backscatter mosaics. In total, two unsupervised methods and one machine learning approach were utilized for backscatter classification and Mn-nodule predictive mapping. Bayesian statistical analysis was applied to the raw backscatter values, resulting in six acoustic classes. In addition, Iterative Self-Organizing Data Analysis (ISODATA) clustering was applied to the backscatter mosaic and its statistics (mean, mode, 10th and 90th quantiles), suggesting an optimum of six clusters as well. Part of the nodule metrics data was combined with bathymetry, bathymetric derivatives and backscatter statistics for predictive mapping of the Mn-nodule density using a Random Forest classifier. Results indicate that the acoustic classes, the predictions of the Random Forest model and the image-based nodule metrics show very similar spatial distribution patterns, with the acoustic classes hence capturing most of the fine-scale Mn-nodule variability.
    Backscatter classes reflect areas with homogeneous nodule density. A strong influence of mean backscatter, fine-scale BPI and concavity of the bathymetry on the nodule prediction is seen. These observations imply that nodule densities are generally affected by local micro-bathymetry in a way that is not yet fully understood. However, it can be concluded that the spatial occurrence of Mn-nodule-covered areas can be sufficiently analysed by means of acoustic classification and multivariate predictive mapping, making it possible to determine the spatial nodule density much more robustly than previously possible.
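    The predictive-mapping step can be illustrated with a small sketch on synthetic data. The feature names mirror the predictors discussed above, but the data, model parameters and class definition are illustrative assumptions, not the study's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
# Illustrative stand-ins for the per-image predictors named in the text:
# mean backscatter, fine-scale BPI and concavity of the bathymetry.
backscatter = rng.normal(-30.0, 5.0, n)
bpi = rng.normal(0.0, 1.0, n)
concavity = rng.normal(0.0, 0.5, n)
X = np.column_stack([backscatter, bpi, concavity])
# Synthetic target: two nodule-density classes driven mainly by backscatter
y = (backscatter + rng.normal(0.0, 2.0, n) > -30.0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)
# Feature importances indicate which predictors drive the density map
print(dict(zip(["backscatter", "bpi", "concavity"],
               model.feature_importances_.round(2))))
```

    In a real workflow, the trained model would be applied to the full raster of acoustic and bathymetric layers to produce the predictive nodule-density map.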

    An acquisition, curation and management workflow for sustainable, terabyte-scale marine image analysis

    Optical imaging is a common technique in ocean research. Diving robots, towed cameras, drop-cameras and TV-guided sampling gear all produce image data of the underwater environment. Technological advances like 4K cameras, autonomous robots, high-capacity batteries and LED lighting now allow systematic optical monitoring at large spatial scale and in shorter time, but with increased data volume and velocity. Volume and velocity are further increased by growing fleets and emerging swarms of autonomous vehicles creating big data sets in parallel. This generates a need for automated data processing to harvest the maximum of information. Systematic data analysis benefits from calibrated, geo-referenced data with a clear metadata description, particularly for machine vision and machine learning. Hence, the expensive data acquisition must be documented, and data should be curated as soon as possible, backed up and made publicly available. Here, we present a workflow towards sustainable marine image analysis. We describe guidelines for data acquisition, curation and management and apply them to the use case of a multi-terabyte deep-sea data set acquired by an autonomous underwater vehicle.

    RecoMIA - Recommendations for Marine Image Annotation: Lessons Learned and Future Directions

    Schoening T, Osterloff J, Nattkemper TW. RecoMIA - Recommendations for marine image annotation: Lessons learned and future directions. Frontiers in Marine Science. 2016;3:59.
    Marine imaging is transforming into a sensor technology applied for high-throughput sampling. In the context of habitat mapping, imaging thereby serves as an important bridge technology, in terms of spatial resolution and information content, between physical sampling gear (e.g., box corer, multi corer) on one end and hydro-acoustic sensors on the other end of the spectrum of sampling methods. In contrast to other scientific imaging domains, such as digital pathology, no protocols and reports are available that guide users (often referred to as observers) in the non-trivial process of assigning semantic categories to whole images, regions, or objects of interest (OOI), a process referred to as annotation. Such protocols are crucial to establish image analysis as a robust scientific method. In this article we review past observations in manual Marine Image Annotation (MIA) and provide (a) a guideline for collecting manual annotations, (b) definitions of annotation quality, and (c) a statistical framework to analyze the performance of human expert annotations and to compare it to computational approaches.
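    The article's statistical framework is not reproduced here, but a common building block for analyzing agreement between expert annotators is a chance-corrected measure such as Cohen's kappa. The following sketch is an assumption about one way to quantify observer agreement, not the framework from the paper.

```python
import numpy as np

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators' category
    labels for the same set of objects of interest (Cohen's kappa)."""
    a = np.asarray(labels_a)
    b = np.asarray(labels_b)
    categories = np.union1d(a, b)
    # Observed fraction of objects on which the annotators agree
    observed = float(np.mean(a == b))
    # Expected agreement if both annotators labelled independently
    expected = sum(float(np.mean(a == c)) * float(np.mean(b == c))
                   for c in categories)
    return (observed - expected) / (1.0 - expected)
```

    A kappa of 1 indicates perfect agreement, 0 indicates agreement no better than chance; comparing an algorithm's labels against each expert with the same measure puts human and computational performance on a common scale.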


    Compact-morphology-based poly-metallic nodule delineation

    Poly-metallic nodules are a marine resource considered for deep-sea mining. Assessing nodule abundance is of interest for mining companies and for monitoring potential environmental impact. Optical seafloor imaging allows quantifying poly-metallic nodule abundance at spatial scales from centimetres to square kilometres. Towed cameras and diving robots acquire high-resolution imagery that allows individual nodules to be detected and their sizes measured. Spatial abundance statistics can be computed from these size measurements, providing e.g. seafloor coverage in percent and the nodule size distribution. Detecting nodules requires segmenting nodule pixels from pixels showing the sediment background. Semi-supervised pattern recognition has been proposed to automate this task. Existing nodule segmentation algorithms employ machine learning that trains a classifier to segment the nodules in a high-dimensional feature space. Here, a rapid nodule segmentation algorithm is presented. It omits computation-intensive feature-based classification and employs image processing only, exploiting a nodule compactness heuristic to delineate individual nodules. Complex machine learning methods are avoided to keep the algorithm simple and fast. The algorithm has successfully been applied to several image data sets, acquired by different cameras and camera platforms and under varying illumination conditions. Their successful analysis shows the broad applicability of the proposed method.
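    A minimal sketch of a compactness-based delineation could look like the following. This is not the paper's actual algorithm: the darkness threshold, the compactness measure and all parameters are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def delineate_nodules(gray, threshold=80, min_compactness=0.5, min_area=9):
    """Segment dark, compact blobs (nodule candidates) from a lighter
    sediment background; a feature-free sketch of the compactness idea."""
    mask = gray < threshold                   # nodules appear dark
    labels, n_blobs = ndimage.label(mask)     # connected components
    nodules = []
    for i in range(1, n_blobs + 1):
        rows, cols = np.nonzero(labels == i)
        area = rows.size
        if area < min_area:
            continue
        # Compactness heuristic: blob area relative to the smallest circle
        # around the blob centroid (about 1.0 for a disc, low for lines)
        cr, cc = rows.mean(), cols.mean()
        r_max = np.hypot(rows - cr, cols - cc).max()
        compactness = area / (np.pi * max(r_max, 1.0) ** 2)
        if compactness >= min_compactness:
            nodules.append((cr, cc, area))
    return nodules
```

    From the per-nodule pixel areas and a pixel-to-centimeter ratio, seafloor coverage and the nodule size distribution can then be computed directly.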