    Use of stereo camera systems for assessment of rockfish abundance in untrawlable areas and for recording pollock behavior during midwater trawls

    We describe the application of two types of stereo camera systems in fisheries research, including the design, calibration, analysis techniques, and precision of the data obtained with these systems. The first is a stereo video system deployed by using a quick-responding winch with a live feed to provide species- and size-composition data adequate to produce acoustically based biomass estimates of rockfish. This system was tested on the eastern Bering Sea slope, where rockfish were measured. Rockfish sizes were similar to those sampled with a bottom trawl, and the relative error in multiple measurements of the same rockfish across still-frame images was small. Measurement errors of up to 5.5% were found on a calibration target of known size. The second system consisted of a pair of still-image digital cameras mounted inside a midwater trawl. Processing of the stereo images allowed fish length, fish orientation in relation to the camera platform, and relative distance of the fish to the trawl netting to be determined. The video system was useful for surveying fish in Alaska, but it could also be used broadly in other situations where it is difficult to obtain species- or size-composition information. Likewise, the still-image system could be used in fisheries research to obtain data on the size, position, and orientation of fish.
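
    As a hedged illustration of the underlying measurement principle (a sketch, not the authors' processing pipeline), the snippet below triangulates snout and tail landmarks from a rectified stereo pair and reports their 3D separation; the camera parameters and pixel coordinates are assumed values.

        # Minimal sketch of stereo length measurement: triangulate snout and
        # tail landmarks from a rectified pair, then take the 3D distance.
        # Focal length f (px), baseline B (m), principal point (cx, cy), and
        # all pixel coordinates are illustrative values, not from the paper.
        import numpy as np

        def triangulate(xl, yl, xr, f, B, cx, cy):
            """Back-project a matched rectified-stereo point to 3D (meters)."""
            d = xl - xr                      # disparity in pixels
            Z = f * B / d                    # depth from disparity
            return np.array([(xl - cx) * Z / f, (yl - cy) * Z / f, Z])

        def fish_length(snout_l, snout_r, tail_l, tail_r, f, B, cx, cy):
            """Euclidean snout-to-tail distance in 3D."""
            p1 = triangulate(*snout_l, snout_r[0], f, B, cx, cy)
            p2 = triangulate(*tail_l, tail_r[0], f, B, cx, cy)
            return np.linalg.norm(p1 - p2)

        L = fish_length((812, 400), (602, 400), (1150, 420), (942, 420),
                        f=1400.0, B=0.30, cx=960.0, cy=540.0)
        print(f"estimated length: {L:.3f} m")   # ~0.49 m at ~2 m range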

    Automated classification of three-dimensional reconstructions of coral reefs using convolutional neural networks

    © The Author(s), 2020. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Hopkinson, B. M., King, A. C., Owen, D. P., Johnson-Roberson, M., Long, M. H., & Bhandarkar, S. M. Automated classification of three-dimensional reconstructions of coral reefs using convolutional neural networks. PLoS One, 15(3), (2020): e0230671, doi: 10.1371/journal.pone.0230671.

    Coral reefs are biologically diverse and structurally complex ecosystems that have been severely affected by human actions. Consequently, there is a need for rapid ecological assessment of coral reefs, but current approaches require time-consuming manual analysis, either during a dive survey or on images collected during a survey. Reef structural complexity is essential for ecological function but is challenging to measure and is often relegated to simple metrics such as rugosity. Recent advances in computer vision and machine learning offer the potential to alleviate some of these limitations. We developed an approach to automatically classify 3D reconstructions of reef sections and assessed its accuracy. 3D reconstructions of reef sections were generated using commercial Structure-from-Motion software with images extracted from video surveys. To generate a 3D classified map, locations on the 3D reconstruction were mapped back into the original images to extract multiple views of each location. Several approaches were tested for merging information from multiple views of a point into a single classification; all used convolutional neural networks to classify or extract features from the images but differed in the strategy employed for merging information. The merging strategies entailed voting, probability averaging, and a learned neural-network layer. All approaches performed similarly, achieving overall classification accuracies of ~96% and >90% accuracy on most classes. With this high classification accuracy, these approaches are suitable for many ecological applications.

    This study was funded by grants from the Alfred P. Sloan Foundation (BMH, BR2014-049; https://sloan.org) and the National Science Foundation (MHL, OCE-1657727; https://www.nsf.gov). The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
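
    The merging step lends itself to a short sketch. The snippet below is a generic illustration of two of the strategies named above, majority voting and probability averaging; the class labels and softmax scores are placeholders, not values from the study.

        # Sketch of two of the view-merging strategies described above,
        # majority voting and probability averaging, applied to placeholder
        # per-view softmax outputs; labels and scores are invented here.
        import numpy as np

        CLASSES = ["coral", "algae", "sand", "rubble"]   # illustrative

        def fuse_by_voting(view_probs):
            """Each view votes its argmax class; ties go to the lower index."""
            votes = np.argmax(view_probs, axis=1)
            return np.bincount(votes, minlength=view_probs.shape[1]).argmax()

        def fuse_by_averaging(view_probs):
            """Average class probabilities over views, then take the argmax."""
            return view_probs.mean(axis=0).argmax()

        # Three views of one 3D point, each row a softmax over the classes:
        probs = np.array([[0.55, 0.25, 0.15, 0.05],
                          [0.40, 0.45, 0.10, 0.05],
                          [0.60, 0.20, 0.10, 0.10]])
        print("vote:", CLASSES[fuse_by_voting(probs)])
        print("mean:", CLASSES[fuse_by_averaging(probs)])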

    Precision and accuracy of fish length measurements obtained with two visual underwater methods

    During the VITAL cruise in the Bay of Biscay in summer 2002, two devices for measuring the length of swimming fish were tested: 1) a mechanical crown, mounted on the main camera, that emitted a pair of parallel laser beams, and 2) an underwater auto-focus video camera. The precision and accuracy of these devices were compared, and the various sources of measurement error were estimated by repeatedly measuring fixed and mobile objects and live fish. Fish mobility was found to be the main source of error for both devices because they require that the measured object be perpendicular to the field of vision. The best performance was obtained with the laser method, in which a video replay of the laser spots projected on fish bodies carried real-time size information. The auto-focus system performed poorly because of a delay in obtaining focus and because of some technical problems.
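
    The laser method rests on a simple scaling argument: two parallel beams a known distance apart project spots whose pixel separation fixes the millimetres-per-pixel scale. Below is a minimal sketch of that principle; the 60 mm beam spacing and the coordinates are assumptions, not values from the cruise.

        # Minimal sketch of the parallel-laser scaling principle: two beams a
        # known distance apart appear as spots on the fish, fixing the image
        # scale. The 60 mm spacing and all coordinates are illustrative only.
        import math

        LASER_SPACING_MM = 60.0   # physical separation of the parallel beams

        def pixel_scale(spot_a, spot_b):
            """mm per pixel, from the imaged laser spots on the fish body."""
            return LASER_SPACING_MM / math.dist(spot_a, spot_b)

        def fish_length_mm(snout, tail, spot_a, spot_b):
            """Scaled snout-to-tail distance; valid only when the fish is
            roughly perpendicular to the optical axis, the dominant error
            source noted above."""
            return math.dist(snout, tail) * pixel_scale(spot_a, spot_b)

        print(fish_length_mm(snout=(210, 330), tail=(640, 350),
                             spot_a=(400, 300), spot_b=(400, 380)))   # ~323 mm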

    Computational Imaging for Shape Understanding

    Geometry is an essential property of real-world scenes, and understanding the shape of objects is critical to many computer vision applications. In this dissertation, we explore computational imaging approaches to recover the geometry of real-world scenes. Computational imaging is an emerging technique that co-designs imaging hardware and computational software to expand the capacity of traditional cameras. To tackle face recognition in uncontrolled environments, we study 2D color images and 3D shape to deal with body movement and self-occlusion. Specifically, we use multiple RGB-D cameras to fuse varying poses and register the frontal face in a unified coordinate system; deep color features and geodesic distance features are then used to perform face recognition. For underwater imaging applications, we study the angular-spatial encoding and polarization-state encoding of light rays using computational imaging devices. Specifically, we use a light field camera to tackle the challenging problem of underwater 3D reconstruction, leveraging the angular sampling of the light field for robust depth estimation, and we develop a fast ray-marching algorithm to improve efficiency. To deal with arbitrary reflectance, we investigate polarimetric imaging and develop polarimetric Helmholtz stereopsis, which uses reciprocal polarimetric image pairs for high-fidelity 3D surface reconstruction; we formulate new reciprocity and diffuse/specular polarimetric constraints to recover surface depths and normals in an optimization framework. To recover 3D shape under unknown and uncontrolled natural illumination, we use two circularly polarized spotlights to boost the polarization cues corrupted by environment lighting and to provide photometric cues; to mitigate the effect of uncontrolled environment light in the photometric constraints, we estimate a lighting proxy map and iteratively refine the normal and lighting estimates. Through extensive experiments on simulated and real images, we demonstrate that our proposed computational imaging methods outperform traditional imaging approaches.
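
    As one hedged example of the polarization cues mentioned above (a generic building block, not the dissertation's method), the snippet below recovers the degree and angle of linear polarization from four intensity images captured behind a linear polarizer at 0, 45, 90, and 135 degrees; the input images are synthetic placeholders.

        # Generic building block of polarimetric imaging (not the
        # dissertation's algorithm): degree and angle of linear polarization
        # from four images taken behind a polarizer at 0/45/90/135 degrees.
        import numpy as np

        def linear_stokes(i0, i45, i90, i135):
            """Per-pixel linear Stokes components from four polarizer images."""
            s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
            s1 = i0 - i90
            s2 = i45 - i135
            return s0, s1, s2

        def dolp_aolp(s0, s1, s2, eps=1e-8):
            """Degree (0..1) and angle (radians) of linear polarization."""
            dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)
            aolp = 0.5 * np.arctan2(s2, s1)
            return dolp, aolp

        # Synthetic 2x2 captures standing in for real polarizer images:
        rng = np.random.default_rng(0)
        i0, i45, i90, i135 = rng.uniform(0.2, 1.0, (4, 2, 2))
        print(dolp_aolp(*linear_stokes(i0, i45, i90, i135)))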

    Map building fusing acoustic and visual information using autonomous underwater vehicles

    Author Posting. © The Author(s), 2012. This is the author's version of the work. It is posted here by permission of John Wiley & Sons for personal use, not for redistribution. The definitive version was published in Journal of Field Robotics 30 (2013): 763–783, doi:10.1002/rob.21473.

    We present a system for automatically building 3-D maps of underwater terrain by fusing visual data from a single camera with range data from multibeam sonar. The six-degree-of-freedom location of the camera relative to the navigation frame is derived as part of the mapping process, as are the attitude offsets of the multibeam head and the on-board velocity sensor. The system uses pose graph optimization and the square root information smoothing and mapping framework to simultaneously solve for the robot's trajectory, the map, and the camera location in the robot's frame. Matched visual features are treated within the pose graph as images of 3-D landmarks, while multibeam bathymetry submap matches are used to impose relative pose constraints linking robot poses from distinct tracklines of the dive trajectory. The navigation and mapping system works under a variety of deployment scenarios, on robots with diverse sensor suites. Results of using the system to map the structure and appearance of a section of coral reef are presented using data acquired by the Seabed autonomous underwater vehicle.

    The work described herein was funded by the National Science Foundation CenSSIS ERC under grant number EEC-9986821 and by the National Oceanic and Atmospheric Administration under grant number NA090AR4320129.
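
    The pose-graph idea admits a toy illustration. The sketch below is a deliberately simplified one-dimensional analogue, not the paper's six-degree-of-freedom smoothing-and-mapping system: odometry supplies relative constraints between consecutive poses, one submap match supplies a loop closure, and least squares reconciles them.

        # Deliberately simplified 1D toy of the pose-graph idea, not the
        # paper's 6-DOF square root SAM system: odometry gives relative
        # constraints between consecutive poses, a bathymetry submap match
        # adds a loop-closure constraint, and least squares reconciles them.
        import numpy as np

        # Unknown along-track poses x1..x3, with x0 fixed at 0 as the gauge.
        # Each row of A encodes one constraint x_j - x_i = measurement.
        A = np.array([[ 1,  0,  0],     # x1 - x0   (odometry)
                      [-1,  1,  0],     # x2 - x1   (odometry)
                      [ 0, -1,  1],     # x3 - x2   (odometry)
                      [ 0,  0,  1.]])   # x3 - x0   (submap match)
        b = np.array([1.0, 1.1, 0.9, 2.8])

        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("optimized poses x1..x3:", x)   # drift spread over the chain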

    Advances in Simultaneous Localization and Mapping in Confined Underwater Environments Using Sonar and Optical Imaging.

    This thesis reports on the incorporation of surface information into a probabilistic simultaneous localization and mapping (SLAM) framework used on an autonomous underwater vehicle (AUV) designed for underwater inspection. AUVs operating in cluttered underwater environments, such as ship hulls or dams, are commonly equipped with Doppler-based sensors, which, in addition to navigation, provide a sparse representation of the environment in the form of a three-dimensional (3D) point cloud. The goal of this thesis is to develop perceptual algorithms that take full advantage of these sparse observations for correcting navigational drift and building a model of the environment. In particular, we focus on three objectives. First, we introduce a novel representation of this 3D point cloud as collections of planar features arranged in a factor graph. This factor graph representation probabilistically infers the spatial arrangement of each planar segment and can effectively model smooth surfaces (such as a ship hull). Second, we show how this technique can produce 3D models that serve as input to our pipeline for producing the first-ever 3D photomosaics from a two-dimensional (2D) imaging sonar. Finally, we propose a model-assisted bundle adjustment (BA) framework that allows for robust registration between surfaces observed with a Doppler sensor and visual features detected in optical images. Throughout this thesis, we present methods that produce 3D photomosaics using a combination of triangular meshes (derived from our SLAM framework or given a priori), optical images, and sonar images. Overall, the contributions of this thesis greatly increase the accuracy, reliability, and utility of in-water ship hull inspection with AUVs despite the challenges they face in underwater environments. We provide results using the Hovering Autonomous Underwater Vehicle (HAUV) for autonomous ship hull inspection, which serves as the primary testbed for the algorithms presented in this thesis. The sensor payload of the HAUV consists primarily of a Doppler velocity log (DVL) for underwater navigation and ranging, monocular and stereo cameras, and, for some applications, an imaging sonar.

    PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120750/1/paulozog_1.pd
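
    One building block named above, fitting a planar feature to a sparse point-cloud patch, admits a compact hedged sketch: the least-squares plane through a patch falls out of the SVD of the centered points. The factor-graph arrangement of such planes is not shown, and the test data are synthetic.

        # Hedged sketch of one building block named above: fitting a planar
        # feature to a sparse Doppler point-cloud patch. The least-squares
        # plane through the centroid comes from the SVD of the centered
        # points; arranging such planes in a factor graph is not shown here.
        import numpy as np

        def fit_plane(points):
            """Unit normal and centroid of the best-fit plane, (N, 3) input."""
            c = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - c)
            return vt[-1], c        # normal = smallest singular vector

        # Noisy synthetic samples of the plane z = 0.1x + 0.2y + 1:
        rng = np.random.default_rng(1)
        xy = rng.uniform(-1, 1, (50, 2))
        z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + 1 + rng.normal(0, 0.01, 50)
        n, c = fit_plane(np.column_stack([xy, z]))
        print(n * np.sign(n[2]))    # ~ (-0.1, -0.2, 1) normalized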

    Euclidean reconstruction of natural underwater scenes using optic imagery sequence

    The development of maritime applications requires detailed, close observation of the underwater seafloor and objects for monitoring, study, and preservation. Stereo vision offers advanced technology for building 3D models from overlapping 2D still images in a relatively inexpensive way. However, while stereo image matching is a necessary step in the 3D reconstruction procedure, even the most robust dense matching techniques are not guaranteed to work for underwater images because of the challenging aquatic environment. In this thesis, in addition to a detailed introduction to and study of the key components of building 3D models from optical images, a robust modified quasi-dense matching algorithm for underwater images, based on correspondence propagation and adaptive least-squares matching, is proposed and applied to several typical underwater image datasets. The experiments demonstrate the robustness and good performance of the proposed matching approach.
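
    Correspondence propagation itself can be sketched briefly: seed matches are expanded to neighboring pixels, and candidates are kept when a similarity score clears a threshold. The skeleton below is a generic best-first version using zero-mean normalized cross-correlation, not the thesis's adaptive least-squares variant, and the constant-disparity image pair is synthetic.

        # Generic best-first skeleton of correspondence propagation (not the
        # thesis's adaptive least-squares variant): seed matches grow to
        # neighboring pixels whenever the zero-mean normalized cross-
        # correlation (ZNCC) of local patches clears a threshold.
        import heapq
        import numpy as np

        def zncc(a, b, eps=1e-8):
            """ZNCC of equal-size patches; 1.0 means identical up to gain."""
            a = a - a.mean()
            b = b - b.mean()
            return float((a * b).sum() /
                         (np.linalg.norm(a) * np.linalg.norm(b) + eps))

        def propagate(left, right, seeds, win=5, thresh=0.8):
            """Grow matches outward from seed pairs ((xl, yl), (xr, yr))."""
            r = win // 2
            def patch(im, x, y):
                return im[y - r:y + r + 1, x - r:x + r + 1]
            heap = [(-1.0, pl, pr) for pl, pr in seeds]
            matched, out = set(), []
            while heap:
                score, (xl, yl), (xr, yr) = heapq.heappop(heap)
                if (xl, yl) in matched:
                    continue
                matched.add((xl, yl))
                out.append(((xl, yl), (xr, yr), -score))
                for dx in (-1, 0, 1):            # explore the neighborhood,
                    for dy in (-1, 0, 1):        # assuming similar disparity
                        nl, nr = (xl + dx, yl + dy), (xr + dx, yr + dy)
                        pa, pb = patch(left, *nl), patch(right, *nr)
                        if pa.shape == pb.shape == (win, win):
                            s = zncc(pa, pb)
                            if s > thresh:
                                heapq.heappush(heap, (-s, nl, nr))
            return out

        left = np.random.default_rng(2).random((40, 40))
        right = np.roll(left, 2, axis=1)         # constant 2-px disparity
        print(len(propagate(left, right, [((20, 20), (22, 20))])))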

    Refractive Structure-From-Motion Through a Flat Refractive Interface

    Recovering 3D scene geometry from underwater images involves the Refractive Structure-from-Motion (RSfM) problem, in which the image distortions caused by light refraction at the interface between different propagation media invalidate the single-viewpoint assumption. Direct use of the pinhole camera model in RSfM leads to inaccurate camera pose estimation and, consequently, to drift. RSfM methods have been thoroughly studied for the case of a thick glass interface, which assumes two refractive interfaces between the camera and the viewed scene. When the camera lens is in direct contact with the water, however, there is only one refractive interface. By explicitly considering a single refractive interface, we develop a succinct derivation of the refractive fundamental matrix in the form of the generalised epipolar constraint for an axial camera. We use the refractive fundamental matrix to refine initial pose estimates obtained by assuming the pinhole model. This strategy allows us to robustly estimate underwater camera poses where other methods suffer from high sensitivity to noise. We also formulate a new four-view constraint enforcing camera pose consistency along a video, which leads to a novel RSfM framework. For validation, we use synthetic data to show the numerical properties of our method, and we provide results on real data to demonstrate performance within laboratory settings and for applications in endoscopy.
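
    The physical model underlying RSfM is the refraction of each camera ray at the flat interface, governed by Snell's law; a minimal sketch of its standard vector form follows (a generic illustration, not this paper's estimator), with assumed refractive indices and ray directions.

        # Sketch of the physical model underlying RSfM: a camera ray
        # refracting once at a flat air/water interface via Snell's law, in
        # its standard vector form. Indices and directions are illustrative.
        import numpy as np

        N_AIR, N_WATER = 1.0, 1.33

        def refract(d, n, n1=N_AIR, n2=N_WATER):
            """Refract direction d at a plane with normal n pointing back
            toward the camera; returns None on total internal reflection."""
            d = d / np.linalg.norm(d)
            n = n / np.linalg.norm(n)
            cos_i = -np.dot(n, d)
            r = n1 / n2
            k = 1.0 - r**2 * (1.0 - cos_i**2)
            if k < 0:
                return None                  # total internal reflection
            return r * d + (r * cos_i - np.sqrt(k)) * n

        # A ray 30 degrees off the interface normal bends to ~22.1 degrees:
        d = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
        t = refract(d, np.array([0.0, 0.0, -1.0]))
        print(np.degrees(np.arcsin(np.hypot(t[0], t[1]))))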