    Refractive Structure-From-Motion Through a Flat Refractive Interface

    Recovering 3D scene geometry from underwater images involves the Refractive Structure-from-Motion (RSfM) problem, where the image distortions caused by light refraction at the interface between different propagation media invalidate the single-viewpoint assumption. Direct use of the pinhole camera model in RSfM leads to inaccurate camera pose estimation and, consequently, to drift. RSfM methods have been thoroughly studied for the case of a thick glass interface, which assumes two refractive interfaces between the camera and the viewed scene. On the other hand, when the camera lens is in direct contact with the water, there is only one refractive interface. By explicitly considering a refractive interface, we develop a succinct derivation of the refractive fundamental matrix in the form of the generalised epipolar constraint for an axial camera. We use the refractive fundamental matrix to refine initial pose estimates obtained by assuming the pinhole model. This strategy allows us to robustly estimate underwater camera poses where other methods suffer from high sensitivity to noise. We also formulate a new four-view constraint enforcing camera pose consistency along a video, which leads us to a novel RSfM framework. For validation we use synthetic data to show the numerical properties of our method, and we provide results on real data to demonstrate performance within laboratory settings and for applications in endoscopy.
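The refraction that breaks the single-viewpoint assumption can be illustrated with Snell's law in vector form. The sketch below is a minimal illustration in plain Python (the helper name `refract` and the 30-degree example ray are ours, not the paper's), tracing a ray across a flat air-water interface:

```python
import math

def refract(d, n, n1, n2):
    """Bend unit direction d at an interface with unit normal n
    (pointing back toward the incident medium), using the vector form
    of Snell's law: t = r*d + (r*cos_i - cos_t)*n, with r = n1/n2."""
    r = n1 / n2
    cos_i = -sum(di * ni for di, ni in zip(d, n))
    sin_t_sq = r * r * (1.0 - cos_i * cos_i)
    if sin_t_sq > 1.0:
        return None  # total internal reflection
    cos_t = math.sqrt(1.0 - sin_t_sq)
    return tuple(r * di + (r * cos_i - cos_t) * ni for di, ni in zip(d, n))

# A camera ray leaving air at 30 degrees off the port normal enters water
# (n = 1.33) and bends toward the normal -- the pinhole model ignores this.
theta_air = math.radians(30.0)
d = (math.sin(theta_air), 0.0, math.cos(theta_air))
t = refract(d, (0.0, 0.0, -1.0), 1.0, 1.33)
theta_water = math.asin(t[0])  # bent toward the normal, per Snell's law
```

Because every pixel's ray bends by a different amount, the bent rays no longer pass through one common center of projection, which is exactly why an axial camera model and a refractive fundamental matrix are needed.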

    Underwater 3D Reconstruction Based on Physical Models for Refraction and Underwater Light Propagation

    In recent years, underwater imaging has gained a lot of popularity, partly due to the availability of off-the-shelf consumer cameras, but also due to a growing interest in the ocean floor from science and industry. Apart from capturing single images or sequences, applying methods from the area of computer vision has gained interest as well. However, water affects image formation in two major ways. First, while traveling through the water, light is attenuated and scattered depending on its wavelength, causing the typical strong green or blue hue in underwater images. Second, cameras used in underwater scenarios need to be confined in an underwater housing, viewing the scene through a flat or dome-shaped glass port. The inside of the housing is filled with air. Consequently, light entering the housing needs to pass a water-glass interface and then a glass-air interface, and is thus refracted twice, which affects underwater image formation geometrically. In classic Structure-from-Motion (SfM) approaches, the perspective camera model is usually assumed; however, it can be shown that it becomes invalid due to refraction in underwater scenarios. Therefore, this thesis proposes an adaptation of the SfM algorithm to underwater image formation with flat-port underwater housings, i.e., it introduces a method in which refraction at the underwater housing is modeled explicitly. This includes a calibration approach, algorithms for relative and absolute pose estimation, an efficient, non-linear error function that is utilized in bundle adjustment, and a refractive plane sweep algorithm. Finally, if calibration data for an underwater light propagation model exists, the dense depth maps can be used to correct texture colors.
    Experiments with a perspective and the proposed refractive approach to 3D reconstruction revealed that the perspective approach does indeed suffer from a systematic model error that depends on the distance between camera and glass and on any tilt of the glass with respect to the image sensor. The proposed method shows no such systematic error and thus provides more accurate results for underwater image sequences.
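The systematic model error can be reproduced with a few lines of geometry: back-projecting rays from a submerged point through a flat port yields a different apparent (in-air) depth at every viewing angle, so no single pinhole viewpoint fits all rays. A minimal sketch in plain Python (flat-port geometry with the glass thickness neglected; function name and numbers are illustrative, not from the thesis):

```python
import math

N_WATER = 1.33  # refractive index of water; glass thickness neglected

def apparent_depth(true_depth, theta_water_deg):
    """Depth at which the in-air extension of a refracted ray crosses the
    optical axis, for an on-axis point at true_depth below a flat port.
    Paraxially this tends to true_depth / N_WATER. Angles must stay below
    the water-to-air critical angle (~48.8 degrees)."""
    tw = math.radians(theta_water_deg)
    x = true_depth * math.tan(tw)   # where the ray meets the port
    sin_a = N_WATER * math.sin(tw)  # Snell's law, water -> air
    ta = math.asin(sin_a)           # refracted angle in air
    return x / math.tan(ta)

depths = [apparent_depth(1.0, a) for a in (1, 10, 20, 30)]
# The apparent depth shrinks as the viewing angle grows, so back-projected
# rays do not meet in one center of projection: a pinhole fit must absorb
# this as a systematic, distance-dependent error.
```

The same construction shows why the error grows with the camera-to-glass distance and with port tilt: both change the angles at which rays cross the interface.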

    Second-harmonic generation with Bessel beams

    We present the results of a numerical simulation tool for modeling the second-harmonic generation (SHG) interaction experienced by a diffracting beam. This code is used to study the simultaneous frequency and spatial-profile conversion of a truncated Bessel beam that closely resembles a higher-order mode (HOM) of an optical fiber. SHG with Bessel beams has been investigated in the past and was determined to have limited value because it is less efficient than SHG with a Gaussian beam in the undepleted-pump regime. This thesis considers, for the first time to the best of our knowledge, whether most of the power from a Bessel-like beam could be converted into a second-harmonic beam (full depletion), as is the case with a Gaussian beam. We study this problem because using HOMs for fiber lasers and amplifiers allows reduced optical intensities, which mitigates nonlinearities and is one possible way to increase the available output powers of fiber laser systems. The chief disadvantage of using HOM fiber amplifiers is the spatial profile of the output, but this can be transformed as part of the SHG interaction, most notably to a quasi-Gaussian profile when the phase mismatch meets the noncollinear criterion. We predict, based on numerical simulation, that noncollinear SHG (NC-SHG) can simultaneously perform highly efficient (90%) wavelength conversion from 1064 nm to 532 nm and concurrent mode transformation from a truncated Bessel beam to a Gaussian-like beam (94% overlap with a Gaussian) at modest input powers (250 W peak power or continuous-wave operation). These simulated results reveal two attractive features: the feasibility of efficiently converting HOMs of fibers into Gaussian-like beams, and the ability to simultaneously perform frequency conversion.
    Combining the high powers that are possible with HOM fiber amplifiers with access to non-traditional wavelengths may offer significant advantages over the state of the art for many important applications, including underwater communications, laser guide stars, and theater projectors.
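The central role of the phase mismatch can be illustrated with the textbook plane-wave, undepleted-pump result, where the second-harmonic yield scales as L² sinc²(Δk·L/2). The sketch below is that standard formula only, not the thesis's full diffraction simulation, and the 1 cm length is an arbitrary illustrative value:

```python
import math

def shg_relative_efficiency(delta_k, length):
    """Relative second-harmonic yield for a plane wave in the
    undepleted-pump limit: proportional to (L * sinc(dk*L/2))^2.
    Absolute efficiency would also require d_eff, refractive
    indices and the pump intensity."""
    x = 0.5 * delta_k * length
    sinc = 1.0 if x == 0.0 else math.sin(x) / x
    return (length * sinc) ** 2

L = 0.01  # 1 cm interaction length, illustrative only
perfect = shg_relative_efficiency(0.0, L)              # phase matched
detuned = shg_relative_efficiency(2 * math.pi / L, L)  # first null: dk*L = 2*pi
```

Noncollinear phase matching exploits this same Δk dependence geometrically: tilting the interacting wavevectors changes the effective mismatch, which is what couples the frequency conversion to the spatial-mode transformation described above.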

    Visualising scattering underwater acoustic fields using laser Doppler vibrometry

    Analysis of acoustic wavefronts is important for a number of engineering-design, communication and health-related reasons, and it is very desirable to be able to understand the interaction of acoustic fields and energy with obstructions. Experimental analysis of acoustic wavefronts in water has traditionally been completed with single piezoelectric or magnetostrictive transducers or hydrophones, or with arrays of them. These have been very successful, but the presence of transducers within the acoustic region can in some circumstances be undesirable. The research reported here describes the novel application of scanning laser Doppler vibrometry to the analysis of underwater acoustic wavefronts impinging on obstructions of circular cross-section. The results demonstrate that this new non-invasive acoustic measurement technique can successfully visualise and measure reflected acoustic fields, diffraction and refraction effects.

    Field deployable dynamic lighting system for turbid water imaging

    Submitted in partial fulfillment of the requirements for the degree of Master of Science at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, September 2011. The ocean depths provide an ever-changing and complex imaging environment. As scientists and researchers strive to document and study more remote and optically challenging areas, specifically scatter-limited environments, there is a need for new illumination systems that both improve image quality and increase imaging distance. One of the most constraining influences on underwater image quality is scattering caused by ocean chemistry and entrained organic material. By reducing the size of the scatter interaction volume, one can immediately improve both the focus (forward-scatter limited) and contrast (backscatter limited) of underwater images. This thesis describes a relatively simple, cost-effective and field-deployable low-power dynamic lighting system that minimizes the scatter interaction volume, with both subjective and quantifiable improvements in imaging performance.
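The "scatter interaction volume" argument can be made concrete with a rough Monte Carlo sketch: estimate the water volume that is both lit by the source and seen by the camera, and watch it shrink as the source is moved away from the camera. The geometry below (two parallel cones, hypothetical half-angles and distances) is our illustration, not the thesis's optical design:

```python
import math
import random

random.seed(0)

def in_cone(p, apex, half_angle):
    """Point-in-cone test for a cone opening along +z from `apex`."""
    dz = p[2] - apex[2]
    if dz <= 0:
        return False
    return math.hypot(p[0] - apex[0], p[1] - apex[1]) <= dz * math.tan(half_angle)

def overlap_volume(separation, half_angle=math.radians(20), depth=5.0, n=40000):
    """Monte Carlo estimate of the volume lit by the source AND seen by
    the camera, i.e. the backscatter interaction volume."""
    rmax = depth * math.tan(half_angle)
    x_lo, x_hi = -rmax, separation + rmax
    box_vol = (x_hi - x_lo) * (2 * rmax) * depth
    cam, light = (0.0, 0.0, 0.0), (separation, 0.0, 0.0)
    hits = 0
    for _ in range(n):
        p = (random.uniform(x_lo, x_hi),
             random.uniform(-rmax, rmax),
             random.uniform(0.0, depth))
        if in_cone(p, cam, half_angle) and in_cone(p, light, half_angle):
            hits += 1
    return box_vol * hits / n

near = overlap_volume(0.1)  # light nearly collocated with the camera
far = overlap_volume(3.0)   # light offset by 3 m
# far < near: separating source and camera shrinks the shared volume
# in which backscatter can occur, improving image contrast.
```

A dynamic lighting system takes this one step further by steering and shaping the beam so that only the imaged region is illuminated at any instant.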

    Optical identification of sea-mines - Gated viewing three-dimensional laser radar


    A virtual object point model for the calibration of underwater stereo cameras to recover accurate 3D information

    The focus of this thesis is on recovering accurate 3D information from underwater images. Underwater 3D reconstruction differs significantly from 3D reconstruction in air due to the refraction of light. In this thesis, the concepts of stereo 3D reconstruction in air are extended to underwater environments by explicitly considering refractive effects with the aid of a virtual object point model. Within underwater stereo 3D reconstruction, the focus is on the refractive calibration of underwater stereo cameras.

    Plenoptic Signal Processing for Robust Vision in Field Robotics

    This thesis proposes the use of plenoptic cameras for improving the robustness and simplicity of machine vision in field robotics applications. Dust, rain, fog, snow, murky water and insufficient light can cause even the most sophisticated vision systems to fail. Plenoptic cameras offer an appealing alternative to conventional imagery by gathering significantly more light over a wider depth of field, and capturing a rich 4D light field structure that encodes textural and geometric information. The key contributions of this work lie in exploring the properties of plenoptic signals and developing algorithms for exploiting them. It lays the groundwork for the deployment of plenoptic cameras in field robotics by establishing a decoding, calibration and rectification scheme appropriate to compact, lenslet-based devices. Next, the frequency-domain shape of plenoptic signals is elaborated and exploited by constructing a filter which focuses over a wide depth of field rather than at a single depth. This filter is shown to reject noise, improving contrast in low light and through attenuating media, while mitigating occluders such as snow, rain and underwater particulate matter. Next, a closed-form generalization of optical flow is presented which directly estimates camera motion from first-order derivatives. An elegant adaptation of this "plenoptic flow" to lenslet-based imagery is demonstrated, as well as a simple, additive method for rendering novel views. Finally, the isolation of dynamic elements from a static background is considered, a task complicated by the non-uniform apparent motion caused by a mobile camera. Two elegant closed-form solutions are presented, dealing with monocular time-series and light field image pairs. This work emphasizes non-iterative, noise-tolerant, closed-form, linear methods with predictable and constant runtimes, making them suitable for real-time embedded implementation in field robotics applications.
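The simplest spatial-domain relative of the depth-selective filtering described above is shift-and-sum refocusing: sub-aperture views are translated in proportion to their aperture position and averaged, reinforcing content at the chosen depth while spreading out everything else. A toy sketch of that general idea (a synthetic 1D-plus-aperture light field in plain Python; sizes and names are ours, and this is not the thesis's frequency-domain implementation):

```python
def refocus(lf, slope):
    """Shift-and-sum refocus of a light field lf[u][x]: shift each
    sub-aperture view by slope*u pixels about the central view, then
    average. Content whose disparity matches `slope` adds coherently."""
    n_u, width = len(lf), len(lf[0])
    out = [0.0] * width
    for u, view in enumerate(lf):
        shift = int(round(slope * (u - n_u // 2)))
        for x in range(width):
            src = x + shift
            if 0 <= src < width:
                out[x] += view[src] / n_u
    return out

# Synthetic light field: one scene point at x=16 with 2 px/view disparity
n_u, width, disparity = 9, 32, 2
lf = [[0.0] * width for _ in range(n_u)]
for u in range(n_u):
    lf[u][16 + disparity * (u - n_u // 2)] = 1.0

focused = refocus(lf, disparity)  # views align: sharp peak at x=16
defocused = refocus(lf, 0)        # no alignment: energy spread thin
```

A volumetric focus filter generalizes this by accepting a range of slopes at once, which is what lets it keep a wide depth of field in focus while still rejecting noise and occluders that do not fit any accepted depth.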
