
    A Robust Quasi-dense Matching Approach for Underwater Images

    While techniques for finding dense correspondences in images taken in air have achieved significant success, applying them to underwater imagery remains a serious challenge, especially in the case of “monocular stereo”, where the images constituting a stereo pair are acquired asynchronously. This is largely due to the poor image quality inherent to imaging in aquatic environments (blurriness, range-dependent brightness and color variations, time-varying water-column disturbances, etc.). The goal of this research is to develop a technique that yields the maximal number of successful matches (conjugate points) in two overlapping images. We propose a quasi-dense matching approach that works reliably for underwater imagery. The proposed approach starts with a sparse set of highly robust matches (seeds) and expands pair-wise matches into their neighborhoods. Adaptive Least Squares Matching (ALSM) is used during the search process to establish new matches, increasing the robustness of the solution and avoiding mismatches. Experiments on a typical underwater image dataset demonstrate promising results.
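    The seed-and-expand idea can be illustrated with a short sketch. The Python code below is a minimal illustration, not the authors' implementation: it scores candidate matches with plain ZNCC correlation in place of ALSM, and the function name, window size, search radius, and threshold are all assumptions. Seeds are expanded best-first into their 4-neighborhoods, and a new match is accepted only if its correlation exceeds the threshold.

    import heapq
    import numpy as np

    def propagate_matches(left, right, seeds, win=5, thresh=0.6, search=2):
        # Best-first propagation of sparse seed matches into a quasi-dense set.
        # seeds: iterable of (x1, y1, x2, y2) integer pixel correspondences.
        h, w = left.shape

        def patch(img, x, y):
            return img[y - win:y + win + 1, x - win:x + win + 1].astype(float)

        def zncc(p, q):
            # Zero-mean normalized cross-correlation between two patches.
            p, q = p - p.mean(), q - q.mean()
            d = np.sqrt((p * p).sum() * (q * q).sum())
            return (p * q).sum() / d if d > 0 else -1.0

        def ok(x, y):
            return win <= x < w - win and win <= y < h - win

        heap, done, matches = [], np.zeros((h, w), bool), []
        for x1, y1, x2, y2 in seeds:
            if ok(x1, y1) and ok(x2, y2):
                s = zncc(patch(left, x1, y1), patch(right, x2, y2))
                heapq.heappush(heap, (-s, x1, y1, x2, y2))

        while heap:
            _, x1, y1, x2, y2 = heapq.heappop(heap)
            if done[y1, x1]:
                continue
            done[y1, x1] = True
            matches.append((x1, y1, x2, y2))
            # Expand each accepted match into its 4-neighborhood, searching a
            # small window around the disparity implied by the current match.
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                u1, v1 = x1 + dx, y1 + dy
                if not ok(u1, v1) or done[v1, u1]:
                    continue
                best = (-1.0, None, None)
                for sx in range(-search, search + 1):
                    for sy in range(-search, search + 1):
                        u2, v2 = x2 + dx + sx, y2 + dy + sy
                        if ok(u2, v2):
                            s = zncc(patch(left, u1, v1), patch(right, u2, v2))
                            if s > best[0]:
                                best = (s, u2, v2)
                if best[0] >= thresh:
                    heapq.heappush(heap, (-best[0], u1, v1, best[1], best[2]))
        return matches

    Processing matches best-first (highest correlation first) is one simple way to favor reliable expansions and reduce the chance of propagating a mismatch into its neighborhood.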

    Neural 3D Mesh Renderer

    For modeling the 3D world behind 2D images, which 3D representation is most appropriate? A polygon mesh is a promising candidate for its compactness and geometric properties. However, it is not straightforward to model a polygon mesh from 2D images using neural networks, because the conversion from a mesh to an image, or rendering, involves a discrete operation called rasterization, which prevents back-propagation. Therefore, in this work, we propose an approximate gradient for rasterization that enables the integration of rendering into neural networks. Using this renderer, we perform single-image 3D mesh reconstruction with silhouette image supervision, and our system outperforms the existing voxel-based approach. Additionally, we perform gradient-based 3D mesh editing operations, such as 2D-to-3D style transfer and 3D DeepDream, with 2D supervision for the first time. These applications demonstrate the potential of integrating a mesh renderer into neural networks and the effectiveness of our proposed renderer.
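    The core difficulty, a rasterization step whose exact gradient is zero almost everywhere, can be illustrated in one dimension. The PyTorch sketch below is a toy analogy, not the paper's renderer: the forward pass is a hard step function (a "pixel" lights up when a scalar vertex coordinate crosses it), and the backward pass substitutes the derivative of a sigmoid relaxation so a gradient can flow back to the vertex. The class name, sharpness constant, and training loop are assumptions made for the illustration.

    import torch

    class ToyRasterize(torch.autograd.Function):
        # Forward: pixel at px is lit (1.0) iff vertex coordinate v >= px.
        # Backward: surrogate gradient from a sigmoid relaxation of the step.
        sharpness = 10.0

        @staticmethod
        def forward(ctx, v, px):
            ctx.save_for_backward(v, px)
            return (v >= px).float()

        @staticmethod
        def backward(ctx, grad_out):
            v, px = ctx.saved_tensors
            k = ToyRasterize.sharpness
            s = torch.sigmoid(k * (v - px))
            surrogate = k * s * (1.0 - s)   # d/dv of the sigmoid relaxation
            return grad_out * surrogate, None

    # Usage: gradient descent nudges the vertex until the pixel at 0.5 turns on,
    # which would be impossible with the exact (zero) gradient of the step.
    v = torch.tensor(0.2, requires_grad=True)
    px = torch.tensor(0.5)
    for _ in range(200):
        pixel = ToyRasterize.apply(v, px)
        loss = (pixel - 1.0) ** 2           # we want the pixel lit
        loss.backward()
        with torch.no_grad():
            v -= 0.1 * v.grad
            v.grad.zero_()
    print(float(v))                          # v has moved past 0.5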

    Effects of virtual acoustics on dynamic auditory distance perception

    Sound propagation encompasses various acoustic phenomena, including reverberation. Current virtual acoustic methods, ranging from parametric filters to physically accurate solvers, can simulate reverberation with varying degrees of fidelity. We investigate the effects of reverberant sounds generated using different propagation algorithms on acoustic distance perception, i.e., how far away humans perceive a sound source to be. In particular, we evaluate two classes of methods for real-time sound propagation in dynamic scenes, based on parametric filters and on ray tracing. Our study shows that the more accurate method exhibits less distance compression than the approximate, filter-based method. This suggests that accurate reverberation in VR results in a better reproduction of acoustic distances. We also quantify the levels of distance compression introduced by the different propagation methods in a virtual environment.
    Comment: 8 pages, 7 figures
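    One common way to quantify distance compression is to fit a power law between actual and perceived source distance; an exponent below one indicates that far distances are underestimated. The Python sketch below is only an illustration of that idea: the fitting model, function name, and example judgments are hypothetical and not taken from the study.

    import numpy as np

    def distance_compression(actual, perceived):
        # Fit perceived ≈ k * actual**a by linear regression in log-log space.
        # The slope a is the compression exponent (a < 1 => compression).
        x = np.log(np.asarray(actual, dtype=float))
        y = np.log(np.asarray(perceived, dtype=float))
        a, log_k = np.polyfit(x, y, 1)
        return a, np.exp(log_k)

    # Hypothetical judgments in which responses fall short of the true distances.
    actual    = [1.0, 2.0, 4.0, 8.0, 16.0]
    perceived = [1.0, 1.7, 2.9, 4.8, 7.9]
    a, k = distance_compression(actual, perceived)
    print(f"exponent a = {a:.2f}, gain k = {k:.2f}")   # a < 1 indicates compression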