94,417 research outputs found

    Perception of Motion and Architectural Form: Computational Relationships between Optical Flow and Perspective

    Full text link
    Perceptual geometry refers to interdisciplinary research whose objective is the study of geometry from the perspective of visual perception and, in turn, the application of such geometric findings to the ecological study of vision. Perceptual geometry attempts to answer fundamental questions in the perception of form and the representation of space through a synthesis of cognitive and biological theories of visual perception with geometric theories of the physical world. Perception of form, space, and motion are among the fundamental problems in vision science. In cognitive and computational models of human perception, theories for modeling motion are treated separately from models for the perception of form.
    Comment: 10 pages, 13 figures, submitted and accepted to the DoCEIS'2012 conference: http://www.uninova.pt/doceis/doceis12/home/home.ph
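    The computational relationship between observer motion and the resulting perspective image deformation that this line of work studies can be illustrated with the classic flow equations for a translating camera. This is a minimal sketch (not code from the paper; the function name is illustrative), using the standard result that perspective projection x = X/Z turns a camera translation into an image flow field that vanishes at the focus of expansion and scales with inverse depth:

```python
def translational_flow(x, y, Z, T):
    """Instantaneous optical flow (u, v) at normalized image point (x, y)
    induced by camera translation T = (Tx, Ty, Tz) past a scene point at
    depth Z, derived from the perspective projection x = X/Z, y = Y/Z."""
    Tx, Ty, Tz = T
    u = (-Tx + x * Tz) / Z
    v = (-Ty + y * Tz) / Z
    return u, v

# The flow vanishes at the focus of expansion (Tx/Tz, Ty/Tz), and its
# magnitude falls off as 1/Z: nearer points move faster (motion parallax).
T = (0.2, -0.1, 1.0)
print(translational_flow(0.2, -0.1, 5.0, T))   # → (0.0, 0.0) at the FOE
```

    The 1/Z dependence is the geometric reason deforming optical patterns carry depth information at all: depth enters the image motion even though it is lost in any single perspective view.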

    Optical Flow in Mostly Rigid Scenes

    Full text link
    The optical flow of natural scenes is a combination of the motion of the observer and the independent motion of objects. Existing algorithms typically focus on either recovering motion and structure under the assumption of a purely static world or optical flow for general unconstrained scenes. We combine these approaches in an optical flow algorithm that estimates an explicit segmentation of moving objects from appearance and physical constraints. In static regions we take advantage of strong constraints to jointly estimate the camera motion and the 3D structure of the scene over multiple frames. This allows us to also regularize the structure instead of the motion. Our formulation uses a Plane+Parallax framework, which works even under small baselines, and reduces the motion estimation to a one-dimensional search problem, resulting in more accurate estimation. In moving regions the flow is treated as unconstrained, and computed with an existing optical flow method. The resulting Mostly-Rigid Flow (MR-Flow) method achieves state-of-the-art results on both the MPI-Sintel and KITTI-2015 benchmarks.
    Comment: 15 pages, 10 figures; accepted for publication at CVPR 201
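    The reduction to a one-dimensional search mentioned in the abstract comes from the Plane+Parallax geometry: after the two frames are aligned by the homography of a reference plane, the residual flow at each pixel is constrained to the line through that pixel and the epipole, so only one scalar per pixel remains unknown. A rough sketch of that geometric step (not the paper's implementation; names are illustrative), recovering the scalar in closed form from a known residual:

```python
import numpy as np

def estimate_parallax(p_warped, p_observed, epipole):
    """After warping frame 2 toward frame 1 with the homography of the
    reference plane, the residual ('parallax') flow at a pixel lies on the
    line through the warped point and the epipole. Flow estimation thus
    reduces to a 1D search for one scalar gamma per pixel; here gamma is
    found by least-squares projection of the observed residual onto the
    epipolar direction."""
    d = epipole - p_warped        # epipolar line direction at this pixel
    r = p_observed - p_warped     # observed residual flow
    gamma = float(d @ r) / float(d @ d)
    return gamma, p_warped + gamma * d

# A residual generated along the epipolar line is recovered exactly.
p_w = np.array([0.3, 0.4])
e = np.array([1.0, -0.5])
p_obs = p_w + 0.25 * (e - p_w)
gamma, p_fit = estimate_parallax(p_w, p_obs, e)
```

    In the actual method the scalar is searched photometrically over image patches; the point of the sketch is only that the unknown per pixel is one number, not a 2D vector.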

    Three dimensional transparent structure segmentation and multiple 3D motion estimation from monocular perspective image sequences

    Get PDF
    A three-dimensional scene can be segmented using different cues, such as boundaries, texture, motion, discontinuities of the optical flow, stereo, models for structure, etc. We investigate segmentation based upon one of these cues, namely three-dimensional motion. If the scene contains transparent objects, the two-dimensional (local) cues are inconsistent, since neighboring points with similar optical flow can correspond to different objects. We present a method for performing three-dimensional motion-based segmentation of (possibly) transparent scenes, together with recursive estimation of the motion of each independent rigid object, from monocular perspective images. Our algorithm is based on a recently proposed method for rigid-motion reconstruction and a validation test that allows us to initialize the scheme and detect outliers during the motion estimation procedure. The scheme is tested on challenging real and synthetic image sequences. Segmentation is performed for Ullman's experiment of two transparent cylinders rotating about the same axis in opposite directions.
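    The core idea, that 3D rigid-motion models can separate transparent surfaces where local 2D cues cannot, can be sketched on the two-cylinder configuration the abstract mentions. This toy example is not the paper's recursive estimator: it assumes the two motion hypotheses are already known and merely assigns each tracked point to the hypothesis with the smaller reprojection residual (all names are illustrative):

```python
import numpy as np

def rot_y(theta):
    """Rotation about the vertical (y) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def project(X, f=1.0):
    """Perspective projection of a 3D point in camera coordinates."""
    return f * X[:2] / X[2]

def segment_by_motion(points, observed, motions):
    """Assign each hypothesized 3D point to the rigid-motion model whose
    predicted projection best matches the observed image position. This is
    the 3D cue that disambiguates transparent overlap, where purely local
    2D cues mix both motions."""
    labels = []
    for X, p in zip(points, observed):
        residuals = [np.linalg.norm(project(M(X)) - p) for M in motions]
        labels.append(int(np.argmin(residuals)))
    return labels

# Two transparent cylinders sharing a vertical axis 5 units in front of
# the camera, rotating in opposite directions (Ullman's experiment).
axis = np.array([0.0, 0.0, 5.0])
motions = [lambda X: rot_y(0.2) @ (X - axis) + axis,
           lambda X: rot_y(-0.2) @ (X - axis) + axis]
phis = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
inner = [np.array([np.sin(p), 0.3, 5.0 + np.cos(p)]) for p in phis]
outer = [np.array([1.5 * np.sin(p), -0.3, 5.0 + 1.5 * np.cos(p)]) for p in phis]
observed = [project(motions[0](X)) for X in inner] + \
           [project(motions[1](X)) for X in outer]
labels = segment_by_motion(inner + outer, observed, motions)
```

    Every point is labeled correctly even though points from both cylinders project to overlapping image regions, which is exactly the situation in which local 2D flow cues fail.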

    Vision-Based Navigation III: Pose and Motion from Omnidirectional Optical Flow and a Digital Terrain Map

    Full text link
    An algorithm for pose and motion estimation using corresponding features in omnidirectional images and a digital terrain map is proposed. In a previous paper, such an algorithm was considered for a regular camera. Using a Digital Terrain (or Digital Elevation) Map (DTM/DEM) as a global reference enables recovering the absolute position and orientation of the camera. To do this, the DTM is used to formulate a constraint between corresponding features in two consecutive frames. In this paper, these constraints are extended to handle non-central projection, as is the case with many omnidirectional systems. The use of omnidirectional data is shown to improve the robustness and accuracy of the navigation algorithm. The feasibility of this algorithm is established through lab experimentation with two kinds of omnidirectional acquisition systems: the first is a polydioptric camera, while the second is a catadioptric camera.
    Comment: 6 pages, 9 figure
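    The role the DTM plays as a global reference can be illustrated by the basic geometric operation it enables: intersecting a viewing ray with the terrain surface to anchor an image feature to an absolute world position. The paper's two-frame constraint is more involved; this sketch shows only a naive ray-marching intersection against a height function, with all names illustrative:

```python
import numpy as np

def intersect_ray_dtm(origin, direction, dtm, t_max=100.0, step=0.01):
    """March along a viewing ray from the camera until it drops below the
    terrain surface given by the height function dtm(x, y). The resulting
    ground point ties an image feature to an absolute (world-frame)
    position, which is what lets the DTM constrain camera pose."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    t = step
    while t < t_max:
        p = origin + t * direction
        if p[2] <= dtm(p[0], p[1]):
            return p
        t += step
    return None  # ray never hit the terrain within t_max

# Flat terrain at z = 0; camera 10 units up, looking 45 degrees down.
hit = intersect_ray_dtm([0.0, 0.0, 10.0], [1.0, 0.0, -1.0],
                        lambda x, y: 0.0)
```

    Fixed-step marching is the simplest possible scheme; real systems refine the crossing with bisection and use gridded elevation data rather than an analytic height function.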

    Logarithmic intensity and speckle-based motion contrast methods for human retinal vasculature visualization using swept source optical coherence tomography

    Get PDF
    We formulate a theory to show that the statistics of OCT signal amplitude and intensity are highly dependent on the sample reflectivity strength, motion, and noise power. Our theoretical and experimental results show that speckle amplitude and intensity contrasts are insufficient to differentiate regions of motion from static areas. Two logarithmic intensity-based contrasts, logarithmic intensity variance (LOGIV) and differential logarithmic intensity variance (DLOGIV), are proposed to serve as surrogate markers for motion with enhanced sensitivity. Our findings demonstrate good agreement between the theoretical and experimental results for logarithmic intensity-based contrasts. Logarithmic intensity-based and speckle-based motion contrast methods are validated and compared for in vivo human retinal vasculature visualization using high-speed swept-source optical coherence tomography (SS-OCT) at 1060 nm. The vasculature was identified as regions of motion by creating LOGIV and DLOGIV tomograms: multiple B-scans were collected of individual slices through the retina, and the variance of logarithmic intensities and of differences of logarithmic intensities was calculated. Both methods captured the small vessels and the meshwork of capillaries associated with the inner retina in en face images over 4 mm^2 in a normal subject.
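    The two proposed contrasts are, as described, simple per-pixel statistics over repeated B-scans of the same slice. A minimal sketch of both, assuming the data are already motion-registered intensity stacks (the toy numbers below are invented, not from the paper):

```python
import numpy as np

def logiv(bscans):
    """Logarithmic intensity variance: per-pixel variance of log intensity
    across repeated B-scans of one slice. Flow decorrelates the OCT signal
    between acquisitions, raising the variance relative to static tissue."""
    return np.var(np.log(bscans), axis=0)

def dlogiv(bscans):
    """Differential LOGIV: variance of frame-to-frame differences of log
    intensity, which additionally suppresses slow drifts of the static
    signal."""
    return np.var(np.diff(np.log(bscans), axis=0), axis=0)

# Toy pixel time series: multiplicative fluctuations are small for static
# tissue and large inside a vessel.
rng = np.random.default_rng(0)
static = 100.0 * np.exp(0.05 * rng.standard_normal(20))
vessel = 100.0 * np.exp(0.80 * rng.standard_normal(20))
```

    Working in the log domain makes the multiplicative speckle fluctuation additive, which is what gives these contrasts their sensitivity advantage over raw amplitude or intensity variance.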

    Object-based 2D-to-3D video conversion for effective stereoscopic content generation in 3D-TV applications

    Get PDF
    Three-dimensional television (3D-TV) has gained increasing popularity in the broadcasting domain, as it enables enhanced viewing experiences in comparison to conventional two-dimensional (2D) TV. However, its application has been constrained by the lack of essential content, i.e., stereoscopic videos. To alleviate this content shortage, an economical and practical solution is to reuse the huge media resources that are available in monoscopic 2D and convert them to stereoscopic 3D. Although stereoscopic video can be generated from monoscopic sequences using depth measurements extracted from cues like focus blur, motion, and size, the quality of the resulting video may be poor, as such measurements are usually arbitrarily defined and appear inconsistent with the real scenes. To help solve this problem, a novel method for object-based stereoscopic video generation is proposed which features i) optical-flow-based occlusion reasoning to determine depth order, ii) object segmentation using improved region-growing from masks of the determined depth layers, and iii) a hybrid depth estimation scheme using content-based matching (within a small library of true stereo image pairs) and depth-order-based regularization. Comprehensive experiments have validated the effectiveness of the proposed 2D-to-3D conversion method in generating stereoscopic videos with consistent depth measurements for 3D-TV applications.
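    Once a depth map has been estimated, the final step of any 2D-to-3D pipeline is rendering the second view by shifting pixels according to depth. A naive sketch of that last step (depth-image-based rendering with forward warping; not the paper's method, and all names are illustrative):

```python
import numpy as np

def render_right_view(left, depth, max_disparity=4):
    """Forward-warp a 2D frame plus per-pixel depth into a right-eye view:
    nearer pixels (depth closer to 1 here) receive a larger horizontal
    disparity. Disoccluded pixels remain -1; production systems fill such
    holes by inpainting."""
    h, w = left.shape
    right = -np.ones_like(left)
    disparity = np.round(depth * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]
            if 0 <= xr < w:
                right[y, xr] = left[y, x]
    return right

# An all-far depth map reproduces the left view; an all-near map shifts
# every pixel by the maximum disparity, leaving a disocclusion band.
left = np.arange(12.0).reshape(1, 12)
far = render_right_view(left, np.zeros_like(left))
near = render_right_view(left, np.ones_like(left))
```

    The disocclusion holes this produces are one reason consistent depth ordering matters: inconsistent depth makes the holes flicker between frames, which is the quality problem the abstract's regularization targets.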

    Volumetric microvascular imaging of human retina using optical coherence tomography with a novel motion contrast technique

    Get PDF
    Phase variance-based motion contrast imaging is demonstrated using a spectral domain optical coherence tomography system for the in vivo human retina. This contrast technique spatially identifies locations of motion within the retina primarily associated with vasculature. Histogram-based noise analysis of the motion contrast images was used to reduce the motion noise created by transverse eye motion. En face summation images created from the 3D motion contrast data are presented with segmentation of selected retinal layers to provide non-invasive vascular visualization comparable to currently used invasive angiographic imaging. This motion contrast technique has demonstrated the ability to visualize resolution-limited vasculature independent of vessel orientation and flow velocity.
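    The phase-variance contrast described here operates on the complex OCT signal rather than its intensity: flow randomizes the phase between successive B-scans, while static tissue keeps it stable. A minimal per-pixel sketch under that assumption (toy data, not from the paper; the bulk-motion correction via histogram analysis mentioned in the abstract is omitted):

```python
import numpy as np

def phase_variance(complex_bscans):
    """Phase-variance motion contrast: per-pixel variance of the phase
    difference between successive B-scans of the same slice. The product
    with the conjugate of the previous frame yields the wrapped phase
    difference at each pixel."""
    dphi = np.angle(complex_bscans[1:] * np.conj(complex_bscans[:-1]))
    return np.var(dphi, axis=0)

# Toy pixel time series: a stable phase for static tissue versus a
# uniformly random phase inside a vessel.
rng = np.random.default_rng(1)
static = np.exp(1j * (0.5 + 0.01 * rng.standard_normal(30)))
vessel = np.exp(1j * 2.0 * np.pi * rng.random(30))
```

    Because the contrast depends only on phase decorrelation, it is insensitive to vessel orientation and flow velocity, which is the property the abstract highlights.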

    Perceiving environmental structure from optical motion

    Get PDF
    Generally speaking, one of the most important sources of optical information about environmental structure is known to be the deforming optical patterns produced by the movements of the observer (pilot) or of environmental objects. As an observer moves through a rigid environment, the projected optical patterns of environmental objects are systematically transformed according to their orientations and positions in 3D space relative to those of the observer. The detailed characteristics of these deforming optical patterns carry information about the 3D structure of the objects and about their locations and orientations relative to those of the observer. The specific geometrical properties of moving images that may constitute visually detected information about the shapes and locations of environmental objects are examined.