3D Capturing with Monoscopic Camera
This article presents a new concept: using the auto-focus function of a monoscopic camera sensor to estimate depth-map information, which avoids both auxiliary equipment or human interaction and the computational complexity introduced by SfM or depth analysis. The system architecture supporting stereo image and video capture, processing, and display is discussed. A novel stereo image-pair generation algorithm using Z-buffer-based 3D surface recovery is proposed. Based on the depth map, we are able to calculate the disparity map (the distance in pixels between corresponding image points in the two views). The presented algorithm takes a single image with depth information (e.g. a z-buffer) as input and produces two images, one for the left eye and one for the right.
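The depth-to-disparity step this abstract describes can be sketched with the standard pinhole relation d = f·B/Z, followed by a naive pixel-shift view synthesis. This is a minimal illustration, not the paper's algorithm: the focal length, baseline, and hole handling below are assumed values chosen for the example.

```python
import numpy as np

def depth_to_disparity(depth, focal_px=700.0, baseline=0.06):
    """Convert a depth map (metres) to per-pixel disparity via d = f * B / Z.
    focal_px and baseline are illustrative values, not from the paper."""
    z = np.maximum(depth, 1e-6)  # guard against division by zero
    return focal_px * baseline / z

def render_stereo_pair(image, disparity):
    """Naive view synthesis: shift each pixel by +/- half its disparity
    to produce left/right views; occlusion holes are simply left at zero."""
    h, w = disparity.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    cols = np.arange(w)
    for y in range(h):
        half = (disparity[y] / 2).astype(int)
        lc = np.clip(cols + half, 0, w - 1)  # left view shifts right
        rc = np.clip(cols - half, 0, w - 1)  # right view shifts left
        left[y, lc] = image[y, cols]
        right[y, rc] = image[y, cols]
    return left, right
```

A real system would also fill disoccluded holes (e.g. by inpainting), which this sketch omits.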
Improved Depth Map Estimation from Stereo Images based on Hybrid Method
In this paper, a stereo matching algorithm based on image segments is presented. We propose a hybrid segmentation algorithm based on a combination of the Belief Propagation and Mean Shift algorithms, with the aim of refining the disparity and depth map from a stereo pair of images. The algorithm utilizes image filtering and a modified SAD (Sum of Absolute Differences) stereo matching method. Firstly, a color-based segmentation method is applied to segment the left image of the input stereo pair (the reference image) into regions. The aim of the segmentation is to simplify the representation of the image into a form that is easier to analyze and makes it possible to locate objects in the image. Secondly, the segmentation results are used as input to a local window-based matching method that determines the disparity estimate for each image pixel. The experimental results demonstrate that the final depth map can be obtained by applying the segment disparities to the original images. Experiments with standard stereo test images show that the proposed hybrid algorithm, HSAD, gives good performance.
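The local window-based SAD matching step can be sketched as a brute-force search over candidate disparities. Window size and disparity range below are illustrative choices, and the segmentation-guided refinement described in the abstract is omitted:

```python
import numpy as np

def sad_disparity(left, right, max_disp=8, win=2):
    """For each pixel of the left (reference) image, find the horizontal
    shift d minimising the Sum of Absolute Differences over a
    (2*win+1) x (2*win+1) window in the right image."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    L = left.astype(float)
    R = right.astype(float)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            patch = L[y - win:y + win + 1, x - win:x + win + 1]
            best, best_d = np.inf, 0
            for d in range(max_disp):
                cand = R[y - win:y + win + 1, x - d - win:x - d + win + 1]
                sad = np.abs(patch - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

Production stereo matchers vectorise this search (e.g. with integral images or cost volumes); the triple loop here is kept only for clarity.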
Maximum likelihood estimation of cloud height from multi-angle satellite imagery
We develop a new estimation technique for recovering depth-of-field from multiple stereo images. Depth-of-field is estimated by determining the shift in image location resulting from different camera viewpoints. When this shift is not divisible by the pixel width, the multiple stereo images can be combined to form a super-resolution image. By modeling this super-resolution image as a realization of a random field, one can view the recovery of depth as a likelihood estimation problem. We apply these modeling techniques to the recovery of cloud height from the multiple viewing angles provided by the MISR instrument on the Terra Satellite. Our efforts focus on a two-layer cloud ensemble in which both layers are relatively planar, the bottom layer is optically thick and textured, and the top layer is optically thin. Our results demonstrate that, with relative ease, we obtain estimates comparable to those of the M2 stereo matcher, the algorithm used in the current MISR standard product (details can be found in [IEEE Transactions on Geoscience and Remote Sensing 40 (2002) 1547--1559]). Moreover, our techniques offer the possibility of modeling all of the MISR data in a unified way for cloud height estimation. Research is underway to extend this framework to fast, high-quality global estimates of cloud height. Comment: Published at http://dx.doi.org/10.1214/09-AOAS243 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
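Viewing shift recovery as a likelihood estimation problem can be sketched as a grid search: under an i.i.d. Gaussian noise assumption, the log-likelihood of a candidate shift is, up to constants, the negative sum of squared residuals between the aligned views. This is an illustrative sketch only, not the M2 matcher or the authors' random-field model:

```python
import numpy as np

def ml_shift(ref, other, max_shift=10, sigma=1.0):
    """Maximum-likelihood estimate of the integer horizontal shift between
    two views, assuming i.i.d. Gaussian pixel noise: maximise
    ll(s) = -sum((ref - shifted_other)^2) / (2 * sigma^2)
    over candidate shifts s. Parameter values are illustrative."""
    h, w = ref.shape
    best_ll, best_s = -np.inf, 0
    for s in range(-max_shift, max_shift + 1):
        a = ref[:, max_shift:w - max_shift]
        b = other[:, max_shift + s:w - max_shift + s]
        ll = -np.sum((a - b) ** 2) / (2 * sigma ** 2)
        if ll > best_ll:
            best_ll, best_s = ll, s
    return best_s
```

Converting the recovered shift into a height requires the viewing geometry (parallax per unit height for each MISR camera angle), which is outside the scope of this sketch.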