Image Completion for View Synthesis Using Markov Random Fields and Efficient Belief Propagation
View synthesis is a process for generating novel views from a scene which has
been recorded with a 3-D camera setup. It has important applications in 3-D
post-production and 2-D to 3-D conversion. However, a central problem in the
generation of novel views lies in the handling of disocclusions. Background
content, which was occluded in the original view, may become unveiled in the
synthesized view. This leads to missing information in the generated view which
has to be filled in a visually plausible manner. We present an inpainting
algorithm for disocclusion filling in synthesized views based on Markov random
fields and efficient belief propagation. We compare the result to two
state-of-the-art algorithms and demonstrate a significant improvement in image
quality.
Comment: Published version:
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=673843
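The MRF formulation above assigns a label to each missing pixel by minimizing a data cost plus a smoothness cost with efficient belief propagation. As a hedged illustration (not the paper's implementation), here is min-sum message passing on a 1-D chain, where it reduces to exact dynamic programming; the candidate intensity set, the truncated-quadratic smoothness, and all parameter values are assumptions for the sketch:

```python
# Hedged sketch (not the paper's implementation): min-sum belief propagation
# on a 1-D chain MRF, where it reduces to exact dynamic programming. Missing
# pixels (None) carry zero data cost; observed pixels anchor the solution.
# Candidate labels and the truncated-quadratic smoothness are assumptions.

LABELS = list(range(0, 256, 32))  # candidate intensities for the sketch

def unary(i, label, observed):
    """Data cost: free for missing pixels, quadratic for observed ones."""
    return 0.0 if observed[i] is None else (label - observed[i]) ** 2

def pairwise(a, b, lam=1.0):
    """Truncated-quadratic smoothness cost between neighboring labels."""
    return lam * min((a - b) ** 2, 400.0)

def fill_chain(observed, lam=1.0):
    """Exact MAP labeling of a chain MRF via forward min-sum + backtracking."""
    n = len(observed)
    cost = [[unary(0, l, observed) for l in LABELS]]
    back = []
    for i in range(1, n):
        row, bp = [], []
        for b in LABELS:
            cands = [cost[-1][j] + pairwise(a, b, lam)
                     for j, a in enumerate(LABELS)]
            j = min(range(len(LABELS)), key=cands.__getitem__)
            row.append(cands[j] + unary(i, b, observed))
            bp.append(j)
        cost.append(row)
        back.append(bp)
    # backtrack from the cheapest final label
    j = min(range(len(LABELS)), key=cost[-1].__getitem__)
    labels = [LABELS[j]]
    for bp in reversed(back):
        j = bp[j]
        labels.append(LABELS[j])
    return labels[::-1]
```

On a real disoccluded region the same message-passing structure runs on a 2-D grid with loopy updates rather than a single exact pass.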
Computing the Stereo Matching Cost with a Convolutional Neural Network
We present a method for extracting depth information from a rectified image
pair. We train a convolutional neural network to predict how well two image
patches match and use it to compute the stereo matching cost. The cost is
refined by cross-based cost aggregation and semiglobal matching, followed by a
left-right consistency check to eliminate errors in the occluded regions. Our
stereo method achieves an error rate of 2.61 % on the KITTI stereo dataset and
is currently (August 2014) the top performing method on this dataset.
Comment: Conference on Computer Vision and Pattern Recognition (CVPR), June
201
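Of the post-processing steps above, the left-right consistency check is the most self-contained: a left-view disparity is kept only if the right-view disparity map, sampled at the matched column, agrees within a tolerance. A minimal 1-D scanline sketch (integer disparities and the tolerance of 1 are assumptions, not taken from the paper):

```python
# Sketch of a left-right consistency check (assumption: integer disparities
# on a single scanline). A left pixel survives only if the right disparity
# map, sampled at the matched position, agrees within `tol`.

def lr_consistency(d_left, d_right, tol=1):
    """Return left disparities with inconsistent (likely occluded) pixels
    set to None."""
    out = []
    for x, d in enumerate(d_left):
        xr = x - d  # matching column in the right image
        if 0 <= xr < len(d_right) and abs(d - d_right[xr]) <= tol:
            out.append(d)
        else:
            out.append(None)  # fails the check: occlusion or matching error
    return out
```

The invalidated pixels are then typically refilled by interpolation from consistent neighbors.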
SEGCloud: Semantic Segmentation of 3D Point Clouds
3D semantic scene labeling is fundamental to agents operating in the real
world. In particular, labeling raw 3D point sets from sensors provides
fine-grained semantics. Recent works leverage the capabilities of Neural
Networks (NNs), but are limited to coarse voxel predictions and do not
explicitly enforce global consistency. We present SEGCloud, an end-to-end
framework to obtain 3D point-level segmentation that combines the advantages of
NNs, trilinear interpolation (TI) and fully connected Conditional Random Fields
(FC-CRF). Coarse voxel predictions from a 3D Fully Convolutional NN are
transferred back to the raw 3D points via trilinear interpolation. Then the
FC-CRF enforces global consistency and provides fine-grained semantics on the
points. We implement the latter as a differentiable Recurrent NN to allow joint
optimization. We evaluate the framework on two indoor and two outdoor 3D
datasets (NYU V2, S3DIS, KITTI, Semantic3D.net), and show performance
comparable or superior to the state-of-the-art on all datasets.
Comment: Accepted as a spotlight at the International Conference on 3D Vision
(3DV 2017)
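The trilinear transfer step can be made concrete: each raw 3-D point blends the scores of its 8 surrounding voxel centers with weights given by its fractional position in the grid. A minimal sketch, assuming a dense `grid[z][y][x]` score array, unit voxels, and interior query points (all assumptions for illustration):

```python
# Sketch of the trilinear-interpolation step: a score predicted on a coarse
# voxel grid is transferred to a raw 3-D point by blending the 8 surrounding
# voxel values. Grid layout, unit voxel size, and interior query points are
# assumptions of this sketch.

def trilinear(grid, p, voxel=1.0):
    """grid[z][y][x] holds a per-voxel score; p = (x, y, z) in world units."""
    x, y, z = (c / voxel for c in p)
    x0, y0, z0 = int(x), int(y), int(z)
    fx, fy, fz = x - x0, y - y0, z - z0  # fractional position in the cell
    s = 0.0
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                w = ((fx if dx else 1 - fx) *
                     (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                s += w * grid[z0 + dz][y0 + dy][x0 + dx]
    return s
```

In the full pipeline this interpolation is applied per class score, and because the blend is linear it stays differentiable for end-to-end training with the CRF.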
Probabilistic ToF and Stereo Data Fusion Based on Mixed Pixel Measurement Models
This paper proposes a method for fusing data acquired by a ToF camera and a stereo pair, based on a model of depth measurement by ToF cameras which also accounts for depth discontinuity artifacts due to the mixed pixel effect. This model is exploited within both ML and MAP-MRF frameworks for ToF and stereo data fusion. The proposed MAP-MRF framework is characterized by site-dependent range values, an important feature since it can be used both to improve accuracy and to decrease the computational complexity of standard MAP-MRF approaches. To optimize the site-dependent global cost function characteristic of the proposed MAP-MRF approach, this paper also introduces an extension to Loopy Belief Propagation which can be used in other contexts. Experimental data validate the proposed ToF measurement model and the effectiveness of the proposed fusion techniques.
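The benefit of site-dependent range values can be pictured as follows: each site keeps only depth candidates near its own ToF reading, so even an exact MAP search over a short chain stays cheap. This is a toy sketch, not the paper's model; the Gaussian ToF likelihood, the candidate spacing, and the brute-force minimization are all assumptions made here for illustration:

```python
# Toy sketch (not the paper's model) of site-dependent label sets for
# MAP-MRF ToF/stereo fusion: each site's candidate depths are restricted to
# a window around its ToF measurement. Gaussian ToF likelihood, candidate
# spacing, and brute-force MAP over a 3-site chain are all assumptions.

import itertools

def candidates(tof_depth, sigma=0.05, k=2, step=0.05):
    """Site-dependent label set: depths within k*sigma of the ToF reading."""
    n = int(k * sigma / step)
    return [tof_depth + i * step for i in range(-n, n + 1)]

def energy(labels, tof, stereo_cost, lam=1.0, sigma=0.05):
    """MAP-MRF energy: ToF likelihood + stereo data term + smoothness."""
    e = sum((d - t) ** 2 / (2 * sigma ** 2) + stereo_cost(i, d)
            for i, (d, t) in enumerate(zip(labels, tof)))
    e += lam * sum((a - b) ** 2 for a, b in zip(labels, labels[1:]))
    return e

def map_brute_force(tof, stereo_cost):
    """Exact MAP over the (small) site-dependent label sets of a short chain."""
    sets = [candidates(t) for t in tof]
    return min(itertools.product(*sets),
               key=lambda ls: energy(list(ls), tof, stereo_cost))
```

With per-site sets of 5 labels instead of a global range of hundreds, the label space shrinks by orders of magnitude, which is the complexity argument made in the abstract.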
Explicit modeling on depth-color inconsistency for color-guided depth up-sampling
© 2016 IEEE. Color-guided depth up-sampling enhances the resolution of a depth map under the assumption that depth discontinuities and color image edges at corresponding locations are consistent. Among the reported methods, MRF and its variants form one of the major approaches and have dominated this area for several years. However, the assumption above is not always true. The usual remedy is to adjust the weighting inside the smoothness term of the MRF model, but no existing method explicitly considers the inconsistency that can occur between a depth discontinuity and the corresponding color edge. In this paper, we propose a quantitative measurement of such inconsistency and explicitly embed it into the weighting value of the smoothness term, a solution that has not been reported in the literature. The improved depth up-sampling based on the proposed method is evaluated on the Middlebury and ToFMark datasets and demonstrates promising results.
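One way to picture such an inconsistency-aware smoothness weight (the formula below is an illustrative stand-in, not the paper's measurement) is to lower smoothing across strong color edges as usual, but restore it when the low-resolution depth shows no matching discontinuity, i.e. when the color edge and the depth edge disagree:

```python
# Illustrative stand-in (not the paper's formulation) for an
# inconsistency-aware smoothness weight: low across color edges that are
# backed by a depth discontinuity, restored where a strong color edge has
# no depth counterpart (e.g. texture). Sigmas are assumed values.

import math

def smoothness_weight(color_grad, depth_grad, sigma_c=10.0, sigma_d=0.1):
    w_color = math.exp(-abs(color_grad) / sigma_c)  # classic guidance weight
    # edge strengths in [0, 1]; inconsistency is high when the color edge
    # is strong but the (low-res) depth is flat
    edge_c = 1.0 - w_color
    edge_d = 1.0 - math.exp(-abs(depth_grad) / sigma_d)
    inconsistency = max(0.0, edge_c - edge_d)
    # where the edges disagree, fall back toward full smoothing
    return w_color + inconsistency * (1.0 - w_color)
```

A purely color-guided weight would wrongly stop smoothing at every texture edge; the extra depth-consistency factor is what the abstract argues has been missing from earlier MRF weighting schemes.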
New insight on galaxy structure from GALPHAT I. Motivation, methodology, and benchmarks for Sersic models
We introduce a new galaxy image decomposition tool, GALPHAT (GALaxy
PHotometric ATtributes), to provide full posterior probability distributions
and reliable confidence intervals for all model parameters. GALPHAT is designed
to yield high-speed and accurate likelihood computation, using grid
interpolation and Fourier rotation. We benchmark this approach using an
ensemble of simulated Sersic model galaxies over a wide range of observational
conditions: the signal-to-noise ratio S/N, the ratio of galaxy size to the PSF
and the image size, and errors in the assumed PSF; and a range of structural
parameters: the half-light radius r_e and the Sersic index n. We
characterise the strength of parameter covariance in the Sersic model, which
increases with S/N and n, and the results strongly motivate the need for the
full posterior probability distribution in galaxy morphology analyses and later
inferences.
The test results for simulated galaxies successfully demonstrate that, with a
careful choice of Markov chain Monte Carlo algorithms and fast model image
generation, GALPHAT is a powerful analysis tool for reliably inferring
morphological parameters from a large ensemble of galaxies over a wide range of
different observational conditions. (abridged)
Comment: Submitted to MNRAS. The submitted version with high resolution
figures can be downloaded from
http://www.astro.umass.edu/~iyoon/GALPHAT/galphat1.pd
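The inference style described above can be sketched with a toy Metropolis-Hastings sampler over two Sersic parameters, the half-light radius r_e and the index n, fitted to a noiseless synthetic 1-D radial profile. GALPHAT itself fits full 2-D images with grid-interpolated, Fourier-rotated model generation; only the MCMC skeleton and the common b_n ≈ 2n - 1/3 approximation carry over here, and every number below is an assumption of the sketch:

```python
# Toy sketch (not GALPHAT): Metropolis-Hastings sampling of the posterior
# over two Sersic parameters (r_e, n) given a noiseless synthetic 1-D
# radial profile. Noise level, proposal scale, start point, and step count
# are all assumptions of this illustration.

import math
import random

def sersic(r, r_e, n):
    b = 2.0 * n - 1.0 / 3.0  # common approximation for b_n
    return math.exp(-b * ((r / r_e) ** (1.0 / n) - 1.0))

RADII = [0.1 * i for i in range(1, 31)]
TRUTH = (1.0, 2.0)
DATA = [sersic(r, *TRUTH) for r in RADII]  # noiseless mock profile

def log_post(r_e, n, sigma=0.01):
    """Gaussian log-likelihood with a flat prior on positive parameters."""
    if r_e <= 0 or n <= 0:
        return -math.inf
    chi2 = sum((d - sersic(r, r_e, n)) ** 2 for r, d in zip(RADII, DATA))
    return -chi2 / (2 * sigma ** 2)

def mh_chain(steps=2000, start=(1.2, 1.8), scale=0.05, seed=0):
    random.seed(seed)
    cur, lp = start, log_post(*start)
    chain = []
    for _ in range(steps):
        prop = (cur[0] + random.gauss(0, scale),
                cur[1] + random.gauss(0, scale))
        lp_p = log_post(*prop)
        if math.log(random.random()) < lp_p - lp:  # MH acceptance rule
            cur, lp = prop, lp_p
        chain.append(cur)
    return chain
```

Because the chain returns samples rather than a single best fit, marginal histograms of r_e and n directly expose the parameter covariance that the abstract highlights.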