
    Geometric Multi-Model Fitting with a Convex Relaxation Algorithm

    We propose a novel method to fit and segment multi-structural data via convex relaxation. Unlike greedy methods, which maximise the number of inliers, this approach efficiently searches for a soft assignment of points to models by minimising the energy of the overall classification. Our approach is similar to state-of-the-art energy-minimisation techniques in that it uses a global energy; however, we cope with the combinatorial growth of the original problem as the number of models increases by relaxing the solution. This relaxation brings two advantages: first, by operating in the continuous domain we can parallelise the calculations; second, it allows the use of different metrics, which results in a more general formulation. We demonstrate the versatility of our technique on two different problems of estimating structure from images: plane extraction from RGB-D data and homography estimation from pairs of images. In both cases we report accurate results on publicly available datasets, in most cases outperforming the state of the art.
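    The soft-assignment idea can be illustrated with a small, self-contained sketch: each point carries a probability vector over candidate models, and the relaxed labelling is found by projected subgradient descent on a simple energy. This is not the authors' formulation; the energy (linear data term plus a crude convex label-cost surrogate), the solver, and all names are assumptions made for illustration only.

```python
import numpy as np

def simplex_project(p):
    """Project each row of p onto the probability simplex (Duchi et al., 2008)."""
    u = np.sort(p, axis=1)[:, ::-1]
    css = np.cumsum(u, axis=1)
    k = np.arange(1, p.shape[1] + 1)
    rho = (u - (css - 1.0) / k > 0).sum(axis=1)
    theta = (css[np.arange(len(p)), rho - 1] - 1.0) / rho
    return np.maximum(p - theta[:, None], 0.0)

def soft_assign(residuals, label_cost=1.0, lr=0.1, iters=200):
    """Relaxed multi-model labelling (illustrative, not the paper's energy).

    residuals: (n_points, n_models) fitting residual of each point to each model.
    Minimises  sum_ij U_ij * residuals_ij + label_cost * sum_j max_i U_ij
    over soft assignments U whose rows lie on the probability simplex;
    the max term is a simple convex surrogate for the number of active models.
    """
    n, m = residuals.shape
    U = np.full((n, m), 1.0 / m)
    for _ in range(iters):
        # subgradient of the label-cost term: one nonzero entry per model column
        g_label = np.zeros_like(U)
        g_label[np.argmax(U, axis=0), np.arange(m)] = label_cost
        U = simplex_project(U - lr * (residuals + g_label))
    return U
```

    Because every iteration is a dense matrix update followed by row-wise projections, the computation parallelises naturally over points, which is the practical benefit the abstract attributes to operating in the continuous domain.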

    Robust Motion Segmentation from Pairwise Matches

    In this paper we address a classification problem that has not been considered before, namely motion segmentation given pairwise matches only. Our contribution to this unexplored task is a novel formulation of motion segmentation as a two-step process. First, motion segmentation is performed on image pairs independently. Second, the independent pairwise segmentation results are combined in a robust way into a final, globally consistent segmentation. Our approach is inspired by the success of averaging methods. We demonstrate in simulated as well as real experiments that our method is very effective at reducing the errors in the pairwise motion segmentations and can cope with a large number of mismatches.
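    As a rough illustration of the second, fusion step, the sketch below averages co-segmentation evidence from all image pairs into an affinity matrix and clusters it spectrally. The affinity-averaging strategy, the missing-track convention (label -1), and the fixed number of motions are assumptions for illustration, not the paper's actual averaging method.

```python
import numpy as np
from sklearn.cluster import KMeans

def fuse_pairwise_segmentations(pairwise_labels, n_motions):
    """Fuse independent pairwise motion segmentations into one global labelling.

    pairwise_labels: list of (n_tracks,) integer arrays, one per image pair;
    an entry of -1 marks a track that is unmatched in that pair (hypothetical).
    """
    n = len(pairwise_labels[0])
    A = np.zeros((n, n))        # accumulated "same motion" votes
    counts = np.zeros((n, n))   # how often both tracks were observed together
    for lab in pairwise_labels:
        valid = lab >= 0
        same = (lab[:, None] == lab[None, :]) & valid[:, None] & valid[None, :]
        A += same
        counts += np.outer(valid, valid)
    A = np.divide(A, counts, out=np.zeros_like(A), where=counts > 0)

    # normalised graph Laplacian of the averaged affinity
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    L = np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

    # spectral clustering: k smallest eigenvectors, then k-means
    _, vecs = np.linalg.eigh(L)
    return KMeans(n_clusters=n_motions, n_init=10).fit_predict(vecs[:, :n_motions])
```

    Averaging the affinity over many pairs is what makes the fusion robust: a mismatch in a single pair contributes only a small fraction of the final edge weight between two tracks.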

    3D Face Reconstruction from Light Field Images: A Model-free Approach

    Reconstructing 3D facial geometry from a single RGB image has recently instigated wide research interest. However, it is still an ill-posed problem and most methods rely on prior models hence undermining the accuracy of the recovered 3D faces. In this paper, we exploit the Epipolar Plane Images (EPI) obtained from light field cameras and learn CNN models that recover horizontal and vertical 3D facial curves from the respective horizontal and vertical EPIs. Our 3D face reconstruction network (FaceLFnet) comprises a densely connected architecture to learn accurate 3D facial curves from low resolution EPIs. To train the proposed FaceLFnets from scratch, we synthesize photo-realistic light field images from 3D facial scans. The curve by curve 3D face estimation approach allows the networks to learn from only 14K images of 80 identities, which still comprises over 11 Million EPIs/curves. The estimated facial curves are merged into a single pointcloud to which a surface is fitted to get the final 3D face. Our method is model-free, requires only a few training samples to learn FaceLFnet and can reconstruct 3D faces with high accuracy from single light field images under varying poses, expressions and lighting conditions. Comparison on the BU-3DFE and BU-4DFE datasets show that our method reduces reconstruction errors by over 20% compared to recent state of the art