
    Robust Motion Segmentation from Pairwise Matches

    In this paper we address a classification problem that has not been considered before, namely motion segmentation given pairwise matches only. Our contribution to this unexplored task is a novel formulation of motion segmentation as a two-step process. First, motion segmentation is performed on image pairs independently. Second, the independent pairwise segmentation results are combined in a robust way into a final, globally consistent segmentation. Our approach is inspired by the success of averaging methods. We demonstrate in both simulated and real experiments that our method is very effective in reducing the errors of the pairwise motion segmentation and can cope with a large number of mismatches.
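    A minimal sketch of the second, fusion step under simplifying assumptions: each pairwise segmentation votes on whether two tracked points move together, and the accumulated consensus affinity is clustered into a global segmentation. This is a plain voting scheme fed to spectral clustering, not the authors' robust averaging; the function name and data layout are hypothetical.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def fuse_pairwise_segmentations(pair_labels, n_points, n_motions):
    """pair_labels: list of dicts {point_id: motion_label}, one per image pair.
    Assumes every point appears in at least one pair."""
    votes = np.zeros((n_points, n_points))
    counts = np.zeros((n_points, n_points))
    for labels in pair_labels:
        ids = list(labels)
        for a in ids:
            for b in ids:
                counts[a, b] += 1
                if labels[a] == labels[b]:
                    votes[a, b] += 1  # this pair says a and b move together
    # Fraction of pairs agreeing that two points share a motion.
    affinity = np.where(counts > 0, votes / np.maximum(counts, 1), 0.0)
    # Cluster the consensus affinity into the final global segmentation.
    return SpectralClustering(n_clusters=n_motions,
                              affinity='precomputed').fit_predict(affinity)
```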

    A clever elimination strategy for efficient minimal solvers

    We present a new insight into the systematic generation of minimal solvers in computer vision, which leads to smaller and faster solvers. Many minimal problem formulations are coupled sets of linear and polynomial equations where image measurements enter the linear equations only. We show that it is useful to solve such systems by first eliminating all the unknowns that do not appear in the linear equations and then extending the solutions to the remaining unknowns. This can be generalized to fully non-linear systems by linearization via lifting. We demonstrate that this approach leads to more efficient solvers in three problems of partially calibrated relative camera pose computation with unknown focal length and/or radial distortion. Our approach also generates interesting new constraints on the fundamental matrices of partially calibrated cameras, which were not known before. Comment: 13 pages, 7 figures.
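    The strategy can be illustrated on a toy system (hypothetical, not one of the paper's camera-geometry problems): the linear equation does not contain z, so z is eliminated from the polynomial equations first, the reduced system is solved in the linearly appearing unknowns, and each solution is then extended back to z.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
linear = sp.Eq(x + 2*y, 3)      # z is absent from the linear equation
poly1 = x**2 + z**2 - 5
poly2 = y*z - 1

# Step 1: eliminate z from the polynomial pair via the resultant.
reduced = sp.resultant(poly1, poly2, z)     # a polynomial in x, y only

# Step 2: solve the reduced system in the linearly appearing unknowns.
xy_solutions = sp.solve([linear, sp.Eq(reduced, 0)], [x, y])

# Step 3: extend each (x, y) solution to z through an original equation.
solutions = [(xs, ys, sp.solve(poly2.subs({x: xs, y: ys}), z)[0])
             for xs, ys in xy_solutions]
```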

    Neighbourhood Consensus Networks

    We address the problem of finding reliable dense correspondences between a pair of images. This is a challenging task due to strong appearance differences between the corresponding scene elements and ambiguities generated by repetitive patterns. The contributions of this work are threefold. First, inspired by the classic idea of disambiguating feature matches using semi-local constraints, we develop an end-to-end trainable convolutional neural network architecture that identifies sets of spatially consistent matches by analyzing neighbourhood consensus patterns in the 4D space of all possible correspondences between a pair of images, without the need for a global geometric model. Second, we demonstrate that the model can be trained effectively from weak supervision in the form of matching and non-matching image pairs, without the need for costly manual annotation of point-to-point correspondences. Third, we show that the proposed neighbourhood consensus network can be applied to a range of matching tasks, including both category- and instance-level matching, obtaining state-of-the-art results on the PF Pascal dataset and the InLoc indoor visual localization benchmark. Comment: In Proceedings of the 32nd Conference on Neural Information Processing Systems (NeurIPS 2018).
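    A simplified sketch of the core data structure, assuming PyTorch: the 4D correlation tensor of all pairwise feature similarities, followed by soft mutual-nearest-neighbour filtering. The learned 4D neighbourhood-consensus convolutions themselves are omitted, and the function names are illustrative.

```python
import torch
import torch.nn.functional as F

def correlation_4d(feat_a, feat_b):
    """feat_a, feat_b: (B, C, H, W) dense CNN features -> (B, H, W, H, W) scores."""
    b, c, h, w = feat_a.shape
    fa = F.normalize(feat_a, dim=1).reshape(b, c, h * w)
    fb = F.normalize(feat_b, dim=1).reshape(b, c, h * w)
    corr = torch.bmm(fa.transpose(1, 2), fb)   # all pairwise similarities
    return corr.reshape(b, h, w, h, w)

def mutual_nn_filter(corr):
    """Downweight matches that are far from being mutual nearest neighbours."""
    b, h, w, _, _ = corr.shape
    c = corr.reshape(b, h * w, h * w)
    ratio_a = c / (c.max(dim=2, keepdim=True).values + 1e-8)
    ratio_b = c / (c.max(dim=1, keepdim=True).values + 1e-8)
    return (ratio_a * ratio_b * c).reshape(b, h, w, h, w)
```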

    On the Two-View Geometry of Unsynchronized Cameras

    We present new methods for simultaneously estimating camera geometry and time shift from video sequences from multiple unsynchronized cameras. Algorithms for simultaneous computation of a fundamental matrix or a homography with an unknown time shift between images are developed. Our methods use minimal correspondence sets (eight for the fundamental matrix and four and a half for the homography) and are therefore suitable for robust estimation using RANSAC. Furthermore, we present an iterative algorithm that extends the applicability to sequences that are significantly unsynchronized, finding the correct time shift up to several seconds. We evaluated the methods on synthetic data and a wide range of real-world datasets, and the results show broad applicability to the problem of camera synchronization. Comment: 12 pages, 9 figures, Computer Vision and Pattern Recognition (CVPR) 2017.
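    As a sketch of the underlying model (not of the minimal solvers themselves), the residual below scores a joint hypothesis of fundamental matrix F and time shift beta by linearly interpolating the second camera's observation at the shifted time, e.g. for inlier counting inside a RANSAC loop. The names and the frame-based parametrization are illustrative assumptions.

```python
import numpy as np

def timeshift_epipolar_residual(F, beta, x1, x2_track):
    """
    F: 3x3 fundamental matrix hypothesis.
    beta: time-shift hypothesis, here a fraction of a frame in [0, 1].
    x1: (3,) homogeneous point in camera 1 at frame t.
    x2_track: (2, 3) homogeneous points in camera 2 at frames t and t+1.
    """
    # Interpolate the camera-2 observation at the shifted time t + beta.
    x2 = (1.0 - beta) * x2_track[0] + beta * x2_track[1]
    line = F @ x1                              # epipolar line in image 2
    # Distance of the interpolated point to its epipolar line.
    return abs(x2 @ line) / np.linalg.norm(line[:2])
```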

    Beyond Gröbner Bases: Basis Selection for Minimal Solvers

    Many computer vision applications require robust estimation of the underlying geometry, in terms of camera motion and the 3D structure of the scene. These robust methods often rely on running minimal solvers in a RANSAC framework. In this paper we show how to make polynomial solvers based on the action matrix method faster by careful selection of the monomial bases. These monomial bases have traditionally been based on a Gröbner basis for the polynomial ideal. Here we describe how to enumerate all such bases in an efficient way. We also show that going beyond Gröbner bases leads to more efficient solvers in many cases. We present a novel basis sampling scheme that we evaluate on a number of problems.
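    For intuition, the univariate special case of the action matrix method: multiplication by x in the quotient ring modulo a polynomial p is represented, in the monomial basis, by the companion matrix, and its eigenvalues are the roots of p. The sketch below (illustrative, in numpy) shows this simplest instance; the paper's contribution concerns choosing the basis in the multivariate case.

```python
import numpy as np

def roots_via_action_matrix(coeffs):
    """coeffs: [a_0, ..., a_{n-1}] of monic p(x) = x^n + a_{n-1} x^{n-1} + ... + a_0."""
    n = len(coeffs)
    A = np.zeros((n, n))
    A[1:, :-1] = np.eye(n - 1)       # multiplication by x shifts the monomial basis
    A[:, -1] = -np.asarray(coeffs)   # reduction of x^n modulo p(x)
    return np.linalg.eigvals(A)      # eigenvalues of the action matrix = roots of p

# p(x) = x^2 - 3x + 2 has roots 1 and 2.
print(np.sort(roots_via_action_matrix([2.0, -3.0])))
```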

    D2-Net: A Trainable CNN for Joint Detection and Description of Local Features

    In this work we address the problem of finding reliable pixel-level correspondences under difficult imaging conditions. We propose an approach in which a single convolutional neural network plays a dual role: it is simultaneously a dense feature descriptor and a feature detector. By postponing the detection to a later stage, the obtained keypoints are more stable than their traditional counterparts based on early detection of low-level structures. We show that this model can be trained using pixel correspondences extracted from readily available large-scale SfM reconstructions, without any further annotations. The proposed method obtains state-of-the-art performance on both the difficult Aachen Day-Night localization dataset and the InLoc indoor localization benchmark, as well as competitive performance on other benchmarks for image matching and 3D reconstruction. Comment: Accepted at CVPR 2019.
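    A simplified sketch of the describe-and-detect idea, assuming PyTorch (a hard variant of the detection criterion; training uses a soft counterpart): descriptors are the L2-normalized per-pixel feature vectors, and a pixel is detected when some channel attains both a depth-wise and a local spatial maximum there.

```python
import torch
import torch.nn.functional as F

def hard_detect(features, window=3):
    """features: (B, C, H, W) dense CNN activations -> (B, H, W) keypoint mask."""
    # Depth-wise maximum: which channel fires strongest at each pixel.
    depth_max = features == features.max(dim=1, keepdim=True).values
    # Local spatial maximum within a window, computed per channel.
    local_max = features == F.max_pool2d(
        features, kernel_size=window, stride=1, padding=window // 2)
    return (depth_max & local_max).any(dim=1)

def describe(features):
    """L2-normalized per-pixel descriptors, shape (B, C, H, W)."""
    return F.normalize(features, dim=1)
```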

    Joint Orientation of Epipoles

    It is known that the epipolar constraint can be augmented with orientation by formulating it in oriented projective geometry. This oriented epipolar constraint requires knowing the orientations (the signs of the overall scales) of the epipoles and the fundamental matrix. The current belief is that these orientations cannot be obtained from the fundamental matrix alone and that additional information is needed, typically a single correct point correspondence. Contrary to this, we show that the fundamental matrix alone encodes the orientation of the epipoles up to their common scale sign. We present two formulations of this fact. The algebraic formulation gives a closed formula to compute the second epipole from the fundamental matrix and the first epipole. The geometric formulation is in terms of the conic formed by the intersections of corresponding epipolar lines in the common image plane; we show that the epipoles always lie on different antipodal components of the spherical interpretation of this conic. Further, we show that, under mild assumptions, the fundamental matrix can discriminate between two classes of mutual position of a pair of directional cameras.
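    For context, a sketch of the standard, unoriented computation, assuming numpy: the epipoles are recovered as the null vectors of F and its transpose via SVD, each only up to sign. The paper's result is that F in fact fixes these two signs up to a common flip.

```python
import numpy as np

def epipoles(F):
    """Return e1 (right null vector, F e1 = 0) and e2 (left, F^T e2 = 0)."""
    _, _, Vt = np.linalg.svd(F)
    e1 = Vt[-1]          # sign returned by the SVD is arbitrary
    _, _, Vt_t = np.linalg.svd(F.T)
    e2 = Vt_t[-1]        # sign is arbitrary here, too
    return e1, e2
```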