
    Weakly supervised learning for image keypoint matching using graph convolutional networks

    © 2020 Elsevier B.V. Matching two sets of features from a pair of images is a fundamental and critical step in most computer vision tasks. Existing methods typically establish a set of putative correspondences with the nearest-neighbor rule in feature space and then try to find a subset of reliable matches. However, when two images of the same scene differ by large camera angles, repetitive structures, or illumination changes, recently proposed feature matching approaches struggle to find good correspondences, especially when the putative set contains a high proportion of false-positive matches. To address these problems, we propose a novel weakly supervised Graph Convolutional Siamese Network Matcher, called GCSNMatcher, which learns correct correspondences for image feature matching. In particular, GCSNMatcher works directly on unstructured keypoint sets and exploits geometric information among sparse interest points by constructing dynamic neighborhood graphs, enriching the feature representation of each keypoint. Thanks to channel-wise symmetric aggregation operations in our graph convolutional networks, the matcher's output is invariant to permutations of the unordered keypoint sets. Empirical studies on Yahoo's YFCC100M benchmark dataset demonstrate that our matcher is more robust on image matching tasks than state-of-the-art methods, even when trained on small datasets.
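
    The abstract does not include the authors' code, but the two ideas it highlights, a dynamically rebuilt k-nearest-neighbour graph over unordered keypoints and a channel-wise symmetric (max) aggregation that makes the layer permutation-invariant, can be sketched as below. This is a minimal illustrative PyTorch sketch, not GCSNMatcher itself; the layer sizes, the value of k, and the input feature dimensions are assumptions.

    # Illustrative sketch only (not the authors' released code): a dynamic k-NN graph
    # over keypoint features plus an edge convolution with channel-wise max aggregation,
    # so the result does not depend on the order of the input keypoints.
    import torch
    import torch.nn as nn


    def knn_graph(x: torch.Tensor, k: int) -> torch.Tensor:
        """Return indices of the k nearest neighbours for each keypoint.

        x: (B, N, C) batch of N keypoint features with C channels.
        """
        dist = torch.cdist(x, x)                                   # (B, N, N) pairwise distances
        # Drop index 0 of the sorted result, which is the point itself.
        return dist.topk(k + 1, largest=False).indices[:, :, 1:]   # (B, N, k)


    class EdgeConvBlock(nn.Module):
        """Graph convolution on a neighbourhood graph rebuilt from the current features."""

        def __init__(self, in_channels: int, out_channels: int, k: int = 8):
            super().__init__()
            self.k = k
            self.mlp = nn.Sequential(
                nn.Linear(2 * in_channels, out_channels),
                nn.ReLU(inplace=True),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            B, N, C = x.shape
            idx = knn_graph(x, self.k)                              # (B, N, k)
            neighbors = torch.gather(
                x.unsqueeze(1).expand(B, N, N, C), 2,
                idx.unsqueeze(-1).expand(B, N, self.k, C),
            )                                                       # (B, N, k, C)
            center = x.unsqueeze(2).expand(B, N, self.k, C)
            edge_feat = torch.cat([center, neighbors - center], dim=-1)
            edge_feat = self.mlp(edge_feat)                         # (B, N, k, out_channels)
            # Channel-wise max over neighbours: a symmetric operation, so permuting
            # the keypoints only permutes the output rows, nothing else changes.
            return edge_feat.max(dim=2).values                      # (B, N, out_channels)


    if __name__ == "__main__":
        # Hypothetical input: 128 putative correspondences described by 4 coordinates each.
        keypoints = torch.rand(2, 128, 4)
        layer = EdgeConvBlock(in_channels=4, out_channels=64, k=8)
        print(layer(keypoints).shape)                               # torch.Size([2, 128, 64])

    Stacking such blocks and recomputing the k-NN graph from the updated features at each layer is one common way to realise "dynamic" neighborhood graphs; whether GCSNMatcher follows this exact recipe is not stated in the abstract.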