Unsupervised Deep Epipolar Flow for Stationary or Dynamic Scenes
Unsupervised deep learning for optical flow computation has achieved
promising results. Most existing deep-net based methods rely on image
brightness consistency and local smoothness constraints to train the networks.
Their performance degrades in regions with repetitive textures or occlusions.
occur. In this paper, we propose Deep Epipolar Flow, an unsupervised optical
flow method which incorporates global geometric constraints into network
learning. In particular, we investigate multiple ways of enforcing the epipolar
constraint in flow estimation. To alleviate a "chicken-and-egg" type of problem
encountered in dynamic scenes where multiple motions may be present, we propose
a low-rank constraint as well as a union-of-subspaces constraint for training.
Experimental results on various benchmarking datasets show that our method
achieves competitive performance compared with supervised methods and
outperforms state-of-the-art unsupervised deep-learning methods.
Comment: CVPR 201
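The epipolar constraint the abstract refers to requires that each pixel and its flow-displaced counterpart satisfy x2^T F x1 = 0 for the fundamental matrix F relating the two frames. A minimal sketch of such a residual, evaluated densely over a flow field (function name, shapes, and the algebraic-error choice are our own assumptions, not the paper's implementation):

```python
import numpy as np

def epipolar_residual(flow, F):
    """Per-pixel algebraic epipolar error |x2^T F x1| for a dense flow field.

    flow : (H, W, 2) array of (dx, dy) displacements from frame 1 to frame 2.
    F    : (3, 3) fundamental matrix relating the two frames.
    Returns an (H, W) array of nonnegative residuals; a flow consistent with
    the epipolar geometry drives these toward zero.
    """
    H, W, _ = flow.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Homogeneous pixel coordinates in frame 1.
    x1 = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(np.float64)
    # Flow-displaced coordinates in frame 2.
    x2 = x1.copy()
    x2[..., 0] += flow[..., 0]
    x2[..., 1] += flow[..., 1]
    # x2^T F x1 at every pixel, via batched matrix-vector products.
    Fx1 = np.einsum('ij,hwj->hwi', F, x1)
    return np.abs(np.einsum('hwi,hwi->hw', x2, Fx1))
```

In a training loop, a term like this could be averaged over pixels and added to the usual photometric and smoothness losses; the low-rank and union-of-subspaces constraints the abstract mentions for dynamic scenes are not captured by this single-F sketch.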
Joint Multi-view Unsupervised Feature Selection and Graph Learning
Despite recent progress, existing multi-view unsupervised feature
selection methods mostly suffer from two limitations. First, they generally
utilize either cluster structure or similarity structure to guide the feature
selection, neglecting the possibility of a joint formulation with mutual
benefits. Second, they often learn the similarity structure by either global
structure learning or local structure learning, lacking the capability of graph
learning with both global and local structural awareness. In light of this,
this paper presents a joint multi-view unsupervised feature selection and graph
learning (JMVFG) approach. Particularly, we formulate the multi-view feature
selection with orthogonal decomposition, where each target matrix is decomposed
into a view-specific basis matrix and a view-consistent cluster indicator.
Cross-space locality preservation is incorporated to bridge the cluster
structure learning in the projected space and the similarity learning (i.e.,
graph learning) in the original space. Further, a unified objective function is
presented to enable the simultaneous learning of the cluster structure, the
global and local similarity structures, and the multi-view consistency and
inconsistency, upon which an alternating optimization algorithm is developed
with theoretically proven convergence. Extensive experiments demonstrate the
superiority of our approach for both multi-view feature selection and graph
learning tasks.
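The cross-space locality preservation mentioned above is typically a graph-Laplacian term: tr(Y^T L Y), which equals half the similarity-weighted sum of squared distances between rows of Y, so nearby points in the original space are pushed to stay close in the projected/cluster space. A small sketch under that standard formulation (names are ours, not the JMVFG paper's):

```python
import numpy as np

def locality_preservation(Y, S):
    """Graph-regularization term tr(Y^T L Y) with Laplacian L = D - S.

    Y : (n, c) projected representation or cluster indicator matrix.
    S : (n, n) symmetric nonnegative similarity (graph) matrix.
    For symmetric S this equals 0.5 * sum_ij S[i,j] * ||Y[i] - Y[j]||^2,
    so similar samples are penalized for being far apart in the new space.
    """
    L = np.diag(S.sum(axis=1)) - S  # unnormalized graph Laplacian
    return np.trace(Y.T @ L @ Y)
```

In the joint objective, a term like this bridges the cluster structure learned in the projected space and the similarity graph learned in the original space, since both Y and S are updated in the alternating optimization.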
SRFeat: Learning Locally Accurate and Globally Consistent Non-Rigid Shape Correspondence
In this work, we present a novel learning-based framework that combines the
local accuracy of contrastive learning with the global consistency of geometric
approaches, for robust non-rigid matching. We first observe that while
contrastive learning can lead to powerful point-wise features, the learned
correspondences commonly lack smoothness and consistency, owing to the purely
combinatorial nature of the standard contrastive losses. To overcome this
limitation we propose to boost contrastive feature learning with two types of
smoothness regularization that inject geometric information into correspondence
learning. With this novel combination in hand, the resulting features are both
highly discriminative across individual points, and, at the same time, lead to
robust and consistent correspondences, through simple proximity queries. Our
framework is general and is applicable to local feature learning in both the 3D
and 2D domains. We demonstrate the superiority of our approach through
extensive experiments on a wide range of challenging matching benchmarks,
including 3D non-rigid shape correspondence and 2D image keypoint matching.
Comment: 3DV 2022. Code and data: https://github.com/craigleili/SRFea
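The combination the abstract describes pairs a point-wise contrastive loss with a geometric smoothness regularizer on the learned features. A toy sketch of that pattern, using an InfoNCE-style loss with matched points as positives plus a Laplacian (Dirichlet-energy) penalty (all names, the temperature, and the specific regularizer are our assumptions, not SRFeat's exact losses):

```python
import numpy as np

def contrastive_with_smoothness(Fa, Fb, L, lam=0.1, tau=0.07):
    """InfoNCE-style point contrastive loss plus Laplacian feature smoothness.

    Fa, Fb : (n, d) per-point features on two shapes; row i of Fa and
             row i of Fb are a corresponding (positive) pair.
    L      : (n, n) graph Laplacian of shape A (e.g. from a kNN graph).
    lam    : weight of the smoothness regularizer.
    tau    : softmax temperature for the contrastive term.
    """
    logits = Fa @ Fb.T / tau                       # (n, n) cross-shape similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    nce = -np.mean(np.diag(log_p))                 # positives on the diagonal
    smooth = np.trace(Fa.T @ L @ Fa)               # Dirichlet energy of features
    return nce + lam * smooth
```

The contrastive term makes features discriminative across individual points; the smoothness term injects the geometric consistency that a purely combinatorial contrastive loss lacks, so simple nearest-neighbor (proximity) queries between Fa and Fb yield coherent correspondences.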