Adaptive structure concept factorization for multiview clustering
Most existing multiview clustering methods require that the graph matrices for the different views be computed beforehand, with each graph obtained independently. This requirement ignores the correlation between views. In this letter, we tackle multiview clustering by jointly optimizing the graph matrix to make full use of the data correlation between views. Exploiting this inter-view correlation, we develop a concept factorization–based multiview clustering method for data integration, in which an adaptive scheme correlates the affinity weights of all views. Unlike nonnegative matrix factorization–based clustering methods, it is applicable to data sets containing negative values. Experiments demonstrate the effectiveness of the proposed method against state-of-the-art approaches in terms of accuracy, normalized mutual information, and purity.
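The abstract distinguishes concept factorization from NMF by its tolerance of negative data values. As a minimal, hypothetical sketch (a single view only, not the paper's adaptive multiview method), concept factorization approximates X ≈ X W Vᵀ with nonnegative W and V while X itself may contain negative entries; projected gradient descent is one simple way to fit it:

```python
import numpy as np

def concept_factorization(X, k, n_iter=300, seed=0):
    """Fit X ~= X @ W @ V.T with nonnegative W, V via projected gradient.

    Unlike NMF, X may contain negative values: each concept X @ W[:, j]
    is a nonnegative combination of data points, and each sample is a
    nonnegative combination of concepts.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    K = X.T @ X                          # (n, n) Gram matrix of the samples
    W = rng.random((n, k)) * 0.1
    V = rng.random((n, k)) * 0.1
    lr = 1e-2 / np.linalg.norm(K)        # conservative fixed step size
    for _ in range(n_iter):
        # gradients of ||X - X W V^T||_F^2 (up to a constant factor)
        grad_W = K @ (W @ (V.T @ V)) - K @ V
        W = np.maximum(W - lr * grad_W, 0.0)
        grad_V = V @ (W.T @ K @ W) - K @ W
        V = np.maximum(V - lr * grad_V, 0.0)
    return W, V

# data with negative entries, which plain NMF cannot accept
X = np.random.default_rng(1).standard_normal((5, 30))
W, V = concept_factorization(X, k=3)
err = np.linalg.norm(X - X @ W @ V.T) / np.linalg.norm(X)
```

Cluster labels can then be read off V (e.g., `V.argmax(axis=1)`), as is common for factorization-based clustering.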
Unsupervised Learning of Complex Articulated Kinematic Structures combining Motion and Skeleton Information
In this paper we present a novel framework for unsupervised kinematic structure learning of complex articulated objects from a single-view image sequence. In contrast to prior motion-information-based methods, which estimate relatively simple articulations, our method can generate arbitrarily complex kinematic structures with skeletal topology through a successive iterative merge process. The iterative merge process is guided by a skeleton distance function, which is generated by a novel object boundary generation method from sparse points. Our main contributions can be summarised as follows: (i) unsupervised learning of complex articulated kinematic structures by combining motion and skeleton information; (ii) an iterative fine-to-coarse merging strategy for adaptive motion segmentation and structure smoothing; (iii) skeleton estimation from sparse feature points; (iv) a new highly articulated object dataset containing multi-stage complexity with ground truth. Our experiments show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.
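The iterative fine-to-coarse merge described above can be sketched, under strong simplifying assumptions, as agglomerative merging of over-segmented motion clusters: at each step the pair of segments with the lowest combined motion-dissimilarity and skeleton-distance cost is merged, until no merge is cheap enough. The cost form, the weight `alpha`, and the features below are hypothetical placeholders, not the paper's actual formulation:

```python
import numpy as np

def iterative_merge(motion, skel_dist, alpha=0.5, stop_cost=1.0):
    """Fine-to-coarse merging sketch.

    motion:    (n, d) mean motion descriptor per initial segment
    skel_dist: (n, n) skeleton-based distance between initial segments
    Segments are merged greedily while the cheapest merge cost
    (motion dissimilarity + alpha * skeleton distance) stays at or
    below stop_cost. Returns one label per initial segment.
    """
    n = motion.shape[0]
    clusters = [[i] for i in range(n)]
    feats = [motion[i].astype(float) for i in range(n)]
    while len(clusters) > 1:
        best, best_cost = None, np.inf
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d_motion = np.linalg.norm(feats[a] - feats[b])
                d_skel = min(skel_dist[i, j]
                             for i in clusters[a] for j in clusters[b])
                cost = d_motion + alpha * d_skel
                if cost < best_cost:
                    best, best_cost = (a, b), cost
        if best_cost > stop_cost:
            break                       # remaining merges are too expensive
        a, b = best
        sa, sb = len(clusters[a]), len(clusters[b])
        feats[a] = (sa * feats[a] + sb * feats[b]) / (sa + sb)
        clusters[a] += clusters[b]
        del clusters[b], feats[b]
    labels = np.empty(n, dtype=int)
    for lab, members in enumerate(clusters):
        labels[members] = lab
    return labels

# four over-segmented parts: two pairs that move together and sit close
# on the (hypothetical) skeleton should end up merged
motion = np.array([[0.0], [0.1], [5.0], [5.1]])
skel = np.array([[0.0, 0.2, 3.0, 3.0],
                 [0.2, 0.0, 3.0, 3.0],
                 [3.0, 3.0, 0.0, 0.2],
                 [3.0, 3.0, 0.2, 0.0]])
labels = iterative_merge(motion, skel)
```

In the real framework the skeleton distance comes from the boundary-based skeleton estimate, which is what prevents motion-similar but skeletally distant parts from merging.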
Self-supervised Multi-view Stereo via Effective Co-Segmentation and Data-Augmentation
Recent studies have shown that self-supervised methods based on view synthesis achieve clear progress on multi-view stereo (MVS). However, existing methods rely on the assumption that corresponding points across different views share the same color, which may not always hold in practice. This can lead to an unreliable self-supervision signal and harm the final reconstruction performance. To address this issue, we propose a framework integrated with more reliable supervision guided by semantic co-segmentation and data augmentation. Specifically, we extract mutual semantics from multi-view images to guide semantic consistency, and we devise an effective data-augmentation mechanism that ensures transformation robustness by treating the predictions on regular samples as pseudo ground truth to regularize the predictions on augmented samples. Experimental results on the DTU dataset show that our proposed method achieves state-of-the-art performance among unsupervised methods and even competes on par with supervised methods. Furthermore, extensive experiments on the Tanks&Temples dataset demonstrate the effective generalization ability of the proposed method. Comment: This paper is accepted by AAAI-21 with a Distinguished Paper Award.
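The pseudo-ground-truth regularization described above can be illustrated with a minimal, hypothetical sketch: predict depth on the regular sample, apply the same geometric augmentation (a horizontal flip here, chosen only for simplicity) to that prediction, and penalize the augmented-sample prediction against it. The `predict_depth` stub stands in for the actual MVS network:

```python
import numpy as np

def augmentation_consistency_loss(image, predict_depth):
    """Transformation-robustness loss, sketched with a horizontal flip.

    The prediction on the regular sample serves as pseudo ground truth
    (in a real framework no gradient would flow through it), and the
    prediction on the augmented sample is regularized toward it.
    """
    pseudo_gt = predict_depth(image)     # regular-sample prediction
    augmented = image[:, ::-1]           # horizontal-flip augmentation
    pred_aug = predict_depth(augmented)
    # flip the pseudo labels into the augmented frame before comparing
    return np.abs(pred_aug - pseudo_gt[:, ::-1]).mean()

# toy "network": depth proportional to intensity; it is flip-equivariant,
# so the consistency loss is zero
image = np.random.default_rng(0).random((8, 8))
loss = augmentation_consistency_loss(image, lambda x: 2.0 * x)
```

A network that is not equivariant to the augmentation incurs a positive loss, which is exactly the signal the mechanism uses to enforce transformation robustness.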