Semi-supervised Deep Multi-view Stereo
Significant progress has been made in learning-based Multi-view Stereo (MVS)
under both supervised and unsupervised settings. To combine their respective
merits in accuracy and completeness while reducing the demand for expensive
labeled data, this paper explores learning-based MVS in a semi-supervised
setting in which only a small fraction of the MVS data carries dense depth
ground truth. However, owing to the huge variation of scenarios and flexible
view settings, the basic assumption of classic semi-supervised learning, that
unlabeled data and labeled data share the same label space and data
distribution, may be violated; we term this the semi-supervised
distribution-gap ambiguity of the MVS problem. To handle this issue, we
propose a novel semi-supervised distribution-augmented MVS framework, namely
SDA-MVS. For the simple case in which the basic assumption holds for the MVS
data, consistency regularization encourages the model predictions to be
consistent between an original sample and a randomly augmented one. For the
more troublesome case in which the basic assumption is violated, we propose a
novel style consistency loss to alleviate the negative effect of the
distribution gap: the visual style of an unlabeled sample is transferred to a
labeled sample to shrink the gap, and the model prediction on the generated
sample is further supervised with the label of the original labeled sample.
Experimental results in semi-supervised settings on multiple MVS datasets show
the superior performance of the proposed method. With the same backbone
network settings, our proposed SDA-MVS outperforms both its fully-supervised
and unsupervised baselines.
Comment: This paper is accepted in ACMMM-2023. The code is released at:
https://github.com/ToughStoneX/Semi-MV
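The consistency-regularization idea described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' released code: `model` (mapping an input to a depth map) and `augment` (a random augmentation) are hypothetical placeholders, and a simple L1 penalty stands in for whatever loss the paper actually uses.

```python
import numpy as np

def consistency_loss(model, sample, augment):
    """Consistency regularization sketch: the model's prediction on an
    original sample and on a randomly augmented copy of it should agree.
    `model` and `augment` are placeholder callables, not the paper's API."""
    depth_orig = model(sample)           # depth prediction on the original sample
    depth_aug = model(augment(sample))   # depth prediction on the augmented sample
    # Mean L1 penalty on the disagreement between the two predictions.
    return float(np.mean(np.abs(depth_aug - depth_orig)))
```

With an identity augmentation the loss is zero by construction; any augmentation that changes the model's output is penalized, which is the regularization signal applied to the unlabeled samples.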
A Unifying Framework in Vector-valued Reproducing Kernel Hilbert Spaces for Manifold Regularization and Co-Regularized Multi-view Learning
This paper presents a general vector-valued reproducing kernel Hilbert space
(RKHS) framework for the problem of learning an unknown functional dependency
between a structured input space and a structured output space. Our formulation
encompasses both Vector-valued Manifold Regularization and Co-regularized
Multi-view Learning, providing in particular a unifying framework linking these
two important learning approaches. In the case of the least-squares loss
function, we provide a closed-form solution obtained by solving a
system of linear equations. In the case of Support Vector Machine (SVM)
classification, our formulation generalizes in particular both the binary
Laplacian SVM to the multi-class, multi-view settings and the multi-class
Simplex Cone SVM to the semi-supervised, multi-view settings. The solution is
obtained by solving a single quadratic optimization problem, as in standard
SVM, via the Sequential Minimal Optimization (SMO) approach. Empirical results
obtained on the task of object recognition, using several challenging datasets,
demonstrate the competitiveness of our algorithms compared with other
state-of-the-art methods.
Comment: 72 pages
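The closed-form least-squares solution mentioned above can be illustrated in the simplest scalar-valued special case, kernel ridge regression, where the expansion coefficients solve the linear system (K + n·λ·I)a = y over the Gram matrix K. This is only a sketch of that special case; the paper's setting uses operator-valued kernels over multiple views, and the kernel and regularizer below are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian RBF Gram matrix between the rows of X and Z."""
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def kernel_ridge_fit(K, y, lam):
    """Closed-form RKHS least-squares solution (scalar-valued special case):
    solve (K + n*lam*I) a = y for the expansion coefficients a."""
    n = K.shape[0]
    return np.linalg.solve(K + n * lam * np.eye(n), y)
```

Predictions on training points are then K @ a; as the regularization weight λ shrinks, the fitted function interpolates the labels, which makes the linear-system nature of the solution easy to check.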