Dual Low-Rank Decompositions for Robust Cross-View Learning
Cross-view data are increasingly common, as different viewpoints or sensors capture the same data in multiple views. However, cross-view data exhibit a significant divergence: samples from the same category but different views are often less similar than samples from different categories within the same view. Considering that each cross-view sample is drawn from two intertwined manifold structures, i.e., a class manifold and a view manifold, in this paper we propose a robust cross-view learning framework that seeks a robust view-invariant low-dimensional space. Specifically, we develop a dual low-rank decomposition technique to unweave these intertwined manifold structures from one another in the learned space. Moreover, we design two discriminative graphs that constrain the dual low-rank decompositions by fully exploiting prior knowledge. Our algorithm is thus able to capture more within-class knowledge and mitigate the view divergence, yielding a more effective view-invariant feature extractor. Furthermore, our method is flexible enough to handle the challenging cross-view learning scenario in which view information is available for the training data but unknown for the evaluation data. Experiments on face and object benchmarks demonstrate the effectiveness of our model over state-of-the-art algorithms.
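The abstract does not give the optimization details, but the core idea of splitting one data matrix into two intertwined low-rank components can be illustrated with a simple alternating singular-value-thresholding sketch. All names below, and the robust-PCA-style shrinkage scheme itself, are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: shrink singular values by tau,
    # which is the proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def dual_low_rank_split(X, tau_class=1.0, tau_view=1.0, n_iter=50):
    # Illustrative sketch: alternately fit two low-rank terms so that
    # X ~ L_class + L_view, loosely mirroring the idea of separating a
    # class-manifold component from a view-manifold component.
    L_class = np.zeros_like(X)
    L_view = np.zeros_like(X)
    for _ in range(n_iter):
        L_class = svt(X - L_view, tau_class)
        L_view = svt(X - L_class, tau_view)
    return L_class, L_view
```

Each SVT step can only reduce the residual, so the decomposition stabilizes; the paper's actual method additionally couples the two terms through discriminative graph constraints, which this sketch omits.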
A dual framework for low-rank tensor completion
One of the popular approaches for low-rank tensor completion is to use the
latent trace norm regularization. However, most existing works in this
direction learn a sparse combination of tensors. In this work, we fill this gap
by proposing a variant of the latent trace norm that helps in learning a
non-sparse combination of tensors. We develop a dual framework for solving the
low-rank tensor completion problem. We first show a novel characterization of
the dual solution space with an interesting factorization of the optimal
solution. Overall, the optimal solution is shown to lie on a Cartesian product
of Riemannian manifolds. Furthermore, we exploit the versatile Riemannian
optimization framework to propose a computationally efficient trust-region
algorithm. The experiments illustrate the efficacy of the proposed algorithm on
several real-world datasets across applications.
Comment: Accepted to appear in Advances in Neural Information Processing Systems (NIPS), 2018. A shorter version appeared in the NIPS workshop on Synergies in Geometric Data Analysis 201
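The abstract does not spell out the latent-trace-norm formulation or the Riemannian trust-region solver, but the low-rank tensor completion problem itself is easy to sketch. The following is a minimal mode-averaged singular-value-thresholding sketch (a sum-of-nuclear-norms surrogate, not the paper's method; all function names and the averaging scheme are illustrative assumptions):

```python
import numpy as np

def unfold(T, mode):
    # Matricize tensor T along the given mode.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    # Inverse of unfold: restore the matrix M to tensor shape.
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def svt(M, tau):
    # Singular value thresholding (proximal operator of the nuclear norm).
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def complete_tensor(T_obs, mask, tau=0.5, n_iter=100):
    # mask is True where entries of T_obs are observed.
    X = np.where(mask, T_obs, 0.0)
    for _ in range(n_iter):
        # Average low-rank estimates from every mode unfolding,
        # then re-impose the observed entries.
        Y = np.mean([fold(svt(unfold(X, m), tau), m, X.shape)
                     for m in range(X.ndim)], axis=0)
        X = np.where(mask, T_obs, Y)
    return X
```

This treats every mode symmetrically; the latent trace norm studied in the paper instead decomposes the tensor into a sum of components, each low-rank along a single mode, and the proposed variant learns a non-sparse combination of them.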