NMF-Based Comprehensive Latent Factor Learning with Multiview Data
Multiview representations reveal the latent information of the data from different perspectives, namely consistency and complementarity. Unlike most multiview learning approaches, which focus on only one of these perspectives, in this paper we propose a novel unsupervised multiview learning algorithm, called comprehensive latent factor learning (CLFL), which jointly exploits both the consistent and the complementary information among multiple views. CLFL adopts a non-negative matrix factorization (NMF) based formulation to learn the latent factors, and it learns the weights of the different views automatically, which makes the representation more accurate. Experimental results on a synthetic dataset and several real datasets demonstrate the effectiveness of our approach.
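The abstract does not give CLFL's update rules, but the two ingredients it names (an NMF-based factorization over multiple views and automatically learned view weights) can be sketched roughly as follows. The function name, the shared-factor formulation, and the error-based weighting heuristic with exponent `gamma` are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def multiview_nmf(views, k, n_iter=200, gamma=2.0, seed=0):
    """Hedged sketch of weighted multi-view NMF: each non-negative view
    X_v (n x d_v) is factorized as X_v ~ H @ W_v with one shared latent
    factor H (n x k). View weights alpha_v are re-estimated from the
    reconstruction errors (smaller error -> larger weight); gamma is a
    smoothing exponent for the weights."""
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    H = rng.random((n, k)) + 1e-3
    Ws = [rng.random((k, X.shape[1])) + 1e-3 for X in views]
    alphas = np.ones(len(views)) / len(views)
    for _ in range(n_iter):
        # standard multiplicative updates for the per-view loadings W_v
        for v, X in enumerate(views):
            Ws[v] *= (H.T @ X) / (H.T @ H @ Ws[v] + 1e-9)
        # weighted multiplicative update for the shared factor H
        num = sum(a * (X @ Ws[v].T) for v, (a, X) in enumerate(zip(alphas, views)))
        den = sum(a * (H @ Ws[v] @ Ws[v].T) for v, a in enumerate(alphas))
        H *= num / (den + 1e-9)
        # re-estimate view weights from reconstruction errors
        errs = np.array([np.linalg.norm(X - H @ Ws[v]) for v, X in enumerate(views)])
        alphas = (1.0 / (errs + 1e-9)) ** gamma
        alphas /= alphas.sum()
    return H, Ws, alphas
```

The multiplicative updates keep all factors non-negative as long as the inputs are, which is why the sketch initializes everything strictly positive.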
Sparse Modeling for Image and Vision Processing
In recent years, a large amount of multi-disciplinary research has been
conducted on sparse models and their applications. In statistics and machine
learning, the sparsity principle is used to perform model selection---that is,
automatically selecting a simple model among a large collection of them. In
signal processing, sparse coding consists of representing data with linear
combinations of a few dictionary elements. Subsequently, the corresponding
tools have been widely adopted by several scientific communities such as
neuroscience, bioinformatics, or computer vision. The goal of this monograph is
to offer a self-contained view of sparse modeling for visual recognition and
image processing. More specifically, we focus on applications where the
dictionary is learned and adapted to data, yielding a compact representation
that has been successful in various contexts.
Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision
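The sparse-coding setting described above, representing a signal as a linear combination of a few dictionary elements, is commonly formulated as the lasso problem min_a 0.5*||x - Da||^2 + lam*||a||_1. A minimal numpy sketch using ISTA (iterative soft-thresholding), one standard solver for this objective; the function name and parameter choices here are illustrative, not from the monograph:

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.1, n_iter=500):
    """Sketch of sparse coding via ISTA: solve
        min_a 0.5 * ||x - D a||_2^2 + lam * ||a||_1
    for a signal x (d,) and a dictionary D (d x k)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the smooth term
        z = a - grad / L                   # gradient step
        # soft-thresholding: the proximal operator of the l1 penalty
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return a
```

Dictionary learning then alternates this coding step with updates of D itself, which is the "dictionary is learned and adapted to data" part of the monograph's scope.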
Cross-Domain Grouping and Alignment for Domain Adaptive Semantic Segmentation
Existing techniques for adapting semantic segmentation networks across source
and target domains within deep convolutional neural networks (CNNs) deal with
all the samples from the two domains in a global or category-aware manner. They
do not consider the inter-class variation within the target domain itself or
within an estimated category, which limits their ability to encode domains with
a multi-modal data distribution. To overcome this limitation, we introduce a
learnable clustering module and a novel domain adaptation framework called
cross-domain grouping and alignment. To cluster the samples across domains so
as to maximize domain alignment without forgetting precise segmentation
ability on the source domain, we present two loss functions that encourage
semantic consistency and orthogonality among the clusters. We also present a
loss that addresses the class imbalance problem, another limitation of previous
methods. Our experiments show that our method consistently boosts the
adaptation performance in semantic segmentation,
outperforming the state of the art on various domain adaptation settings.
Comment: AAAI 2021
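The two cluster-level losses named in the abstract, semantic consistency and orthogonality among clusters, can be illustrated on soft cluster-assignment matrices. This is a hedged toy sketch, not the paper's exact formulation: consistency is taken here as matching the mean assignment profiles of the two domains, and orthogonality as the off-diagonal energy of the assignment Gram matrix.

```python
import numpy as np

def cluster_losses(p_src, p_tgt):
    """Toy sketch of two clustering losses (illustrative, not the paper's):
    - consistency: source and target should use the clusters in the same
      proportions (L2 distance between mean soft assignments);
    - orthogonality: different clusters should capture different samples,
      penalized via off-diagonal energy of the normalized Gram matrix P^T P.
    p_src, p_tgt: (n, K) soft cluster-assignment matrices (rows sum to 1)."""
    consistency = np.linalg.norm(p_src.mean(0) - p_tgt.mean(0)) ** 2
    P = np.vstack([p_src, p_tgt])
    G = P.T @ P
    G = G / (np.linalg.norm(G) + 1e-9)
    ortho = np.sum(G ** 2) - np.sum(np.diag(G) ** 2)  # off-diagonal energy
    return consistency, ortho
```

For crisp one-hot assignments used in the same proportions by both domains, both penalties vanish; soft, overlapping clusters raise the orthogonality term.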
New Approaches in Multi-View Clustering
Many real-world datasets can be naturally described by multiple views. Because of this, multi-view learning has drawn much attention from both academia and industry, and compared to single-view learning it has demonstrated plenty of advantages. Clustering has long served as a critical technique in data mining and machine learning, and recently multi-view clustering has achieved great success in various applications. To provide a comprehensive review of the typical multi-view clustering methods and their recent developments, this chapter summarizes five kinds of popular clustering methods and their multi-view learning versions: k-means, spectral clustering, matrix factorization, tensor decomposition, and deep learning. These clustering methods are the most widely employed algorithms for single-view data, and many efforts have been devoted to extending them to multi-view clustering. Moreover, many other multi-view clustering methods can be unified into the frameworks of these five methods. To promote further research and development of multi-view clustering, some popular and open datasets are summarized in two categories. Furthermore, several open issues that deserve more exploration are pointed out at the end.
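Of the five method families the chapter surveys, the simplest to sketch is a multi-view extension of k-means: each view keeps its own centroids, but all views share a single cluster assignment chosen to minimize the squared distance summed over views. The function below is a minimal illustrative version (the farthest-point seeding and the unweighted sum over views are my simplifications, not a specific method from the chapter).

```python
import numpy as np

def multiview_kmeans(views, k, n_iter=50, seed=0):
    """Sketch of multi-view k-means with a shared assignment.
    views: list of (n, d_v) arrays with the same row ordering."""
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    # farthest-point seeding on the concatenated views keeps seeds spread out
    Xcat = np.hstack(views)
    idx = [int(rng.integers(n))]
    for _ in range(1, k):
        d = np.min(((Xcat[:, None] - Xcat[idx]) ** 2).sum(-1), axis=1)
        idx.append(int(d.argmax()))
    cents = [X[idx].copy() for X in views]      # per-view centroids
    labels = np.full(n, -1)
    for _ in range(n_iter):
        # shared assignment: squared distance summed over all views
        dist = sum(((X[:, None, :] - C[None]) ** 2).sum(-1)
                   for X, C in zip(views, cents))
        new = dist.argmin(1)
        if np.array_equal(new, labels):
            break
        labels = new
        for X, C in zip(views, cents):
            for j in range(k):
                if np.any(labels == j):          # skip empty clusters
                    C[j] = X[labels == j].mean(0)
    return labels
```

Weighting the per-view distance terms, as many surveyed methods do, would only change the `sum` inside the assignment step.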
DealMVC: Dual Contrastive Calibration for Multi-view Clustering
Benefiting from its strong capacity for mining view-consistent information,
multi-view contrastive clustering has attracted much attention in recent
years. However, we observe a drawback that prevents further improvement of
clustering performance: existing multi-view models mainly focus on the
consistency of the same samples across different views while ignoring similar
but distinct samples in cross-view scenarios. To solve this problem, we propose
a novel Dual contrastive calibration network for Multi-View Clustering
(DealMVC). Specifically, we first design a fusion mechanism to obtain a global
cross-view feature. Then, a global contrastive calibration loss is proposed by
aligning the view-feature similarity graph with the high-confidence
pseudo-label graph. Moreover, to exploit the diversity of multi-view
information, we propose a local contrastive calibration loss that constrains
the consistency of pair-wise view features. The feature structure is
regularized by reliable class information, guaranteeing that similar samples
have similar features across different views. During training, the interacted
cross-view feature is jointly optimized at both the local and the global
levels. Comprehensive experimental results on eight benchmark datasets
validate the effectiveness and superiority of our algorithm over other
state-of-the-art approaches. We release the code of DealMVC at
https://github.com/xihongyang1999/DealMVC on GitHub.
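The core idea of the global calibration loss, pulling together features whose high-confidence pseudo-labels agree rather than only identical samples across views, can be sketched with a supervised-contrastive-style objective. This is an illustrative stand-in written from the abstract, not DealMVC's actual loss; the function name, the temperature `tau`, and the exact per-anchor form are assumptions.

```python
import numpy as np

def calibration_loss(feats, pseudo, tau=0.5):
    """Contrastive-calibration sketch: anchors are attracted to every other
    sample sharing their pseudo-label (not just their own cross-view copy).
    feats: (n, d) fused features; pseudo: (n,) high-confidence labels."""
    Z = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-9)
    sim = np.exp(Z @ Z.T / tau)                     # temperature-scaled sims
    np.fill_diagonal(sim, 0.0)                      # exclude self-pairs
    pos = (pseudo[:, None] == pseudo[None, :]).astype(float)
    np.fill_diagonal(pos, 0.0)                      # positives: same label
    # per-anchor: -log( positive-pair similarity mass / total mass )
    loss = -np.log((sim * pos).sum(1) / (sim.sum(1) + 1e-9) + 1e-9)
    return loss.mean()
```

The loss is low when same-label features are close and different-label features are far apart, which is the calibration behavior the abstract describes.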