Sparse Methods for Learning Multiple Subspaces from Large-scale, Corrupted and Imbalanced Data
In many practical applications in machine learning, computer vision, data mining and information retrieval, one is confronted with datasets whose intrinsic dimension is much smaller than the dimension of the ambient space. This has given rise to the challenge of effectively learning multiple low-dimensional subspaces from such data. Multi-subspace learning methods based on sparse representation, such as sparse representation based classification (SRC) and sparse subspace clustering (SSC), have become very popular due to their conceptual simplicity and empirical success. However, there have been very limited theoretical explanations for the correctness of such approaches in the literature. Moreover, the applicability of existing algorithms to real-world datasets is limited by their high computational and memory complexity, as well as their sensitivity to data corruptions and to imbalanced data distributions.
This thesis attempts to advance our theoretical understanding of sparse representation based multi-subspace learning methods, as well as to develop new algorithms for handling large-scale, corrupted and imbalanced data. The first contribution of this thesis is a theoretical analysis of the correctness of such methods. Through a geometric and randomized analysis, we answer important theoretical questions concerning the effects of subspace arrangement, data distribution, subspace dimension, and data sampling density.
The second contribution of this thesis is the development of practical subspace clustering algorithms that can handle large-scale, corrupted and imbalanced datasets. To deal with large-scale data, we study different approaches based on active-support and divide-and-conquer ideas, and show that these approaches offer a good tradeoff between high accuracy and low running time. To deal with corrupted data, we construct a Markov chain whose stationary distribution can be used to separate inliers from outliers. Finally, we propose an efficient exemplar selection and subspace clustering method that outperforms traditional methods on imbalanced data.
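Both SRC and SSC rest on the self-expressiveness property: a point drawn from a union of subspaces can be written as a sparse combination of the other points, with the nonzero coefficients ideally confined to points from its own subspace. A minimal sketch of this step, using a plain ISTA solver of our own for illustration (not the thesis's large-scale algorithms):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ssc_coefficients(X, lam=0.1, n_iter=200):
    """Sparse self-expression: approximate each column x_j by X @ c_j
    with c_jj = 0, solving 0.5*||x_j - X c||^2 + lam*||c||_1 per column
    by ISTA (proximal gradient descent)."""
    d, n = X.shape
    C = np.zeros((n, n))
    # Step size 1/L, where L is the Lipschitz constant of the gradient.
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 + 1e-12)
    for j in range(n):
        x = X[:, j]
        c = np.zeros(n)
        for _ in range(n_iter):
            grad = X.T @ (X @ c - x)
            c = soft_threshold(c - step * grad, step * lam)
            c[j] = 0.0  # forbid the trivial self-representation
        C[:, j] = c
    return C
```

The coefficient matrix is typically symmetrized into an affinity W = |C| + |C|^T and passed to spectral clustering to recover the subspaces.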
Combining Multiple Clusterings via Crowd Agreement Estimation and Multi-Granularity Link Analysis
The clustering ensemble technique aims to combine multiple clusterings into a
probably better and more robust clustering, and has been receiving increasing
attention in recent years. Existing clustering ensemble approaches suffer from
two main limitations. Firstly, many approaches lack the ability to weight the
base clusterings without access to the original data and can be significantly
affected by low-quality, or even ill-posed, clusterings.
Secondly, they generally focus on the instance level or cluster level in the
ensemble system and fail to integrate multi-granularity cues into a unified
model. To address these two limitations, this paper proposes to solve the
clustering ensemble problem via crowd agreement estimation and
multi-granularity link analysis. We present the normalized crowd agreement
index (NCAI) to evaluate the quality of base clusterings in an unsupervised
manner and thus weight the base clusterings in accordance with their clustering
validity. To explore the relationships between clusters, the source-aware
connected triple (SACT) similarity is introduced, which measures cluster
similarity in terms of common neighbors and the reliability of the source
clusterings. Based on NCAI and multi-granularity
information collected among base clusterings, clusters, and data instances, we
further propose two novel consensus functions, termed weighted evidence
accumulation clustering (WEAC) and graph partitioning with multi-granularity
link analysis (GP-MGLA), respectively. Experiments on eight real-world
datasets demonstrate the effectiveness and robustness of the proposed methods.

Comment: The MATLAB source code of this work is available at:
https://www.researchgate.net/publication/28197031
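The weighted evidence accumulation idea behind WEAC can be illustrated with a weighted co-association matrix: each pair of instances accumulates evidence from every base clustering that groups them together, weighted by that clustering's estimated quality. A minimal sketch, assuming the per-clustering weights (e.g. NCAI scores) are already computed; the function name is ours:

```python
import numpy as np

def weighted_coassociation(labelings, weights):
    """Weighted evidence accumulation: entry (i, j) is the weighted
    fraction of base clusterings that place instances i and j in the
    same cluster. `labelings` is a list of 1-D label arrays over the
    same instances; `weights` are per-clustering quality weights."""
    labelings = [np.asarray(lab) for lab in labelings]
    n = labelings[0].size
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so matrix entries lie in [0, 1]
    A = np.zeros((n, n))
    for lab, wk in zip(labelings, w):
        # Boolean co-membership matrix for this base clustering.
        A += wk * (lab[:, None] == lab[None, :])
    return A
```

The resulting matrix can then serve as a pairwise similarity for a final consensus step such as hierarchical clustering or graph partitioning.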
Diffusion Model with Clustering-based Conditioning for Food Image Generation
Image-based dietary assessment serves as an efficient and accurate solution
for recording and analyzing nutrition intake using eating occasion images as
input. Deep learning-based techniques are commonly used to perform image
analysis such as food classification, segmentation, and portion size
estimation, which rely on large amounts of food images with annotations for
training. However, such data dependency poses significant barriers to
real-world applications, because acquiring a substantial, diverse, and balanced
set of food images can be challenging. One potential solution is to use
synthetic food images for data augmentation. Although existing work has
explored generative adversarial network (GAN)-based structures for image
generation, the quality of synthetic food images remains subpar. In
addition, while diffusion-based generative models have shown promising results
for general image generation tasks, the generation of food images can be
challenging due to the substantial intra-class variance. In this paper, we
investigate the generation of synthetic food images based on the conditional
diffusion model and propose an effective clustering-based training framework,
named ClusDiff, for generating high-quality and representative food images. The
proposed method is evaluated on the Food-101 dataset and shows improved
performance when compared with existing image generation works. We also
demonstrate that the synthetic food images generated by ClusDiff can help
address the severe class imbalance issue in long-tailed food classification
using the VFN-LT dataset.

Comment: Accepted for the 31st ACM International Conference on Multimedia: 8th
International Workshop on Multimedia Assisted Dietary Management (MADiMa
2023).
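One way to read the clustering-based conditioning idea: partition each food class into visually coherent sub-clusters and condition the diffusion model on these finer labels, so that each conditioning label covers less intra-class variance. A hedged sketch of the label-assignment step (the plain k-means on precomputed features and the (class, cluster) label encoding are our assumptions, not ClusDiff's exact recipe):

```python
import numpy as np

def kmeans(feats, k, n_iter=50, seed=0):
    """Plain k-means (Lloyd's algorithm) on row-vector features."""
    rng = np.random.RandomState(seed)
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest center.
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        # Recompute centers, skipping clusters that went empty.
        for c in range(k):
            pts = feats[assign == c]
            if len(pts):
                centers[c] = pts.mean(0)
    return assign

def conditioning_labels(feats, class_ids, k=4):
    """Refine each class into up to k visual sub-clusters and return one
    conditioning id per image: class_id * k + cluster_id. A conditional
    diffusion model trained on these finer labels sees less intra-class
    variance per conditioning label."""
    class_ids = np.asarray(class_ids)
    labels = np.empty(len(feats), dtype=int)
    for c in np.unique(class_ids):
        idx = np.where(class_ids == c)[0]
        kk = min(k, len(idx))  # a tiny class cannot support k clusters
        labels[idx] = c * k + kmeans(feats[idx], kk)
    return labels
```

In practice the features would come from a pretrained image encoder rather than raw pixels; the split into sub-clusters is what lets the generator cover each class's visual modes separately.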