Sharp Generalization of Transductive Learning: A Transductive Local Rademacher Complexity Approach
We introduce a new tool, Transductive Local Complexity (TLC), designed to
analyze the generalization performance of transductive learning methods and
inspire the development of new algorithms in this domain. Our work extends the
concept of the popular Local Rademacher Complexity (LRC) to the transductive
setting, incorporating significant and novel modifications compared to the
typical analysis of LRC methods in the inductive setting. While LRC has been
widely used as a powerful tool for analyzing inductive models, providing sharp
generalization bounds for classification and minimax rates for nonparametric
regression, it remains an open question whether a localized Rademacher
complexity-based tool can be developed for transductive learning. Our goal is
to achieve sharp bounds for transductive learning that align with the inductive
excess risk bounds established by LRC. We provide a definitive answer to this
open problem with the introduction of TLC. We construct TLC by first
establishing a novel and sharp concentration inequality for the supremum of the
test-train empirical process. Using a peeling strategy and a new surrogate
variance operator, we derive a novel excess risk bound in the transductive
setting that is consistent with the classical LRC-based excess risk bound in
the inductive setting. As an application of TLC, we employ this new tool to
analyze the Transductive Kernel Learning (TKL) model, deriving sharper excess
risk bounds than those provided by the current state-of-the-art under the same
assumptions. Additionally, the concentration inequality for the test-train
process is employed to derive a sharp concentration inequality for the general
supremum of empirical processes involving random variables in the setting of
uniform sampling without replacement. The sharpness of our derived bound is
compared to existing concentration inequalities under the same conditions.
Comment: We use the key results in v1 of this paper (2309.16858v1), especially
the novel surrogate variance operator for the function class in v1, to polish
the results about the concentration inequality for the test-train process and
the subsequent excess risk bounds for transductive learning.
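For orientation, the sketch below recalls the generic shape of a classical LRC-style excess risk bound, which the TLC result is designed to match in the transductive setting; the exact quantities, constants, and conditions are those defined in the paper and are not reproduced here.

```latex
% Schematic only: the generic form of a localized-complexity excess risk bound
% (in the style of classical LRC results), which the TLC bound mirrors in the
% transductive test-train setting with a surrogate variance operator T(.).
\[
  R(\hat{f}) - \inf_{f \in \mathcal{F}} R(f)
  \;\le\; c_1\, r^{*} \;+\; \frac{c_2\, x}{n}
  \qquad \text{with probability at least } 1 - e^{-x},
\]
% where $r^{*}$ is the fixed point of a sub-root function $\psi$ that
% upper-bounds the local Rademacher complexity of the ball
% $\{ f \in \mathcal{F} : T(f) \le r \}$.
```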
Designing A Composite Dictionary Adaptively From Joint Examples
We study the complementary behaviors of external and internal examples in
image restoration, and are motivated to formulate a composite dictionary design
framework. The composite dictionary consists of the global part learned from
external examples, and the sample-specific part learned from internal examples.
The dictionary atoms in both parts are further adaptively weighted to emphasize
their model statistics. Experiments demonstrate that the joint utilization of
external and internal examples leads to substantial improvements, with
successful applications in image denoising and super-resolution.
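As an illustration of the idea (not the paper's exact algorithm), the sketch below assembles a composite dictionary from a global part learned on external patches and a sample-specific part learned on internal patches of the input, then re-weights the atoms before sparse coding; the weighting rule and all names are assumptions.

```python
# Illustrative sketch only: a composite dictionary built from a global part
# (learned on external examples) and a sample-specific part (learned on
# internal examples), with adaptive atom weights. The weighting rule and all
# names here are assumptions, not the paper's exact formulation.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode


def learn_dictionary(patches, n_atoms):
    """Learn a dictionary (rows are atoms) from vectorized patches."""
    model = MiniBatchDictionaryLearning(n_components=n_atoms, random_state=0)
    return model.fit(patches).components_


def composite_dictionary(external_patches, internal_patches,
                         n_global=64, n_specific=16):
    """Stack global (external) and sample-specific (internal) dictionaries,
    then weight atoms by their average usage on the internal examples."""
    D = np.vstack([learn_dictionary(external_patches, n_global),
                   learn_dictionary(internal_patches, n_specific)])
    codes = sparse_encode(internal_patches, D, algorithm='omp',
                          n_nonzero_coefs=5)
    weights = np.abs(codes).mean(axis=0) + 1e-8   # adaptive atom weights
    return D, weights


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    external = rng.standard_normal((500, 64))   # stand-in for external patches
    internal = rng.standard_normal((100, 64))   # stand-in for internal patches
    D, w = composite_dictionary(external, internal)
    x = rng.standard_normal((1, 64))            # patch to restore
    code = sparse_encode(x, D * w[:, None], algorithm='omp', n_nonzero_coefs=5)
    print(D.shape, code.shape)                  # (80, 64) (1, 80)
```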
Low-Rank Graph Contrastive Learning for Node Classification
Graph Neural Networks (GNNs) have been widely used to learn node
representations, achieving outstanding performance on various tasks such as node
classification. However, recent studies have revealed that noise, which
inevitably exists in real-world graph data, can considerably degrade the
performance of GNNs. In this work, we propose a novel and robust GNN encoder, Low-Rank
Graph Contrastive Learning (LR-GCL). Our method performs transductive node
classification in two steps. First, a low-rank GCL encoder named LR-GCL is
trained by prototypical contrastive learning with low-rank regularization.
Next, using the features produced by LR-GCL, a linear transductive
classification algorithm is used to classify the unlabeled nodes in the graph.
Our LR-GCL is inspired by the low frequency property of the graph data and its
labels, and it is also theoretically motivated by our sharp generalization
bound for transductive learning. To the best of our knowledge, our theoretical
result is among the first to demonstrate the advantage of low-rank learning in
graph contrastive learning, and it is supported by strong empirical
performance. Extensive experiments on public benchmarks demonstrate the
superior performance of LR-GCL and the robustness of the learned node
representations. The code of LR-GCL is available at
\url{https://anonymous.4open.science/r/Low-Rank_Graph_Contrastive_Learning-64A6/}.
Comment: arXiv admin note: text overlap with arXiv:2205.1410
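A minimal sketch of the two-step recipe described above is given below; it uses a plain InfoNCE contrastive loss with a nuclear-norm penalty in place of the paper's prototypical contrastive objective, and a simple dense GCN-style encoder, so the details are simplifying assumptions rather than the released LR-GCL code.

```python
# Minimal sketch, not the released LR-GCL code: (1) train a graph encoder with
# a contrastive loss plus a low-rank (nuclear-norm) penalty on the embeddings;
# (2) fit a linear classifier on the frozen embeddings for the unlabeled nodes.
# Encoder, augmentation, and loss details here are simplifying assumptions.
import torch
import torch.nn.functional as F


class GCNEncoder(torch.nn.Module):
    """Two-layer GCN-style encoder using a dense normalized adjacency."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, out_dim)

    def forward(self, x, a_norm):
        h = F.relu(a_norm @ self.lin1(x))
        return a_norm @ self.lin2(h)


def normalize_adj(adj):
    """Symmetric normalization of A + I."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(1).pow(-0.5)
    return d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]


def info_nce(z1, z2, tau=0.5):
    """Contrastive loss between two views; positives are the same node."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    return F.cross_entropy(logits, torch.arange(z1.size(0)))


def train_encoder(x, adj, epochs=50, lr=1e-3, lam=1e-3):
    a_norm = normalize_adj(adj)
    enc = GCNEncoder(x.size(1), 64, 32)
    opt = torch.optim.Adam(enc.parameters(), lr=lr)
    for _ in range(epochs):
        z1 = enc(F.dropout(x, 0.2), a_norm)          # view 1 (feature dropout)
        z2 = enc(F.dropout(x, 0.2), a_norm)          # view 2
        low_rank = torch.linalg.svdvals(z1).sum()    # nuclear-norm penalty
        loss = info_nce(z1, z2) + lam * low_rank
        opt.zero_grad(); loss.backward(); opt.step()
    return enc, a_norm


if __name__ == "__main__":
    n, d, n_class = 100, 16, 3
    x = torch.randn(n, d)
    adj = (torch.rand(n, n) < 0.05).float()
    adj = ((adj + adj.t()) > 0).float()              # symmetric toy graph
    enc, a_norm = train_encoder(x, adj)
    with torch.no_grad():
        z = enc(x, a_norm)                           # frozen embeddings
    y = torch.randint(0, n_class, (n,))
    labeled = torch.rand(n) < 0.2                    # labeled node mask
    clf = torch.nn.Linear(z.size(1), n_class)
    opt = torch.optim.Adam(clf.parameters(), lr=1e-2)
    for _ in range(100):
        loss = F.cross_entropy(clf(z[labeled]), y[labeled])
        opt.zero_grad(); loss.backward(); opt.step()
    pred = clf(z[~labeled]).argmax(1)                # labels for unlabeled nodes
    print(pred.shape)
```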
- …