We introduce a new tool, Transductive Local Complexity (TLC), designed to
analyze the generalization performance of transductive learning methods and
inspire the development of new algorithms in this domain. Our work extends the
concept of the popular Local Rademacher Complexity (LRC) to the transductive
setting, incorporating significant and novel modifications compared to the
typical analysis of LRC methods in the inductive setting. While LRC has been
widely used as a powerful tool for analyzing inductive models, providing sharp
generalization bounds for classification and minimax rates for nonparametric
regression, it has remained an open question whether a localized Rademacher
complexity-based tool can be developed for transductive learning. Our goal is
to achieve sharp bounds for transductive learning that align with the inductive
excess risk bounds established by LRC. We provide a definitive answer to this
open problem with the introduction of TLC. We construct TLC by first
establishing a novel and sharp concentration inequality for the supremum of the
test-train empirical process. Using a peeling strategy and a new surrogate
variance operator, we derive a novel excess risk bound in the transductive
setting that is consistent with the classical LRC-based excess risk bound in
the inductive setting. As an application, we employ TLC to
analyze the Transductive Kernel Learning (TKL) model, deriving sharper excess
risk bounds than those provided by the current state-of-the-art under the same
assumptions. Additionally, the concentration inequality for the test-train
process is further employed to derive a sharp concentration inequality for the
supremum of general empirical processes over random variables sampled uniformly
without replacement. The sharpness of the derived bound is
compared to existing concentration inequalities under the same conditions.

Comment: We use the key results in v1 of this paper (2309.16858v1), especially
the novel surrogate variance operator for the function class in v1, to refine
the results on the concentration inequality for the test-train process and
the subsequent excess risk bounds for transductive learning.