    Reduction Scheme for Empirical Risk Minimization and Its Applications to Multiple-Instance Learning

    In this paper, we propose a simple reduction scheme for empirical risk minimization (ERM) that preserves empirical Rademacher complexity. The reduction allows us to transfer known generalization bounds and algorithms for ERM to target learning problems in a straightforward way. In particular, we apply our reduction scheme to the multiple-instance learning (MIL) problem, for which generalization bounds and ERM algorithms have been extensively studied. We show that various learning problems can be reduced to MIL; examples include top-1 ranking learning, multi-class learning, and labeled and complementarily labeled learning. It turns out that some of the derived generalization bounds are, despite the simplicity of their derivation, competitive with or incomparable to the existing bounds. Moreover, in some settings of labeled and complementarily labeled learning, the derived algorithm is the first polynomial-time algorithm for the setting.
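    To make the MIL target of the reduction concrete, the following is a minimal sketch of the standard multiple-instance setup the abstract refers to: a sample consists of *bags* of instances, a bag is classified positive iff some instance in it scores positive, and the empirical risk is the bag-level 0-1 loss. The names (`score`, `bag_predict`, `empirical_risk`) and the linear scorer are illustrative assumptions, not the paper's actual construction.

    ```python
    import numpy as np

    def score(w, x):
        """Instance-level linear score <w, x> (an illustrative choice)."""
        return float(np.dot(w, x))

    def bag_predict(w, bag):
        """Bag-level label: +1 iff the maximum instance score is positive
        (the standard MIL assumption)."""
        return 1 if max(score(w, x) for x in bag) > 0 else -1

    def empirical_risk(w, bags, labels):
        """Bag-level 0-1 empirical risk on a labeled sample of bags."""
        errors = sum(bag_predict(w, bag) != y for bag, y in zip(bags, labels))
        return errors / len(bags)

    # Toy data: 2-D instances, w = (1, 0) so the score is the first coordinate.
    w = np.array([1.0, 0.0])
    bags = [
        [np.array([-1.0, 0.0]), np.array([2.0, 0.0])],   # contains a positive instance
        [np.array([-1.0, 0.0]), np.array([-2.0, 0.0])],  # all instances negative
    ]
    labels = [1, -1]
    print(empirical_risk(w, bags, labels))  # → 0.0
    ```

    A reduction in the abstract's sense would map a sample of the source problem (e.g. top-1 ranking or multi-class learning) to such bags so that the empirical risks coincide, letting MIL bounds and ERM algorithms carry over.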