Exploring the Learning Difficulty of Data: Theory and Measure
As learning difficulty is crucial in machine learning (e.g., in
difficulty-based weighting strategies), previous literature has proposed a
number of learning difficulty measures. However, no comprehensive
investigation of learning difficulty is available to date, so nearly all
existing measures are heuristically defined without a rigorous theoretical
foundation. In addition, there is no formal definition of easy and hard
samples, even though these notions are crucial in many studies. This study
conducts a pilot theoretical study of the learning difficulty of samples.
First, a theoretical definition of learning difficulty is proposed on the
basis of the bias-variance trade-off theory of generalization error.
Theoretical definitions of easy and hard samples are then established on the
basis of the proposed definition, and a practical measure of learning
difficulty, inspired by the formal definition, is given as well. Second, the
properties of learning difficulty-based weighting strategies are explored;
several classical weighting methods in machine learning can then be explained
in terms of these properties. Third, the proposed measure is evaluated to
verify its reasonableness and superiority with respect to several main
difficulty factors. The comparisons in these experiments indicate that the
proposed measure significantly outperforms the other measures throughout.
Comment: Ou Wu is the corresponding author of this work.
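The difficulty-based weighting idea the abstract refers to can be sketched with a minimal, hypothetical illustration. The paper's own measure is derived from the bias-variance trade-off; here, purely as an assumption for illustration, per-sample loss stands in as the difficulty proxy, and easier (lower-loss) samples receive larger weights, in the spirit of self-paced-style weighting. The function name and the `gamma` parameter are invented for this sketch, not taken from the paper.

```python
import numpy as np

def difficulty_weights(losses, gamma=1.0):
    """Map per-sample losses to normalized sample weights.

    Hypothetical illustration only: loss is used as a difficulty proxy
    (NOT the paper's theoretically derived measure). Losses are
    min-max normalized to [0, 1]; an exponential decay then assigns
    higher weight to easier (lower-loss) samples, and the weights are
    normalized to sum to 1.
    """
    losses = np.asarray(losses, dtype=float)
    spread = losses.max() - losses.min()
    difficulty = (losses - losses.min()) / (spread + 1e-12)
    weights = np.exp(-gamma * difficulty)
    return weights / weights.sum()

# Example: three samples with increasing loss get decreasing weight.
w = difficulty_weights([0.1, 0.5, 2.0])
```

Flipping the sign of `gamma` would instead upweight hard samples, which is the other classical family of weighting strategies (e.g., focal-loss-style schemes) that the paper's explored properties aim to explain.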
Delving Deep into Adversarial Attack for Multi-label Learning
This research focuses on adversarial attacks in multi-label learning. We analyze the shortcomings of existing research in both optimization objectives and solution methods. We then construct a new measure for the optimization objective based on the Jaccard index, and propose a novel approach to solving the resulting optimization problem.
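The Jaccard index the abstract builds on can be computed for two binary multi-label vectors as follows. This is the standard set-based definition (intersection over union of the positive label sets), not the paper's specific attack objective; the function name and the convention of returning 1.0 for two empty label sets are assumptions for this sketch.

```python
def jaccard_index(y_true, y_pred):
    """Jaccard index between two binary multi-label vectors.

    Treats each vector as the set of indices with a positive label and
    returns |intersection| / |union|. By convention here, two empty
    label sets are considered identical (index 1.0).
    """
    true = {i for i, v in enumerate(y_true) if v}
    pred = {i for i, v in enumerate(y_pred) if v}
    if not true and not pred:
        return 1.0
    return len(true & pred) / len(true | pred)

# Example: labels {0, 2} vs. {0, 1, 2} share 2 of 3 positions.
score = jaccard_index([1, 0, 1, 0], [1, 1, 1, 0])  # 2/3
```

In an attack setting, an objective built on this quantity would aim to drive the index between the predicted and true label sets down, which is presumably why the authors prefer it over per-label objectives that ignore set overlap.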