Semantic-gap-oriented active learning for multilabel image annotation
IEEE Transactions on Image Processing, 21(4), 2354-2360. DOI: 10.1109/TIP.2011.2180916
Multiple Instance Multiple Label Learning with Limited Supervision
In weakly supervised learning, label information can be provided at different levels of granularity. For example, in multi-instance multi-label learning, samples are organized into bags, and labels for each class are provided at the bag level. For small datasets, this approach offers a means of reducing labeling effort. However, at big-data scale, even this mid-level labeling granularity can become prohibitively costly. Under such limited supervision, several labeling approaches can further reduce labeling costs. In semi-supervised learning, only a limited number of bags are labeled, while a large number of bags remain unlabeled. In partial- or incomplete-label learning, due to labeling policy or labeling challenges, only a subset of the classes is labeled for each bag; this subset may be small or even empty. In active learning, a small number of class labels per bag are available, and a limited number of the most informative labels from the unlabeled portion of the data are queried in turn during the training phase to update the model efficiently. The goal is to achieve the best-performing classifier with as few labels as possible. All of the aforementioned approaches provide practical solutions for reducing labeling effort under the multi-instance multi-label learning framework, but they introduce machine learning challenges, both in the methodology they require and in the performance limits they impose. This work focuses on probabilistic models with exact or approximate, yet efficient, inference to leverage information from limited-supervision data. In many cases, the proposed frameworks outperform, sometimes by a significant margin, recent state-of-the-art approaches.
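To make the setting concrete, the sketch below illustrates (in a hypothetical, heavily simplified form, not the authors' actual model) the multi-instance multi-label data organization described in the abstract, together with an uncertainty-based active-learning query: bags hold instance feature vectors, labels are observed only at the bag level and only for some classes, and the learner asks for the (bag, class) label whose current prediction is most uncertain. The prototype-based scorer is a crude stand-in for the probabilistic models discussed in the work.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes = 3

# Each bag: (instances, observed_labels). observed_labels maps a class index
# to 0/1; classes missing from the dict are unlabeled (partial labels).
bags = [
    (rng.normal(c, 1.0, size=(int(rng.integers(3, 6)), 2)), {c: 1})
    for c in range(n_classes)
    for _ in range(4)
]

def bag_scores(instances, prototypes):
    """Score each class by its closest instance to a class prototype
    (a stand-in for a real probabilistic MIML classifier)."""
    d = np.linalg.norm(instances[:, None, :] - prototypes[None, :, :], axis=2)
    return 1.0 / (1.0 + d.min(axis=0))  # one score per class, in (0, 1]

def most_informative(bags, prototypes):
    """Active-learning query: among unlabeled (bag, class) entries, pick the
    one whose predicted score is closest to 0.5 (maximum uncertainty)."""
    best, best_unc = None, -1.0
    for i, (x, labels) in enumerate(bags):
        scores = bag_scores(x, prototypes)
        for c, s in enumerate(scores):
            if c in labels:
                continue  # this class is already labeled for this bag
            unc = 1.0 - 2.0 * abs(s - 0.5)  # 1 at s=0.5, 0 at s in {0, 1}
            if unc > best_unc:
                best, best_unc = (i, c), unc
    return best

# Hypothetical class prototypes; a real model would estimate these from data.
prototypes = np.array([[float(c), float(c)] for c in range(n_classes)])
query = most_informative(bags, prototypes)
print(query)  # (bag index, class index) to send to the annotator next
```

Each query returned this way would be labeled by an annotator and folded back into training, repeating until the labeling budget is exhausted.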