Assessing similarity of feature selection techniques in high-dimensional domains
Recent research efforts attempt to combine multiple feature selection techniques instead of using a single one. However, this combination is often made on an "ad hoc" basis, depending on the specific problem at hand, without considering the degree of diversity/similarity of the involved methods. Moreover, although it is recognized that different techniques may return quite dissimilar outputs, especially in high-dimensional/small-sample-size domains, few direct comparisons exist that quantify these differences and their implications for classification performance. This paper contributes in this direction by proposing a general methodology for assessing the similarity between the outputs of different feature selection methods in high-dimensional classification problems. Using the genomics domain as a benchmark, an empirical study has been conducted to compare some of the most popular feature selection methods, yielding useful insight into their patterns of agreement.
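One simple way to quantify agreement between two feature selection methods, as the abstract describes, is set overlap between their selected feature subsets. The sketch below uses the Jaccard index; the method names and top-k sets are illustrative assumptions, not the paper's actual measure or data.

```python
# Hypothetical sketch: agreement between two feature selectors,
# measured as the Jaccard overlap of their top-k feature index sets.
def jaccard_similarity(features_a, features_b):
    """Jaccard index of two selected-feature sets (1.0 = identical)."""
    a, b = set(features_a), set(features_b)
    return len(a & b) / len(a | b)

# Top-20 feature indices chosen by two hypothetical rankers
# (e.g. a ReliefF-style and a chi-squared-style filter).
ranker_a_top20 = set(range(0, 20))
ranker_b_top20 = set(range(10, 30))

print(round(jaccard_similarity(ranker_a_top20, ranker_b_top20), 3))  # 10/30
```

A low Jaccard value in a high-dimensional/small-sample setting would signal exactly the kind of disagreement between methods that the paper sets out to measure.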
Committee-Based Sample Selection for Probabilistic Classifiers
In many real-world learning tasks, it is expensive to acquire a sufficient
number of labeled examples for training. This paper investigates methods for
reducing annotation cost by "sample selection". In this approach, during
training the learning program examines many unlabeled examples and selects for
labeling only those that are most informative at each stage. This avoids
redundantly labeling examples that contribute little new information. Our work
builds on previous research on Query By Committee, extending the
committee-based paradigm to the context of probabilistic classification. We
describe a family of empirical methods for committee-based sample selection in
probabilistic classification models, which evaluate the informativeness of an
example by measuring the degree of disagreement between several model variants.
These variants (the committee) are drawn randomly from a probability
distribution conditioned on the training set labeled so far. The method was
applied to the real-world natural language processing task of stochastic
part-of-speech tagging. We find that all variants of the method achieve a
significant reduction in annotation cost, although their computational
efficiency differs. In particular, the simplest variant, a two-member committee
with no parameters to tune, gives excellent results. We also show that sample
selection yields a significant reduction in the size of the model used by the
tagger.
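The core loop the abstract describes — score each unlabeled example by how much the committee members disagree on its label — can be sketched as below. Vote entropy is one standard disagreement measure; the toy two-member "taggers" and the threshold are illustrative assumptions, not the paper's exact formulation.

```python
import math

def vote_entropy(votes):
    """Committee disagreement for one example: entropy of the
    distribution of labels voted for by the committee members."""
    n = len(votes)
    counts = {}
    for v in votes:
        counts[v] = counts.get(v, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def select_informative(examples, committee, threshold=0.5):
    """Keep only examples whose committee votes disagree enough."""
    return [x for x in examples
            if vote_entropy([member(x) for member in committee]) >= threshold]

# Toy two-member committee of POS "taggers" (illustrative stand-ins
# for model variants sampled from the posterior over parameters):
tagger_a = lambda word: "NN" if word.endswith("s") else "VB"
tagger_b = lambda word: "NN"

print(select_informative(["runs", "run"], [tagger_a, tagger_b]))  # → ['run']
```

Examples on which the committee agrees ("runs" above) are skipped, saving annotation effort; only the contested example is sent for labeling, mirroring the cost reduction the paper reports.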
Minimizing Supervision in Multi-label Categorization
Multiple categories of objects are present in most images, so treating this as
a multi-class classification problem is not justified; we instead treat it as a
multi-label classification problem. In this paper, we further aim to minimize
the supervision required for multi-label classification. Specifically, we
investigate an effective class of approaches that associate a weak localization
with each category, in terms of either a bounding box or a segmentation mask;
doing so improves the accuracy of multi-label categorization. The approach we
adopt is one of active learning, i.e., incrementally selecting a set of samples
that need supervision based on the current model, obtaining supervision for
these samples, retraining the model with the additional set of supervised
samples, and proceeding again to select the next set of samples. A crucial
concern is the choice of the set of samples. In addressing it, we provide a
novel insight: no single measure succeeds in yielding a consistently improved
selection criterion. We therefore provide a selection criterion that
consistently improves on the overall baseline criterion by choosing the top k
samples under a varied set of criteria. Using this criterion, we show that we
can retain more than 98% of the fully supervised performance with just 20% of
the dataset (and more than 96% using 10%) on PASCAL VOC 2007 and 2012. Our
proposed approach also consistently outperforms all other baseline metrics for
all benchmark datasets and model combinations.
Comment: Accepted in CVPR-W 202