4,757 research outputs found

    Learning a Policy for Opportunistic Active Learning

    Active learning identifies data points to label that are expected to be the most useful in improving a supervised model. Opportunistic active learning incorporates active learning into interactive tasks that constrain possible queries during interactions. Prior work has shown that opportunistic active learning can be used to improve grounding of natural language descriptions in an interactive object retrieval task. In this work, we use reinforcement learning for such an object retrieval task, to learn a policy that effectively trades off task completion with model improvement that would benefit future tasks. Comment: EMNLP 2018 Camera Ready
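
    A minimal sketch of the trade-off this abstract describes, not the paper's implementation: at each step an agent either guesses the target object (completing the task) or spends a turn on a query that improves the grounding model for future episodes. The reward shaping, function names, and threshold below are illustrative assumptions.

```python
# Hypothetical shaped reward for an opportunistic active-learning agent:
# immediate task completion vs. model improvement that pays off later.
def reward(action, guessed_correctly, expected_model_gain, query_cost=0.1):
    if action == "guess":
        return 1.0 if guessed_correctly else -1.0
    return expected_model_gain - query_cost  # action == "query"

# A trivial confidence-threshold baseline that a learned RL policy
# would replace (threshold chosen arbitrarily for illustration).
def baseline_policy(confidence, threshold=0.8):
    return "guess" if confidence > threshold else "query"

print(baseline_policy(0.9))                            # -> "guess"
print(reward("query", None, expected_model_gain=0.3))  # -> 0.2
```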

    An Evaluation of Selection Strategies for Active Learning with Regression

    While active learning for classification problems has received considerable attention in recent years, studies on regression problems are rare. This paper provides a systematic review of the most commonly used selection strategies for active learning in the context of linear regression. The recently developed Exploration Guided Active Learning (EGAL) algorithm, previously deployed in a classification context, is explored as a selection strategy for regression problems. Active learning is demonstrated to significantly improve the learning rate of linear regression models. Experimental results show that a purely diversity-based approach…
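
    A hedged sketch of a purely diversity-based selection strategy for active learning with linear regression. EGAL combines density and diversity scores; this shows only the diversity component (query the pool point farthest from its nearest labeled neighbour), with synthetic data and loop sizes chosen purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import pairwise_distances

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 2))
y = X @ np.array([1.5, -2.0]) + rng.normal(0, 0.1, 300)

labeled = list(range(5))  # small seed set of labeled indices
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(25):  # 25 query rounds
    # Diversity score: distance from each candidate to its nearest
    # labeled point; query the most isolated candidate.
    d = pairwise_distances(X[pool], X[labeled]).min(axis=1)
    query = pool[int(np.argmax(d))]
    labeled.append(query)  # oracle reveals y[query]
    pool.remove(query)

model = LinearRegression().fit(X[labeled], y[labeled])
print(model.score(X, y))
```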

    Active learning in VAE latent space


    Re-Benchmarking Pool-Based Active Learning for Binary Classification

    Active learning is a paradigm that significantly enhances the performance of machine learning models when acquiring labeled data is expensive. While several benchmarks exist for evaluating active learning strategies, their findings exhibit some misalignment. This discrepancy motivates us to develop a transparent and reproducible benchmark for the community. Our efforts result in an open-sourced implementation (https://github.com/ariapoy/active-learning-benchmark) that is reliable and extensible for future research. By conducting thorough re-benchmarking experiments, we have not only rectified misconfigurations in existing benchmarks but also shed light on the under-explored issue of model compatibility, which directly causes the observed discrepancy. Resolving the discrepancy confirms that the uncertainty sampling strategy remains an effective and preferred choice of active learning for most datasets. Our experience highlights the importance of dedicating research effort to re-benchmarking existing benchmarks, both to produce more credible results and to gain deeper insights.
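
    A minimal sketch of the pool-based uncertainty sampling loop this benchmark evaluates, assuming a logistic-regression learner and a synthetic binary-classification pool; seed size and number of rounds are arbitrary illustrative choices, not the benchmark's configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)
labeled = list(range(10))  # small labeled seed set
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):  # 20 query rounds
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])[:, 1]
    # Uncertainty sampling: query the pool point whose predicted
    # probability is closest to the 0.5 decision boundary.
    query = pool[int(np.argmin(np.abs(proba - 0.5)))]
    labeled.append(query)  # oracle reveals y[query]
    pool.remove(query)

print(model.score(X, y))
```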