
    Annotator: A Generic Active Learning Baseline for LiDAR Semantic Segmentation

    Active learning, a label-efficient paradigm, empowers models to interactively query an oracle for labels on new data. In LiDAR semantic segmentation, the challenge stems from the sheer volume of point clouds, which renders annotation labor-intensive and cost-prohibitive. This paper presents Annotator, a general and efficient active learning baseline, in which a voxel-centric online selection strategy is tailored to efficiently probe and annotate the salient and exemplar voxel grids within each LiDAR scan, even under distribution shift. Concretely, we first conduct an in-depth analysis of several common selection strategies, such as Random, Entropy, and Margin, and then develop the voxel confusion degree (VCD), which exploits the local topological relations and structures of point clouds. Annotator excels in diverse settings, with a particular focus on active learning (AL), active source-free domain adaptation (ASFDA), and active domain adaptation (ADA). It consistently delivers exceptional performance across LiDAR semantic segmentation benchmarks, spanning both simulation-to-real and real-to-real scenarios. Surprisingly, Annotator is remarkably efficient, requiring significantly fewer annotations, e.g., labeling just five voxels per scan in the SynLiDAR-to-SemanticKITTI task. This yields impressive results: 87.8% of fully-supervised performance under AL, 88.5% under ASFDA, and 94.4% under ADA. We envision that Annotator will offer a simple, general, and efficient solution for label-efficient 3D applications. Project page: https://binhuixie.github.io/annotator-web
    Comment: NeurIPS 2023.
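    The Random, Entropy, and Margin baselines the abstract compares against are standard acquisition scores; the sketch below shows how such per-voxel scores might be computed and used to pick a five-voxel budget per scan. It is a generic illustration under stated assumptions, not the paper's VCD criterion: the `probs` array, the helper names, and the 19-class label set are all hypothetical.

    # Minimal sketch of Entropy/Margin-style per-voxel acquisition (not VCD).
    # `probs` is a hypothetical (num_voxels, num_classes) array of softmax
    # outputs for one LiDAR scan; 19 classes is a SemanticKITTI-style assumption.
    import numpy as np

    def entropy_scores(probs: np.ndarray) -> np.ndarray:
        """Predictive entropy per voxel; higher means more uncertain."""
        return -(probs * np.log(probs + 1e-12)).sum(axis=1)

    def margin_scores(probs: np.ndarray) -> np.ndarray:
        """Negative top-2 margin per voxel; higher means more ambiguous."""
        top2 = np.sort(probs, axis=1)[:, -2:]
        return -(top2[:, 1] - top2[:, 0])

    def select_voxels(probs: np.ndarray, budget: int = 5) -> np.ndarray:
        """Pick the `budget` most uncertain voxels to annotate (e.g., 5 per scan)."""
        return np.argsort(entropy_scores(probs))[-budget:]

    # Example: 1000 voxels in a scan, 19 semantic classes.
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(1000, 19))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    print(select_voxels(probs, budget=5))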

    Hierarchical Subquery Evaluation for Active Learning on a Graph

    To train good supervised and semi-supervised object classifiers, it is critical that we not waste the time of the human experts who are providing the training labels. Existing active learning strategies can have uneven performance, being efficient on some datasets but wasteful on others, or inconsistent even between runs on the same dataset. We propose perplexity-based graph construction and a new hierarchical subquery evaluation algorithm to combat this variability and to release the potential of Expected Error Reduction. Under some specific circumstances, Expected Error Reduction has been one of the strongest-performing informativeness criteria for active learning. Until now, it has also been prohibitively costly to compute for sizeable datasets. We demonstrate our highly practical algorithm, comparing it to other active learning measures on classification datasets that vary in sparsity, dimensionality, and size. Our algorithm is consistent over multiple runs and achieves high accuracy, while querying the human expert for labels at a frequency that matches their desired time budget.
    Comment: CVPR 201
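    For readers unfamiliar with the criterion, the sketch below is a naive, generic formulation of Expected Error Reduction: each candidate query is scored by the expected pool error after hypothetically labeling it, weighted by the current model's label posterior. It deliberately shows the brute-force version whose retraining cost the paper's hierarchical subquery evaluation is designed to tame; scikit-learn's LogisticRegression is a stand-in classifier, and nothing here reproduces the paper's graph construction.

    # Naive Expected Error Reduction (EER): retrains once per candidate per
    # class, which is exactly the cost that makes plain EER impractical at scale.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def expected_error_reduction(X_lab, y_lab, X_pool, candidates):
        # Current model and its label posterior over the candidate queries.
        base = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
        p_cand = base.predict_proba(X_pool[candidates])
        scores = []
        for i, idx in enumerate(candidates):
            exp_err = 0.0
            for j, y in enumerate(base.classes_):
                # Hypothetically label the candidate as class y and retrain.
                clf = LogisticRegression(max_iter=1000).fit(
                    np.vstack([X_lab, X_pool[idx:idx + 1]]),
                    np.append(y_lab, y))
                # Expected-error proxy over the pool: 1 - max class probability.
                pool_err = (1.0 - clf.predict_proba(X_pool).max(axis=1)).mean()
                exp_err += p_cand[i, j] * pool_err
            scores.append(exp_err)
        # Query the point whose labeling is expected to reduce error the most.
        return candidates[int(np.argmin(scores))]

    # Tiny synthetic demo: 20 labeled points, 100-point pool, 10 candidates.
    rng = np.random.default_rng(0)
    X_lab, y_lab = rng.normal(size=(20, 5)), rng.integers(0, 3, size=20)
    X_pool = rng.normal(size=(100, 5))
    print(expected_error_reduction(X_lab, y_lab, X_pool, np.arange(10)))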

    Uncertainty-Aware AI for ECG arrhythmia multi-label classification

    Machine Learning (ML) models can predict a variety of diseases, with performance that can exceed that of healthcare professionals. However, when implemented in clinical settings as decision support systems, their generalisation capabilities are often compromised, making healthcare professionals more susceptible to delivering erroneous diagnoses. This research focuses on uncertainty measures both as a basis for abstaining from classifying samples with high uncertainty and as a selection criterion for active learning strategies. For this purpose, four large public multi-label Electrocardiogram (ECG) databases were employed for the classification of cardiac arrhythmias. Regarding the uncertainty measures, single-distribution uncertainty and classical information-theoretic measures of entropy were tested and compared. To this end, three Deep Learning models were developed: a single convolutional neural network and two multiple-model approaches using Monte-Carlo Dropout and Deep Ensemble techniques. When tested with samples from the same database used for training, all models achieved F1-scores above 95%. However, when tested on an external dataset, their performance dropped to approximately 70%, indicating a probable scenario of dataset shift. The Deep Ensemble model obtained the highest F1-score on both test sets, with a maximum difference of 3% from the others. With the rejection option enabled, the rejection rate increased from 10% to between 30% and 50% depending on the model and uncertainty measure, with the highest rejection rates obtained on external data. This reveals that classifications on the external dataset carry higher uncertainty, a further indication of dataset shift. For the active learning approach, the 10% of samples with the highest uncertainty were used to retrain the models, and performance increased by almost 5%, suggesting uncertainty is a good selection criterion. Although challenges to the implementation of ML models remain, these preliminary studies show that uncertainty quantification is a valuable method for classification with a rejection option and for active learning under dataset shift conditions.
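    The pipeline described above, ensemble-based uncertainty estimation, classification with rejection, and uncertainty-driven retraining, can be summarized in a short sketch. The code below assumes a stack of per-member softmax outputs, as produced by Monte-Carlo Dropout passes or Deep Ensemble members, and uses single-label predictive entropy for simplicity; the thesis's multi-label setting, exact thresholds, and model shapes are not reproduced.

    # Sketch of ensemble uncertainty -> rejection -> active-learning selection.
    # Shapes, the 5-member ensemble, and thresholds are illustrative assumptions.
    import numpy as np

    def predictive_entropy(member_probs: np.ndarray) -> np.ndarray:
        """member_probs: (n_members, n_samples, n_classes) -> entropy per sample."""
        mean_p = member_probs.mean(axis=0)
        return -(mean_p * np.log(mean_p + 1e-12)).sum(axis=1)

    def reject(uncertainty: np.ndarray, threshold: float) -> np.ndarray:
        """Boolean mask of samples to abstain on (classification with rejection)."""
        return uncertainty > threshold

    def active_learning_pick(uncertainty: np.ndarray, frac: float = 0.10) -> np.ndarray:
        """Indices of the top `frac` most uncertain samples to label and retrain on."""
        k = max(1, int(frac * uncertainty.size))
        return np.argsort(uncertainty)[-k:]

    # Example: hypothetical 5-member ensemble, 200 samples, 9 classes.
    rng = np.random.default_rng(1)
    probs = rng.dirichlet(np.ones(9), size=(5, 200))
    u = predictive_entropy(probs)
    print(reject(u, threshold=np.quantile(u, 0.9)).sum(), active_learning_pick(u))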