Large-scale fine-grained semantic indexing of biomedical literature based on weakly-supervised deep learning
Semantic indexing of biomedical literature is usually done at the level of
MeSH descriptors, representing topics of interest for the biomedical community.
Several related but distinct biomedical concepts are often grouped together in
a single coarse-grained descriptor and are treated as a single topic for
semantic indexing. This study proposes a new method for the automated
refinement of subject annotations at the level of concepts, investigating deep
learning approaches. Lacking labelled data for this task, our method relies on
weak supervision based on concept occurrence in the abstract of an article. The
proposed approach is evaluated on an extended large-scale retrospective
scenario, taking advantage of concepts that eventually become MeSH descriptors,
for which annotations become available in MEDLINE/PubMed. The results suggest
that concept occurrence is a strong heuristic for automated subject annotation
refinement and can be further enhanced when combined with dictionary-based
heuristics. In addition, such heuristics can be useful as weak supervision for
developing deep learning models that can achieve further improvement in some
cases.
Comment: 48 pages, 5 figures, 9 tables, 1 algorithm
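The concept-occurrence heuristic described above can be sketched as: assign a fine-grained concept label to an article whenever the concept's name, or one of its dictionary synonyms, occurs in the abstract. A minimal sketch, assuming hypothetical concept IDs and synonym lists (the `weak_labels` helper is illustrative, not the paper's implementation):

```python
import re

def weak_labels(abstract, concept_synonyms):
    """Weakly label an article with every fine-grained concept whose
    surface form occurs in its abstract. `concept_synonyms` maps a
    concept ID to a list of dictionary synonyms (hypothetical data)."""
    text = abstract.lower()
    labels = set()
    for concept, synonyms in concept_synonyms.items():
        for form in synonyms:
            # Word-boundary match so "flu" does not fire inside "influenza".
            if re.search(r"\b" + re.escape(form.lower()) + r"\b", text):
                labels.add(concept)
                break
    return labels

# Hypothetical fine-grained concepts grouped under one coarse descriptor.
synonyms = {
    "C01": ["deep learning", "neural network"],
    "C02": ["semantic indexing"],
}
print(sorted(weak_labels("We apply deep learning to semantic indexing.",
                         synonyms)))  # -> ['C01', 'C02']
```

Such occurrence-based labels are noisy, which is why the study combines them with dictionary heuristics and uses them only as weak supervision for the downstream models.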
Weakly Supervised Multi-Label Classification of Full-Text Scientific Papers
Instead of relying on human-annotated training samples to build a classifier,
weakly supervised scientific paper classification aims to classify papers only
using category descriptions (e.g., category names, category-indicative
keywords). Existing studies on weakly supervised paper classification are less
concerned with two challenges: (1) Papers should be classified into not only
coarse-grained research topics but also fine-grained themes, and potentially
into multiple themes, given a large and fine-grained label space; and (2) full
text should be utilized to complement the paper title and abstract for
classification. Moreover, instead of viewing the entire paper as a long linear
sequence, one should exploit the structural information such as citation links
across papers and the hierarchy of sections and paragraphs in each paper. To
tackle these challenges, in this study, we propose FUTEX, a framework that uses
the cross-paper network structure and the in-paper hierarchy structure to
classify full-text scientific papers under weak supervision. A network-aware
contrastive fine-tuning module and a hierarchy-aware aggregation module are
designed to leverage the two types of structural signals, respectively.
Experiments on two benchmark datasets demonstrate that FUTEX significantly
outperforms competitive baselines and is on par with fully supervised
classifiers that use 1,000 to 60,000 ground-truth training samples.Comment: 12 pages; Accepted to KDD 2023 (Code:
https://github.com/yuzhimanhua/FUTEX
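The weak-supervision signal FUTEX starts from, category descriptions rather than labeled papers, can be sketched as keyword coverage: score each label by how many of its category-indicative keywords appear in the paper text, and assign every label above a threshold. The label names, keywords, and threshold below are invented for illustration; FUTEX's actual network-aware contrastive fine-tuning and hierarchy-aware aggregation modules go well beyond this:

```python
def keyword_scores(text, label_keywords):
    """Score each label by the fraction of its category-indicative
    keywords that appear in the paper text (no labeled samples needed)."""
    text = text.lower()
    return {
        label: sum(kw.lower() in text for kw in kws) / len(kws)
        for label, kws in label_keywords.items()
    }

def weak_multilabel(text, label_keywords, threshold=0.5):
    """Assign every label whose keyword coverage clears the threshold,
    so one paper can receive multiple fine-grained themes."""
    scores = keyword_scores(text, label_keywords)
    return sorted(l for l, s in scores.items() if s >= threshold)

# Hypothetical fine-grained label space with indicative keywords.
labels = {
    "graph-neural-networks": ["graph", "message passing"],
    "contrastive-learning": ["contrastive", "positive pair"],
    "speech-recognition": ["acoustic model", "phoneme"],
}
paper = ("We fine-tune a graph encoder with a contrastive objective over "
         "positive pairs built from citation links and message passing layers.")
print(weak_multilabel(paper, labels))
# -> ['contrastive-learning', 'graph-neural-networks']
```

Matching over the full text rather than only the title and abstract is exactly where the paper argues the extra signal comes from.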
Enhancing Low-resource Fine-grained Named Entity Recognition by Leveraging Coarse-grained Datasets
Named Entity Recognition (NER) frequently suffers from the problem of
insufficient labeled data, particularly in fine-grained NER scenarios. Although
K-shot learning techniques can be applied, their performance tends to
saturate when the number of annotations exceeds several tens of labels. To
overcome this problem, we utilize existing coarse-grained datasets that offer a
large number of annotations. A straightforward approach to address this problem
is pre-finetuning, which employs coarse-grained data for representation
learning. However, it cannot directly utilize the relationships between
fine-grained and coarse-grained entities, although a fine-grained entity type
is likely to be a subcategory of a coarse-grained entity type. We propose a
fine-grained NER model with a Fine-to-Coarse(F2C) mapping matrix to leverage
the hierarchical structure explicitly. In addition, we present an inconsistency
filtering method to eliminate coarse-grained entities that are inconsistent
with fine-grained entity types to avoid performance degradation. Our
experimental results show that our method outperforms both K-shot learning
and supervised learning methods when dealing with a small number of
fine-grained annotations.
Comment: Accepted to EMNLP 2023
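The two ideas in this abstract, the Fine-to-Coarse (F2C) mapping matrix and inconsistency filtering, can be sketched together. The matrix makes the hierarchy explicit: summing a model's fine-grained probabilities through it yields coarse-grained probabilities that coarse annotations can supervise, and the same matrix tells us which coarse entities to filter out. The type names, matrix entries, and helper functions below are invented for illustration, not the paper's implementation:

```python
fine_types = ["athlete", "politician", "city", "river"]
coarse_types = ["PER", "LOC"]

# F2C mapping matrix: M[i][j] = 1 iff fine type i is a subcategory of
# coarse type j, i.e. the type hierarchy made explicit.
M = [
    [1, 0],  # athlete    -> PER
    [1, 0],  # politician -> PER
    [0, 1],  # city       -> LOC
    [0, 1],  # river      -> LOC
]

def project_to_coarse(p_fine):
    """Sum fine-grained probabilities into their coarse-grained parents,
    so coarse-grained annotations can supervise a fine-grained model."""
    return [sum(p * M[i][j] for i, p in enumerate(p_fine))
            for j in range(len(coarse_types))]

def is_consistent(coarse_label, fine_label):
    """Inconsistency filtering: keep a coarse-grained training entity only
    if its label is the parent of the fine-grained type predicted for it."""
    i = fine_types.index(fine_label)
    j = coarse_types.index(coarse_label)
    return bool(M[i][j])

print(project_to_coarse([0.5, 0.25, 0.125, 0.125]))  # -> [0.75, 0.25]
print(is_consistent("PER", "athlete"))               # -> True
print(is_consistent("PER", "city"))                  # -> False
```

Filtering out entities like the `("PER", "city")` case above is what the abstract means by removing coarse-grained entities that are inconsistent with the fine-grained types, which would otherwise degrade performance.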