Three real-world datasets and neural computational models for classification tasks in patent landscaping
Patent Landscaping, one of the central tasks of intellectual property management, consists of selecting and grouping patents according to user-defined technical or application-oriented criteria. While recent transformer-based models have been shown to be effective for classifying patents into taxonomies such as CPC or IPC, there is still little research on how to support real-world Patent Landscape Studies (PLSs) using natural language processing methods. With this paper, we release three labeled datasets for PLS-oriented classification tasks covering two diverse domains. We provide a qualitative analysis and report detailed corpus statistics. Most research on neural models for patents has been restricted to leveraging titles and abstracts. We compare strong neural and non-neural baselines and propose a novel model that takes into account textual information from the patents' full texts as well as embeddings created from the patents' CPC labels. We find that for PLS-oriented classification tasks, going beyond title and abstract is crucial, CPC labels are an effective source of information, and combining all features yields the best results.
Evaluating neural multi-field document representations for patent classification
Patent classification constitutes a long-tailed hierarchical learning problem. Prior work has demonstrated the efficacy of neural representations based on pre-trained transformers; however, due to the limited input size of these models, it has used only the title and abstract of patents as input. Patent documents consist of several textual fields, some of which are quite long. We show that a baseline using simple tf.idf-based methods can easily leverage this additional information. We propose a new architecture combining the neural transformer-based representations of the various fields into a meta-embedding, which we demonstrate to outperform the tf.idf-based counterparts, especially on less frequent classes. Using a relatively simple architecture, we outperform the previous state of the art on CPC classification by a margin of 1.2 macro-avg. F1 and 2.6 micro-avg. F1. We identify the textual field giving a "brief-summary" of the patent as most informative with regard to CPC classification, which points to interesting future directions of research on less computation-intensive models, e.g., by summarizing long documents before neural classification.
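The meta-embedding architecture described in this abstract can be sketched compactly in PyTorch. The snippet below is an illustrative approximation rather than the authors' released code; the base model name, the set of fields, the per-field projections, and the number of labels are assumptions.

import torch
import torch.nn as nn
from transformers import AutoModel

class MetaEmbeddingClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased",
                 fields=("title", "abstract", "claims"), num_labels=600):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # one projection per textual field; concatenation forms the meta-embedding
        self.field_proj = nn.ModuleDict({f: nn.Linear(hidden, hidden) for f in fields})
        self.classifier = nn.Linear(hidden * len(fields), num_labels)

    def forward(self, field_inputs):
        # field_inputs: dict mapping field name -> tokenizer output (input_ids, attention_mask)
        parts = []
        for name, proj in self.field_proj.items():
            cls_vec = self.encoder(**field_inputs[name]).last_hidden_state[:, 0]  # [CLS] pooling
            parts.append(proj(cls_vec))
        meta = torch.cat(parts, dim=-1)          # the meta-embedding
        return self.classifier(meta)             # multi-label logits (train with BCEWithLogitsLoss)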
The influence of social status and network structure on consensus building in collaboration networks
Multi-label classification for biomedical literature: an overview of the BioCreative VII LitCovid Track for COVID-19 literature topic annotations
The coronavirus disease 2019 (COVID-19) pandemic has been severely impacting global society since December 2019. Related findings, such as vaccine and drug development, have been reported in the biomedical literature at a rate of about 10,000 articles on COVID-19 per month. Such rapid growth significantly challenges manual curation and interpretation. For instance, LitCovid is a literature database of COVID-19-related articles in PubMed, which has accumulated more than 200,000 articles with millions of accesses each month by users worldwide. One primary curation task is to assign up to eight topics (e.g. Diagnosis and Treatment) to the articles in LitCovid. The annotated topics have been widely used for navigating the COVID literature, rapidly locating articles of interest, and other downstream studies. However, annotating the topics has been the bottleneck of manual curation. Despite the continuing advances in biomedical text-mining methods, few have been dedicated to topic annotations in COVID-19 literature. To close the gap, we organized the BioCreative LitCovid track to call for a community effort to tackle automated topic annotation for COVID-19 literature. The BioCreative LitCovid dataset, consisting of over 30,000 articles with manually reviewed topics, was created for training and testing. It is one of the largest multi-label classification datasets in biomedical scientific literature. Nineteen teams worldwide participated and made 80 submissions in total. Most teams used hybrid systems based on transformers. The highest-performing submissions achieved 0.8875, 0.9181, and 0.9394 for macro-F1-score, micro-F1-score, and instance-based F1-score, respectively. Notably, these scores are substantially higher (e.g. 12% higher for macro-F1-score) than the corresponding scores of the state-of-the-art multi-label classification method. The level of participation and results demonstrate a successful track and help close the gap between dataset curation and method development. The dataset is publicly available via https://ftp.ncbi.nlm.nih.gov/pub/lu/LitCovid/biocreative/ for benchmarking and further development.
Neural Patent Classification beyond Title and Abstract: Leveraging Patent Text and Metadata
Intellectual property violations involve substantial litigation and license costs, which is why patent search is of utmost importance. Over the years, patent corpora have amassed millions of patents, making manual searches impractical. Patent classification techniques help domain experts to search and analyze patents. On submission to an examination office, a patent application is assigned labels from pre-defined patent taxonomies, e.g., the Cooperative Patent Classification (CPC) and the International Patent Classification (IPC). CPC/IPC classification helps to route patent applications to the correct department and assists in performing prior art searches. In addition to CPC/IPC classification, we address the classification task associated with the Patent Landscape Study (PLS), a process that allows organizations to search patents, categorize them into custom labels, and analyze them to derive crucial insights. This thesis significantly contributes to the improvement of patent classification systems by addressing the key challenges described below.
Most of the existing CPC/IPC classification datasets provide only limited texts of the included patents and are, therefore, insufficient for our experiments. In response to this issue, we release a CPC classification dataset that includes the full texts of patents. Further, the unavailability of open-source datasets is a major bottleneck for the automation of PLS. To address this challenge, we curate, enrich, and release three open-source datasets from two diverse domains.
Despite CPC/IPC classification being a hierarchical multi-label classification task, most prior neural models have not considered the hierarchical taxonomy when designing model architectures and have often predicted labels only for a single level. We make a major contribution with our memory-efficient model architecture, which shares a single transformer-based language model across multiple classification heads, one for each label in the taxonomy, and leverages hierarchical links in the model architecture. We demonstrate that the proposed technique consistently outperforms baselines, particularly for infrequent labels.
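One way to picture the shared-encoder, one-head-per-label design is the minimal PyTorch sketch below. Feeding a parent label's logit into its children's heads is used here as a simple stand-in for the "hierarchical links"; that mechanism, the base model, and all sizes are assumptions, not details taken from the thesis.

import torch
import torch.nn as nn
from transformers import AutoModel

class SharedEncoderPerLabelHeads(nn.Module):
    def __init__(self, labels, parent_of, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)    # single encoder shared by all heads
        h = self.encoder.config.hidden_size
        self.labels = list(labels)                               # assumed ordered parents-before-children
        self.parent_of = dict(parent_of)                         # label -> parent label, or None for roots
        # one tiny head per taxonomy label; child heads also see their parent's logit
        self.heads = nn.ModuleList([
            nn.Linear(h + (1 if self.parent_of.get(lab) else 0), 1) for lab in self.labels
        ])

    def forward(self, input_ids, attention_mask):
        doc = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        logits = {}
        for lab, head in zip(self.labels, self.heads):
            parent = self.parent_of.get(lab)
            feats = doc if parent is None else torch.cat([doc, logits[parent]], dim=-1)
            logits[lab] = head(feats)                            # shape (batch, 1)
        return logits                                            # train with per-label BCEWithLogitsLoss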
Our analysis shows that the sentences and abstracts of patents are often duplicated, illustrating the relevance of the full texts of patents for classification. However, transformer-based language models that take 512 or 4,096 tokens as input are insufficient for patents, which contain 12.5k tokens on average. Motivated by these factors, we make a major contribution with our document representation technique, which combines truncated section text embeddings using vector summation and performs better than baselines. In addition, we propose a sentence ranker and demonstrate that extractive summarization techniques are effective in selecting informative sentences for neural representation in the context of patent classification.
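A minimal sketch of the section-wise encoding with vector summation might look as follows; the choice of encoder, the 512-token truncation, and [CLS] pooling are assumed details rather than the thesis's exact configuration.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def document_embedding(sections, max_length=512):
    """sections: list of section texts, e.g. abstract, claims, description."""
    vectors = []
    for text in sections:
        enc = tokenizer(text, truncation=True, max_length=max_length, return_tensors="pt")
        with torch.no_grad():
            cls_vec = encoder(**enc).last_hidden_state[:, 0]   # [CLS] of the truncated section
        vectors.append(cls_vec)
    return torch.stack(vectors).sum(dim=0)                     # vector summation over sections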
Unlike in CPC/IPC classification, in the case of PLS the CPC/IPC labels are known during inference. As a major contribution, we enrich the document representation by combining CPC/IPC labels with patent text to predict PLS-oriented categories, which often represent concepts different from the CPC/IPC labels. To demonstrate the broader applicability of the proposed technique, we apply it to a similar task: classifying research publications into target categories using text and author-provided keywords as input.
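As an illustration of how known CPC/IPC codes could be folded into the document representation at PLS inference time, here is a hypothetical sketch; the embedding sizes, the averaging of code embeddings, and the label counts are assumptions.

import torch
import torch.nn as nn

class TextPlusLabelClassifier(nn.Module):
    def __init__(self, text_dim=768, num_codes=700, code_dim=64, num_pls_labels=5):
        super().__init__()
        self.code_embedding = nn.Embedding(num_codes, code_dim)
        self.classifier = nn.Linear(text_dim + code_dim, num_pls_labels)

    def forward(self, text_vec, code_ids):
        # text_vec: (batch, text_dim) document embedding from any text encoder
        # code_ids: (batch, n_codes) integer ids of the CPC/IPC codes assigned to each patent
        code_vec = self.code_embedding(code_ids).mean(dim=1)   # average the code embeddings
        return self.classifier(torch.cat([text_vec, code_vec], dim=-1))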
Team RobertNLP at the BioCreative VII LitCovid track: neural document classification using SciBERT
This paper describes our submission to the BioCreative VII LitCovid track on multi-label topic classification for COVID-19 literature annotation. Our system generates embeddings for title, abstract, and keywords using the transformer-based pre-trained language model SciBERT. The classification layer consists of several multi-layer perceptrons, each predicting the applicability of a single label. Our approach, originally developed for hierarchical patent classification, shows strong performance on the LitCovid shared task, outperforming roughly 75% of the participating systems. Keywords: document representation; multi-task learning; multi-label classification.
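The described pipeline (SciBERT field embeddings feeding one small MLP per topic) could be approximated as below. This is a hedged reconstruction, not the team's actual system; the checkpoint, hidden sizes, and pooling are assumptions.

import torch
import torch.nn as nn
from transformers import AutoModel

NUM_TOPICS = 8  # LitCovid assigns up to eight topics per article

class LitCovidClassifier(nn.Module):
    def __init__(self, model_name="allenai/scibert_scivocab_uncased", num_fields=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)    # SciBERT
        h = self.encoder.config.hidden_size * num_fields        # title + abstract + keywords
        # one small multi-layer perceptron per topic label
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(h, 256), nn.ReLU(), nn.Linear(256, 1))
            for _ in range(NUM_TOPICS)
        ])

    def forward(self, field_encodings):
        # field_encodings: tokenizer outputs for title, abstract, and keywords
        vecs = [self.encoder(**enc).last_hidden_state[:, 0] for enc in field_encodings]
        doc = torch.cat(vecs, dim=-1)
        return torch.cat([head(doc) for head in self.heads], dim=-1)  # (batch, NUM_TOPICS) logits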