3,935 research outputs found

    Leveraging semantic text analysis to improve the performance of transformer-based relation extraction

    Keyword extraction from Knowledge Bases underpins the definition of relevancy in Digital Library search systems. However, it is the pertinent task of Joint Relation Extraction which populates the Knowledge Bases from which results are retrieved. Recent work focuses on fine-tuned, pre-trained Transformers, yet F1 scores on scientific literature reach just 53.2, versus 69 in the general domain. This research shows that existing work fails to provide evidence for the rationale behind its optimisations to fine-tuned classifiers. Instead, emerging research uncritically adopts the common belief that Natural Language Processing techniques fail to capture context and shared knowledge. In fact, global context and shared knowledge account for just 10.4% and 11.2% of total relation misclassifications, respectively. In this work, the novel application of semantic text analysis identifies objective challenges for the Transformer-based classification of Joint Relation Extraction. This is the first known work to quantify that pipelined error propagation accounts for 45.3% of total relation misclassifications, the most pressing challenge in this domain. More specifically, Part-of-Speech tagging highlights the misclassification of complex noun phrases, which accounts for 25.47% of relation misclassifications. Furthermore, this study identifies two limitations in the purported bidirectionality of the Bidirectional Encoder Representations from Transformers (BERT) pre-trained language model. First, there is a notable imbalance in the misclassification of right-to-left relations, which occurs at double the rate of left-to-right relations. Second, a failure to recognise local context through determiners and prepositions contributes to 16.04% of misclassifications. Finally, the annotation scheme of the single dataset used in existing research, Scientific Entities, Relations and Coreferences (SciERC), is marred by ambiguity: two asymmetric relations within this dataset achieve recall rates of only 10% and 29%.
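    The abstract does not give the paper's exact tooling; as a minimal sketch, assuming spaCy and its small English model, this is one way Part-of-Speech tagging can surface the multi-token "complex noun phrases" the study links to misclassifications (the example sentence is illustrative only):

```python
# Minimal sketch: use spaCy POS tags and noun chunks to flag complex
# noun phrases of the kind the study associates with relation
# misclassifications. Sentence and length threshold are assumptions.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this model is installed

text = "The graph-based joint entity and relation extraction model outperforms the baseline."
doc = nlp(text)

for chunk in doc.noun_chunks:
    tags = [tok.tag_ for tok in chunk]
    # A chunk with several internal modifiers is a candidate "complex
    # noun phrase" that span-based classifiers may segment incorrectly.
    if len(chunk) > 2:
        print(f"complex NP: {chunk.text!r}  POS: {tags}")
```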

    Stress Testing BERT Anaphora Resolution Models for Reaction Extraction in Chemical Patents

    The high volume of published chemical patents, and the importance of timely acquisition of the information they contain, motivates automating information extraction from chemical patents. Anaphora resolution is an important component of comprehensive information extraction and is critical for extracting reactions. In chemical patents, there are five anaphoric relations of interest: co-reference, transformed, reaction associated, work up, and contained. Our goal is to investigate how the performance of anaphora resolution models for reaction texts in chemical patents differs between noise-free and noisy environments, and to what extent we can improve the models' robustness to noise.
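    The abstract does not specify the noise model used; purely as a hedged illustration, one simple way to construct a "noisy environment" for stress testing is to inject character-level perturbations (a hypothetical OCR-style corruption, not necessarily the paper's method):

```python
# Illustrative character-level noise injection for stress-testing a text
# model. The noise model (random deletions/substitutions) is an assumption;
# the paper's actual perturbations may differ.
import random

def add_char_noise(text: str, rate: float = 0.05, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = []
    for c in text:
        r = rng.random()
        if r < rate / 2:
            continue  # simulate an OCR-style deletion
        if r < rate:
            out.append(chr(rng.randrange(97, 123)))  # random substitution
        else:
            out.append(c)
    return "".join(out)

clean = "The mixture was stirred and the resulting solid was filtered."
print(add_char_noise(clean, rate=0.1))
```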

    Collective agency:From philosophical and logical perspectives

    People today inhabit a vast and intricate social network. In addition to our own decisions and actions, we confront those of various groups every day. Collective decisions and actions are more complex and bewildering than those made by individuals. As members of a collective, we contribute to its decisions, but our contributions may not always align with the outcome. We may also find ourselves excluded from certain groups and passively subjected to their influence without being aware of its source. We are used to belonging to overlapping groups and may switch identities, supporting or opposing the claims of particular groups. But we rarely pause to ask: what do we talk about when we talk about groups and their decisions? At the heart of this dissertation is the question of collective agency, i.e., in what sense we can treat a group as a rational agent capable of its own action. We take two perspectives: a philosophical one and a logical one. The philosophical perspective mainly discusses the ontological and epistemological issues related to collective agency, traces the relevant philosophical history, and argues that combining a relational view of collective agency with a dispositional view of collective intentionality provides a rational and realistic account. The logical perspective is associated with formal theories of groups; it sets aside the psychological content involved in the philosophical perspective, establishes a logical system that is sufficiently formal and objective, and axiomatizes the nature of a collective.

    Automatic Calibration and Error Correction for Large Language Models via Pareto Optimal Self-Supervision

    Large language models (LLMs) have demonstrated remarkable capabilities out of the box for a wide range of applications, yet accuracy remains a major growth area, especially in mission-critical domains such as biomedicine. An effective method to calibrate the confidence level of LLM responses is essential for automatically detecting errors and facilitating human-in-the-loop verification. An important source of calibration signals is expert-stipulated programmatic supervision, which is often available at low cost but has its own limitations, such as noise and limited coverage. In this paper, we introduce a Pareto optimal self-supervision framework that can leverage available programmatic supervision to systematically calibrate LLM responses by producing a risk score for every response, without any additional manual effort. This is accomplished by learning a harmonizer model that aligns LLM output with the other available supervision sources, assigning higher risk scores to more uncertain LLM responses and facilitating error correction. Experiments on standard relation extraction tasks in the biomedical and general domains demonstrate the promise of this approach, with the proposed risk scores highly correlated with the real error rates of LLMs. For the most uncertain test instances, dynamic prompting based on the proposed risk scores yields significant accuracy improvements for off-the-shelf LLMs, boosting GPT-3 results past state-of-the-art (SOTA) weak supervision and GPT-4 results past SOTA supervised results on challenging evaluation datasets.
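    To make the idea concrete, here is a toy proxy for risk scoring (not the paper's Pareto-optimal harmonizer): score an LLM's predicted relation label by its disagreement with cheap programmatic supervision sources. The labeling functions, labels, and example text below are all illustrative assumptions:

```python
# Toy sketch of risk scoring via disagreement between an LLM's predicted
# relation label and programmatic supervision (labeling functions).
from typing import Callable, List, Optional

ABSTAIN = None

def risk_score(text: str, llm_label: str,
               labeling_functions: List[Callable[[str], Optional[str]]]) -> float:
    votes = [lf(text) for lf in labeling_functions]
    votes = [v for v in votes if v is not ABSTAIN]
    if not votes:
        return 0.5  # no supervision signal: moderate risk
    disagree = sum(v != llm_label for v in votes)
    return disagree / len(votes)

# Hypothetical labeling functions for illustration only:
lf_founded = lambda t: "org:founded_by" if "founded" in t else ABSTAIN
lf_employee = lambda t: "per:employee_of" if "works at" in t else ABSTAIN

text = "Acme Corp was founded by Jane Doe."
print(risk_score(text, "org:founded_by", [lf_founded, lf_employee]))   # 0.0, low risk
print(risk_score(text, "per:employee_of", [lf_founded, lf_employee]))  # 1.0, high risk
```

    In the paper's framing, the highest-risk responses are the ones routed to dynamic prompting or human verification.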

    HistRED: A Historical Document-Level Relation Extraction Dataset

    Despite the extensive applications of relation extraction (RE) tasks in various domains, little has been explored in the historical context, which contains promising data spanning hundreds to thousands of years. To promote historical RE research, we present HistRED, constructed from Yeonhaengnok. Yeonhaengnok is a collection of records originally written in Hanja, the classical Chinese writing system, which was later translated into Korean. HistRED provides bilingual annotations such that RE can be performed on both Korean and Hanja texts. In addition, HistRED supports various self-contained subtexts of different lengths, from the sentence level to the document level, providing diverse context settings for researchers to evaluate the robustness of their RE models. To demonstrate the usefulness of our dataset, we propose a bilingual RE model that leverages both Korean and Hanja contexts to predict relations between entities. Our model outperforms monolingual baselines on HistRED, showing that employing multiple language contexts supplements the RE predictions. The dataset is publicly available at https://huggingface.co/datasets/Soyoung/HistRED under a CC BY-NC-ND 4.0 license.
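    Since the dataset is hosted on the Hugging Face Hub, it should be loadable with the standard `datasets` API; a minimal sketch follows. The split and field names are assumptions, so inspect the dataset card at the URL above for the actual schema:

```python
# Minimal sketch: load HistRED with the Hugging Face `datasets` library.
from datasets import load_dataset

ds = load_dataset("Soyoung/HistRED")
print(ds)                        # available splits and features
first_split = next(iter(ds))
print(ds[first_split][0])        # one bilingual (Korean/Hanja) annotated record
```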

    LabelPrompt: Effective Prompt-based Learning for Relation Classification

    Recently, prompt-based learning has become a very popular solution for many Natural Language Processing (NLP) tasks: inserting a template into the model input converts the task into a cloze-style one, smoothing out differences between the Pre-trained Language Model (PLM) and the current task. In relation classification, however, it is difficult to map the masked output to the relation labels because of their rich semantic content, e.g., "org:founded_by". A pre-trained model therefore still needs ample labelled data to fit the relations. To mitigate this challenge, we present a novel prompt-based learning method, LabelPrompt, for the relation classification task. It is an intuitive approach driven by a simple motivation: "GIVE MODEL CHOICES!". First, we define additional tokens to represent the relation labels, treat these tokens as a verbalizer with semantic initialisation, and construct them with a prompt template method. Then, revisiting the inconsistency between the predicted relation and the given entities, we design an entity-aware module based on contrastive learning to mitigate the problem. Finally, we apply an attention query strategy to the self-attention layers to distinguish the two types of tokens, prompt tokens and sequence tokens. The proposed strategy effectively improves the adaptation capability of prompt-based learning for relation classification when only a small amount of labelled data is available. Extensive experimental results on several benchmark datasets demonstrate the superiority of the proposed LabelPrompt method, particularly in the few-shot scenario.
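    The core idea of dedicated label tokens can be sketched with standard `transformers` machinery: add one token per relation, then restrict the cloze prediction at the [MASK] position to those tokens. This is a hedged illustration, not the paper's implementation; the template and relation set are assumptions, and the new token embeddings here are randomly initialised rather than semantically initialised and trained as in LabelPrompt:

```python
# Sketch of the LabelPrompt idea: score dedicated relation-label tokens
# at the [MASK] position of a cloze template.
import torch
from transformers import BertForMaskedLM, BertTokenizer

relations = ["org:founded_by", "per:employee_of", "no_relation"]  # illustrative
label_tokens = [f"[REL{i}]" for i in range(len(relations))]

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
tokenizer.add_tokens(label_tokens, special_tokens=True)
model.resize_token_embeddings(len(tokenizer))  # new rows would be tuned in training

sentence = "Acme Corp was founded by Jane Doe."
prompt = f"{sentence} The relation between Acme Corp and Jane Doe is {tokenizer.mask_token}."
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
label_ids = tokenizer.convert_tokens_to_ids(label_tokens)
scores = logits[0, mask_pos, label_ids]  # the model only "chooses" among labels
print(relations[scores.argmax().item()])
```

    Restricting the softmax to the label tokens is what gives the model "choices" instead of asking it to emit a free-form verbalisation of a label like org:founded_by.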

    Driving the Technology Value Stream by Analyzing App Reviews

    An emerging feature of mobile application software is the need to quickly produce new versions to solve problems that emerged in previous versions, which helps the software adapt to changing user needs and preferences. In a continuous software development process, the user reviews collected by the apps themselves can play a crucial role in detecting which components need to be reworked. This paper proposes a novel framework that enables software companies to drive their technology value stream based on the feedback (or reviews) provided by the end users of an application. The proposed end-to-end framework exploits different Natural Language Processing (NLP) tasks to best understand the needs and goals of the end users. We also provide a thorough, in-depth analysis of the framework, the performance of each of its modules, and its overall contribution to driving the technology value stream. An analysis of reviews of sixteen popular Android Play Store applications from various genres over a long period provides encouraging evidence of the effectiveness of the proposed approach.
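    The abstract does not enumerate the framework's NLP modules; as one hedged example of the kind of task such a pipeline might include, here is zero-shot classification of a review into actionable categories. The category set and model choice are assumptions, not the paper's modules:

```python
# Illustrative sketch: categorise an app review with a zero-shot classifier.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

review = "The app crashes every time I try to upload a photo."
labels = ["bug report", "feature request", "praise", "usability complaint"]
print(classifier(review, candidate_labels=labels))
```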