Unifying context with labeled property graph: A pipeline-based system for comprehensive text representation in NLP
Extracting valuable insights from vast amounts of unstructured digital text presents significant challenges across diverse domains. This research addresses these challenges by proposing a novel pipeline-based system that generates domain-agnostic and task-agnostic text representations. The proposed approach leverages labeled property graphs (LPG) to encode contextual information, facilitating the integration of diverse linguistic elements into a unified representation. The system enables efficient graph-based querying and manipulation by addressing the crucial aspects of comprehensive context modeling and fine-grained semantics. Its effectiveness is demonstrated through the implementation of NLP components that operate on LPG-based representations. Additionally, the approach introduces specialized patterns and algorithms to enhance specific NLP tasks, including nominal mention detection, named entity disambiguation, event enrichment, event participant detection, and temporal link detection. The evaluation, using the MEANTIME corpus of manually annotated documents, provides encouraging results and valuable insights into the system's strengths. The pipeline-based framework serves as a solid foundation for future research aiming to refine and optimize LPG-based graph structures to generate comprehensive and semantically rich text representations, addressing the challenges associated with efficient information extraction and analysis in NLP.
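To illustrate the kind of structure a labeled property graph can encode for text, here is a minimal sketch in Python. The node and edge labels (`Token`, `Entity`, `NEXT`, `MENTIONS`) and the `match` query are illustrative assumptions, not the actual schema of the system described above.

```python
# Minimal labeled property graph (LPG) sketch for text representation.
# Labels and properties below are hypothetical, chosen for illustration.
from dataclasses import dataclass, field

@dataclass
class Node:
    id: int
    label: str                      # e.g. "Token", "Entity"
    props: dict = field(default_factory=dict)

@dataclass
class Edge:
    src: int
    dst: int
    label: str                      # e.g. "NEXT", "MENTIONS"
    props: dict = field(default_factory=dict)

class LPG:
    def __init__(self):
        self.nodes, self.edges = {}, []

    def add_node(self, node):
        self.nodes[node.id] = node

    def add_edge(self, edge):
        self.edges.append(edge)

    def match(self, node_label=None, edge_label=None):
        """Toy graph query: yield (src, edge, dst) triples filtered by label."""
        for e in self.edges:
            if edge_label and e.label != edge_label:
                continue
            src = self.nodes[e.src]
            if node_label and src.label != node_label:
                continue
            yield src, e, self.nodes[e.dst]

# Encode "Alice visited Paris": tokens linked in order, plus entity mentions.
g = LPG()
for i, (text, pos) in enumerate([("Alice", "PROPN"), ("visited", "VERB"),
                                 ("Paris", "PROPN")]):
    g.add_node(Node(i, "Token", {"text": text, "pos": pos}))
g.add_node(Node(10, "Entity", {"name": "Alice", "type": "PER"}))
g.add_node(Node(11, "Entity", {"name": "Paris", "type": "LOC"}))
g.add_edge(Edge(0, 1, "NEXT"))
g.add_edge(Edge(1, 2, "NEXT"))
g.add_edge(Edge(0, 10, "MENTIONS"))
g.add_edge(Edge(2, 11, "MENTIONS"))

entities = [dst.props["name"] for _, _, dst in g.match(edge_label="MENTIONS")]
```

Keeping labels and properties separate on both nodes and edges is what distinguishes the LPG model from a plain directed graph, and is what makes graph-based querying over linguistic layers possible.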
Proceedings of the Seventh International Conference Formal Approaches to South Slavic and Balkan languages
Proceedings of the Seventh International Conference Formal Approaches to South Slavic and Balkan Languages includes the 17 papers presented at the conference organised in Dubrovnik, Croatia, 4-6 October 2010.
Exploring Syntactic Representations in Pre-trained Transformers to Improve Neural Machine Translation by a Fusion of Neural Network Architectures
Neural networks in Machine Translation (MT) engines may not consider deep linguistic knowledge, often resulting in low-quality translations. To improve translation quality, this study examines the feasibility of fusing two strategies: explicit incorporation of syntactic knowledge and the pre-trained language model BERT.
The study first investigates what BERT knows about the syntax of source language sentences before and after MT fine-tuning through syntactic probing experiments, and uses a Quality Estimation (QE) model together with the chi-square test to clarify the correlation between the syntactic knowledge of source language sentences and the quality of translations in the target language. The experimental results show that BERT can explicitly predict different types of dependency relations in source language sentences and exhibits different learning trends, which the probes can reveal. Moreover, the experiments confirm a correlation between dependency relations in source language sentences and translation quality in MT scenarios. Dependency relations of source language sentences that frequently appear in low-quality translations are identified. The probes can be linked to those dependency relations, whose prediction scores tend to be higher in the middle layers of BERT than in the top layer.
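A syntactic probing experiment of this kind typically trains a small classifier on frozen hidden states to predict dependency relations. The sketch below uses toy stand-in vectors instead of real BERT activations, and a simple perceptron as the probe; both are assumptions for illustration, since the abstract does not specify the probe architecture.

```python
# Probing sketch: a deliberately simple classifier over frozen features,
# so its accuracy reflects what the features already encode.
import random

random.seed(0)

RELATIONS = ["nsubj", "obj", "det"]
DIM = 8

def toy_hidden(rel):
    """Fake 'BERT layer' features: each relation clusters near its own axis."""
    centre = [1.0 if i == RELATIONS.index(rel) else 0.0 for i in range(DIM)]
    return [c + random.uniform(-0.1, 0.1) for c in centre]

train = [(toy_hidden(r), r) for r in RELATIONS for _ in range(30)]

# One weight vector per relation; the probe has no hidden layers.
W = {r: [0.0] * DIM for r in RELATIONS}

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def predict(x):
    return max(RELATIONS, key=lambda r: score(W[r], x))

for _ in range(10):                      # a few perceptron passes
    for x, gold in train:
        pred = predict(x)
        if pred != gold:
            W[gold] = [wi + xi for wi, xi in zip(W[gold], x)]
            W[pred] = [wi - xi for wi, xi in zip(W[pred], x)]

test_set = [(toy_hidden(r), r) for r in RELATIONS for _ in range(10)]
acc = sum(predict(x) == gold for x, gold in test_set) / len(test_set)
```

Running the same probe against features taken from different layers is what allows the layer-wise comparisons described above (middle versus top layers of BERT).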
The study then presents dependency relation prediction experiments to examine how well a Graph Attention Network (GAT) learns syntactic dependencies, and investigates how it learns such knowledge under different combinations of attention heads and model layers. Additionally, the study examines the potential of incorporating GAT-based syntactic predictions into MT scenarios by comparing GAT with fine-tuned BERT on dependency relation prediction. Based on the paired t-test and prediction scores, GAT outperforms MT-B, a version of BERT fine-tuned for MT, exhibiting higher prediction scores for the majority of dependency relations. For some dependency relations, it even outperforms UD-B, a version of BERT fine-tuned for syntactic dependencies. However, GAT struggles to predict accurately when dependency relations vary in frequency and subtype, which can lead to lower prediction scores.
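The core computation of a GAT layer over a dependency graph can be sketched as follows. The feature vectors, attention parameters, and three-word example are toy assumptions for illustration, not the configuration used in the study.

```python
# Single-head graph attention sketch: each word attends only to its
# syntactic neighbours in the dependency graph (plus itself).
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Toy dependency graph for "She ate apples": 0 -nsubj-> 1 <-obj- 2
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
neighbours = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}  # incl. self-loops

A = (0.5, -0.2, 0.3, 0.1)  # toy attention parameters over [h_i || h_j]

def attention_scores(i):
    """alpha_ij = softmax_j LeakyReLU(A . [h_i || h_j]) over neighbours j."""
    raw = []
    for j in neighbours[i]:
        z = sum(a * v for a, v in zip(A, feats[i] + feats[j]))
        raw.append(z if z > 0 else 0.2 * z)  # LeakyReLU
    return softmax(raw)

def gat_layer(i):
    """New representation of word i: attention-weighted sum of neighbours."""
    alphas = attention_scores(i)
    return [sum(a * feats[j][d] for a, j in zip(alphas, neighbours[i]))
            for d in range(2)]

h_verb = gat_layer(1)  # the verb aggregates its subject and object
```

Because attention is restricted to dependency-graph neighbours, the learned coefficients can be read as how much each syntactic dependent contributes to a word's representation, which is what makes per-relation prediction scores interpretable.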
Finally, the study proposes a novel MT architecture, Syntactic knowledge via Graph attention with BERT (SGB), and examines how translation quality changes from various perspectives. The experimental results indicate that the SGB engines can improve low-quality translations across different source sentence lengths and, based on the QE scores, better recognize the syntactic structure defined by the dependency relations of source language sentences. However, improving translation quality relies on BERT correctly modeling the source language sentences; otherwise, the syntactic knowledge on the graphs has limited impact. The prediction scores of GAT for dependency relations can also be linked to improved translation quality: GAT allows some layers of BERT to reconsider the syntactic structures of source language sentences. Using XLM-R instead of BERT still results in improved translation quality, indicating the effectiveness of syntactic knowledge on graphs. These experiments not only show the effectiveness of the proposed strategies but also provide explanations, offering inspiration for future work that fuses graph neural networks modeling linguistic knowledge with pre-trained language models in MT scenarios.
Making effective use of healthcare data using data-to-text technology
Healthcare organizations continuously strive to improve health
outcomes, reduce costs and enhance patient experience of care. Data is
essential to measure and help achieve these improvements in healthcare
delivery. Consequently, a data influx from various clinical, financial and
operational sources is now inundating healthcare organizations and their
patients. The effective use of this data, however, is a major challenge.
Clearly, text is an important medium to make data accessible. Financial reports
are produced to assess healthcare organizations on some key performance
indicators to steer their healthcare delivery. Similarly, at a clinical level,
data on patient status is conveyed by means of textual descriptions to
facilitate patient review, shift handover and care transitions. Likewise,
patients are informed about data on their health status and treatments via
text, in the form of reports or via ehealth platforms by their doctors.
Unfortunately, such text is the outcome of a highly labour-intensive process if
it is done by healthcare professionals. It is also prone to incompleteness
and subjectivity, and hard to scale up to different domains, wider audiences and
varying communication purposes. Data-to-text is a recent breakthrough
technology in artificial intelligence which automatically generates natural
language in the form of text or speech from data. This chapter provides a
survey of data-to-text technology, with a focus on how it can be deployed in a
healthcare setting. It will (1) give an up-to-date synthesis of data-to-text
approaches, (2) give a categorized overview of use cases in healthcare, (3)
seek to make a strong case for evaluating and implementing data-to-text in a
healthcare setting, and (4) highlight recent research challenges.
Comment: 27 pages, 2 figures, book chapter
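At the simplest end of the data-to-text spectrum sit rule- and template-based generators. The following minimal example verbalises a hypothetical vitals record; the field names and clinical thresholds are illustrative assumptions, not taken from any system surveyed in the chapter.

```python
# Toy rule/template-based data-to-text sketch for a clinical use case.
def verbalise_vitals(record):
    parts = [f"Patient {record['id']}:"]
    hr = record["heart_rate"]
    if hr > 100:
        parts.append(f"heart rate is elevated at {hr} bpm.")
    elif hr < 60:
        parts.append(f"heart rate is low at {hr} bpm.")
    else:
        parts.append(f"heart rate is normal at {hr} bpm.")
    temp = record["temp_c"]
    if temp >= 38.0:
        parts.append(f"Temperature of {temp:.1f} C indicates fever.")
    else:
        parts.append(f"Temperature is {temp:.1f} C.")
    return " ".join(parts)

report = verbalise_vitals({"id": "A-17", "heart_rate": 112, "temp_c": 38.4})
```

Even this trivial generator illustrates the appeal for healthcare: the mapping from data to wording is explicit, auditable, and consistent, which is harder to guarantee with purely neural generation.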
On the Robustness of Aspect-based Sentiment Analysis: Rethinking Model, Data, and Training
Aspect-based sentiment analysis (ABSA) aims at automatically inferring the
specific sentiment polarities toward certain aspects of products or services
in social media texts or reviews, a task with fundamental real-world
applications. Since the early 2010s, ABSA has achieved
extraordinarily high accuracy with various deep neural models. However,
existing ABSA models with strong in-house performances may fail to generalize
to some challenging cases where the contexts are variable, i.e., low robustness
to real-world environments. In this study, we propose to enhance the ABSA
robustness by systematically rethinking the bottlenecks from all possible
angles, including model, data, and training. First, we strengthen the current
most robust syntax-aware models by further incorporating rich external
syntactic dependencies and the labels with aspect simultaneously with a
universal-syntax graph convolutional network. From the corpus perspective, we
propose to automatically induce high-quality synthetic training data with
various types, allowing models to learn sufficient inductive bias for better
robustness. Last, based on the rich pseudo data, we perform adversarial
training to enhance resistance to context perturbation, and meanwhile employ
contrastive learning to reinforce the representations of instances with
contrastive sentiments. Extensive robustness evaluations are conducted. The
results demonstrate that our enhanced syntax-aware model achieves better
robustness performances than all the state-of-the-art baselines. By
additionally incorporating our synthetic corpus, the robust testing results
improve by around 10% accuracy, and are further boosted by applying
the advanced training strategies. In-depth analyses are presented for revealing
the factors influencing ABSA robustness.
Comment: Accepted in ACM Transactions on Information Systems
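The contrastive-learning component described above can be sketched as a supervised contrastive loss over sentence representations: instances with the same sentiment label are pulled together, others pushed apart. The toy embeddings below stand in for encoder outputs, and the exact loss formulation used in the paper may differ.

```python
# Supervised contrastive loss sketch over toy sentence embeddings.
import math

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def supcon_loss(embs, labels, tau=0.1):
    """Average over anchors of -log(sum_pos exp(sim/tau) / sum_all exp(sim/tau))."""
    n, total = len(embs), 0.0
    for i in range(n):
        sims = [math.exp(cos(embs[i], embs[j]) / tau)
                for j in range(n) if j != i]
        pos = [math.exp(cos(embs[i], embs[j]) / tau)
               for j in range(n) if j != i and labels[j] == labels[i]]
        if pos:
            total += -math.log(sum(pos) / sum(sims))
    return total / n

# Well-separated sentiment clusters yield a lower loss than mixed ones.
labels = ["pos", "pos", "neg", "neg"]
separated = [[1, 0], [0.9, 0.1], [0, 1], [0.1, 0.9]]
mixed = [[1, 0], [0, 1], [0.9, 0.1], [0.1, 0.9]]
loss_sep = supcon_loss(separated, labels)
loss_mix = supcon_loss(mixed, labels)
```

Minimising such a loss during training encourages representations of same-polarity instances to cluster, which is one plausible mechanism for the robustness gains reported above.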
Modeling information structure in a cross-linguistic perspective
This study makes substantial contributions to both the theoretical and computational treatment of information structure, with a specific focus on creating natural language processing applications such as multilingual machine translation systems. It first provides cross-linguistic findings regarding information structure meanings and markings. Building upon these findings, the model represents information structure within the HPSG/MRS framework using Individual Constraints. The primary goal is to create a multilingual grammar model of information structure for the LinGO Grammar Matrix system. The study explores the construction of a grammar library for creating customized grammars incorporating information structure and illustrates how the information structure-based model improves the performance of transfer-based machine translation.
Automatic lexicon acquisition from encyclopedia.
Lo, Ka Kan. Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. Includes bibliographical references (leaves 97-104). Abstracts in English and Chinese.
Contents:
1 Introduction
1.1 Motivation
1.2 New paradigm in language learning
1.3 Semantic Relations
1.4 Contribution of this thesis
2 Related Work
2.1 Theoretical Linguistics
2.1.1 Overview
2.1.2 Analysis
2.2 Computational Linguistics - General Learning
2.3 Computational Linguistics - HPSG Lexical Acquisition
2.4 Learning approach
3 Background
3.1 Modeling primitives
3.1.1 Feature Structure
3.1.2 Word
3.1.3 Phrase
3.1.4 Clause
3.2 Wikipedia Resource
3.2.1 Encyclopedia Text
3.3 Semantic Relations
4 Learning Framework - Syntactic and Semantic
4.1 Type feature scoring function
4.2 Confidence score of lexical entry
4.3 Specialization and Generalization
4.3.1 Further Processing
4.3.2 Algorithm Outline
4.3.3 Algorithm Analysis
4.4 Semantic Information
4.4.1 Extraction
4.4.2 Induction
4.4.3 Generalization
4.5 Extension with new text documents
4.6 Integrating the syntactic and semantic acquisition framework
5 Evaluation
5.1 Evaluation Metric - English Resource Grammar
5.1.1 English Resource Grammar
5.2 Experiments
5.2.1 Tasks
5.2.2 Evaluation Measures
5.2.3 Methodologies
5.2.4 Corpus Preparation
5.2.5 Results
5.3 Result Analysis
6 Conclusions
Bibliography