A Survey on Semantic Processing Techniques
Semantic processing is a fundamental research domain in computational
linguistics. In the era of powerful pre-trained language models and large
language models, the advancement of research in this domain appears to be
decelerating. However, the study of semantics is multi-dimensional in
linguistics. The depth and breadth of research in computational semantic
processing can be greatly expanded with new technologies. In this survey, we
analyze five semantic processing tasks, i.e., word sense disambiguation,
anaphora resolution, named entity recognition, concept extraction, and
subjectivity detection. We study relevant theoretical research in these fields,
advanced methods, and downstream applications. We connect the surveyed tasks
with downstream applications because this may inspire future scholars to fuse
these low-level semantic processing tasks with high-level natural language
processing tasks. The review of theoretical research may also inspire new tasks
and technologies in the semantic processing domain. Finally, we compare the
different semantic processing techniques and summarize their technical trends,
application trends, and future directions.
Comment: Published at Information Fusion, Volume 101, 2024, 101988, ISSN
1566-2535. The equal-contribution mark is missing in the published version due
to the publication policies. Please contact Prof. Erik Cambria for details.
Cross-lingual Coreference Resolution of Pronouns
This work is, to our knowledge, a first attempt at a machine learning approach to cross-lingual
coreference resolution, i.e. coreference resolution (CR) performed on a bitext. Focusing on CR of English pronouns, we leverage language differences and enrich the feature set of a standard monolingual CR system for English with features extracted from the Czech side of the bitext. Our work also includes a supervised pronoun aligner that outperforms a GIZA++ baseline in terms of both intrinsic evaluation and evaluation on CR. The final cross-lingual CR system successfully outperforms both a monolingual CR system and a cross-lingual projection system
Cross-Lingual Zero Pronoun Resolution
In languages like Arabic, Chinese, Italian, Japanese, Korean, Portuguese, Spanish, and many others, predicate arguments in certain syntactic positions are omitted rather than realized as overt pronouns, and are thus called zero- or null-pronouns. Identifying and resolving such omitted arguments is crucial to machine translation, information extraction and other NLP tasks, but depends heavily on semantic coherence and lexical relationships. We propose a BERT-based cross-lingual model for zero pronoun resolution, and evaluate it on the Arabic and Chinese portions of OntoNotes 5.0. As far as we know, ours is the first neural model of zero-pronoun resolution for Arabic; our model also outperforms the state of the art for Chinese. In the paper we also evaluate BERT feature-extraction and fine-tuning models on the task, and compare them with our model. We further report an investigation of BERT layers, indicating which layer encodes the most suitable representation for the task. Our code is available at https://github.com/amaloraini/cross-lingual-Z
SMDDH: Singleton Mention detection using Deep Learning in Hindi Text
Mention detection is an important component of a coreference resolution
system, in which mentions such as names, nominals, and pronominals are
identified. These mentions can be either coreferential mentions or singleton
(non-coreferential) mentions. Coreferential mentions are those mentions in a
text that refer to the same real-world entity, whereas singleton mentions
occur only once in the text and do not participate in coreference, as they
are never mentioned again. Filtering out these singleton mentions can
substantially improve the performance of the coreference resolution process.
This paper proposes a singleton mention detection module for Hindi text based
on a fully connected network and a convolutional neural network. The model
utilizes a few hand-crafted features, context information, and word
embeddings. A coreference-annotated Hindi dataset comprising 3.6K sentences
and 78K tokens is used for the task. The experimental results obtained are
excellent in terms of Precision, Recall, and F-measure
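The singleton detector described above combines a convolutional network over context embeddings with a fully connected network over hand-crafted features. A minimal NumPy forward-pass sketch of that combination is given below; all dimensions, feature counts, and parameter values are illustrative assumptions standing in for the paper's trained model, not its actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

EMB = 50   # word-embedding size (assumed)
WIN = 5    # context window around the mention (assumed)
FEATS = 8  # number of hand-crafted features, e.g. POS, mention type (assumed)

def conv1d_maxpool(x, kernels):
    # x: (WIN, EMB) context embeddings; kernels: (K, width, EMB).
    # Slide each kernel over the window and max-pool its activations.
    K, width, _ = kernels.shape
    out = np.empty(K)
    for k in range(K):
        acts = [np.sum(x[i:i + width] * kernels[k])
                for i in range(x.shape[0] - width + 1)]
        out[k] = max(acts)
    return out

def singleton_score(context_emb, hand_feats, params):
    # CNN branch over context, concatenated with hand-crafted features,
    # fed through a fully connected layer to a sigmoid output.
    conv_out = np.maximum(conv1d_maxpool(context_emb, params["kernels"]), 0)
    h = np.concatenate([conv_out, hand_feats])
    h = np.maximum(params["W1"] @ h + params["b1"], 0)
    logit = params["w2"] @ h + params["b2"]
    return 1.0 / (1.0 + np.exp(-logit))  # P(mention is a singleton)

params = {
    "kernels": rng.normal(size=(16, 3, EMB)) * 0.1,
    "W1": rng.normal(size=(32, 16 + FEATS)) * 0.1,
    "b1": np.zeros(32),
    "w2": rng.normal(size=32) * 0.1,
    "b2": 0.0,
}

p = singleton_score(rng.normal(size=(WIN, EMB)), rng.normal(size=FEATS), params)
print(float(p))
```

Mentions scored above a threshold would be filtered out before coreference clustering, which is the performance gain the paper targets.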
Pronoun Prediction with Linguistic Features and Example Weighing
We present a system submitted to the WMT16 shared task in cross-lingual pronoun prediction, in particular, to the English-to-German and German-to-English sub-tasks. The system is based on a linear classifier making use of features both from the target language model and from linguistically analyzed source and target texts. Furthermore, we apply example weighing in classifier learning, which proved to be beneficial for recall in the less frequent pronoun classes. Compared to other shared task participants, our best English-to-German system ranks just below the top-performing submissions
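Example weighing of the kind described above can be realized by giving training examples from rare pronoun classes larger weights, so the classifier is not dominated by the frequent classes. The sketch below shows one standard inverse-frequency scheme; the class labels and the exact weighting formula are illustrative assumptions, not necessarily the submitted system's scheme:

```python
from collections import Counter

def example_weights(labels):
    # Inverse-frequency weights: each class's examples together
    # receive the same total weight, boosting rare classes.
    counts = Counter(labels)
    n = len(labels)
    return [n / (len(counts) * counts[y]) for y in labels]

# Toy German pronoun labels (assumed for illustration).
labels = ["es", "es", "es", "sie", "er", "es", "es", "sie"]
w = example_weights(labels)
# The rare class "er" gets the largest per-example weight,
# improving its recall when these weights scale the training loss.
```

A classifier trained with these weights effectively sees a rebalanced class distribution, which is why recall on infrequent pronoun classes improves.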
A multi-level methodology for the automated translation of a coreference resolution dataset: an application to the Italian language
In the last decade, the demand for readily accessible corpora has touched all areas of natural language processing, including
coreference resolution. However, it remains one of the least considered sub-fields in recent developments, and almost all
existing resources are available only for the English language. To address this gap, this work proposes a methodology for
creating a coreference resolution corpus in Italian by exploiting annotated resources in other languages.
Starting from OntoNotes, the methodology translates and refines English utterances to obtain utterances that respect Italian
grammar, dealing with language-specific phenomena and preserving coreferences and mentions. A quantitative and qualitative
evaluation is performed to assess the well-formedness of the generated utterances, considering readability, grammaticality,
and acceptability indexes. The results confirm the effectiveness of the methodology in generating a good
dataset for coreference resolution starting from an existing one. The quality of the dataset is also assessed by training a
coreference resolution model based on the BERT language model, which achieves promising results. Although the methodology
has been tailored to English and Italian, it has a general basis easily extendable to other languages: adapting a
small number of language-dependent rules suffices to generalize most of the linguistic phenomena of the language under
examination
Towards Multilingual Coreference Resolution
The current work investigates the problems that occur when coreference resolution is treated as a multilingual task. We assess the issues that arise when a framework using the mention-pair coreference resolution model and memory-based learning is used for the resolution process. Along the way, we revisit three essential subtasks of coreference resolution: mention detection, mention head detection and feature selection. For each of these aspects we propose various multilingual solutions, including heuristic, rule-based and machine learning methods. We carry out a detailed analysis covering eight different languages (Arabic, Catalan, Chinese, Dutch, English, German, Italian and Spanish), for which datasets were provided by the only two multilingual shared tasks on coreference resolution held so far: SemEval-2 and CoNLL-2012. Our investigation shows that, although complex, the coreference resolution task can be targeted in a multilingual and even language-independent way. We propose machine learning methods for each of the subtasks affected by the transition, and evaluate and compare them against the performance of rule-based and heuristic approaches. Our results confirm that machine learning provides the needed flexibility for the multilingual task and that the minimal requirement for a language-independent system is a part-of-speech annotation layer for each of the approached languages. We also show that the performance of the system can be improved by introducing other layers of linguistic annotation, such as syntactic parses (in the form of either constituency or dependency parses), named entity information, predicate-argument structure, etc. Additionally, we discuss the problems occurring in the proposed approaches and suggest possibilities for their improvement
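The mention-pair model referenced above classifies pairs of mentions as coreferent or not and then links mentions into chains. The sketch below shows the pairing-and-linking skeleton with a hand-written decision rule standing in for the trained memory-based classifier; the features, the closest-first linking strategy, and the threshold are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Mention:
    text: str
    head: str   # syntactic head word
    pos: str    # POS tag of the head (the minimal annotation layer)
    index: int  # token position in the document

def pair_features(antecedent, anaphor):
    # Language-independent pairwise features of the kind the work discusses.
    return {
        "head_match": antecedent.head.lower() == anaphor.head.lower(),
        "pos_match": antecedent.pos == anaphor.pos,
        "distance": anaphor.index - antecedent.index,
    }

def is_coreferent(antecedent, anaphor):
    # Toy decision rule for illustration; the actual system would
    # query a memory-based learner trained on such feature vectors.
    f = pair_features(antecedent, anaphor)
    return f["head_match"] and f["distance"] <= 10

def resolve(mentions):
    # Closest-first linking: attach each mention to the nearest
    # preceding mention classified as coreferent with it.
    links = {}
    for j, m in enumerate(mentions):
        for i in range(j - 1, -1, -1):
            if is_coreferent(mentions[i], m):
                links[j] = i
                break
    return links

docs = [
    Mention("Barack Obama", "Obama", "NNP", 0),
    Mention("the senator", "senator", "NN", 4),
    Mention("Obama", "Obama", "NNP", 9),
]
print(resolve(docs))  # {2: 0}
```

Replacing `is_coreferent` with a learned classifier, and enriching `pair_features` with parses or named-entity information, corresponds to the annotation-layer improvements the abstract reports.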