Towards Harmful Erotic Content Detection through Coreference-Driven Contextual Analysis
Adult content detection still poses a great challenge for automation. Existing
classifiers primarily focus on distinguishing between erotic and non-erotic
texts, but they often lack the nuance needed to assess potential harm.
Unfortunately, content of this nature falls beyond the reach of generative
models precisely because of that potential harm: ethical restrictions prohibit
large language models (LLMs) from analyzing and classifying harmful erotica,
let alone generating it to create synthetic datasets for other neural models.
When data is this scarce and challenging, a thorough analysis of the structure
of such texts, rather than a large model, may offer a viable solution,
especially since harmful erotic narratives, despite appearing similar to
harmless ones, usually reveal their harmful nature only through contextual
information hidden in the non-sexual parts of the narrative.
This paper introduces a hybrid neural and rule-based context-aware system
that leverages coreference resolution to identify harmful contextual cues in
erotic content. Collaborating with professional moderators, we compiled a
dataset and developed a classifier capable of distinguishing harmful from
non-harmful erotic content. Our hybrid model, tested on Polish text,
demonstrates a promising accuracy of 84% and a recall of 80%. Models based on
RoBERTa and Longformer that did not explicitly use coreference chains achieved
significantly weaker results, underscoring the importance of coreference
resolution in detecting content as nuanced as harmful erotica. This approach
also offers the potential for enhanced visual explainability, supporting
moderators in evaluating predictions and taking the necessary actions against
harmful content.
Comment: Accepted for the 6th Workshop on Computational Models of Reference,
Anaphora and Coreference at EMNLP 2023.
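The core idea of the paper above — propagating contextual cues found in the non-sexual parts of a narrative along coreference chains to the characters in the erotic passages — can be sketched as follows. This is a minimal illustration, not the authors' system: the cue lexicon, the chain representation, and all names are assumptions made for the example.

```python
# Hypothetical sketch of coreference-driven contextual analysis: a harmful
# cue appearing anywhere in an entity's coreference chain (e.g. in the
# non-sexual parts of the narrative) flags that entity everywhere it is
# mentioned. The toy cue lexicon below is purely illustrative.

HARMFUL_CUE_WORDS = {"minor", "schoolchild", "unwilling"}  # toy lexicon

def flag_entities(chains):
    """chains: dict entity_id -> list of mention strings collected from the
    whole narrative. Returns the ids of entities whose chain contains a
    harmful contextual cue."""
    flagged = set()
    for entity_id, mentions in chains.items():
        if any(cue in m.lower() for m in mentions for cue in HARMFUL_CUE_WORDS):
            flagged.add(entity_id)
    return flagged

chains = {
    "A": ["the student", "a minor", "she"],  # cue occurs outside the erotic scene
    "B": ["her teacher", "he"],
}
print(flag_entities(chains))  # -> {'A'}
```

The point of the sketch is the propagation step: a classifier looking only at the sexual passage would see neutral pronouns, while the chain carries the disqualifying context to it.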
Findings of the Shared Task on Multilingual Coreference Resolution
This paper presents an overview of the shared task on multilingual
coreference resolution associated with the CRAC 2022 workshop. Shared task
participants were supposed to develop trainable systems capable of identifying
mentions and clustering them according to identity coreference. The public
edition of CorefUD 1.0, which contains 13 datasets for 10 languages, was used
as the source of training and evaluation data. The CoNLL score used in previous
coreference-oriented shared tasks was used as the main evaluation metric. There
were 8 coreference prediction systems submitted by 5 participating teams; in
addition, there was a competitive Transformer-based baseline system provided by
the organizers at the beginning of the shared task. The winning system
outperformed the baseline by 12 percentage points (in terms of CoNLL scores
averaged across the datasets for the individual languages).
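The CoNLL score used as the main metric is conventionally the arithmetic mean of the MUC, B-cubed, and CEAF-e F1 scores, and the ranking above then averages it across datasets. A minimal sketch of that arithmetic (all numbers illustrative, not taken from the shared task results):

```python
# CoNLL score: mean of the three standard coreference F1 metrics.
def conll_score(muc_f1, b3_f1, ceafe_f1):
    return (muc_f1 + b3_f1 + ceafe_f1) / 3.0

# Ranking score: macro-average of per-dataset CoNLL scores.
def ranking_score(per_dataset_scores):
    return sum(per_dataset_scores) / len(per_dataset_scores)

scores = [conll_score(70.0, 60.0, 65.0),   # dataset 1 -> 65.0
          conll_score(80.0, 70.0, 75.0)]   # dataset 2 -> 75.0
print(ranking_score(scores))  # -> 70.0
```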
ANCOR_Centre, a Large Free Spoken French Coreference Corpus: description of the Resource and Reliability Measures
This article presents ANCOR_Centre, a French coreference corpus available under a Creative Commons licence. At around 500,000 words, the corpus is large enough to serve the needs of data-driven approaches in NLP and represents one of the largest coreference resources currently available. The corpus focuses exclusively on spoken language and aims to represent a variety of spoken genres. ANCOR_Centre includes anaphora as well as coreference relations involving nominal and pronominal mentions. The paper describes the annotation scheme in detail along with the reliability measures computed on the resource.
Reflexive constructions in the world's languages
Synopsis:
This landmark publication brings together 28 papers on reflexive constructions in languages from all continents, representing very diverse language types. While reflexive constructions have been discussed in the past from a variety of angles, this is the first edited volume of its kind. All the chapters are based on original data, and they are broadly comparable through a common terminological framework. The volume opens with two introductory chapters by the editors that set the stage and lay out the main comparative concepts, and it concludes with a chapter presenting generalizations on the basis of the studies of individual languages.
Coreference Resolution for French Oral Data: Machine Learning Experiments with ANCOR
We present CROC (Coreference Resolution for Oral Corpus), the first machine learning system for coreference resolution in French. One specific aspect of the system is that it was trained exclusively on transcribed speech, namely ANCOR (ANaphora and Coreference in ORal corpus), the first large-scale French corpus annotated with anaphoric relations. In its current state, the CROC system requires pre-annotated mentions. We detail the features used by the learning algorithms and present a set of experiments with these features. The scores we obtain are close to those of state-of-the-art systems for written English.
Neural Coreference Resolution for Turkish
Coreference resolution deals with resolving mentions of the same underlying entity in a given text. This challenging task is an indispensable aspect of text understanding and has important applications in language processing systems such as question answering and machine translation. Although a significant number of studies are devoted to coreference resolution, research on Turkish is scarce and mostly limited to pronoun resolution. To the best of our knowledge, this article presents the first neural Turkish coreference resolution study, in which two learning-based models are explored. Both models follow the mention-ranking approach when forming clusters of mentions. The first model uses a set of hand-crafted features, whereas the second relies on embeddings learned from large-scale pre-trained language models to capture similarities between a mention and its candidate antecedents. Several language models trained specifically for Turkish are used to obtain mention representations, and their effectiveness is compared in experiments using automatic metrics. We argue that the results of this study shed light on the possible contributions of neural architectures to Turkish coreference resolution.
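The mention-ranking approach described in the abstract above can be sketched as follows: each mention scores every earlier mention as a candidate antecedent (plus a dummy "no antecedent"), links to the highest-scoring one, and clusters fall out of the resulting links. The scorer below is a toy stand-in, not the paper's hand-crafted-feature or embedding-based scorer.

```python
# Minimal mention-ranking clustering sketch. score(antecedent, mention)
# returns a float; a score <= 0 means the dummy antecedent wins and the
# mention starts a new entity.

def mention_ranking_clusters(mentions, score):
    """mentions: list of strings in document order. Returns a list of
    clusters (each a list of mentions)."""
    link = {}
    for j, m in enumerate(mentions):
        best, best_score = None, 0.0  # dummy antecedent scores 0
        for i in range(j):
            s = score(mentions[i], m)
            if s > best_score:
                best, best_score = i, s
        link[j] = best
    # Follow antecedent links back to each chain's first mention.
    cluster_of_root, clusters = {}, {}
    for j in range(len(mentions)):
        root = j
        while link[root] is not None:
            root = link[root]
        cid = cluster_of_root.setdefault(root, len(cluster_of_root))
        clusters.setdefault(cid, []).append(mentions[j])
    return list(clusters.values())

# Toy scorer: exact-string or pronoun match (purely illustrative).
def toy_score(antecedent, mention):
    return 1.0 if antecedent == mention or mention in {"he", "she", "it"} else 0.0

print(mention_ranking_clusters(["Ali", "Ayşe", "he"], toy_score))
# -> [['Ali', 'he'], ['Ayşe']]
```

In the neural variants, `toy_score` would be replaced by a model over feature vectors or contextual embeddings of the mention pair; the clustering step stays the same.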