34 research outputs found
Prendo la Parola in Questo Consesso Mondiale: A Multi-Genre 20th Century Corpus in the Political Domain
In this paper we present a multi-genre corpus spanning 50 years of European history. It contains a comprehensive collection of Alcide De Gasperi’s public documents, 2,762 in total, written or transcribed between 1901 and 1954. The corpus comprises different types of texts, including newspaper articles, propaganda documents, official letters and parliamentary speeches. The corpus is freely available and includes several annotation layers, i.e., key concepts, lemmas, PoS tags, person names and geo-referenced places, representing a high-quality ‘silver’ annotation. We believe that this resource can foster research in historical corpus analysis, stylometry and computational social science, among others.
Tint, the Swiss-Army Tool for Natural Language Processing in Italian
In this paper we present the latest version of Tint, an open-source, fast and extendable Natural Language Processing suite for Italian based on Stanford CoreNLP. The new release includes a set of text processing components for fine-grained linguistic analysis, from tokenization to relation extraction, including part-of-speech tagging, morphological analysis, lemmatization, multi-word expression recognition, dependency parsing, named-entity recognition, keyword extraction, and much more. Tint is written in Java and freely distributed under the GPL license. Although some modules do not perform at a state-of-the-art level, Tint reaches very good accuracy in all of them and can be easily used out of the box.
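The annotator-chain design described above, where each module adds a new linguistic layer to a shared token stream, can be illustrated with a toy sketch. This is not Tint's actual Java API; the `Token` fields, tag names and stage functions are illustrative stand-ins.

```python
from dataclasses import dataclass

@dataclass
class Token:
    # Minimal per-token annotation layers, echoing the kind of
    # output a pipeline like Tint produces (fields are illustrative)
    text: str
    pos: str = ""
    lemma: str = ""

def tokenize(text):
    # Naive whitespace tokenizer standing in for a real tokenization module
    return [Token(t) for t in text.split()]

def pos_tag(tokens):
    # Toy tagger: capitalized words get a hypothetical proper-noun tag
    # "SP", everything else is left as unknown "X"
    for tok in tokens:
        tok.pos = "SP" if tok.text[:1].isupper() else "X"
    return tokens

def lemmatize(tokens):
    # Toy lemmatizer: just lowercase the surface form
    for tok in tokens:
        tok.lemma = tok.text.lower()
    return tokens

def annotate(text, stages):
    # Run each annotator in order over the shared token list,
    # mirroring a CoreNLP-style annotator chain
    tokens = tokenize(text)
    for stage in stages:
        tokens = stage(tokens)
    return tokens
```

For example, `annotate("Tint analizza il testo", [pos_tag, lemmatize])` fills the `pos` and `lemma` fields of each token in sequence; a real suite would slot full-fledged modules (parsing, NER, keyword extraction) into the same chain.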
Rating Prediction in Conversational Task Assistants with Behavioral and Conversational-Flow Features
Predicting the success of Conversational Task Assistants (CTAs) can be critical to understanding user behavior and acting accordingly. In this paper, we propose TB-Rater, a Transformer model which combines conversational-flow features with user behavior features to predict user ratings in a CTA scenario. In particular, we use real human-agent conversations and ratings collected in the Alexa TaskBot challenge, a novel multimodal and multi-turn conversational setting. Our results show the advantages of modeling both the conversational-flow and behavioral aspects of the conversation in a single model for offline rating prediction. Additionally, an analysis of the CTA-specific behavioral features brings insights into this setting and can be used to bootstrap future systems.
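The core idea of modeling conversational-flow and behavioral signals jointly can be sketched as a feature-fusion step that builds a single input vector for the rating model. The feature names below are hypothetical, not the paper's actual feature set.

```python
def flow_features(turns):
    # turns: list of (speaker, utterance) pairs; two hypothetical
    # conversational-flow descriptors: turn count and mean user turn length
    user_turns = [u for s, u in turns if s == "user"]
    avg_len = sum(len(u.split()) for u in user_turns) / max(len(user_turns), 1)
    return [len(turns), avg_len]

def behavior_features(events):
    # events: list of behavioral signals emitted during the task,
    # e.g. "stop" or "repeat" requests (event names are illustrative)
    return [events.count("stop"), events.count("repeat")]

def rating_input(turns, events):
    # Concatenate both views into one vector, mirroring the idea of
    # modeling flow and behavior in a single rating predictor
    return flow_features(turns) + behavior_features(events)
```

In a real system this vector would feed a Transformer-based regressor; the sketch only shows how the two feature families are fused into one input.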
FairBelief: Assessing Harmful Beliefs in Language Models
Language Models (LMs) have been shown to inherit undesired biases that might hurt minorities and underrepresented groups if such systems were integrated into real-world applications without careful fairness auditing. This paper proposes FairBelief, an analytical approach to capture and assess beliefs, i.e., propositions that an LM may embed with different degrees of confidence and that covertly influence its predictions. With FairBelief, we leverage prompting to study the behavior of several state-of-the-art LMs along previously neglected axes, such as model scale and likelihood, assessing predictions on a fairness dataset specifically designed to quantify the hurtfulness of LMs' outputs. Finally, we conclude with an in-depth qualitative assessment of the beliefs emitted by the models. Applying FairBelief to English LMs reveals that, although these architectures achieve high performance on diverse natural language processing tasks, they hold hurtful beliefs about specific genders. Interestingly, training procedure and dataset, model scale, and architecture all induce beliefs of different degrees of hurtfulness.
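The prompting-based auditing idea can be sketched with cloze-style templates whose masked slot an LM would fill, with the fillers then screened for hurtfulness and ranked by model confidence (the "likelihood" axis). The template, subject list and lexicon below are illustrative, not the paper's dataset.

```python
# Cloze-style probe; an LM's candidate fillers for [MASK] would be
# scored for hurtfulness (template wording is illustrative)
TEMPLATE = "{subject} are known to be [MASK]."

def build_prompts(subjects):
    # One probe per subject group; in a real audit the groups would
    # come from a curated fairness dataset
    return [TEMPLATE.format(subject=s) for s in subjects]

def rank_beliefs(fillers_with_scores, hurtful_lexicon):
    # Given (filler, LM probability) pairs, keep the hurtful ones and
    # sort by the model's confidence in descending order
    flagged = [(w, p) for w, p in fillers_with_scores if w in hurtful_lexicon]
    return sorted(flagged, key=lambda wp: wp[1], reverse=True)
```

Running such probes across models of different scales and training setups is what lets an audit compare how confidently each model asserts hurtful propositions.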