    Evaluating the Underlying Gender Bias in Contextualized Word Embeddings

    Gender bias has a strong impact on natural language processing applications. Word embeddings have been shown both to preserve and to amplify the gender biases present in current data sources. Recently, contextualized word embeddings have improved on previous word embedding techniques by computing word vector representations that depend on the sentence in which the word appears. In this paper, we study how this conceptual change in word embedding computation relates to gender bias. Our analysis applies several measures previously used in the literature on standard word embeddings. Our findings suggest that contextualized word embeddings are less biased than standard ones, even when the latter are debiased.
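    As an illustration of the kind of measure the abstract refers to, the sketch below scores words by their projection onto a "he" minus "she" gender direction in a static embedding space, in the spirit of commonly used direct-bias measures. It is a minimal, hypothetical example: the `embeddings` lookup and the word lists are stand-ins, not the paper's evaluation code.

```python
# Hypothetical sketch: project profession words onto a gender direction.
# `embeddings` stands in for any pre-trained word -> vector lookup.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def gender_projection(embeddings, words, pair=("he", "she")):
    """Score each word by cosine similarity to the he-she direction."""
    direction = embeddings[pair[0]] - embeddings[pair[1]]
    return {w: cosine(embeddings[w], direction) for w in words if w in embeddings}

# Toy usage: random vectors stand in for real pre-trained embeddings.
rng = np.random.default_rng(0)
vocab = ["he", "she", "nurse", "engineer", "teacher"]
embeddings = {w: rng.normal(size=300) for w in vocab}
print(gender_projection(embeddings, ["nurse", "engineer", "teacher"]))
```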

    Language (Technology) is Power: A Critical Survey of "Bias" in NLP

    We survey 146 papers analyzing "bias" in NLP systems, finding that their motivations are often vague, inconsistent, and lacking in normative reasoning, despite the fact that analyzing "bias" is an inherently normative process. We further find that these papers' proposed quantitative techniques for measuring or mitigating "bias" are poorly matched to their motivations and do not engage with the relevant literature outside of NLP. Based on these findings, we describe the beginnings of a path forward by proposing three recommendations that should guide work analyzing "bias" in NLP systems. These recommendations rest on a greater recognition of the relationships between language and social hierarchies, encouraging researchers and practitioners to articulate their conceptualizations of "bias"---i.e., what kinds of system behaviors are harmful, in what ways, to whom, and why, as well as the normative reasoning underlying these statements---and to center work around the lived experiences of members of communities affected by NLP systems, while interrogating and reimagining the power relations between technologists and such communities.

    Gender bias in natural language processing

    Gender bias is a harmful form of social bias that affects a large group of people. Its effect propagates into our data, causing the accuracy of model predictions to differ depending on gender. In the deep learning era, models are shaped by their training data and inherit the negative biases it contains, and Natural Language Processing models can further amplify this bias. This thesis studies gender bias in NLP applications from several points of view. Understanding and managing bias amplification requires exploring both evaluation and mitigation approaches, and the research community has invested considerable effort in both directions. The thesis contributes to both: it proposes evaluation schemes, in the form of datasets and measurement mechanisms, and it suggests mitigation techniques.
    For evaluation, we proposed techniques for measuring bias in contextualized embeddings and in multilingual translation models, and we presented benchmarks for evaluating bias in speech translation and multilingual machine translation. For mitigation, we proposed approaches for machine translation models based on adding contextual text or contextual embeddings, or on relaxing constraints in the architecture. Our evaluation studies conclude that gender bias is strongly encoded in contextual embeddings of professions and stereotypical nouns, that algorithms amplify the bias, and that the system architecture affects this behavior. Among the benchmarks we contributed, we introduced one that evaluates gender bias in speech translation systems; this research suggests that the current quality of speech translation systems is too low to evaluate gender bias accurately. We also proposed a toolkit for building multilingual datasets, balanced by gender across occupations, for training and evaluating NMT models, and we found that high-resource languages tend to produce more accurate translations for male references. Our mitigation studies in NMT suggest that the nature of the datasets and languages must be considered when choosing an approach: in some cases adding contextual information suffices, while in others the model needs to be rethought and some bias-inducing constraints relaxed, reducing bias amplification without hurting overall performance.
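    The toolkit for gender-balanced multilingual datasets mentioned above can be illustrated with a small downsampling step: for each occupation, keep equal numbers of male- and female-referent examples. The sketch below is a hypothetical illustration under assumed record fields ('occupation', 'gender'), not the thesis toolkit's actual implementation.

```python
# Hypothetical sketch: balance a dataset by gender within each occupation.
import random
from collections import defaultdict

def balance_by_occupation(examples, seed=0):
    """Keep an equal number of male- and female-referent examples per occupation.

    `examples` is a list of dicts with at least 'occupation' and 'gender' keys
    (gender in {'male', 'female'}); the field names are illustrative assumptions.
    """
    rng = random.Random(seed)
    buckets = defaultdict(lambda: {"male": [], "female": []})
    for ex in examples:
        buckets[ex["occupation"]][ex["gender"]].append(ex)
    balanced = []
    for by_gender in buckets.values():
        # Downsample the majority class to the size of the minority class.
        n = min(len(by_gender["male"]), len(by_gender["female"]))
        for gender in ("male", "female"):
            balanced.extend(rng.sample(by_gender[gender], n))
    return balanced
```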

    No Word Embedding Model Is Perfect: Evaluating the Representation Accuracy for Social Bias in the Media

    News articles both shape and reflect public opinion across the political spectrum. Analyzing them for social bias can thus provide valuable insights, such as prevailing stereotypes in society and the media, which are often adopted by NLP models trained on such data. Recent work has relied on word embedding bias measures, such as WEAT. However, several representation issues of embeddings can harm the measures' accuracy, including low-resource settings and token frequency differences. In this work, we study which embedding algorithm best serves to accurately measure the types of social bias known to exist in US online news articles. To cover the whole spectrum of political bias in the US, we collect 500k articles and review the psychology literature with respect to expected social bias. We then quantify social bias using WEAT along with embedding algorithms that account for the aforementioned issues. We compare how models trained with these algorithms on news articles represent the expected social bias. Our results suggest that the standard way to quantify bias does not align well with knowledge from psychology. While the proposed algorithms reduce the gap, they still do not fully match the literature.
    Accepted to Findings of the Association for Computational Linguistics: EMNLP 2022.
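    Since the abstract centers on WEAT, a minimal sketch of the WEAT effect size (in the sense of Caliskan et al., 2017) may help make the measure concrete: target word sets X and Y are compared by their differential association with attribute sets A and B. The `emb` lookup and the toy word lists below are placeholders, not the authors' evaluation code.

```python
# Hypothetical sketch of the WEAT effect size over a word -> vector lookup `emb`.
import numpy as np

def _cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def _assoc(w, A, B, emb):
    # s(w, A, B): mean similarity to attribute set A minus mean similarity to B.
    return (np.mean([_cos(emb[w], emb[a]) for a in A])
            - np.mean([_cos(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    """Cohen's-d-style effect size comparing targets X and Y w.r.t. attributes A and B."""
    x_assoc = [_assoc(x, A, B, emb) for x in X]
    y_assoc = [_assoc(y, A, B, emb) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)

# Toy usage: random vectors stand in for embeddings trained on news articles.
rng = np.random.default_rng(1)
words = ["career", "office", "home", "family", "man", "male", "woman", "female"]
emb = {w: rng.normal(size=50) for w in words}
print(weat_effect_size(["career", "office"], ["home", "family"],
                       ["man", "male"], ["woman", "female"], emb))
```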