
    Disrupting Digital Monolingualism: A report on multilingualism in digital theory and practice

    This report documents the Disrupting Digital Monolingualism (DDM) virtual workshop held in June 2020. The workshop sought to draw together a wide range of stakeholders working to confront the language bias present in most of the digital platforms, tools, algorithms, methods, and datasets that we use in our study or practice, and to counter the powerful impact this bias has on geocultural knowledge dynamics in the wider world. It aimed to describe the state of the art across academic disciplines and professional fields, and to foster collaboration across diverse perspectives around four points of focus: Linguistic and geocultural diversity in digital knowledge infrastructures; Working with multilingual methods and data; Transcultural and translingual approaches to digital study; and Artificial intelligence, machine learning and NLP in language worlds.
    Event website: https://languageacts.org/digital-mediations/event/disrupting-digital-monolingualism/
    This report forms part of a series produced by the Digital Mediations strand of the Language Acts & Worldmaking project, in this case in collaboration with the translingual strand of the Cross-Language Dynamics project (based at the Institute of Modern Languages Research), both funded by the UK Arts and Humanities Research Council's Open World Research Initiative. Digital Mediations explores interactions and tensions between digital culture, multilingualism, and language fields including Modern Languages.

    Language (Technology) is Power: A Critical Survey of "Bias" in NLP

    We survey 146 papers analyzing "bias" in NLP systems, finding that their motivations are often vague, inconsistent, and lacking in normative reasoning, even though analyzing "bias" is an inherently normative process. We further find that these papers' proposed quantitative techniques for measuring or mitigating "bias" are poorly matched to their motivations and do not engage with the relevant literature outside of NLP. Based on these findings, we describe the beginnings of a path forward by proposing three recommendations to guide work analyzing "bias" in NLP systems. These recommendations rest on a greater recognition of the relationships between language and social hierarchies: they encourage researchers and practitioners to articulate their conceptualizations of "bias" (what kinds of system behaviors are harmful, in what ways, to whom, and why, along with the normative reasoning underlying these statements) and to center work around the lived experiences of members of communities affected by NLP systems, while interrogating and reimagining the power relations between technologists and such communities.

    "I'm" Lost in Translation: Pronoun Missteps in Crowdsourced Data Sets

    As virtual assistants continue to be taken up globally, there is an ever-greater need for these speech-based systems to communicate naturally in a variety of languages. Crowdsourcing initiatives have focused on multilingual translation of big, open data sets for use in natural language processing (NLP). Yet language translation is often not one-to-one, and biases can trickle in. In this late-breaking work, we focus on the case of pronouns translated between English and Japanese in the crowdsourced Tatoeba database. We found that masculine pronoun biases were present overall, even though plurality in language was accounted for in other ways. Importantly, we detected biases in the translation process that reflect nuanced reactions to the presence of feminine, neutral, and/or non-binary pronouns. We raise the issue of translation bias for pronouns and offer a practical solution to embed plurality in NLP data sets.
    Comment: 6 pages
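    The kind of skew the paper reports can be illustrated with a simple frequency count over parallel sentence pairs. The sketch below is not the authors' code; the pronoun inventory, gender labels, and toy data are assumptions chosen only to show how masculine-pronoun bias in English-Japanese pairs might be surfaced.

```python
# Hedged sketch: count how often first-person English sentences are rendered
# with gendered Japanese first-person pronouns in a parallel corpus such as a
# Tatoeba export. Illustrative only, not the paper's actual method.
from collections import Counter
import re

# Japanese first-person pronouns with conventional gender associations
# (an assumption for illustration; real usage is more nuanced).
JA_PRONOUNS = {
    "僕": "masculine",    # boku
    "俺": "masculine",    # ore
    "私": "neutral",      # watashi
    "あたし": "feminine", # atashi
}

def pronoun_gender_counts(pairs):
    """pairs: iterable of (english, japanese) sentence tuples."""
    counts = Counter()
    for en, ja in pairs:
        if not re.search(r"\bI\b", en):
            continue  # consider only first-person English sources
        for pronoun, gender in JA_PRONOUNS.items():
            if pronoun in ja:
                counts[gender] += 1
    return counts

# Toy stand-ins for Tatoeba sentence pairs:
pairs = [
    ("I'm hungry.", "僕はお腹が空いた。"),
    ("I'm a teacher.", "私は教師です。"),
    ("I'm tired.", "俺は疲れた。"),
]
print(pronoun_gender_counts(pairs))  # Counter({'masculine': 2, 'neutral': 1})
```

    A real audit would also need to handle zero-pronoun Japanese sentences, where the first person is implied rather than written, which this sketch ignores.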

    A Discussion on Building Practical NLP Leaderboards: The Case of Machine Translation

    Recent advances in AI and ML applications have benefited from rapid progress in NLP research. Leaderboards have emerged as a popular mechanism to track and accelerate progress in NLP through competitive model development. While this has increased interest and participation, the over-reliance on single, accuracy-based metrics has shifted focus away from other metrics that may be equally pertinent in real-world contexts. In this paper, we offer a preliminary discussion of the risks of focusing exclusively on accuracy metrics and draw on recent discussions to highlight prescriptive suggestions for developing more practical and effective leaderboards that better reflect the real-world utility of models.
    Comment: pre-print; comments and suggestions welcome
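    One way to act on this suggestion is to rank systems by a composite of quality, latency, and size rather than accuracy alone. The sketch below is an assumption-laden illustration, not a scheme the paper itself prescribes; the entries, normalization bounds, and weights are hypothetical.

```python
# Hedged sketch of a multi-metric leaderboard score. Entries, normalization
# bounds, and weights are hypothetical choices for illustration only.
from dataclasses import dataclass

@dataclass
class Entry:
    name: str
    bleu: float        # translation quality (higher is better)
    latency_ms: float  # inference latency (lower is better)
    params_m: float    # model size, millions of parameters (lower is better)

def composite_score(e, w_quality=0.6, w_latency=0.2, w_size=0.2):
    # Normalize each metric to [0, 1] against assumed practical bounds,
    # then take a weighted sum.
    quality = min(e.bleu / 50.0, 1.0)             # assume ~50 BLEU ceiling
    speed = max(0.0, 1.0 - e.latency_ms / 1000.0) # assume 1 s is unusable
    compact = max(0.0, 1.0 - e.params_m / 1000.0) # assume 1B params is max
    return w_quality * quality + w_latency * speed + w_size * compact

entries = [
    Entry("big-transformer", bleu=42.0, latency_ms=800.0, params_m=600.0),
    Entry("distilled-model", bleu=38.0, latency_ms=120.0, params_m=60.0),
]
for e in sorted(entries, key=composite_score, reverse=True):
    print(f"{e.name}: {composite_score(e):.3f}")
# distilled-model ranks first (0.820 vs 0.624), even though the bigger
# model would win on BLEU alone.
```

    An accuracy-only leaderboard would invert this ranking, which is exactly the gap between benchmark standing and real-world utility that the paper highlights.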