
    Designing Women: Essentializing Femininity in AI Linguistics

    Since the eighties, feminists have considered technology a force capable of subverting sexism because of technology’s ability to produce unbiased logic. Most famously, Donna Haraway’s “A Cyborg Manifesto” posits that the cyborg has the inherent capability to transcend gender because of its removal from social construct and lack of loyalty to the natural world. But while humanoids and artificial intelligence have been imagined as inherently subversive to gender, current artificial intelligence perpetuates gender divides in labor and language as programmers imbue these systems with traits considered “feminine.” A majority of 21st-century AI and humanoids are programmed to fit female stereotypes as they fulfill emotional labor and perform pink-collar tasks, whether as therapists, query-fillers, or companions. This paper examines four chat-based AI (ELIZA, XiaoIce, Sophia, and Erica) and how their feminine linguistic patterns maintain the illusion of emotional understanding in the tasks they perform. Overall, chat-based AI fails to subvert gender roles, as feminine AI are relegated to the realm of emotional intelligence and labor.

    Is it ethical to avoid error analysis?

    Machine learning algorithms tend to create more accurate models with the availability of large datasets. In some cases, highly accurate models can hide the presence of bias in the data. Several published studies tackle the development of discrimination-aware machine learning algorithms. We center on the further evaluation of machine learning models through error analysis, to understand under what conditions a model is not working as expected. We focus on the ethical implications of avoiding error analysis, from a falsification-of-results and discrimination perspective. Finally, we show different ways to approach error analysis in non-interpretable machine learning algorithms such as deep learning. Comment: Presented as a poster at the 2017 Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2017).
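    The abstract does not spell out a procedure, but one common form of error analysis is slicing a model's errors by subgroup to find the conditions under which it underperforms. Below is a minimal sketch of that idea; the DataFrame columns, group labels, and toy values are illustrative assumptions, not material from the paper.

```python
# Minimal sketch of subgroup error analysis (illustrative, not the paper's code).
# Assumes predictions, true labels, and one candidate grouping column; all
# column names and values here are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 0, 1, 0],
    "y_pred": [1, 0, 0, 1, 1, 0, 0, 0],
    "group":  ["A", "A", "A", "B", "B", "B", "B", "A"],
})

# A single aggregate error rate can hide uneven performance across groups.
overall_error = (df["y_true"] != df["y_pred"]).mean()
print(f"overall error rate: {overall_error:.2f}")

# Slicing errors by group shows where the model fails most often, which is
# the starting point for the discrimination concerns the paper raises.
per_group = (
    df.assign(error=df["y_true"] != df["y_pred"])
      .groupby("group")["error"]
      .agg(["mean", "count"])
      .rename(columns={"mean": "error_rate", "count": "n"})
)
print(per_group)
```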

    Bias and Fairness in Chatbots: An Overview

    Chatbots have been studied for more than half a century. With the rapid development of natural language processing (NLP) technologies in recent years, chatbots using large language models (LLMs) have received much attention. Compared with traditional ones, modern chatbots are more powerful and have been used in real-world applications. There are, however, bias and fairness concerns in modern chatbot design. Due to the huge amounts of training data, extremely large model sizes, and lack of interpretability, bias mitigation and fairness preservation in modern chatbots are challenging. Thus, this paper gives a comprehensive overview of bias and fairness in chatbot systems. The history of chatbots and their categories are first reviewed. Then, bias sources and potential harms in applications are analyzed. Considerations in designing fair and unbiased chatbot systems are examined. Finally, future research directions are discussed.

    A Picture is Worth a Thousand Words – Exploring Bias in Inclusive Chatbot Design

    This study examines the impact of different avatar pictures (gender and disability representation) and gendering on students' perceptions of chatbots in an interaction on learning strategies with 180 students from a German university. In the first experiment, we manipulated the chatbot’s humanoid profile picture based on gender and the representation of a visible disability (wheelchair). In the second experiment, we varied its language style. Statistical analysis revealed that displaying a physical disability significantly enhanced trust, credibility, and empathy but reduced perceived competence and dominance. Gender-sensitive language improved perceptions of competence, trust, credibility, and empathy, whereas we did not find significant interaction effects between the two factors. Our results imply the necessity of a more inclusive design of information systems and highlight designers' responsibility in raising awareness and mitigating unconscious bias as digital learning technologies continue to advance.

    Ethical Challenges in Data-Driven Dialogue Systems

    The use of dialogue systems as a medium for human-machine interaction is an increasingly prevalent paradigm. A growing number of dialogue systems use conversation strategies that are learned from large datasets. There are well-documented instances where interactions with these systems have resulted in biased or even offensive conversations due to the data-driven training process. Here, we highlight potential ethical issues that arise in dialogue systems research, including: implicit biases in data-driven systems, the rise of adversarial examples, potential sources of privacy violations, safety concerns, special considerations for reinforcement learning systems, and reproducibility concerns. We also suggest areas stemming from these issues that deserve further investigation. Through this initial survey, we hope to spur research leading to robust, safe, and ethically sound dialogue systems. Comment: In submission to the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society.

    Exploring the Impact of Inclusive PCA Design on Perceived Competence, Trust and Diversity

    Pedagogical Conversational Agents (PCAs) are increasingly adopted in academia as learning facilitators. Given user heterogeneity and the need for more inclusion in education, inclusive PCA design is becoming relevant but remains understudied. Our contribution therefore investigates the effects of inclusive PCA design on competence, trust, and diversity awareness in a between-subjects experiment with two contrastingly designed prototypes (an inclusive and a non-inclusive PCA) tested among 106 German university students. As expected given social desirability, the results show that 81.5% of the participants consider an inclusive design important. At the same time, however, the inclusive chatbot is rated as significantly less competent. We found no significant effect on trust, but a highly significant, strongly positive effect on diversity awareness. We interpret these results with the help of the qualitative information provided by the respondents and discuss the arising implications for inclusive HCI design.

    Is it COVID or a Cold? An Investigation of the Role of Social Presence, Trust, and Persuasiveness for Users' Intention to Comply with COVID-19 Chatbots

    The COVID-19 pandemic challenged the existing healthcare system by requiring potential patients to self-diagnose and self-test for a possible infection. In this process, some individuals need help and guidance. However, the previous modus operandi of visiting a physician was no longer viable because of limited capacity and the danger of spreading the virus. Hence, digital means had to be developed to help and inform individuals at home, such as conversational agents (CAs). The human-like design and perceived social presence of such CAs are central to attaining users’ compliance. Against this background, we surveyed 174 users of a commercial COVID-19 chatbot to investigate the role of perceived social presence. Our results support the view that the perceived social presence of chatbots leads to higher levels of trust, which in turn drives compliance. In contrast, perceived persuasiveness seems to have no significant effect.