
    Cultural Re-contextualization of Fairness Research in Language Technologies in India

    Recent research has revealed undesirable biases in NLP data and models. However, these efforts largely focus on social disparities in the West and are not directly portable to other geo-cultural contexts. In this position paper, we outline a holistic research agenda to re-contextualize NLP fairness research for the Indian context: accounting for Indian societal context, bridging technological gaps in capability and resources, and adapting to Indian cultural values. We also summarize findings from an empirical study of various social biases along axes of disparity relevant to India, demonstrating their prevalence in corpora and models. Comment: Accepted to the NeurIPS Workshop on "Cultures in AI/AI in Culture". This is a non-archival short version; to cite, please refer to our complete paper: arXiv:2209.1222

    Responsible AI: Concepts, critical perspectives and an Information Systems research agenda

    Being responsible for Artificial Intelligence (AI), harnessing its power while minimising risks for individuals and society, is one of the greatest challenges of our time. A vibrant discourse on Responsible AI is developing across academia, policy making and corporate communications. In this editorial, we demonstrate how the different literature strands intertwine but also diverge, and we propose a comprehensive definition of Responsible AI as the practice of developing, using and governing AI in a human-centred way to ensure that AI is worthy of being trusted and adheres to fundamental human values. This definition clarifies that Responsible AI is not a specific category of AI artifacts that have special properties or can undertake responsibilities; humans are ultimately responsible for AI, for its consequences and for controlling AI development and use. We explain how the four papers included in this special issue manifest different Responsible AI practices and synthesise their findings into an integrative framework that includes business models, services/products, design processes and data. We suggest that IS research can contribute socially relevant knowledge about Responsible AI, providing insights on how to balance instrumental and humanistic AI outcomes, and we propose themes for future IS research on Responsible AI.

    What is the Minimum to Trust AI?—A Requirement Analysis for (Generative) AI-based Texts

    Generative Artificial Intelligence (genAI) opens up new potential for end-users, including youth and the inexperienced. Nevertheless, as an innovative technology, genAI risks generating misinformation that is not recognizable as such, and its impressive outputs can lend it unwarranted trustworthiness. An end-user assessment system is necessary to expose unfounded reliance on erroneous responses. This paper identifies requirements for an assessment system that prevents end-users from placing excessive trust in generated texts. To this end, we conducted requirements engineering based on a literature review and two international surveys. The results confirmed requirements that enable human protection, human support, and content veracity in dealing with genAI. Overestimated trust is rooted in miscalibration; clarity about genAI and its provider is essential to addressing this phenomenon, and there is a demand for human verification. Consequently, our findings provide evidence for the significance of future IS research on human-centered genAI trust solutions.

    Exploring the Impact of Lay User Feedback for Improving AI Fairness

    Fairness in AI is a growing concern for high-stakes decision making. Engaging stakeholders, especially lay users, in fair AI development is promising yet overlooked. Recent efforts explore enabling lay users to provide AI fairness-related feedback, but there is still a lack of understanding of how to integrate users' feedback into an AI model and what the impacts of doing so are. To bridge this gap, we collected feedback from 58 lay users on the fairness of an XGBoost model trained on the Home Credit dataset, and conducted offline experiments to investigate the effects of retraining models on accuracy and on individual and group fairness. Our work contributes baseline results of integrating user fairness feedback in XGBoost, and a dataset and code framework to bootstrap research on engaging stakeholders in AI fairness. Our discussion highlights the challenges of employing user feedback in AI fairness and points to interactive machine learning as a future application area.
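    The retrain-and-evaluate loop this abstract describes hinges on measuring group fairness alongside accuracy. As a minimal sketch (not the paper's own code), one common group-fairness measure, the demographic parity gap, compares positive-prediction rates across a protected attribute; the data below is hypothetical:

    ```python
    import numpy as np

    def group_fairness_report(y_true, y_pred, group):
        """Accuracy plus the demographic parity gap between two groups.

        The gap is the absolute difference in positive-prediction
        rates for group == 0 versus group == 1.
        """
        y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
        accuracy = float((y_true == y_pred).mean())
        rate_a = y_pred[group == 0].mean()  # positive rate, group 0
        rate_b = y_pred[group == 1].mean()  # positive rate, group 1
        return {"accuracy": accuracy, "dp_gap": float(abs(rate_a - rate_b))}

    # Toy labels and predictions over two protected groups
    y_true = [1, 0, 1, 0, 1, 0]
    y_pred = [1, 0, 1, 1, 0, 0]
    group  = [0, 0, 0, 1, 1, 1]
    print(group_fairness_report(y_true, y_pred, group))
    # → {'accuracy': 0.666..., 'dp_gap': 0.333...}
    ```

    In an offline experiment like the one described, such a report would be computed on a held-out split before and after retraining with user feedback, so that any fairness gain can be weighed against the accuracy cost.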

    An overview of research on human-centered design in the development of artificial general intelligence

    Abstract: This article offers a comprehensive analysis of Artificial General Intelligence (AGI) development through a humanistic lens. Utilizing a wide array of academic and industry resources, it dissects the technological and ethical complexities inherent in AGI's evolution. Specifically, the paper underlines the societal and individual implications of AGI and argues for its alignment with human values and interests.
    Purpose: The study aims to explore the role of human-centered design in AGI's development and governance.
    Design/Methodology/Approach: Employing content analysis and literature review, the research evaluates major themes and concepts in human-centered design within AGI development. It also scrutinizes relevant academic studies, theories, and best practices.
    Findings: Human-centered design is imperative for ethical and sustainable AGI, emphasizing human dignity, privacy, and autonomy. Incorporating values like empathy, ethics, and social responsibility can significantly influence AGI's ethical deployment. Talent development is also critical, warranting interdisciplinary initiatives.
    Research Limitations/Implications: There is a need for additional empirical studies focusing on ethics, social responsibility, and talent cultivation within AGI development.
    Practical Implications: Implementing human-centered values in AGI development enables ethical and sustainable utilization, thus promoting human dignity, privacy, and autonomy. Moreover, a concerted effort across industry, academia, and research sectors can secure a robust talent pool, essential for AGI's stable advancement.
    Originality/Value: This paper contributes original research to the field by highlighting the necessity of a human-centered approach in AGI development, and discusses its practical ramifications. Comment: 20 pages