Cultural Re-contextualization of Fairness Research in Language Technologies in India
Recent research has revealed undesirable biases in NLP data and models.
However, these efforts largely focus on social disparities in the West, and are
not directly portable to other geo-cultural contexts. In this position paper,
we outline a holistic research agenda to re-contextualize NLP fairness research
for the Indian context, accounting for Indian societal context, bridging
technological gaps in capability and resources, and adapting to Indian cultural
values. We also summarize findings from an empirical study on various social
biases along different axes of disparities relevant to India, demonstrating
their prevalence in corpora and models.
Comment: Accepted to NeurIPS Workshop on "Cultures in AI/AI in Culture". This
is a non-archival short version; to cite, please refer to our complete paper:
arXiv:2209.1222
Responsible AI: Concepts, critical perspectives and an Information Systems research agenda
Being responsible for Artificial Intelligence (AI), harnessing its power while minimising risks for individuals and society, is one of the greatest challenges of our time. A vibrant discourse on Responsible AI is developing across academia, policy making and corporate communications. In this editorial, we demonstrate how the different literature strands intertwine but also diverge, and propose a comprehensive definition of Responsible AI as the practice of developing, using and governing AI in a human-centred way to ensure that AI is worthy of being trusted and adheres to fundamental human values. This definition clarifies that Responsible AI is not a specific category of AI artifacts that have special properties or can undertake responsibilities; humans are ultimately responsible for AI, for its consequences and for controlling AI development and use. We explain how the four papers included in this special issue manifest different Responsible AI practices and synthesise their findings into an integrative framework that includes business models, services/products, design processes and data. We suggest that IS research can contribute socially relevant knowledge about Responsible AI, providing insights on how to balance instrumental and humanistic AI outcomes, and propose themes for future IS research on Responsible AI.
What is the Minimum to Trust AI?—A Requirement Analysis for (Generative) AI-based Texts
The generative Artificial Intelligence (genAI) innovation opens new potential for end-users, including youth and the inexperienced. Nevertheless, as an innovative technology, genAI risks generating misinformation that is not recognizable as such. The impressive quality of AI outputs can lead end-users to place unwarranted trust in them. An end-user assessment system is therefore necessary to expose unfounded reliance on erroneous responses. This paper identifies requirements for an assessment system that prevents end-users from overestimating trust in generated texts. To this end, we conducted requirements engineering based on a literature review and two international surveys. The results confirmed the requirements that enable human protection, human support, and content veracity in dealing with genAI. Overestimated trust is rooted in miscalibration; clarity about genAI and its provider is essential to addressing this phenomenon, and there is a demand for human verification. Consequently, our findings provide evidence for the significance of future IS research on human-centered genAI trust solutions.
Exploring the Impact of Lay User Feedback for Improving AI Fairness
Fairness in AI is a growing concern for high-stakes decision making. Engaging
stakeholders, especially lay users, in fair AI development is promising yet
overlooked. Recent efforts explore enabling lay users to provide AI
fairness-related feedback, but there is still a lack of understanding of how to
integrate users' feedback into an AI model and the impacts of doing so. To
bridge this gap, we collected feedback from 58 lay users on the fairness of an
XGBoost model trained on the Home Credit dataset, and conducted offline
experiments to investigate the effects of retraining models on accuracy, and
individual and group fairness. Our work contributes baseline results of
integrating user fairness feedback in XGBoost, and a dataset and code framework
to bootstrap research in engaging stakeholders in AI fairness. Our discussion
highlights the challenges of employing user feedback in AI fairness and points
the way to a future application area of interactive machine learning.
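The offline experiments described above hinge on comparing accuracy and group fairness before and after retraining on user feedback. As a rough illustration of such a check, the following is a minimal sketch; the data, the metric choice (demographic parity difference), and the values are hypothetical, not taken from the paper's Home Credit experiments.

```python
# Minimal sketch: comparing group fairness of binary predictions
# before and after a retraining round. All data here is hypothetical,
# not the Home Credit dataset used in the paper.

def demographic_parity_difference(y_pred, groups):
    """Absolute gap in positive-prediction rates across groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    vals = list(rates.values())
    return max(vals) - min(vals)

def accuracy(y_true, y_pred):
    """Fraction of predictions matching the labels."""
    return sum(int(t == p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical predictions before and after retraining on feedback.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
before = [1, 1, 1, 0, 0, 0, 1, 0]   # group "a" favoured (0.75 vs 0.25)
after  = [1, 0, 1, 0, 1, 0, 1, 0]   # positive rates equalised

print(demographic_parity_difference(before, groups))  # 0.5
print(demographic_parity_difference(after, groups))   # 0.0
print(accuracy(y_true, after))                        # 1.0
```

In practice one would report such metrics alongside individual-fairness measures, since (as the paper's discussion notes) retraining can shift accuracy and the two fairness notions in different directions.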
An overview of research on human-centered design in the development of artificial general intelligence
Abstract: This article offers a comprehensive analysis of Artificial General
Intelligence (AGI) development through a humanistic lens. Utilizing a wide
array of academic and industry resources, it dissects the technological and
ethical complexities inherent in AGI's evolution. Specifically, the paper
underlines the societal and individual implications of AGI and argues for its
alignment with human values and interests.
Purpose: The study aims to explore the role of human-centered design in AGI's
development and governance.
Design/Methodology/Approach: Employing content analysis and literature
review, the research evaluates major themes and concepts in human-centered
design within AGI development. It also scrutinizes relevant academic studies,
theories, and best practices.
Findings: Human-centered design is imperative for ethical and sustainable
AGI, emphasizing human dignity, privacy, and autonomy. Incorporating values
like empathy, ethics, and social responsibility can significantly influence
AGI's ethical deployment. Talent development is also critical, warranting
interdisciplinary initiatives.
Research Limitations/Implications: There is a need for additional empirical
studies focusing on ethics, social responsibility, and talent cultivation
within AGI development.
Practical Implications: Implementing human-centered values in AGI development
enables ethical and sustainable utilization, thus promoting human dignity,
privacy, and autonomy. Moreover, a concerted effort across industry, academia,
and research sectors can secure a robust talent pool, essential for AGI's
stable advancement.
Originality/Value: This paper contributes original research to the field by
highlighting the necessity of a human-centered approach in AGI development, and
discusses its practical ramifications.
Comment: 20 pages
Making Data Work Count
In this paper, we examine the work of data annotation. Specifically, we focus on the role of counting or quantification in organising annotation work. Based on an ethnographic study of data annotation in two outsourcing centres in India, we observe that counting practices and their associated logics are an integral part of day-to-day annotation activities. In particular, we call attention to the presumption of total countability observed in annotation: the notion that everything, from tasks, datasets and deliverables to workers, work time, quality and performance, can be managed by applying the logics of counting. To examine this, we draw on sociological and socio-technical scholarship on quantification and develop the lens of a 'regime of counting' that makes explicit the specific counts, practices, actors and structures that underpin the pervasive counting in annotation. We find that within the AI supply chain and data work, counting regimes aid the assertion of authority by the AI clients (also called requesters) over annotation processes, constituting them as reductive, standardised, and homogeneous. We illustrate how this has implications for i) how annotation work and workers get valued, ii) the role human discretion plays in annotation, and iii) broader efforts to introduce accountable and more just practices in AI. Through these implications, we illustrate the limits of operating within the logic of total countability. Instead, we argue for a view of counting as partial: located in distinct geographies, shaped by specific interests and accountable in only limited ways. This, we propose, sets the stage for a fundamentally different orientation to counting and what counts in data annotation.