
    Beyond Personalization: Research Directions in Multistakeholder Recommendation

    Recommender systems are personalized information access applications; they are ubiquitous in today's online environment and effective at finding items that meet user needs and tastes. As the reach of recommender systems has extended, it has become apparent that the single-minded focus on the user common to academic research has obscured other important aspects of recommendation outcomes. Properties such as fairness, balance, profitability, and reciprocity are not captured by typical metrics for recommender system evaluation. The concept of multistakeholder recommendation has emerged as a unifying framework for describing and understanding recommendation settings where the end user is not the sole focus. This article describes the origins of multistakeholder recommendation and the landscape of system designs. It provides illustrative examples of current research and outlines open questions and research directions for the field.
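    As a toy illustration of the trade-off the article surveys, the sketch below re-scores candidate items by blending end-user relevance with a provider-side utility. The scoring function, the weight alpha, and the candidate values are illustrative assumptions, not a method taken from the article.

```python
# Hypothetical multistakeholder trade-off: re-score candidates by blending
# user relevance with a provider-side objective. Weights and values are
# invented for illustration, not drawn from the surveyed article.

def multistakeholder_score(user_relevance: float,
                           provider_utility: float,
                           alpha: float = 0.8) -> float:
    """Blend end-user relevance with a provider-side utility.

    alpha = 1.0 recovers a purely user-centric recommender;
    lower values trade user relevance for provider outcomes.
    """
    return alpha * user_relevance + (1.0 - alpha) * provider_utility

# Three candidate items as (user relevance, provider utility) pairs.
candidates = {
    "item_a": (0.90, 0.10),  # great for the user, poor for the provider
    "item_b": (0.70, 0.80),  # balanced
    "item_c": (0.40, 0.95),  # mostly serves the provider
}

ranking = sorted(candidates,
                 key=lambda k: multistakeholder_score(*candidates[k]),
                 reverse=True)
print(ranking)  # with alpha=0.8: ['item_a', 'item_b', 'item_c']
```

    Lowering alpha shifts the ranking toward provider outcomes, which is exactly the kind of departure from purely user-centric evaluation that properties like profitability and reciprocity require.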

    Language (Technology) is Power: A Critical Survey of "Bias" in NLP

    We survey 146 papers analyzing "bias" in NLP systems, finding that their motivations are often vague, inconsistent, and lacking in normative reasoning, despite the fact that analyzing "bias" is an inherently normative process. We further find that these papers' proposed quantitative techniques for measuring or mitigating "bias" are poorly matched to their motivations and do not engage with the relevant literature outside of NLP. Based on these findings, we describe the beginnings of a path forward by proposing three recommendations that should guide work analyzing "bias" in NLP systems. These recommendations rest on a greater recognition of the relationships between language and social hierarchies, encouraging researchers and practitioners to articulate their conceptualizations of "bias" (i.e., what kinds of system behaviors are harmful, in what ways, to whom, and why, as well as the normative reasoning underlying these statements) and to center work around the lived experiences of members of communities affected by NLP systems, while interrogating and reimagining the power relations between technologists and such communities.

    Characterizing Algorithmic Performance in Machine Learning for Education

    The integration of artificial intelligence (AI) in educational systems has revolutionized the field of education, offering numerous benefits such as personalized learning, intelligent tutoring, and data-driven insights. However, alongside this progress, concerns have arisen about potential algorithmic disparities and performance issues in AI applications for education. This doctoral thesis addresses these concerns and aims to foster the development of AI in educational contexts with an emphasis on performance analysis. The thesis begins by investigating the challenges and needs of the educational community in integrating responsible practices into AI-based educational systems. Through surveys and interviews with experts in the field, real-world needs and common areas for developing more responsible AI in education are identified. Building on these findings, the thesis analyzes student behavior in both synchronous and asynchronous learning environments. By examining patterns of student engagement and predicting student success, it uncovers potential performance issues (e.g., unknown unknowns: cases where the model is highly confident in its predictions yet wrong), emphasizing the need for nuanced approaches that account for hidden factors affecting students' learning outcomes. By providing an integrated view of the performance analyses conducted in different learning environments, the thesis offers a comprehensive understanding of the challenges and opportunities in developing responsible AI applications for education. Ultimately, it contributes to the advancement of responsible AI in education, offering insights into the complexities of algorithmic disparities and their implications. The research presented herein serves as a guiding framework for designing and deploying AI-enabled educational systems that prioritize responsibility and improved learning experiences.
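    The "unknown unknowns" mentioned above can be made concrete with a small sketch: given a model's predicted class probabilities and the true labels, flag instances the model classifies with high confidence yet gets wrong. The function name and the 0.9 threshold are assumptions for illustration; the thesis does not prescribe this exact procedure.

```python
import numpy as np

def find_unknown_unknowns(probs: np.ndarray,
                          labels: np.ndarray,
                          confidence_threshold: float = 0.9) -> np.ndarray:
    """Return indices of high-confidence misclassifications.

    probs  -- (n_samples, n_classes) predicted class probabilities
    labels -- (n_samples,) true class indices
    """
    predictions = probs.argmax(axis=1)      # most likely class per sample
    confidence = probs.max(axis=1)          # probability of that class
    wrong = predictions != labels
    overconfident = confidence >= confidence_threshold
    return np.where(wrong & overconfident)[0]

# Toy example: the model is 95% confident on sample 1 but wrong.
probs = np.array([[0.60, 0.40],
                  [0.95, 0.05],
                  [0.30, 0.70]])
labels = np.array([0, 1, 1])
print(find_unknown_unknowns(probs, labels))  # [1]
```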

    A Transparency Index Framework for AI in Education

    Numerous AI ethics checklists and frameworks have been proposed, focusing on different dimensions of ethical AI such as fairness, explainability, and safety. Yet no such work has been done on developing transparent AI systems for real-world educational scenarios. This paper presents a Transparency Index framework that has been iteratively co-designed with different stakeholders of AI in education, including educators, ed-tech experts, and AI practitioners. We map the requirements of transparency for different categories of stakeholders of AI in education and demonstrate that transparency considerations are embedded in the entire AI development process, from the data collection stage until the AI system is deployed in the real world and iteratively improved. We also demonstrate how transparency enables the implementation of other ethical AI dimensions in education, such as interpretability, accountability, and safety. In conclusion, we discuss directions for future research in this newly emerging field. The main contribution of this study is that it highlights the importance of transparency in developing AI-powered educational technologies and proposes an index framework for its conceptualization for AI in education.
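    The paper's index is co-designed with stakeholders rather than stated as a formula; purely as a hypothetical sketch, one could operationalize such an index as a per-stage checklist score averaged across the development lifecycle. The stages loosely follow the paper's framing (data collection through deployment and iteration), while the checklist items and scoring scheme below are invented for illustration.

```python
# Hypothetical transparency-index aggregation across lifecycle stages.
# Stage names echo the paper's framing; items and scoring are assumptions.

TRANSPARENCY_CHECKLIST = {
    "data_collection": ["data sources documented", "consent recorded"],
    "model_development": ["features explained", "limitations stated"],
    "deployment": ["stakeholder-facing documentation", "contact for appeals"],
    "iteration": ["update log published", "feedback loop in place"],
}

def transparency_index(satisfied: dict[str, set[str]]) -> float:
    """Fraction of checklist items satisfied, averaged over stages."""
    stage_scores = []
    for stage, items in TRANSPARENCY_CHECKLIST.items():
        done = satisfied.get(stage, set())
        stage_scores.append(sum(item in done for item in items) / len(items))
    return sum(stage_scores) / len(stage_scores)

# Example audit: a system that documents data and features but little else.
audit = {
    "data_collection": {"data sources documented"},
    "model_development": {"features explained", "limitations stated"},
}
print(f"{transparency_index(audit):.2f}")  # 0.38
```

    Averaging per stage (rather than over all items at once) keeps a system from compensating for an opaque deployment with a well-documented dataset, which matches the paper's point that transparency must span the whole development process.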

    On the Impact of Explanations on Understanding of Algorithmic Decision-Making

    Ethical principles for algorithms are gaining importance as more and more stakeholders are affected by "high-risk" algorithmic decision-making (ADM) systems. Understanding how these systems work enables stakeholders to make informed decisions and to assess the systems' adherence to ethical values. Explanations are a promising way to create understanding, but current explainable artificial intelligence (XAI) research does not always consider theories on how understanding is formed and evaluated. In this work, we aim to contribute to a better understanding of understanding by conducting a qualitative task-based study with 30 participants, including "users" and "affected stakeholders". We use three explanation modalities (textual, dialogue, and interactive) to explain a "high-risk" ADM system to participants and analyse their responses both inductively and deductively, using the "six facets of understanding" framework by Wiggins & McTighe. Our findings indicate that the "six facets" are a fruitful approach to analysing participants' understanding, highlighting processes such as "empathising" and "self-reflecting" as important parts of understanding. We further introduce the "dialogue" modality as a valid alternative to increase participant engagement in ADM explanations. Our analysis further suggests that individuality in understanding affects participants' perceptions of algorithmic fairness, confirming the link between understanding and ADM assessment that previous studies have outlined. We posit that drawing from theories on learning and understanding like the "six facets", and leveraging explanation modalities, can guide XAI research to better suit explanations to the learning processes of individuals and consequently enable their assessment of the ethical values of ADM systems.
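    To make the deductive part of such an analysis concrete, here is a minimal sketch that tallies coded response segments against the six facets, split by explanation modality. The coded tuples are invented for illustration; the study's actual codebook and counts are not reproduced here.

```python
# Minimal sketch of deductive coding against Wiggins & McTighe's six
# facets of understanding, tallied per explanation modality. The coded
# segments below are invented example data, not the study's results.

from collections import Counter

SIX_FACETS = {"explanation", "interpretation", "application",
              "perspective", "empathy", "self-knowledge"}

# (participant, modality, facet) tuples produced by qualitative coding.
coded_segments = [
    ("p01", "textual", "explanation"),
    ("p01", "textual", "interpretation"),
    ("p02", "dialogue", "empathy"),
    ("p02", "dialogue", "self-knowledge"),
    ("p03", "interactive", "application"),
]

# Sanity check: every code must belong to the deductive codebook.
assert all(facet in SIX_FACETS for _, _, facet in coded_segments)

per_modality = Counter((modality, facet)
                       for _, modality, facet in coded_segments)
for (modality, facet), count in sorted(per_modality.items()):
    print(f"{modality:<12} {facet:<15} {count}")
```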