Using explainability to help children understand Gender Bias in AI

Abstract

The final publication is available at ACM via http://dx.doi.org/10.1145/3459990.3460719

Machine learning systems have become ubiquitous in our society. This has raised concerns about the potential discrimination these systems might exert due to unconscious bias present in their training data, for example regarding gender and race. While this issue has been proposed as an essential subject for new AI curricula in schools, research has shown that it is a difficult topic for students to grasp. We propose an educational platform tailored to raising awareness of gender bias in supervised learning, with the novelty of using Grad-CAM as an explainability technique that enables the classifier to visually explain its own predictions. Our study demonstrates that preadolescents (N=78, age 10-14) significantly improve their understanding of the concept of bias in terms of gender discrimination, increasing their ability to recognize biased predictions when they interact with the interpretable model, which highlights its suitability for educational programs.

Peer Reviewed

Sustainable Development Goals::4 - Quality Education::4.4 - By 2030, substantially increase the number of youth and adults who have the skills needed, in particular technical and vocational skills, to access employment, decent work and entrepreneurship

Postprint (author's final draft)
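The Grad-CAM technique mentioned in the abstract reduces to a simple computation: pool the gradients of the class score over each convolutional feature map to get per-channel weights, take the weighted sum of the maps, and apply a ReLU. A minimal NumPy sketch of that computation follows; the function name, array shapes, and toy inputs are illustrative assumptions, not taken from the paper or its platform.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Compute a Grad-CAM heatmap from convolutional feature maps and the
    gradients of the class score with respect to those maps.

    feature_maps: array of shape (K, H, W) -- K channels of the chosen layer
    gradients:    array of shape (K, H, W) -- d(class score)/d(feature_maps)
    Returns a (H, W) heatmap normalised to [0, 1].
    """
    # alpha_k: global-average-pool the gradients over the spatial dimensions
    weights = gradients.mean(axis=(1, 2))               # shape (K,)
    # weighted combination of the feature maps
    cam = np.tensordot(weights, feature_maps, axes=1)   # shape (H, W)
    # ReLU: keep only regions with a positive influence on the class score
    cam = np.maximum(cam, 0)
    # normalise to [0, 1] so the map can be overlaid on the input image
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: two 3x3 feature maps with uniform gradients
fmaps = np.stack([np.ones((3, 3)), 2 * np.ones((3, 3))])
grads = np.stack([2 * np.ones((3, 3)), 0.5 * np.ones((3, 3))])
heatmap = grad_cam(fmaps, grads)
```

In a real classifier the feature maps and gradients would come from a forward and backward pass through the last convolutional layer; the resulting heatmap is what lets students see *which pixels* drove a (possibly biased) prediction.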