This paper is the first to explore an automatic way to detect
bias in deep convolutional neural networks by simply look-
ing at their weights. Furthermore, it is also a step towards
understanding neural networks and how they work. We show
that it is indeed possible to know if a model is biased or not
simply by looking at its weights, without performing model
inference on any specific input. We analyze how bias is encoded
in the weights of deep networks through a toy example using
the Colored MNIST database and we also provide a realistic
case study in gender detection from face images using state-
of-the-art methods and experimental resources. To do so, we
generated two databases with 36K and 48K biased models,
respectively. For the MNIST models, we were able to detect whether
they presented a strong or low bias with more than 99% ac-
curacy, and we were also able to classify between four levels
of bias with more than 70% accuracy. For the face models,
we achieved 90% accuracy in distinguishing between models
biased towards Asian, Black, or Caucasian ethnicity.

This work has been supported by projects: TRESPASS-ETN (MSCA-ITN-2019-860813), PRIMA (MSCA-ITN-2019-860315), BIBECA (RTI2018-101248-B-I00 MINECO/FEDER), and BBforTAI (PID2021-127641OB-I00 MICINN/FEDER). I. Serna is supported by an FPI fellowship from UAM.
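To make the idea of detecting bias "simply by looking at the weights" concrete, the following is a minimal sketch, assuming PyTorch. The names (extract_weight_vector, BiasDetector), the MLP architecture, and the hyperparameters are illustrative assumptions, not the authors' exact method: the point is only that a second classifier is trained on the flattened weights of many biased models to predict their bias level, with no inference on any input image.

```python
# Minimal sketch (assumption, not the paper's exact architecture):
# predict a bias level directly from the flattened weights of a CNN.
import torch
import torch.nn as nn

def extract_weight_vector(model: nn.Module) -> torch.Tensor:
    """Flatten all convolutional kernels of a model into one 1-D vector."""
    parts = [m.weight.detach().flatten()
             for m in model.modules() if isinstance(m, nn.Conv2d)]
    return torch.cat(parts)

class BiasDetector(nn.Module):
    """Small MLP mapping a weight vector to one of n_levels bias classes."""
    def __init__(self, input_dim: int, n_levels: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, n_levels),
        )

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        return self.net(w)

# Usage sketch: `models` is a list of trained CNNs, `labels` their bias levels.
# detector = BiasDetector(extract_weight_vector(models[0]).numel())
# optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)
# loss_fn = nn.CrossEntropyLoss()
# for model, y in zip(models, labels):
#     logits = detector(extract_weight_vector(model).unsqueeze(0))
#     loss = loss_fn(logits, torch.tensor([y]))
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```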