It is generally agreed that one origin of machine bias lies in
characteristics of the dataset on which the algorithms are trained, i.e.,
the data do not warrant a generalized inference. We, however, hypothesize
that a different `mechanism', hitherto not articulated in the literature, may
also be responsible for machine bias, namely that biases may originate from
(i) the programmers' cultural background, such as education or line of work, or
(ii) the contextual programming environment, such as software requirements or
developer tools. Combining an experimental and comparative design, we studied
the effects of cultural metaphors and contextual metaphors, and tested whether
each of these would `transfer' from the programmer to the program, thus
constituting a machine bias. The results show (i) that cultural metaphors
influence the programmer's choices and (ii) that `induced' contextual metaphors
can be used to moderate or exacerbate the effects of the cultural metaphors.
This supports our hypothesis that biases in automated systems do not always
originate from within the machine's training data. Instead, machines may also
`replicate' and `reproduce' biases from the programmers' cultural background by
the transfer of cultural metaphors into the programming process. Implications
for academia and professional practice range from the micro (programming) level
to the macro (national-regulation or education) level, and span all societal
domains in which software-based systems operate, such as the popular AI-based
automated decision-support systems.

Comment: 40 pages, of which 7 pages are Appendix; 26 figures, 2 tables