Black Boxes and Theory Deserts: Deep Networks and Epistemic Opacity in the Cognitive Sciences

Abstract

Cognitive scientists engage with technology in a distinctive way: they use technology to understand perception, action, and cognition. This form of human-machine interaction (HMI) is well illustrated by cognitive scientists' use of artificial neural networks as models of cognitive systems and, more concretely, of the brain. However, the activity of cognitive scientists in this context suffers from epistemic opacity: artificial neural networks are difficult to interpret and understand, so in many cases they remain black boxes for researchers. In this paper, we offer a diagnosis of this epistemic opacity based on dominant cognitive science's lack of theoretical resources to account for the activity of artificial neural networks when they are taken as models of the brain. We then outline a solution founded on the notion of information developed in ecological psychology.