Computational models of fear conditioning help us understand the sensory pathways and neural structures underlying fear elicitation in the brain. Most existing models have focused on conditioning to auditory stimuli by simulating processing in the amygdala, the main brain structure implicated in processing fearful stimuli. Although our understanding of how fear is elicited by visual stimuli is growing, we do not yet have sufficiently capable techniques for modeling visual fear conditioning. Masking experiments are a key psychophysics technique that can help us understand these pathways by observing the behavior of the amygdala when it is presented with visual input that is not consciously perceived (masked). The amygdala's response indicates whether it is influenced more by the proposed sub-cortical pathway or by the cortical pathway. In this paper, we present a computational platform for visual fear conditioning. We use the platform to model the visual pathways leading to the amygdala and, with them, simulate masking experiments to explore the hypothesis that a sub-cortical pathway exists. The platform uses a modularized Hebbian learning architecture that can organize inputs topographically and condition on multiple stimuli representing visual inputs. We evaluate the properties and behavior of the platform and its capability to simulate masking experiments by comparing our simulation results with those observed in human behavior. Our results provide computational evidence for the influence the sub-cortical pathway has on the amygdala.
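To make the core mechanism concrete, the following is a minimal, hypothetical sketch of Hebbian fear conditioning of the kind the abstract describes: a single "amygdala" output unit learns to associate a visual stimulus vector (the conditioned stimulus, CS) with an innate fear signal (the unconditioned stimulus, US). This is an illustration of the general technique, not the paper's actual architecture; the variable names, stimulus encoding, and the use of Oja-style weight decay (to keep the Hebbian update bounded) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
cs = rng.random(16)        # conditioned (visual) stimulus pattern -- assumed encoding
w = np.zeros(16)           # CS -> amygdala association weights, start naive
lr = 0.05                  # learning rate (illustrative value)

def response(stim, weights, us=0.0):
    """Amygdala fear output: learned drive from the stimulus plus innate US drive."""
    return float(weights @ stim) + us

before = response(cs, w)   # pre-conditioning response to the CS alone

# Conditioning trials: present the CS paired with the US (fear signal = 1.0).
for _ in range(30):
    post = response(cs, w, us=1.0)
    # Oja's rule: the Hebbian term lr * post * cs plus a decay term that
    # prevents the weights from growing without bound.
    w += lr * post * (cs - post * w)

after = response(cs, w)    # post-conditioning response to the CS alone
print(before, after)       # the CS alone now elicits a fear response
```

After pairing, the CS alone drives the output unit, which is the behavioral signature of conditioning; a masking simulation would then probe how this learned response changes when the CS is rendered consciously imperceptible.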