In contrast to the wealth of saliency models in the vision literature, there is a relative paucity of models exploring auditory saliency. In this work, we integrate the approaches of Kayser, Petkov, Lippert, and Logothetis (2005) and Zhang, Tong, Marks, Shan, and Cottrell (2008) and propose a model of auditory saliency. The model combines the statistics of natural soundscapes with the recent past of the input signal to predict the saliency of an auditory stimulus in the frequency domain. To evaluate the model output, a simple behavioral experiment was performed. Results show the auditory saliency maps calculated by the model to be in excellent accord with human judgments of saliency.
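The abstract describes saliency as a prediction driven by the statistics of the input's recent past in the frequency domain. The sketch below illustrates one common way such a computation can be structured; it is an assumption-laden illustration, not the paper's actual model. It computes a log-magnitude spectrogram, maintains an exponentially decaying Gaussian estimate of each frequency band's recent history, and scores each new frame by its self-information (negative log-probability) under that estimate, so unexpected spectral energy is marked as salient. All names and parameters (`auditory_saliency`, `alpha`, the Gaussian assumption) are hypothetical.

```python
import numpy as np

def auditory_saliency(signal, sr=16000, n_fft=256, hop=128, alpha=0.95):
    """Hypothetical sketch (not the paper's model): saliency of each
    time-frequency bin as self-information under a running Gaussian
    model of each frequency band's recent past."""
    # Short-time Fourier transform -> log-magnitude spectrogram (time, freq)
    window = np.hanning(n_fft)
    frames = [np.abs(np.fft.rfft(signal[s:s + n_fft] * window))
              for s in range(0, len(signal) - n_fft + 1, hop)]
    spec = np.log1p(np.array(frames))

    # Running per-band statistics summarizing the "recent past"
    mean = spec[0].copy()
    var = np.ones_like(mean)
    saliency = np.zeros_like(spec)
    for t in range(1, spec.shape[0]):
        # Gaussian -log p: large when the band deviates from its history
        z2 = (spec[t] - mean) ** 2 / (var + 1e-8)
        saliency[t] = 0.5 * (z2 + np.log(2 * np.pi * (var + 1e-8)))
        # Exponential forgetting keeps the model anchored to recent input
        mean = alpha * mean + (1 - alpha) * spec[t]
        var = alpha * var + (1 - alpha) * (spec[t] - mean) ** 2
    return saliency
```

Under this scheme a sudden tone onset against steady background noise produces a sharp saliency peak in the corresponding frequency band, which then decays as the running statistics absorb the new sound.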