Despite many advances in both computational intelligence and computational neuroscience, it is clear that we have yet to realise the full potential of nature-inspired solutions derived from studying the human brain. Models of brain function have reached the stage where large-scale models of the brain have become possible, yet these tantalising computational structures cannot yet be applied to real-world problems because they lack the means to be connected to real-world inputs or outputs. This paper introduces the notion of a computational hub that has the potential to link real sensory stimuli to higher cortical models. This is achieved by modelling subcortical structures, such as the superior colliculus, which embody desirable computational principles, including rapid, multisensory and discriminative processing. We demonstrate some of these subcortical principles in a system that performs real-time speaker localisation using live video and audio, showing how such models may help us bridge the computational gap.
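As an illustration of the multisensory integration the abstract alludes to (this is a generic sketch, not necessarily the fusion scheme used in the paper), one common way to combine an audio-derived azimuth estimate with a visual one is precision-weighted fusion of two independent Gaussian estimates, where the more reliable cue dominates the combined localisation:

```python
def fuse_estimates(mu_audio, var_audio, mu_visual, var_visual):
    """Precision-weighted fusion of two independent noisy azimuth
    estimates (in degrees), assuming Gaussian noise on each cue.

    Returns the fused mean and its (reduced) variance.
    All names here are illustrative, not from the paper.
    """
    # Precision = inverse variance; a more reliable cue gets more weight.
    w_audio = 1.0 / var_audio
    w_visual = 1.0 / var_visual
    mu = (w_audio * mu_audio + w_visual * mu_visual) / (w_audio + w_visual)
    var = 1.0 / (w_audio + w_visual)
    return mu, var


# Example: audio says the speaker is at 10 deg, vision says 20 deg,
# with equal uncertainty; the fused estimate lands between them,
# with lower variance than either cue alone.
mu, var = fuse_estimates(10.0, 4.0, 20.0, 4.0)
print(mu, var)  # → 15.0 2.0
```

Note how the fused variance is always smaller than either input variance, which mirrors the behavioural finding that multisensory estimates are more precise than unisensory ones.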