Speaker-dependent bimodal integration of Chinese phonemes and letters using multimodal self-organizing networks


Abstract—We present a model of the integration of auditory and visual information as it occurs in the human cortex. More specifically, we demonstrate a possible way in which phonetic symbols and the associated Mandarin Chinese phonemes, pronounced by different speakers, are mapped onto model cortical areas. Our model has been strongly influenced by recent fMRI studies on the integration of letters and speech sounds in the human brain. It is based on multimodal self-organizing networks (MuSoNs), which were introduced in our previous works and proved to be a convenient tool for describing and studying the mapping and integration of sensory information as in the cortex. The model also shows how phonemes pronounced by different speakers are mapped onto overlapping cortical areas.
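The abstract does not give the MuSoN equations, but the core idea of self-organized mapping of bimodal stimuli can be illustrated with a minimal sketch: a classic Kohonen self-organizing map trained on concatenated auditory-plus-visual feature vectors, so that stimuli similar in either modality come to occupy nearby (overlapping) map regions. All dimensions, names, and parameters below are illustrative assumptions, not the authors' actual MuSoN architecture.

```python
import numpy as np

# Minimal multimodal SOM sketch (an assumption-laden illustration,
# NOT the authors' MuSoN model). Each stimulus is a concatenation of
# a hypothetical auditory (phoneme) and visual (letter) feature vector.

rng = np.random.default_rng(0)

GRID = 8                 # 8x8 grid of model "cortical" units
AUD_DIM, VIS_DIM = 6, 4  # illustrative modality feature dimensions
DIM = AUD_DIM + VIS_DIM

# one weight vector per map unit
W = rng.random((GRID, GRID, DIM))

# grid coordinates, used by the Gaussian neighborhood function
coords = np.stack(
    np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij"), axis=-1
)

def best_matching_unit(x):
    """Grid index of the unit whose weights are closest to input x."""
    d = np.linalg.norm(W - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

def train_step(x, lr, sigma):
    """One Kohonen update: pull the BMU and its neighbors toward x."""
    bmu = np.array(best_matching_unit(x))
    dist2 = np.sum((coords - bmu) ** 2, axis=-1)
    h = np.exp(-dist2 / (2 * sigma**2))     # Gaussian neighborhood
    W[...] += lr * h[..., None] * (x - W)

# toy bimodal stimuli: 20 random "phoneme + letter" pairs
stimuli = rng.random((20, DIM))

def mean_quantization_error():
    return float(np.mean(
        [np.linalg.norm(W[best_matching_unit(x)] - x) for x in stimuli]
    ))

err_before = mean_quantization_error()
for epoch in range(200):
    lr = 0.5 * (1 - epoch / 200)                # decaying learning rate
    sigma = 3.0 * (1 - epoch / 200) + 0.5       # shrinking neighborhood
    for x in stimuli:
        train_step(x, lr, sigma)
err_after = mean_quantization_error()

print(err_before, err_after)  # quantization error drops with training
```

Because both modalities share one weight space, a unit tuned to a phoneme is automatically tuned to the co-presented letter features, which is a crude stand-in for the overlapping bimodal cortical areas the paper describes.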


This paper was published in CiteSeerX.
