High-fidelity rendering is computationally demanding and has only recently become achievable at interactive frame rates on high-performance desktop PCs. Research on visual perception has demonstrated that parts of a scene that are not in the focus of the viewer's attention may be rendered at much lower quality without the quality difference being perceived. It has also been shown that cross-modal interaction between visual and auditory stimuli can have a significant influence on perception. This paper investigates the limitations of the human visual system and the impact that cross-modal interactions have on perceivable rendering thresholds. We show that by exploiting cross-modal interaction, significant savings in rendering quality, and hence in computational requirements, can be achieved while maintaining the same overall perceived quality of the resultant image.