Vision and haptics are the two most important modalities in medical simulation. While
visual cues allow users to see their actions when performing a medical procedure, haptic
cues enable them to feel the object being manipulated during the interaction. Despite their
importance in computer simulation, the combination of the two modalities has not been
adequately assessed, particularly in haptic-dominant environments. As a result, little
emphasis has been placed on resource allocation, in terms of the effort spent
rendering each modality, for simulators with realistic real-time interactions.
Addressing this problem requires investigating whether a single modality
(haptics) or a combination of vision and haptics is better for learning skills
in a haptic-dominant environment such as a palpation simulator. Before
such an investigation can take place, however, one main technical implementation issue in
visuo-haptic rendering needs to be addressed.