Stereoscopic, head-tracked display systems can show users realistic,
world-locked virtual objects and environments. However, discrepancies between
the rendering pipeline and physical viewing conditions can lead to perceived
instability in the rendered content, resulting in reduced realism, immersion,
and, potentially, visually induced motion sickness. The requirements for
achieving perceptually stable world-locked rendering remain unknown, owing to
the challenge of constructing a wide-field-of-view, distortion-free display
with highly accurate head and eye tracking. In this work, we introduce new
hardware and software built upon a recently introduced display and present a
system capable of rendering virtual objects over real-world references without
perceivable drift under these demanding conditions. We use this platform to
study acceptable errors in
render-camera position for world-locked rendering in augmented and virtual
reality scenarios, where we find an order-of-magnitude difference in perceptual
sensitivity between the two. We conclude by comparing the study results with an
analytic model that examines changes in apparent depth and visual heading in
response to camera displacement errors. We identify visual heading as an
important consideration for world-locked rendering, alongside depth errors from
incorrect disparity.
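
As a hedged illustration of the first-order geometry such an analytic model
might capture (the symbols here, $\Delta x$ for lateral render-camera
displacement, $d$ for object depth, and $a$ for interpupillary distance, are
our assumptions rather than the paper's notation), a lateral camera error
$\Delta x$ shifts the apparent visual heading of a point at depth $d$ by
$$\theta = \arctan\!\left(\frac{\Delta x}{d}\right) \approx \frac{\Delta x}{d},$$
so heading errors grow as content approaches the viewer. Under the same
small-angle assumptions, binocular disparity is $\delta \approx a/d$, and
inverting $d = a/\delta$ shows that a disparity error $\Delta\delta$ perturbs
apparent depth by
$$\Delta d \approx -\frac{a}{\delta^{2}}\,\Delta\delta = -\frac{d^{2}}{a}\,\Delta\delta,$$
so depth errors scale with the square of viewing distance.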