
Handling Artifacts in Dynamic Depth Sequences

Abstract

The main scope of this work is the handling of artifacts that arise in image sequences of dynamic scenes recorded with various depth imaging devices. First, a framework for range flow estimation from Microsoft's multi-modal imaging device Kinect is presented. All essential stages of the flow computation pipeline are discussed, starting with camera calibration, followed by the alignment of the range and color channels, and concluding with the introduction of a novel multi-modal range flow algorithm that is robust against typical, technology-dependent range estimation artifacts. Second, regarding Time-of-Flight data, motion artifacts arise in recordings of dynamic scenes due to the sequential nature of the raw image acquisition process. While many methods for compensating such errors have been proposed, a proper comparison between them has been lacking. This gap is bridged here by evaluating all proposed methods and by providing additional insight into the technical properties and depth correction of the recorded data as a baseline for future research. Exchanging the tap calibration model required by these methods for a model closer to reality improves the results of all related methods without any loss of performance.
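To make the alignment stage of the pipeline concrete, the following is a minimal illustrative sketch (not the thesis implementation) of reprojecting a calibrated depth map into the color camera's view, assuming a standard pinhole model with depth intrinsics K_d, color intrinsics K_c, and a rigid depth-to-color transform (R, t); all names and parameters are placeholders introduced here for illustration.

    # Illustrative sketch only: depth-to-color alignment under a pinhole model.
    import numpy as np

    def align_depth_to_color(depth, K_d, K_c, R, t):
        """Reproject each depth pixel into the color camera's image plane.

        depth : (H, W) depth map in meters from the depth sensor
        K_d   : (3, 3) depth camera intrinsics
        K_c   : (3, 3) color camera intrinsics
        R, t  : rotation (3, 3) and translation (3,) from depth to color frame
        """
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        # Back-project depth pixels to 3-D points in the depth camera frame.
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
        rays = np.linalg.inv(K_d) @ pix
        pts_d = rays * depth.reshape(1, -1)          # scale rays by measured depth
        # Transform into the color camera frame and project.
        pts_c = R @ pts_d + t.reshape(3, 1)
        proj = K_c @ pts_c
        valid = proj[2] > 0
        uc = proj[0, valid] / proj[2, valid]
        vc = proj[1, valid] / proj[2, valid]
        # Scatter depths onto the color image grid (nearest pixel, no z-buffering).
        aligned = np.zeros((H, W), dtype=depth.dtype)
        ui = np.clip(np.round(uc).astype(int), 0, W - 1)
        vi = np.clip(np.round(vc).astype(int), 0, H - 1)
        aligned[vi, ui] = pts_c[2, valid]
        return aligned

The aligned depth map can then be paired with the color frames as joint input to a multi-modal range flow estimator; occlusion handling and interpolation of holes are omitted here for brevity.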
