Kinect Depth Recovery via the Cooperative Profit Random Forest Algorithm

Abstract

Depth maps captured by Kinect usually contain missing depth data. In this paper, we propose a novel method to recover the missing depth data under the guidance of the depth information of neighboring pixels. In the proposed framework, a self-taught mechanism and a cooperative profit random forest (CPRF) algorithm are combined to predict the missing depth data from the existing depth data and the corresponding RGB image. The proposed method overcomes the defects of traditional methods, which are prone to producing artifacts or blur at object edges. Experimental results on the Berkeley 3-D Object Dataset (B3DO) and the Middlebury benchmark dataset show that the proposed method outperforms existing methods in recovering missing depth data. In particular, it is effective at preserving the geometry of objects.
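To make the recovery task concrete, the following is a minimal sketch of hole filling guided by neighboring depth values. It is not the paper's CPRF algorithm: instead of a learned forest, it fills each missing pixel (marked 0.0) with the median of its valid 8-neighbors, iterating until no holes remain. All names are illustrative.

```python
def fill_missing_depth(depth):
    """Fill missing values (0.0) in a 2-D depth map (list of lists)
    from the median of valid neighbors. Simplified stand-in for a
    learned predictor such as the paper's CPRF."""
    h, w = len(depth), len(depth[0])
    depth = [row[:] for row in depth]  # work on a copy
    changed = True
    while changed:
        changed = False
        out = [row[:] for row in depth]
        for y in range(h):
            for x in range(w):
                if depth[y][x] != 0.0:
                    continue  # pixel already has depth
                # collect valid depths in the 3x3 neighborhood
                neigh = [depth[ny][nx]
                         for ny in range(max(0, y - 1), min(h, y + 2))
                         for nx in range(max(0, x - 1), min(w, x + 2))
                         if depth[ny][nx] != 0.0]
                if neigh:
                    neigh.sort()
                    out[y][x] = neigh[len(neigh) // 2]  # median fill
                    changed = True
        depth = out
    return depth
```

A learned method such as CPRF replaces the median rule with a regressor trained on features from both the existing depth and the aligned RGB image, which is what allows it to preserve object geometry at edges rather than smearing across them.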
