5 research outputs found

    SemanticFusion: Joint labeling, tracking and mapping

    No full text
    Kick-started by the deployment of the well-known KinectFusion, recent research on RGBD-based dense volume reconstruction has focused on improving different shortcomings of the original algorithm. In this paper we tackle two of them: drift in the camera trajectory caused by the accumulation of small per-frame tracking errors, and the lack of semantic information within the output of the algorithm. Accordingly, we present an extended KinectFusion pipeline that takes into account per-pixel semantic labels gathered from the input frames. Based on these cues, we extend the memory structure holding the reconstructed environment so as to store per-voxel information on the kinds of objects likely to appear at each spatial location. We then take this information into account during the camera localization step to increase the accuracy of the estimated camera trajectory. Thus, we realize a SemanticFusion loop whereby per-frame labels help track the camera more accurately, and successful tracking enables us to consolidate instantaneous semantic observations into a coherent volumetric map.
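    The abstract describes extending each voxel's storage with per-class semantic observations alongside the usual signed-distance data. A minimal sketch of such a structure is given below; the class count, field names, and the specific update rule are illustrative assumptions, not the paper's actual implementation.

    ```python
    from dataclasses import dataclass, field

    NUM_CLASSES = 4  # hypothetical number of semantic classes

    @dataclass
    class SemanticVoxel:
        """TSDF voxel extended with a per-class label histogram (illustrative layout)."""
        tsdf: float = 1.0      # truncated signed distance
        weight: float = 0.0    # integration weight
        label_counts: list = field(default_factory=lambda: [0] * NUM_CLASSES)

        def integrate(self, sdf_obs: float, label: int, w: float = 1.0) -> None:
            # Weighted running average of the signed distance
            # (the standard KinectFusion depth-fusion update)
            self.tsdf = (self.tsdf * self.weight + sdf_obs * w) / (self.weight + w)
            self.weight += w
            # Accumulate the per-pixel semantic observation into the voxel histogram
            self.label_counts[label] += 1

        def dominant_label(self) -> int:
            # Most frequently observed class at this spatial location; a tracker
            # could weight data association by this per-voxel label evidence
            return max(range(NUM_CLASSES), key=lambda c: self.label_counts[c])
    ```

    Keeping a count per class, rather than a single hard label, lets instantaneous (possibly noisy) per-frame labels be consolidated over time, which matches the "SemanticFusion loop" idea in the abstract.
    
    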

    A Brute Force Approach to Depth Camera Odometry

    No full text
    By providing direct access to 3D information about the environment, depth cameras are particularly useful for perception applications such as Simultaneous Localization And Mapping or object recognition. With the introduction of the Kinect in 2010, Microsoft released a low-cost depth camera that is now used intensively by researchers, especially in the field of indoor robotics. This chapter introduces a new 3D registration algorithm that can cope with considerable sensor motion. The proposed approach is designed to take advantage of the powerful computational scalability of Graphics Processing Units (GPUs).
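    The "brute force" idea the abstract alludes to can be sketched as exhaustively scoring a grid of candidate poses, since every candidate is scored independently, the search parallelizes naturally onto GPU threads. The 2D rigid-transform example below is an illustrative toy, not the chapter's actual algorithm.

    ```python
    import math

    def transform(points, theta, tx, ty):
        """Apply a 2D rigid transform (rotation theta, translation tx, ty)."""
        c, s = math.cos(theta), math.sin(theta)
        return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

    def alignment_error(src, dst):
        """Sum of squared nearest-neighbour distances from src to dst."""
        total = 0.0
        for x, y in src:
            total += min((x - u) ** 2 + (y - v) ** 2 for u, v in dst)
        return total

    def brute_force_register(src, dst, thetas, txs, tys):
        """Exhaustively score every candidate pose on a grid.
        Each (theta, tx, ty) candidate is independent of the others, which is
        what makes this scheme map naturally onto one GPU thread per candidate."""
        best = None
        for th in thetas:
            for tx in txs:
                for ty in tys:
                    err = alignment_error(transform(src, th, tx, ty), dst)
                    if best is None or err < best[0]:
                        best = (err, (th, tx, ty))
        return best[1]
    ```

    Because no initial pose estimate is required, such an exhaustive search can cope with large inter-frame motion, at the cost of evaluating many candidates, which is exactly where GPU throughput pays off.
    
    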