131 research outputs found

    Fusing spatial and temporal components for real-time depth data enhancement of dynamic scenes

    Depth images from consumer depth cameras (e.g., structured-light and ToF devices) exhibit substantial artifacts (e.g., holes, flickering, ghosting) that need to be removed for real-world applications. Existing methods cannot remove them entirely and run too slowly. This thesis proposes a new real-time spatio-temporal depth-image enhancement filter that completely removes flickering and ghosting and significantly reduces holes. The thesis also presents a novel depth-data capture setup and two data-reduction methods that optimize the performance of the proposed enhancement method.
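    As a hedged illustration of what such a spatio-temporal filter can look like (not the thesis' actual algorithm), the sketch below smooths small per-pixel fluctuations over time to remove flicker, accepts large jumps as real motion, and carries the last stable value into holes; alpha and max_jump_mm are assumed parameter names and values.

```python
# Hedged sketch of a spatio-temporal depth filter (not the thesis' algorithm):
# small fluctuations are smoothed over time, large jumps are accepted as real
# motion, and holes (depth == 0) keep the last stable value.
import numpy as np

def enhance_depth(frame: np.ndarray, state: np.ndarray | None,
                  alpha: float = 0.3, max_jump_mm: float = 50.0) -> np.ndarray:
    """frame: raw depth in millimetres (0 marks a hole); state: filtered history."""
    if state is None:
        return frame.astype(np.float32)
    out = state.copy()                       # holes inherit the previous value
    valid = frame > 0
    jump = np.abs(frame - state) > max_jump_mm
    out[valid & jump] = frame[valid & jump]  # real motion: accept the new depth
    small = valid & ~jump
    out[small] = alpha * frame[small] + (1 - alpha) * state[small]  # anti-flicker
    return out
```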

    SPA: Sparse Photorealistic Animation using a single RGB-D camera

    Photorealistic animation is a desirable technique for computer games and movie production. We propose a new method to synthesize plausible videos of human actors performing new motions using a single inexpensive RGB-D camera. A small database is captured in an ordinary office environment; this capture happens only once and is reused to synthesize different motions. We propose a markerless performance-capture method using sparse deformation to obtain the geometry and pose of the actor at each time instance in the database. We then synthesize an animation video of the actor performing a new, user-defined motion. An adaptive model-guided texture-synthesis method based on weighted low-rank matrix completion is proposed to reduce sensitivity to noise and outliers, which enables us to easily create photorealistic animation videos with motions that differ from those in the database. Experimental results on a public dataset and our captured dataset verify the effectiveness of the proposed method.
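    A minimal sketch of the weighted low-rank matrix completion building block behind the texture-synthesis step is below (a soft-impute-style alternation, not the paper's exact solver). The rank r, iteration count, and the confidence-weight semantics of W are assumptions.

```python
# Weighted low-rank matrix completion, sketched as iterative rank-r projection.
# W holds per-entry confidences in [0, 1]; missing entries of M may hold any
# finite placeholder (their weight is 0). Not the paper's exact solver.
import numpy as np

def weighted_lowrank_complete(M: np.ndarray, W: np.ndarray,
                              r: int = 10, iters: int = 100) -> np.ndarray:
    X = W * M                                # start from the weighted observations
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        L = (U[:, :r] * s[:r]) @ Vt[:r]      # best rank-r approximation of X
        X = W * M + (1 - W) * L              # blend observations by confidence
    return X
```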

    An Improved Depth Image Inpainting

    In recent years the price of depth cameras has fallen, allowing researchers to use them in more applications. For computer vision, depth images provide useful additional information. However, depth images generally suffer from problems such as holes, incomplete edges, and temporal random fluctuations. Conventional inpainting approaches must rely on a color image and cannot run in real time. This paper therefore proposes a real-time depth-image inpainting method. First, we use background subtraction and a mask filter to patch the unmeasured pixels; we then use the relationship between successive depth images to remove temporal random fluctuations. Finally, erosion and dilation smooth the edges. Experimental results outperform the traditional approach. (International conference, 11–13 July 2014, Phuket, Thailand; sponsored by the Asia-Pacific Education & Research Association.)
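    A rough sketch of the three stages named above, assuming OpenCV and NumPy; the background model, flicker threshold, and kernel size are illustrative assumptions, not the paper's parameters.

```python
# Three-stage depth inpainting sketch: background fill, temporal smoothing,
# morphological edge cleanup. Parameters are assumed, not the paper's.
import cv2
import numpy as np

def inpaint_depth(frame, background, prev_filtered, flicker_thresh=30):
    out = frame.copy()
    holes = frame == 0                        # unmeasured pixels
    out[holes] = background[holes]            # 1) patch holes from background model
    if prev_filtered is not None:
        # 2) suppress temporal random fluctuations between successive frames
        stable = np.abs(out.astype(np.int32) - prev_filtered) < flicker_thresh
        out[stable] = prev_filtered[stable]
    # 3) morphological closing (dilation then erosion) smooths ragged edges
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(out, cv2.MORPH_CLOSE, kernel)
```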

    Extended patch prioritization for depth filling within constrained exemplar-based RGB-D image completion.

    We address the problem of hole filling in depth images, obtained from either active or stereo sensing, for the purpose of depth-image completion in an exemplar-based framework. Most existing exemplar-based inpainting techniques, designed for color-image completion, do not perform well on depth information where object boundaries are obstructed or surrounded by missing regions. In the proposed method, using both the color (RGB) and depth (D) channels available in a commonplace RGB-D image, we explicitly modify the patch-prioritization term used for target-patch ordering to improve the propagation of complex texture and linear structures during depth completion. Furthermore, the query space in the source region is constrained to increase the efficiency of the approach compared to other exemplar-driven methods. Evaluations demonstrate the efficacy of the proposed method compared to other contemporary completion techniques.
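    For context, a minimal illustration of exemplar-based patch priority in the style of Criminisi et al., the baseline this work modifies: a confidence term C(p) times a data term D(p). Here D(p) is a simple depth-gradient response; the exact combination and the half-width parameter are assumptions, not the paper's extended prioritization term.

```python
# Baseline exemplar-based priority P(p) = C(p) * D(p), loosely after
# Criminisi et al.; the data term here is a simplified depth-gradient response.
import numpy as np

def patch_priority(confidence: np.ndarray, depth: np.ndarray,
                   p: tuple, half: int = 4) -> float:
    y, x = p
    win = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
    C = confidence[win].mean()               # fraction of reliable pixels in patch
    gy, gx = np.gradient(depth[win].astype(np.float64))
    D = np.hypot(gy, gx).max() / 255.0       # strength of structure at the fill front
    return C * D                             # P(p) = C(p) * D(p)
```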

    Generalized Video Deblurring for Dynamic Scenes

    Several state-of-the-art video deblurring methods rest on the strong assumption that the captured scene is static; they fail on blurry videos of dynamic scenes. In contrast, we propose a video deblurring method that handles the general blur inherent in dynamic scenes. To handle locally varying, general blur caused by sources such as camera shake, moving objects, and depth variation within a scene, we approximate the pixel-wise blur kernel with bidirectional optical flows. We therefore propose a single energy model that simultaneously estimates optical flows and latent frames, together with a framework and efficient solvers to optimize it. By minimizing the proposed energy function, we achieve significant improvements in removing blur and in estimating accurate optical flow in blurry frames. Extensive experimental results demonstrate the superiority of the proposed method on real, challenging videos where state-of-the-art methods fail in either deblurring or optical-flow estimation. (CVPR 2015 oral.)
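    A schematic form of such a joint energy, with symbols and weights as illustrative assumptions rather than the paper's exact formulation (L_t: latent frames, u_t: bidirectional flows, B_t: observed blurry frames, K_{u_t}: the pixel-wise blur operator approximated from the flows):

```latex
% Illustrative joint energy; lambda and mu are assumed regularization weights.
\[
E(\{L_t\},\{u_t\}) = \sum_t \big\lVert K_{u_t} L_t - B_t \big\rVert_1
  + \lambda \sum_t \lVert \nabla L_t \rVert_1
  + \mu \sum_t \lVert \nabla u_t \rVert_1 ,
\]
```

    Minimization would then alternate between updating the latent frames with the flows fixed and re-estimating the flows given the current latent frames.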