9,168 research outputs found

    A Deep Primal-Dual Network for Guided Depth Super-Resolution

    Full text link
    In this paper, we present a novel method to increase the spatial resolution of depth images. We combine a deep fully convolutional network with a non-local variational method in a deep primal-dual network. The joint network computes a noise-free, high-resolution estimate from a noisy, low-resolution input depth map. Additionally, a high-resolution intensity image is used to guide the reconstruction in the network. By unrolling the optimization steps of a first-order primal-dual algorithm and formulating them as a network, we can train our joint method end-to-end. This not only enables us to learn the weights of the fully convolutional network, but also to optimize all parameters of the variational method and its optimization procedure. Training such a deep network requires a large dataset for supervision; therefore, we generate high-quality depth maps and corresponding color images with a physically based renderer. In an exhaustive evaluation we show that our method outperforms the state of the art on multiple benchmarks.
    Comment: BMVC 2016
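    To make the unrolling idea concrete, the sketch below expresses K iterations of a Chambolle-Pock-style primal-dual scheme for TV-regularized depth refinement as a PyTorch module with learnable per-iteration step sizes. This is a minimal sketch under assumed choices (TV regularizer, quadratic data term, iteration count), not the authors' network; their guidance CNN is omitted.

        import torch
        import torch.nn as nn

        def grad(u):
            # forward differences with Neumann boundary (last row/column zero)
            dx, dy = torch.zeros_like(u), torch.zeros_like(u)
            dx[..., :, :-1] = u[..., :, 1:] - u[..., :, :-1]
            dy[..., :-1, :] = u[..., 1:, :] - u[..., :-1, :]
            return dx, dy

        def div(px, py):
            # divergence, the negative adjoint of grad above
            d = torch.zeros_like(px)
            d[..., :, 0] += px[..., :, 0]
            d[..., :, 1:-1] += px[..., :, 1:-1] - px[..., :, :-2]
            d[..., :, -1] += -px[..., :, -2]
            d[..., 0, :] += py[..., 0, :]
            d[..., 1:-1, :] += py[..., 1:-1, :] - py[..., :-2, :]
            d[..., -1, :] += -py[..., -2, :]
            return d

        class PrimalDualNet(nn.Module):
            def __init__(self, iters=10):
                super().__init__()
                self.sigma = nn.Parameter(torch.full((iters,), 0.25))  # dual step sizes
                self.tau = nn.Parameter(torch.full((iters,), 0.25))    # primal step sizes
                self.lam = nn.Parameter(torch.tensor(1.0))             # data-term weight
                self.iters = iters

            def forward(self, d):
                # d: (B, 1, H, W) noisy depth map, already upsampled to target size
                u, ubar = d.clone(), d.clone()
                px, py = torch.zeros_like(d), torch.zeros_like(d)
                for k in range(self.iters):
                    gx, gy = grad(ubar)                        # dual ascent on TV term
                    px, py = px + self.sigma[k] * gx, py + self.sigma[k] * gy
                    norm = torch.clamp(torch.sqrt(px ** 2 + py ** 2), min=1.0)
                    px, py = px / norm, py / norm              # project onto unit ball
                    u_prev = u
                    u = u + self.tau[k] * div(px, py)          # primal descent
                    u = (u + self.tau[k] * self.lam * d) / (1 + self.tau[k] * self.lam)
                    ubar = 2 * u - u_prev                      # over-relaxation (theta = 1)
                return u

    Because training backpropagates through all iterations, the step sizes and data weight are learned jointly with any preceding network, which is what makes the optimizer itself end-to-end trainable.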

    A novel disparity-assisted block matching-based approach for super-resolution of light field images

    Get PDF
    Currently available plenoptic imaging technology has limited resolution, which makes it challenging to use in applications where sharpness is essential, such as the film industry. Previous attempts at enhancing the spatial resolution of plenoptic light field (LF) images were based on block and patch matching inherited from classical image super-resolution, where multiple views were treated as separate frames. In contrast to these approaches, a novel super-resolution technique is proposed in this paper, with a focus on exploiting estimated disparity information to reduce the matching area in the super-resolution process. We estimate the disparity information from the interpolated low-resolution viewpoint images (VPs) and denote our method light field block matching super-resolution. We additionally combine our novel super-resolution method with the directionally adaptive image interpolation from [1] to preserve the sharpness of the high-resolution images. We demonstrate a consistent gain in the PSNR and SSIM quality of the super-resolved images for a resolution enhancement factor of 8x8, compared both to recent approaches and to our previous work [2].
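    To illustrate the role of the disparity prior, here is a hypothetical NumPy sketch of disparity-guided block matching between two viewpoint images; the block size, search radius, and SAD cost are illustrative assumptions, not the paper's settings.

        import numpy as np

        def match_block(target, neighbor, y, x, disp, block=8, radius=2):
            # Find the best match for target[y:y+block, x:x+block] in `neighbor`,
            # searching only around the disparity-predicted column x + disp.
            ref = target[y:y+block, x:x+block].astype(np.float64)
            best_cost, best_xy = np.inf, (y, x)
            cx = int(round(x + disp))                  # disparity-guided center
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, cx + dx
                    if yy < 0 or xx < 0 or yy + block > neighbor.shape[0] \
                            or xx + block > neighbor.shape[1]:
                        continue
                    cand = neighbor[yy:yy+block, xx:xx+block]
                    cost = np.abs(ref - cand).sum()    # SAD matching cost
                    if cost < best_cost:
                        best_cost, best_xy = cost, (yy, xx)
            return best_xy, best_cost

        # toy usage: a view shifted by 3 pixels, i.e. true disparity -3
        rng = np.random.default_rng(0)
        view = rng.random((64, 64))
        shifted = np.roll(view, -3, axis=1)
        print(match_block(view, shifted, 20, 20, disp=-3.0))  # ((20, 17), 0.0)

    Without the disparity estimate, the search window would have to cover the full disparity range of the light field; centering it on the predicted position shrinks the candidate set and reduces mismatches.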

    Depth Estimation Through a Generative Model of Light Field Synthesis

    Full text link
    Light field photography captures rich structural information that may facilitate a number of traditional image processing and computer vision tasks. A crucial ingredient in such endeavors is accurate depth recovery. We present a novel framework that allows the recovery of a high-quality continuous depth map from light field data. To this end, we propose a generative model of a light field that is fully parametrized by its corresponding depth map. The model allows for the integration of powerful regularization techniques, such as a non-local means prior, facilitating accurate depth map estimation.
    Comment: German Conference on Pattern Recognition (GCPR) 2016
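    A minimal sketch of the generative direction, under the common assumption that each sub-aperture view is the central view resampled along a disparity field derived from depth (the exact parametrization in the paper may differ; `depth_to_disp` is a hypothetical camera-specific mapping):

        import numpy as np
        from scipy.ndimage import map_coordinates

        def synthesize_view(center, disparity, s, t):
            # Render the sub-aperture view at angular offset (s, t) by
            # backward-warping the central view; `disparity` is the per-pixel
            # shift between adjacent views. Sign conventions are a modeling choice.
            h, w = center.shape
            ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
            coords = np.stack([ys + t * disparity, xs + s * disparity])
            return map_coordinates(center, coords, order=1, mode='nearest')

        def photometric_error(depth_to_disp, depth, center, views):
            # Sum of squared differences between observed and synthesized views;
            # minimizing this over `depth` (plus a prior) recovers the depth map.
            disparity = depth_to_disp(depth)
            return sum(np.sum((v - synthesize_view(center, disparity, s, t)) ** 2)
                       for (s, t), v in views.items())

    Depth estimation then amounts to inverting the model: choose the depth map whose synthesized views best reproduce the recorded light field, subject to a regularizer such as the non-local means prior.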

    Deep Markov Random Field for Image Modeling

    Full text link
    Markov Random Fields (MRFs), a formulation widely used in generative image modeling, have long been plagued by a lack of expressive power. This issue is primarily due to the fact that conventional MRF formulations tend to use simplistic factors to capture local patterns. In this paper, we move beyond such limitations and propose a novel MRF model that uses fully connected neurons to express the complex interactions among pixels. Through theoretical analysis, we reveal an inherent connection between this model and recurrent neural networks, and from it derive an approximated feed-forward network that couples multiple RNNs along opposite directions. This formulation combines the expressive power of deep neural networks with the cyclic dependency structure of MRFs in a unified model, bringing the modeling capability to a new level. The feed-forward approximation also allows the model to be learned efficiently from data. Experimental results on a variety of low-level vision tasks show notable improvement over the state of the art.
    Comment: Accepted at ECCV 2016
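    The following is an illustrative PyTorch sketch (not the authors' architecture) of the coupling idea: two row-wise RNNs sweep the image in opposite directions and their hidden states are merged, giving each output pixel a receptive field spanning its whole row in a single feed-forward pass. Channel and hidden sizes are assumptions.

        import torch
        import torch.nn as nn

        class BiDirectionalRowRNN(nn.Module):
            def __init__(self, channels=16, hidden=32):
                super().__init__()
                self.fwd = nn.GRU(channels, hidden, batch_first=True)
                self.bwd = nn.GRU(channels, hidden, batch_first=True)
                self.out = nn.Conv2d(2 * hidden, channels, kernel_size=1)

            def forward(self, x):                  # x: (B, C, H, W)
                b, c, h, w = x.shape
                rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)  # one sequence per row
                left, _ = self.fwd(rows)                           # left-to-right pass
                right, _ = self.bwd(torch.flip(rows, dims=[1]))    # right-to-left pass
                right = torch.flip(right, dims=[1])
                merged = torch.cat([left, right], dim=-1)          # couple both directions
                merged = merged.reshape(b, h, w, -1).permute(0, 3, 1, 2)
                return self.out(merged)

        # toy usage: refine a 16-channel feature map
        layer = BiDirectionalRowRNN()
        y = layer(torch.randn(2, 16, 24, 24))      # -> (2, 16, 24, 24)

    Stacking such layers while alternating row-wise and column-wise sweeps propagates information across the entire image, serving as the feed-forward stand-in for the cyclic dependencies of the MRF.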