
    Learning to Enhance RGB and Depth Images with Guidance

    Image enhancement improves the visual quality of an input image so that key features are easier to identify and the image becomes more suitable for downstream vision applications. Structure degradation remains a challenging problem in image enhancement: blurry edges or discontinuous structures caused by unbalanced or inconsistent intensity transitions in structural regions. A popular way to overcome this issue is to use a guidance image that provides additional structural cues. In this thesis, we focus on two image enhancement tasks, i.e., RGB image smoothing and depth image completion. Through these two research problems, we aim to better understand what constitutes suitable guidance and how its proper use can reduce structure degradation in image enhancement.

    Image smoothing retains salient structures and removes insignificant textures in an image. Structure degradation results from the difficulty of distinguishing structures from textures with low-level cues: structures may be inevitably blurred if the filter tries to remove strong, high-contrast textures, and such strong textures may in turn be mistakenly retained as structures. We address this issue by applying two forms of guidance, for structures and textures respectively. We first design a kernel-based double-guided filter (DGF), which adopts semantic edge detection as structure guidance and texture decomposition as texture guidance. The DGF is the first kernel filter to simultaneously leverage structure guidance and texture guidance, making it both "structure-aware" and "texture-aware". Because textures exhibit high randomness and variation in spatial distribution and intensity, hand-crafted features are not robust for localizing and identifying them. Hence, we take advantage of deep learning for richer feature extraction and better generalization.
    Specifically, we generate synthetic data by blending natural textures with clean structure-only images. With this data, we build a texture prediction network (TPN) that estimates the location and magnitude of textures. We then combine the texture predictions from the TPN with a semantic structure prediction network so that the final texture- and structure-aware filtering network (TSAFN) can distinguish structures and textures more effectively. Our model achieves smoothing results superior to those of existing filters.

    Depth completion recovers dense depth from sparse measurements, e.g., LiDAR. Existing depth-only methods take sparse depth as their only input and suffer from structure degradation, i.e., they fail to recover semantically consistent boundaries or small/thin objects, due to (1) the sparse nature of the depth points and (2) the lack of images to provide structural cues. In this thesis, we address structure degradation by using RGB image guidance in both supervised and unsupervised depth-only settings. The unique design of the supervised model is that it simultaneously outputs a reconstructed image and a dense depth map: we treat image reconstruction from sparse depth as an auxiliary task during training, supervised by the image. For the unsupervised model, we regard dense depth as a reconstruction of the sparse input and formulate the model as an auto-encoder; to reduce structure degradation, we use the image to guide the latent features by penalizing their difference during training. The image guidance loss in both models lets them acquire denser structural cues that help produce more accurate and consistent depth values. At inference time, both models take only sparse depth as input; no image is required.
    On the KITTI Depth Completion Benchmark, we validate the effectiveness of the proposed image guidance through extensive experiments and achieve performance competitive with state-of-the-art supervised and unsupervised methods. Our approach is also applicable to indoor scenes.
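    The core guidance idea above can be illustrated by a joint bilateral filter, the classical building block in which range (edge-stopping) weights come from a guidance image rather than the input, so structures present in the guidance survive the smoothing. This is a minimal single-channel sketch for illustration only, not the thesis's DGF or TSAFN; the function name and parameters are assumptions:

```python
import numpy as np

def joint_bilateral_filter(image, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Smooth `image` while drawing range weights from `guide`,
    so that edges present in the guidance image are preserved."""
    h, w = image.shape
    img_p = np.pad(image, radius, mode="edge")
    gid_p = np.pad(guide, radius, mode="edge")
    out = np.zeros_like(image, dtype=float)
    # Precompute the spatial Gaussian kernel once.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for y in range(h):
        for x in range(w):
            patch = img_p[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gpatch = gid_p[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weights are computed on the guidance, not the input.
            rng = np.exp(-((gpatch - guide[y, x]) ** 2) / (2 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out
```

    Swapping in different guidance signals (e.g., a semantic edge map for structure, a texture map for texture) is the kind of design choice the double-guided filter builds on.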

    Graph Spectral Image Processing

    The recent advent of graph signal processing (GSP) has spurred intensive study of signals that live naturally on irregular data kernels described by graphs (e.g., social networks, wireless sensor networks). Although a digital image contains pixels on a regularly sampled 2D grid, if one can design an appropriate underlying graph connecting pixels with weights that reflect the image structure, then one can interpret the image (or an image patch) as a signal on that graph and apply GSP tools to process and analyze it in the graph spectral domain. In this article, we overview recent graph spectral techniques in GSP specifically for image/video processing. The topics covered include image compression, image restoration, image filtering, and image segmentation.
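    The pipeline the abstract describes can be sketched in a few lines: build a graph Laplacian from pixel-similarity weights, project the signal onto the Laplacian's eigenbasis (the graph Fourier transform), attenuate high graph frequencies, and transform back. A minimal numpy sketch on a 4-pixel path graph, with illustrative weights (not from the article):

```python
import numpy as np

def graph_lowpass(signal, weights, tau=1.0):
    """Low-pass filter a graph signal: attenuate each graph-frequency
    component by exp(-tau * eigenvalue) of the combinatorial Laplacian."""
    degree = np.diag(weights.sum(axis=1))
    laplacian = degree - weights
    # GFT basis: eigenvectors of L; eigenvalues act as graph frequencies.
    lam, U = np.linalg.eigh(laplacian)
    coeffs = U.T @ signal          # forward graph Fourier transform
    coeffs *= np.exp(-tau * lam)   # spectral low-pass response
    return U @ coeffs              # inverse GFT

# A 4-pixel path graph whose middle edge weight is small,
# mimicking an image edge that the filter should preserve.
W = np.array([[0.0, 1.0, 0.00, 0.0],
              [1.0, 0.0, 0.05, 0.0],
              [0.0, 0.05, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
x = np.array([1.0, 1.1, 5.0, 4.9])  # noisy two-level "edge" signal
y = graph_lowpass(x, W, tau=0.5)
```

    Because the weak middle edge puts the jump in a low graph frequency, the filter smooths within each flat region while largely preserving the discontinuity, which is the structure-awareness that motivates graph spectral image processing.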

    On Deep Image Deblurring: The Blur Factorization Approach

    This thesis investigated whether the single-image deblurring problem can be factorized into the subproblems of camera-shake and object-motion blur removal for enhanced performance. Two deep learning-based deblurring methods were introduced to answer this question, both following a variation of the proposed blur factorization strategy. Furthermore, a novel pipeline was developed for generating synthetic blurry images, as no existing datasets or data generation methods could meet the requirements of the suggested deblurring models. The proposed pipeline generates three blurry versions of a single ground-truth image: one with both blur types, another with camera-shake blur alone, and a third with only object-motion blur. The pipeline, based on mathematical models of real-world blur formation, was used to generate a dataset of 2850 triplets of blurry images, divided into a training set of 2500 and a test set of 350 triplets, plus the sharp ground-truth images. These datasets were used to train and test both proposed methods, which achieved satisfactory performance. Two variations of the first method, based on strict factorization into subproblems, were tested; they differed in the order in which the blur types were removed. The pipeline that removed object-motion blur first proved superior to the pipeline with the reverse processing order, but both variations were still far inferior to the control test, in which both blurs were removed simultaneously. The second method, based on joint training of two sub-models, achieved more promising test results: two of the four tested variations outperformed the corresponding control-test model, albeit by relatively small margins. The variations differed in the processing order and in the weighting of the loss functions between the sub-models.
    Both variations that outperformed the control-test model were trained to remove object-motion blur first, although the loss-function weights were set so that the pipelines' main focus was on the final sharp images. These performance improvements demonstrate that the proposed blur factorization strategy had a positive impact on deblurring results. Still, even the second method can be deemed only partly successful, because a greater performance improvement was gained with an alternative strategy that results in a model with the same number of parameters as the proposed approach.
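    The weighted joint training described above can be illustrated with a toy objective that combines a per-stage loss on the intermediate (object-motion-deblurred) output with a loss on the final sharp output. The function name, the L2 choice, and the 0.2/0.8 weights are illustrative assumptions, not the thesis's actual configuration:

```python
import numpy as np

def joint_deblur_loss(pred_stage1, target_stage1, pred_final, target_sharp,
                      w_stage=0.2, w_final=0.8):
    """Weighted sum of per-stage L2 losses for a two-stage pipeline:
    stage 1 removes object-motion blur, stage 2 removes camera shake.
    A larger w_final keeps the main focus on the final sharp image."""
    stage_loss = np.mean((pred_stage1 - target_stage1) ** 2)
    final_loss = np.mean((pred_final - target_sharp) ** 2)
    return w_stage * stage_loss + w_final * final_loss
```

    Shifting weight between the two terms trades supervision of the intermediate factorized result against fidelity of the final output, which is the design axis the tested variations explore.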

    Viewing-Distance Aware Super-Resolution for High-Definition Display


    Recent Advances in Image Restoration with Applications to Real World Problems

    In the past few decades, imaging hardware has improved tremendously in resolution, enabling the widespread use of images in many diverse applications, both on Earth and in planetary missions. However, practical issues associated with image acquisition still affect image quality. Issues such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution can seriously affect the accuracy of those applications. This book intends to give the reader a glimpse of the latest developments and recent advances in image restoration, including image super-resolution; image fusion to enhance spatial, spectral, and temporal resolution; and the generation of synthetic images using deep learning techniques. Some practical applications are also included.