
    Structure Preserving Large Imagery Reconstruction

    With the explosive growth of web-based cameras and mobile devices, billions of photographs are uploaded to the internet. We can trivially collect a huge number of photo streams for various goals, such as image clustering, 3D scene reconstruction, and other big data applications. However, such tasks are not easy, because the retrieved photos can vary widely in view perspective, resolution, lighting, noise, and distortion. Furthermore, with the occlusion of unexpected objects such as people and vehicles, it is even more challenging to find feature correspondences and reconstruct realistic scenes. In this paper, we propose a structure-based image completion algorithm for object removal that produces visually plausible content with consistent structure and scene texture. We use an edge matching technique to infer the potential structure of the unknown region. Driven by the estimated structure, texture synthesis is performed automatically along the estimated curves. We evaluate the proposed method on different types of images, from highly structured indoor environments to natural scenes. Our experiments show that this approach achieves favorable results that outperform existing state-of-the-art techniques and could support subsequent big data processing, such as image localization, object retrieval, and scene reconstruction.
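
    The paper's edge-matching and curve-guided synthesis pipeline is not reproduced here; as a rough, minimal sketch of the general mask-based object-removal setting it addresses, the snippet below uses OpenCV's built-in Telea inpainting as a stand-in. The filenames are hypothetical.

```python
import cv2

# Hypothetical inputs: the source photo and a binary mask that is
# white (255) over the object to be removed.
image = cv2.imread("scene.png")
mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE)

# Telea's fast-marching inpainting: a simple stand-in for the
# structure-guided completion the abstract describes. The radius (5)
# controls how far surrounding pixels influence the filled region.
completed = cv2.inpaint(image, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("completed.png", completed)
```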

    Example based texture synthesis and quantification of texture quality

    Textures have been used effectively to create realistic environments for virtual worlds by reproducing surface appearances. One of the most widely used methods for creating textures is example-based texture synthesis, in which an input image from the real world serves as the basis for generating a texture of arbitrary size. Various methods based on the underlying pattern of the image have been used to create such textures; however, finding an algorithm that produces a good output is still an open research issue. Moreover, determining the best of the outputs produced by existing methods is a subjective process that requires human intervention, and no quantitative measure exists for a relative comparison between outputs. This dissertation addresses both problems using a novel approach, and also proposes an improved algorithm for image inpainting that yields better results than existing methods. Firstly, this dissertation presents a methodology that uses the HSI (hue, saturation, intensity) color model in conjunction with the hybrid approach to improve the quality of the synthesized texture. Unlike the RGB (red, green, blue) color model, the HSI color model is more intuitive and closer to human perception: hue, saturation, and intensity are better indicators than the three color channels of the RGB model and more closely represent the way the eye sees color in the real world. Secondly, this dissertation addresses the issue of quantifying the quality of output textures generated by the various texture synthesis methods. Quantifying output quality is an important issue, and a novel two-step method using statistical measures and a color autocorrelogram is proposed: in the first step, measures of energy, entropy, and similar statistics help determine the consistency of the output texture; in the second step, an autocorrelogram is used to analyze and quantify color images effectively. Finally, this dissertation presents a method for improving image inpainting. In inpainting, small sections of an image that are missing due to noise or similar causes can be reproduced using example-based texture synthesis, with the region immediately surrounding the missing section treated as the sample input. Inpainting can also be used to alter images by removing large sections and filling them with image data from the rest of the image. For this, a maximum edge detector method is proposed to determine the correct order of section filling, producing significantly better results.
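
    The dissertation's exact consistency measures are not given in the abstract; as a minimal sketch of the first quantification step, the function below computes histogram energy and Shannon entropy of a grayscale texture. The formulation is a common textbook one and is an assumption, not the author's published definition.

```python
import numpy as np

def texture_statistics(gray: np.ndarray) -> dict:
    """Energy and entropy of a grayscale texture, computed from its
    intensity histogram; a simple stand-in for the consistency measures
    the dissertation describes."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()                     # normalized histogram
    p = p[p > 0]                              # drop empty bins before log
    energy = float(np.sum(p ** 2))            # high for uniform textures
    entropy = float(-np.sum(p * np.log2(p)))  # Shannon entropy in bits
    return {"energy": energy, "entropy": entropy}
```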

    Applying a Color Palette with Local Control using Diffusion Models

    We demonstrate two novel editing procedures in the context of fantasy card art. Palette transfer applies a specified reference palette to a given card. For fantasy art, the desired change in palette can be very large, leading to huge changes in the "look" of the art. We demonstrate that a pipeline of vector quantization, matching, and "vector dequantization" (using a diffusion model) produces successful extreme palette transfers. Segment control allows an artist to move one or more image segments and optionally specify the desired color of the result. The combination of these two types of edit yields valuable workflows, including: move a segment, then recolor; recolor, then force some segments to take a prescribed color. We demonstrate our methods on the challenging Yu-Gi-Oh card art dataset. Comment: 11 pages, 8 figures.
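
    To make the first two pipeline stages concrete, here is a minimal sketch of vector quantization plus palette matching: k-means quantizes the image's colors, and each cluster center is greedily mapped to its nearest reference palette color. The greedy nearest-neighbor matching and the cluster count are assumptions; the paper's final "vector dequantization" via a diffusion model is not sketched here.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_and_match(image: np.ndarray, ref_palette: np.ndarray,
                       k: int = 8) -> np.ndarray:
    """image: H x W x 3 float array; ref_palette: M x 3 reference colors.
    Returns the image recolored with the matched reference palette."""
    pixels = image.reshape(-1, 3)
    km = KMeans(n_clusters=k, n_init=10).fit(pixels)
    centers = km.cluster_centers_                        # (k, 3) source palette
    # Greedy matching: each source center -> nearest reference color.
    dists = np.linalg.norm(centers[:, None] - ref_palette[None], axis=2)
    matched = ref_palette[dists.argmin(axis=1)]          # (k, 3)
    # Recolor every pixel with its cluster's matched reference color.
    return matched[km.labels_].reshape(image.shape)
```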

    Fast Algorithms For Fragment Based Completion In Images Of Natural Scenes

    Textures are used widely in computer graphics to represent fine visual details and produce realistic-looking images. Often it is necessary to remove a foreground object from a scene, and removing that portion creates one or more holes in the texture image. These holes need to be filled to complete the image. Various methods, such as clone brush strokes and compositing processes, are used to carry out this completion, but they require user skill. Texture synthesis can also be used to complete regions where the texture is stationary or structured. Reconstruction methods can fill in large-scale missing regions by interpolation, while inpainting is suitable for relatively small, smooth, and non-textured regions. A number of other approaches focus on the edge and contour completion aspect of the problem. In this thesis we present a novel approach to this image completion problem. Our approach focuses on image-based completion, with no knowledge of the underlying scene. Natural images exhibit a strong horizontal orientation of texture and color distribution; we exploit this fact in our proposed algorithm to fill in missing regions, following the principle of figural familiarity and using the image itself as the training set.
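
    As a toy illustration of the horizontal-orientation prior (not the thesis's fragment-based algorithm, which matches whole patches), the sketch below fills each hole pixel from the nearest known pixel in the same row. It only shows why row-wise statistics are a useful prior in natural images.

```python
import numpy as np

def horizontal_fill(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """image: H x W (or H x W x 3) array; mask: H x W boolean array,
    True where pixels are missing. Fills each hole from the nearest
    valid pixel in the same row."""
    out = image.copy()
    h, w = mask.shape
    for y in range(h):
        known = np.flatnonzero(~mask[y])      # columns with valid pixels
        if known.size == 0:
            continue                          # whole row missing; skip
        holes = np.flatnonzero(mask[y])
        # For each hole column, find the nearest known column.
        nearest = known[np.abs(known[None, :] - holes[:, None]).argmin(axis=1)]
        out[y, holes] = image[y, nearest]
    return out
```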

    Im2Pano3D: Extrapolating 360 Structure and Semantics Beyond the Field of View

    We present Im2Pano3D, a convolutional neural network that generates a dense prediction of 3D structure and a probability distribution of semantic labels for a full 360 panoramic view of an indoor scene when given only a partial observation (<= 50%) in the form of an RGB-D image. To make this possible, Im2Pano3D leverages strong contextual priors learned from large-scale synthetic and real-world indoor scenes. To ease the prediction of 3D structure, we propose to parameterize 3D surfaces with their plane equations and train the model to predict these parameters directly. To provide meaningful training supervision, we use multiple loss functions that consider both pixel-level accuracy and global context consistency. Experiments demonstrate that Im2Pano3D is able to predict the semantics and 3D structure of the unobserved scene with more than 56% pixel accuracy and less than 0.52m average distance error, which is significantly better than alternative approaches. Comment: Video summary: https://youtu.be/Au3GmktK-S
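
    A brief sketch of the plane-equation parameterization in the spirit of the abstract: a surface through 3D point p with unit normal n satisfies n . x = d, so each pixel can be summarized by the four numbers (nx, ny, nz, d). The H x W x 3 input shapes are assumptions; the network's exact output encoding may differ.

```python
import numpy as np

def plane_parameters(points: np.ndarray, normals: np.ndarray) -> np.ndarray:
    """points, normals: H x W x 3 arrays of per-pixel 3D positions and
    surface normals. Returns H x W x 4 plane parameters (nx, ny, nz, d)
    where n . x = d for any point x on the pixel's supporting plane."""
    n = normals / np.linalg.norm(normals, axis=-1, keepdims=True)  # unit normals
    d = np.sum(n * points, axis=-1, keepdims=True)                 # plane offsets
    return np.concatenate([n, d], axis=-1)
```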

    Rain Streaks Removal from Single Image

    Rain removal from video is a challenging problem, and very few methods address rain removal from a single image. Existing methods remove rain streaks from video rather than from a single image: they capture non-rain data from successive frames and use it to replace the rain-affected parts of the current frame. The approach described here removes rain streaks from a single image. Morphological Component Analysis (MCA) [9-13] decomposes the image into low-frequency (LF) and high-frequency (HF) parts using a bilateral filter. The high-frequency part is then decomposed into a rain component and a non-rain component by dictionary learning and sparse coding [2]. The non-rain component contains the image features with the rain streaks removed; it is recombined with the low-frequency component to reconstruct the original image without rain streaks. MCA allows us to separate features contained in an image when those features present different morphological aspects, and it can be very useful for decomposing images into texture and piecewise-smooth (cartoon) parts, or for inpainting applications. DOI: 10.17762/ijritcc2321-8169.150615
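
    The first stage of this pipeline, the bilateral-filter split into LF and HF parts, can be sketched directly; the filter parameters and filename below are illustrative assumptions, and the subsequent dictionary-learning step is not shown.

```python
import cv2
import numpy as np

# Split a rainy image into low-frequency (LF) and high-frequency (HF)
# parts with a bilateral filter, as in the MCA-based pipeline above.
rainy = cv2.imread("rainy.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
lf = cv2.bilateralFilter(rainy, 9, 75, 75)  # edge-preserving smooth base
hf = rainy - lf  # rain streaks live mostly here; the paper further
                 # separates this part via dictionary learning and
                 # sparse coding before recombining with lf
```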

    DATA-DRIVEN FACIAL IMAGE SYNTHESIS FROM POOR QUALITY LOW RESOLUTION IMAGE

    Ph.D. (Doctor of Philosophy)