
    Bayesian Reconstruction of Missing Observations

    We focus on an interpolation method referred to as Bayesian reconstruction in this paper. Whereas standard interpolation methods interpolate missing data deterministically, Bayesian reconstruction interpolates them probabilistically using a Bayesian treatment. In this paper, we address the framework of Bayesian reconstruction and its application to the traffic data reconstruction problem in the field of traffic engineering. In the latter part of this paper, we describe the evaluation of the statistical performance of our Bayesian traffic reconstruction model using a statistical mechanical approach and clarify its statistical behavior.
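
    The authors' traffic model is not reproduced here; as a minimal sketch of the deterministic-versus-probabilistic distinction, the snippet below conditions a Gaussian prior on the observed entries of a signal and draws posterior samples for the missing ones. The squared-exponential prior, noise level, and all names are illustrative assumptions.

```python
import numpy as np

def bayesian_reconstruct(y, observed, Sigma, noise_std=0.1, n_samples=5):
    """Probabilistic interpolation of missing entries by Gaussian conditioning.

    y        : length-N signal; only entries where `observed` is True are valid
    observed : boolean mask of observed positions
    Sigma    : N x N prior covariance encoding smoothness assumptions
    """
    missing = ~observed
    S_oo = Sigma[np.ix_(observed, observed)] + noise_std**2 * np.eye(observed.sum())
    S_mo = Sigma[np.ix_(missing, observed)]
    S_mm = Sigma[np.ix_(missing, missing)]

    # Gaussian conditioning: posterior over missing given observed entries
    K = S_mo @ np.linalg.inv(S_oo)
    mean = K @ y[observed]                      # deterministic interpolation
    cov = S_mm - K @ S_mo.T                     # posterior uncertainty
    samples = np.random.default_rng(0).multivariate_normal(mean, cov, size=n_samples)
    return mean, samples                        # probabilistic reconstructions

# Example: squared-exponential smoothness prior on a 1-D signal
N = 100
x = np.linspace(0.0, 1.0, N)
Sigma = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.05**2)
observed = np.random.default_rng(1).random(N) > 0.3   # ~70% observed
y = np.sin(4 * np.pi * x)
mean, samples = bayesian_reconstruct(y, observed, Sigma)
```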

    Digital image processing of the Ghent altarpiece: supporting the painting's study and conservation treatment

    In this article, we show progress in certain image processing techniques that can support the physical restoration of the painting, its art-historical analysis, or both. We show how analysis of the crack patterns could indicate possible areas of overpaint, which may be of great value for the physical restoration campaign after further validation. Next, we explore how digital image inpainting can serve as a simulation for the restoration of paint losses. Finally, we explore how statistical analysis of relatively simple and frequently recurring objects (such as the pearls in this masterpiece) may characterize the consistency of the painter's style and thereby aid both art-historical interpretation and the physical restoration campaign.
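
    As a rough illustration only (not the authors' pipeline), the snippet below simulates a virtual restoration by filling a paint-loss mask with OpenCV's diffusion-based inpainting; the file names and mask convention are hypothetical.

```python
import cv2
import numpy as np

# Hypothetical inputs: a photograph of the panel and a binary paint-loss map
painting = cv2.imread("panel_detail.png")                       # BGR image
loss = cv2.imread("paint_loss_mask.png", cv2.IMREAD_GRAYSCALE)  # non-zero = loss
loss_mask = (loss > 0).astype(np.uint8)

# Fill the masked losses from the surrounding paint (Telea's method)
restored = cv2.inpaint(painting, loss_mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("virtual_restoration.png", restored)
```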

    r-BTN: Cross-domain Face Composite and Synthesis from Limited Facial Patches

    We start by asking an interesting yet challenging question: "If an eyewitness can only recall the eye features of the suspect, so that the forensic artist can only produce a sketch of the eyes (e.g., the top-left sketch shown in Fig. 1), can advanced computer vision techniques help generate the whole face image?" A more general question is: if a large proportion (e.g., more than 50%) of the face/sketch is missing, can a realistic whole-face sketch/image still be estimated? Existing face completion and generation methods either do not conduct domain transfer learning or cannot handle large missing areas. For example, inpainting approaches tend to blur the generated region when the missing area is large (i.e., more than 50%). In this paper, we exploit the potential of deep learning networks in filling large missing regions (e.g., as much as 95% missing) and generating realistic faces with high fidelity across domains. We propose recursive generation by bidirectional transformation networks (r-BTN), which recursively generates a whole face/sketch from a small sketch/face patch. The large missing area and the cross-domain challenge make it difficult to generate satisfactory results using a unidirectional cross-domain learning structure. On the other hand, forward and backward bidirectional learning between the face and sketch domains enables recursive estimation of the missing region in an incremental manner (Fig. 1) and yields appealing results. r-BTN also adopts an adversarial constraint to encourage the generation of realistic faces/sketches. Extensive experiments have been conducted to demonstrate the superior performance of r-BTN compared to existing potential solutions. Comment: Accepted by AAAI 201
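
    The loop below is only a schematic of the recursive bidirectional idea the abstract describes: `G_sf` and `G_fs` stand for hypothetical sketch-to-face and face-to-sketch generators, and no r-BTN architecture or training details are reproduced.

```python
import torch

def recursive_bidirectional_generate(patch, mask, G_sf, G_fs, n_iters=5):
    """Recursively grow a whole sketch/face from a small known patch.

    patch : sketch patch placed on a blank canvas, shape (B, C, H, W)
    mask  : same shape; 1 where pixels are known, 0 where missing
    G_sf, G_fs : assumed sketch->face and face->sketch generator networks
    """
    sketch = patch
    for _ in range(n_iters):
        face = G_sf(sketch)          # forward pass: sketch domain -> face domain
        estimate = G_fs(face)        # backward pass: face domain -> sketch domain
        # Keep the trusted input pixels and accept estimates elsewhere, so the
        # completed region grows incrementally across iterations.
        sketch = mask * patch + (1 - mask) * estimate
    return G_sf(sketch), sketch      # final face and completed sketch
```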

    Assisting classical paintings restoration: efficient paint loss detection and descriptor-based inpainting using shared pretraining

    In the restoration process of classical paintings, one of the tasks is to map paint loss for documentation and analysis purposes. Because this is such a sizable and tedious job, automatic techniques are in high demand. The currently available tools allow only rough mapping of paint loss areas and still require considerable manual work. We develop here a learning method for paint loss detection that makes use of multimodal image acquisitions, and we apply it within the current restoration of the Ghent Altarpiece. Our neural network architecture is inspired by a multiscale convolutional neural network known as U-Net. In our proposed model, the downsampling of the pooling layers is omitted to enforce translation invariance, and the convolutional layers are replaced with dilated convolutions. The dilated convolutions lead to denser computations and improved classification accuracy. Moreover, the proposed method is designed to make use of multimodal data, which are nowadays routinely acquired during the restoration of master paintings and which allow more accurate detection of features of interest, including paint losses. Our focus is on developing a robust approach with minimal user intervention. Adequate transfer learning is crucial here in order to extend the applicability of pre-trained models to paintings that were not included in the training set, with only modest additional re-training. We introduce a pre-training strategy based on a multimodal convolutional autoencoder, and we fine-tune the model when applying it to other paintings. We evaluate the results by comparing the detected paint loss maps to manual expert annotations, and also by running virtual inpainting based on the detected paint losses and comparing the virtually inpainted results with the actual physical restorations. The results clearly indicate the efficacy of the proposed method and its potential to assist in art conservation and restoration processes.
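
    A minimal PyTorch sketch of the architectural idea (pooling omitted, dilation grown instead, so the receptive field expands at full resolution); the channel counts, the number of input modalities, and the block layout are assumptions, not the authors' exact network.

```python
import torch.nn as nn

class DilatedBlock(nn.Module):
    """3x3 convolution with dilation instead of pooling; padding=dilation
    keeps the spatial size, so predictions stay dense per pixel."""
    def __init__(self, in_ch, out_ch, dilation):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

# Growing dilations (1, 2, 4) mimic the downsampling path of a U-Net
# without losing resolution; 5 input channels stand in for multimodal data.
paint_loss_net = nn.Sequential(
    DilatedBlock(5, 32, dilation=1),
    DilatedBlock(32, 64, dilation=2),
    DilatedBlock(64, 64, dilation=4),
    nn.Conv2d(64, 1, kernel_size=1),   # per-pixel paint-loss score
)
```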

    Example-based texture synthesis and quantification of texture quality

    Textures have been used effectively to create realistic environments for virtual worlds by reproducing surface appearances. One of the most widely used methods for creating textures is example-based texture synthesis. In this method, an input image from the real world is provided and used as the basis for generating a texture of arbitrary size. Various methods based on the underlying pattern of the image have been used to create these textures; however, finding an algorithm that produces good output remains an open research issue. Moreover, determining the best of the outputs produced by the existing methods is subjective and requires human intervention; no quantification measure exists for a relative comparison between the outputs. This dissertation addresses both problems using a novel approach. The dissertation also proposes an improved algorithm for image inpainting which yields better results than existing methods. Firstly, this dissertation presents a methodology which uses the HSI (hue, saturation, intensity) color model in conjunction with the hybrid approach to improve the quality of the synthesized texture. Unlike the RGB (red, green, blue) color model, the HSI color model is more intuitive and closer to human perception: hue, saturation, and intensity are better indicators than the three color channels used in the RGB model because they represent the way the eye perceives color in the real world. Secondly, this dissertation addresses the issue of quantifying the quality of the output textures generated using the various texture synthesis methods. Quantifying the quality of the output is an important issue, and a novel method using statistical measures and a color autocorrelogram is proposed. It is a two-step method: in the first step, measures such as energy and entropy help determine the consistency of the output texture; in the second step, an autocorrelogram is used to analyze color images as well and to quantify them effectively. Finally, this dissertation presents a method for improving image inpainting. In inpainting, small sections of the image missing due to noise or similar causes can be reproduced using example-based texture synthesis, with the region of the image immediately surrounding the missing section treated as sample input. Inpainting can also be used to alter images by removing large sections of the image and filling the removed section with image data from the rest of the image. For this, a maximum edge detector method is proposed to determine the correct order of section filling, producing significantly better results.
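
    As an illustration of the two-step quantification described above, the sketch below computes simple histogram statistics (energy, entropy) and a color autocorrelogram; the 64-color quantization and the distance set are assumptions, not the dissertation's exact parameters.

```python
import numpy as np

def texture_statistics(gray):
    """Step 1: energy and entropy of the intensity histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256), density=True)
    p = hist[hist > 0]
    return np.sum(p**2), -np.sum(p * np.log2(p))   # (energy, entropy)

def color_autocorrelogram(rgb, distances=(1, 3, 5, 7)):
    """Step 2: for each quantized color c and distance d, estimate the
    probability that a pixel at offset d from a color-c pixel is also
    color c. Textures synthesized from the same exemplar should yield
    similar correlograms."""
    # Quantize 8-bit RGB to 4 levels per channel -> 64 colors
    q = (rgb // 64).astype(np.int64)
    labels = q[..., 0] * 16 + q[..., 1] * 4 + q[..., 2]
    gram = np.zeros((64, len(distances)))
    for di, d in enumerate(distances):
        match = np.zeros(64)
        total = np.zeros(64)
        # horizontal and vertical neighbour pairs at offset d
        for a, b in [(labels[:, :-d], labels[:, d:]),
                     (labels[:-d, :], labels[d:, :])]:
            same = a == b
            match += np.bincount(a[same], minlength=64)
            total += np.bincount(a.ravel(), minlength=64)
        gram[:, di] = match / np.maximum(total, 1)
    return gram
```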