431 research outputs found
Digital image processing of the Ghent Altarpiece: supporting the painting's study and conservation treatment
In this article, we show progress in certain image processing
techniques that can support the physical restoration of the painting, its art-historical analysis, or both. We show how analysis of the crack patterns can indicate possible areas of overpaint, which, after further validation, may be of great value for the physical restoration campaign. Next, we explore how digital image inpainting can serve as a simulation for the restoration of paint losses. Finally, we explore how statistical analysis of relatively simple and frequently recurring objects (such as the pearls in this masterpiece) can characterize the consistency of the painter's style and thereby aid both art-historical interpretation and the physical restoration campaign.
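The digital inpainting that the abstract describes as a restoration simulation can be illustrated with a classical diffusion scheme: missing pixels are repeatedly replaced by the average of their neighbours until the hole is smoothly filled. This is a minimal NumPy sketch of generic diffusion inpainting, not the algorithm used in the article.

```python
import numpy as np

def diffusion_inpaint(image, mask, iterations=500):
    """Fill masked pixels by iteratively averaging their 4-neighbours.

    image: 2-D float array (grayscale); mask: boolean array, True = missing.
    A classical diffusion scheme, not the method used in the article.
    """
    out = image.copy()
    out[mask] = out[~mask].mean()          # neutral initial guess
    for _ in range(iterations):
        # average of the four neighbours (edge-replicated at the borders)
        padded = np.pad(out, 1, mode="edge")
        neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                      padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = neighbours[mask]       # update only the missing region
    return out

# Simulate a small "paint loss" on a smooth gradient and restore it
img = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
mask = np.zeros_like(img, dtype=bool)
mask[12:20, 12:20] = True
damaged = img.copy()
damaged[mask] = 0.0
restored = diffusion_inpaint(damaged, mask)
print(float(np.abs(restored - img)[mask].max()))  # small residual error
```

Diffusion produces a smooth (harmonic) fill, which is why it works well on gradients but blurs texture; the learned methods in the entries below aim to overcome exactly that limitation.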
Contextual-based Image Inpainting: Infer, Match, and Translate
We study the task of image inpainting, which is to fill in the missing region
of an incomplete image with plausible contents. To this end, we propose a
learning-based approach to generate visually coherent completion given a
high-resolution image with missing components. In order to overcome the
difficulty of directly learning the distribution of high-dimensional image data,
we divide the task into inference and translation as two separate steps and
model each step with a deep neural network. We also use simple heuristics to
guide the propagation of local textures from the boundary to the hole. We show
that, with these techniques, inpainting reduces to learning two image-feature
translation functions in a much smaller space, which are hence easier to
train. We evaluate our method on several public datasets and show that we
generate results of better visual quality than previous state-of-the-art
methods.

Comment: ECCV 2018 camera-ready
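The boundary-propagation heuristic mentioned in this abstract can be illustrated at pixel level: known values are pushed into the hole one boundary ring at a time. A toy NumPy sketch (the paper propagates learned features, not raw pixels):

```python
import numpy as np

def onion_peel_fill(image, mask):
    """Fill a hole by propagating known pixels inward, one ring at a time.

    A toy stand-in for boundary-to-hole propagation; the paper guides
    learned feature textures, not raw pixel values.
    image: 2-D float array; mask: boolean array, True where pixels are missing.
    """
    out, unknown = image.copy(), mask.copy()
    while unknown.any():
        acc = np.zeros_like(out)
        cnt = np.zeros_like(out)
        for shift in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            known = np.roll(~unknown, shift, axis=(0, 1))
            acc += np.roll(out, shift, axis=(0, 1)) * known
            cnt += known
        ring = unknown & (cnt > 0)          # hole pixels touching known ones
        out[ring] = acc[ring] / cnt[ring]   # average of their known neighbours
        unknown[ring] = False
    return out

# A flat 0.7 "texture" with a square hole: propagation recovers it exactly
img = np.full((16, 16), 0.7)
mask = np.zeros_like(img, dtype=bool)
mask[5:11, 5:11] = True
damaged = img.copy()
damaged[mask] = 0.0
filled = onion_peel_fill(damaged, mask)
print(bool(np.allclose(filled, 0.7)))  # True
```

Note that `np.roll` wraps around at the image border; this is harmless here because the hole is interior, but a real implementation would pad instead.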
Multi-View Frame Reconstruction with Conditional GAN
Multi-view frame reconstruction is an important problem, particularly when
multiple frames are missing and the past and future frames within a camera are
far apart from the missing ones. Realistic, coherent frames can still be
reconstructed using corresponding frames from other overlapping cameras. We
propose an adversarial approach to learn the spatio-temporal representation of
the missing frame using conditional Generative Adversarial Network (cGAN). The
conditional input to each cGAN is the preceding or following frames within the
camera or the corresponding frames in other overlapping cameras, all of which
are merged using a weighted average. Representations learned from frames
within the camera are weighted more heavily than those learned from other
cameras when they are temporally close to the missing frames, and vice versa.
Experiments on two challenging datasets demonstrate that our framework produces
results comparable to those of the state-of-the-art reconstruction method in a
single camera and achieves promising performance in the multi-camera scenario.

Comment: 5 pages, 4 figures, 3 tables, Accepted at IEEE Global Conference on
Signal and Information Processing, 201
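The distance-dependent weighting of conditional inputs described in this abstract can be sketched as follows; the exponential decay and the `alpha` rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fuse_representations(within_cam, cross_cam, temporal_gap, alpha=0.5):
    """Weighted average of two frame representations.

    Mirrors the merging rule the abstract describes: representations from
    the same camera get more weight when they are temporally close to the
    missing frame, and less as the gap grows. The exponential decay and
    `alpha` are assumed for illustration, not taken from the paper.
    """
    w_within = np.exp(-alpha * temporal_gap)   # close frames -> weight near 1
    w_cross = 1.0 - w_within                   # remainder goes to other cameras
    return w_within * within_cam + w_cross * cross_cam

# Toy 4-dim embeddings standing in for the two conditional inputs
same_cam = np.array([1.0, 0.0, 0.0, 0.0])
other_cam = np.array([0.0, 1.0, 0.0, 0.0])
near = fuse_representations(same_cam, other_cam, temporal_gap=1)
far = fuse_representations(same_cam, other_cam, temporal_gap=10)
print(bool(near[0] > far[0]))  # True: same-camera dominates for small gaps
```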
Fully Context-Aware Image Inpainting with a Learned Semantic Pyramid
Restoring reasonable and realistic content for arbitrary missing regions in
images is an important yet challenging task. Although recent image inpainting
models have made significant progress in generating vivid visual details, they
can still lead to texture blurring or structural distortions due to contextual
ambiguity when dealing with more complex scenes. To address this issue, we
propose the Semantic Pyramid Network (SPN) motivated by the idea that learning
multi-scale semantic priors from specific pretext tasks can greatly benefit the
recovery of locally missing content in images. SPN consists of two components.
First, it distills semantic priors from a pretext model into a multi-scale
feature pyramid, achieving a consistent understanding of the global context and
local structures. Within the prior learner, we present an optional module for
variational inference to realize probabilistic image inpainting driven by
various learned priors. The second component of SPN is a fully context-aware
image generator, which adaptively and progressively refines low-level visual
representations at multiple scales with the (stochastic) prior pyramid. We
train the prior learner and the image generator as a unified model without any
post-processing. Our approach achieves the state of the art on multiple
datasets, including Places2, Paris StreetView, CelebA, and CelebA-HQ, under
both deterministic and probabilistic inpainting setups.

Comment: This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible.
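The multi-scale prior pyramid at the heart of SPN can be pictured as a stack of progressively coarser feature maps. The sketch below builds such a pyramid with plain average pooling; in SPN the priors are distilled from a learned pretext model rather than computed this way.

```python
import numpy as np

def build_prior_pyramid(feature_map, levels=3):
    """Stack of progressively downsampled copies of a feature map.

    A minimal stand-in for SPN's semantic pyramid: here each coarser
    'prior' is just a 2x average-pooled version of the finer one,
    whereas SPN distils its priors from a pretext model.
    feature_map: array of shape (C, H, W), H and W divisible by 2**(levels-1).
    """
    pyramid = [feature_map]
    for _ in range(levels - 1):
        c, h, w = pyramid[-1].shape
        # 2x2 average pooling via a reshape-and-mean trick
        pooled = pyramid[-1].reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
        pyramid.append(pooled)
    return pyramid  # ordered finest to coarsest

feats = np.random.rand(8, 32, 32)
pyr = build_prior_pyramid(feats)
print([p.shape for p in pyr])  # [(8, 32, 32), (8, 16, 16), (8, 8, 8)]
```

A generator would then consume this pyramid coarsest-first, refining its features scale by scale, which matches the progressive refinement the abstract describes.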