Contextual-based Image Inpainting: Infer, Match, and Translate
We study the task of image inpainting, which is to fill in the missing region
of an incomplete image with plausible contents. To this end, we propose a
learning-based approach to generate visually coherent completion given a
high-resolution image with missing components. In order to overcome the
difficulty to directly learn the distribution of high-dimensional image data,
we divide the task into inference and translation as two separate steps and
model each step with a deep neural network. We also use simple heuristics to
guide the propagation of local textures from the boundary to the hole. We show
that, by using such techniques, inpainting reduces to the problem of learning
two image-feature translation functions in much smaller space and hence easier
to train. We evaluate our method on several public datasets and show that we
generate results of better visual quality than previous state-of-the-art
methods.
Comment: ECCV 2018 camera ready
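As a loose illustration of propagating local texture from the hole boundary inward (the abstract does not specify the paper's heuristic), a minimal onion-peel fill in NumPy might look like the following; the function name, the 4-neighbor averaging rule, and the wrap-around shortcut are illustrative assumptions, not the paper's method:

```python
import numpy as np

def propagate_from_boundary(image, mask, iterations=50):
    """Onion-peel fill: repeatedly copy averaged known-neighbor values into
    hole pixels adjacent to the known region. `mask` is True inside the hole.
    Illustrative stand-in for boundary texture propagation, not the paper's
    heuristic; np.roll wrap-around at image borders is ignored for brevity."""
    img = image.astype(float).copy()
    hole = mask.copy()
    for _ in range(iterations):
        if not hole.any():
            break
        known = ~hole
        acc = np.zeros_like(img)
        cnt = np.zeros(img.shape, dtype=int)
        # accumulate values of known 4-neighbors via array shifts
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            shifted_known = np.roll(known, (dy, dx), axis=(0, 1))
            shifted_val = np.roll(img, (dy, dx), axis=(0, 1))
            acc += np.where(shifted_known, shifted_val, 0.0)
            cnt += shifted_known.astype(int)
        # fill only hole pixels that touch the known region, then shrink the hole
        frontier = hole & (cnt > 0)
        img[frontier] = acc[frontier] / cnt[frontier]
        hole[frontier] = False
    return img
```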
Perspective-aware texture analysis and synthesis
The original publication is available at www.springerlink.com. International audience. This paper presents a novel texture synthesis scheme for anisotropic 2D textures based on perspective feature analysis and energy optimization. Given an example texture, the synthesis process starts by analyzing the texel (TEXture ELement) scale variations to obtain the perspective map (scale map). A feature mask and simple user-assisted scale extraction operations, including slant and tilt angle assignment and scale value editing, are applied. The scale map represents the global variation of the texel scales in the sample texture. We then extend 2D texture optimization techniques to synthesize such perspectively featured textures. The non-parametric texture optimization approach is integrated with histogram matching, which forces the global statistics of the texel scale variations of the synthesized texture to match those of the example. We also demonstrate that our method is well suited to image completion of a perspectively featured texture region in a digital photo.
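The histogram-matching step can be sketched independently of the optimization loop. Below is a minimal sort-based histogram matching in NumPy; applied to a synthesized scale map with the exemplar's scale map as reference, it forces the value distribution to match, as the abstract describes. The function name and the rank-based formulation are assumptions, not the paper's implementation:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap `source` values so their empirical distribution matches that of
    `reference` (classic sort-based histogram matching). In the paper's
    setting, `source` would be the synthesized texture's texel-scale map and
    `reference` the exemplar's; here both are plain arrays (illustrative)."""
    s_flat = source.ravel()
    order = np.argsort(s_flat)            # rank of each source value
    ref_sorted = np.sort(reference.ravel())
    # resample the sorted reference to the source length if sizes differ
    idx = np.linspace(0, ref_sorted.size - 1, s_flat.size).round().astype(int)
    matched = np.empty_like(s_flat, dtype=float)
    matched[order] = ref_sorted[idx]      # assign reference values in rank order
    return matched.reshape(source.shape)
```

After matching, the k-th smallest source value maps to (approximately) the k-th smallest reference value, so per-pixel ordering is preserved while the distribution is replaced.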
Time-Varying Textures
Essentially all computer graphics rendering assumes that the reflectance and texture of surfaces are static. Yet there is an abundance of materials in nature whose appearance varies dramatically with time, such as cracking paint, growing grass, or ripening banana skins. In this paper, we take a significant step towards addressing this problem by investigating a new class of time-varying textures. We make three contributions. First, we describe the carefully controlled acquisition of datasets of a variety of natural processes, including the growth of grass, the accumulation of snow, and the oxidation of copper. Second, we show how to adapt quilting-based methods to time-varying texture synthesis, addressing the important challenges of maintaining temporal coherence, synthesizing efficiently on large time-varying datasets, and reducing visual artifacts specific to time-varying textures. Finally, we show how simple procedural techniques can be used to control the evolution of the results, such as allowing grass to grow faster in well-lit (as opposed to shadowed) areas.
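The procedural control idea, e.g. faster grass growth in well-lit areas, can be illustrated with a toy update rule (a sketch under assumed names, not the paper's model):

```python
import numpy as np

def advance_growth(height, light, rate=1.0, dt=1.0):
    """One step of a toy growth process: height increases in proportion to a
    per-pixel illumination map `light` in [0, 1], so well-lit regions evolve
    faster than shadowed ones. Illustrative only; the paper's procedural
    controls are not specified in this abstract."""
    return height + rate * dt * light
```

Iterating this over the frames of a synthesized time-varying texture spatially modulates the speed of the process without changing the synthesis itself.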
Motion parallax for 360° RGBD video
We present a method for adding parallax to 360° videos and playing them back in real time in Virtual Reality headsets. In current video players, the playback does not respond to translational head movement, which reduces the feeling of immersion and causes motion sickness for some viewers. Given a 360° video and its corresponding depth (provided by current stereo 360° stitching algorithms), a naive image-based rendering approach would use the depth to generate a 3D mesh around the viewer and translate it appropriately as the viewer moves their head. However, this approach breaks at depth discontinuities, showing visible distortions, whereas cutting the mesh at such discontinuities leads to ragged silhouettes and holes at disocclusions. We address these issues by improving the given initial depth map to yield cleaner, more natural silhouettes. We rely on a three-layer scene representation, made up of a foreground layer and two static background layers, to handle disocclusions: information is propagated from multiple frames for the first background layer, and the second is inpainted. Our system works with input from many of today's most popular 360° stereo capture devices (e.g., Yi Halo or GoPro Odyssey), and works well even if the original video does not provide depth information. Our user studies confirm that our method provides a more compelling viewing experience than playback without parallax, increasing immersion while reducing discomfort and nausea.
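The naive image-based-rendering baseline described above can be sketched as a back-projection of an equirectangular depth map to 3D points around the viewer, offset by the head translation. The axis conventions and function name below are assumptions for illustration:

```python
import numpy as np

def equirect_to_points(depth, eye=np.zeros(3)):
    """Back-project an equirectangular depth map (H x W, meters) to 3D points
    around the viewer, expressed relative to a translated eye position `eye`.
    Rendering these points (or a mesh over them) from the new eye gives the
    naive parallax baseline; axis conventions here are an assumption."""
    h, w = depth.shape
    # pixel-center angles: longitude in [-pi, pi), latitude in [-pi/2, pi/2]
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # unit view directions on the sphere (y up, z forward)
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    return dirs * depth[..., None] - eye   # world points minus new eye
```

The distortions the paper addresses arise exactly where neighboring pixels in `depth` jump across a discontinuity, stretching the mesh triangles built over these points.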