Perspective-aware texture analysis and synthesis
The original publication is available at www.springerlink.com. This paper presents a novel texture synthesis scheme for anisotropic 2D textures based on perspective feature analysis and energy optimization. Given an example texture, the synthesis process starts by analyzing the texel (TEXture ELement) scale variations to obtain the perspective map (scale map). A feature mask and simple user-assisted scale extraction operations, including slant and tilt angle assignment and scale value editing, are applied. The scale map represents the global variation of texel scales in the sample texture. We then extend 2D texture optimization techniques to synthesize such perspectively featured textures. The non-parametric texture optimization approach is integrated with histogram matching, which forces the global statistics of the texel scale variations of the synthesized texture to match those of the example. We also demonstrate that our method is well suited to image completion of a perspectively featured texture region in a digital photo.
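The histogram-matching step described above can be sketched with classic rank-based matching: each scale value in the synthesized map is replaced by the reference value of the same rank, so the two empirical distributions coincide. This is a minimal illustration of the general technique, not the paper's exact formulation; the function name and shapes are assumptions.

```python
import numpy as np

def match_histogram(values, reference):
    """Rank-based histogram matching: remap `values` (e.g. a synthesized
    texel-scale map) so its empirical distribution matches `reference`
    (e.g. the example texture's scale map)."""
    values = np.asarray(values, dtype=float)
    reference = np.asarray(reference, dtype=float)
    flat = values.ravel()
    # Rank of each entry (0 .. n-1) within the flattened array.
    ranks = np.argsort(np.argsort(flat))
    ref_sorted = np.sort(reference.ravel())
    # Rescale ranks if the two arrays differ in length.
    idx = np.round(
        ranks * (ref_sorted.size - 1) / max(flat.size - 1, 1)
    ).astype(int)
    return ref_sorted[idx].reshape(values.shape)
```

After matching, the output contains exactly the reference's values (for equal sizes), assigned in the rank order of the input, which is what forces the global scale statistics to agree.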
Neural Textured Deformable Meshes for Robust Analysis-by-Synthesis
Human vision demonstrates higher robustness than current AI algorithms under
out-of-distribution scenarios. It has been conjectured that such robustness
stems from performing analysis-by-synthesis. Our paper formulates three vision
tasks in a consistent manner using approximate analysis-by-synthesis via
render-and-compare algorithms on neural features. In this work, we introduce
Neural Textured Deformable Meshes, an object model with deformable geometry
that allows optimization over both camera parameters and object geometry. The
deformable mesh is parameterized as a neural field and covered by whole-surface
neural texture maps, which are trained to be spatially discriminative. During
inference, we extract the feature map of the test image and then optimize the
3D pose and shape parameters of our model using differentiable rendering to
best reconstruct the target feature map. We show that our analysis-by-synthesis
approach is much more robust than conventional neural networks when evaluated
on real-world images, even in challenging out-of-distribution scenarios such as
occlusion and domain shift. Our algorithms remain competitive with standard
algorithms on conventional performance measures.
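The render-and-compare loop above can be illustrated with a toy: a "renderer" that is just a Gaussian blob placed at a 2-D pose, and an optimizer that adjusts the pose to minimize the reconstruction error against a target feature map. This is a minimal sketch of the optimization pattern only; the real method uses a neural mesh renderer and gradients from differentiable rendering, whereas here finite differences stand in for them, and all names are assumptions.

```python
import numpy as np

def render(pose, grid):
    """Toy 'renderer': a Gaussian blob centred at the 2-D `pose`,
    standing in for rendering neural features at a given pose."""
    x0, y0 = pose
    return np.exp(-((grid[0] - x0) ** 2 + (grid[1] - y0) ** 2))

def fit_pose(target_feat, grid, init=(0.0, 0.0), lr=0.5, steps=300, eps=1e-4):
    """Render-and-compare: gradient-descend the pose to minimise the
    mean squared feature reconstruction error, using finite-difference
    gradients in place of differentiable rendering."""
    pose = np.array(init, dtype=float)

    def loss(p):
        return np.mean((render(p, grid) - target_feat) ** 2)

    for _ in range(steps):
        g = np.zeros(2)
        for i in range(2):
            d = np.zeros(2)
            d[i] = eps
            g[i] = (loss(pose + d) - loss(pose - d)) / (2 * eps)
        pose -= lr * g
    return pose
```

The same structure scales up: replace `render` with a differentiable neural renderer and `pose` with pose and shape parameters, and the loop becomes the inference procedure sketched in the abstract.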
Multi-texture image segmentation
Visual perception of images is closely related to the recognition of the different
texture areas within an image. Identifying the boundaries of these regions is an important
step in image analysis and image understanding. This thesis presents supervised and
unsupervised methods which allow an efficient segmentation of the texture regions within
multi-texture images.
The features used by the methods are based on a measure of the fractal dimension
of surfaces in several directions, which allows the image to be transformed into a set
of feature images; no direct measurement of the fractal dimension is made, however.
Using this set of features, supervised and unsupervised statistical processing schemes
are presented which produce low classification error rates. Natural texture images are
examined, with particular application to the analysis of sonar images of the seabed.
A number of processes based on fractal models for texture synthesis are also
presented. These are used to produce realistic images of natural textures, again with
particular reference to sonar images of the seabed, and they show the importance of
phase and directionality in our perception of texture. A further extension suggests
possible uses for image coding and object identification.
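A standard way to estimate the fractal dimension the abstract refers to is box counting: cover the pattern with boxes of decreasing size and fit the slope of log(box count) against log(1/size). The sketch below, a generic illustration rather than the thesis's directional measure, estimates this for a binary 2-D mask; the function name is an assumption.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting (fractal) dimension of a binary 2-D mask.

    For each box size s, count boxes containing at least one set pixel;
    the dimension is the least-squares slope of log(count) vs log(1/s)."""
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        h = mask.shape[0] // s * s
        w = mask.shape[1] // s * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A filled region yields a dimension near 2 and a line near 1; textured surfaces fall in between, which is what makes such measures useful as segmentation features.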
Filters, Random Fields and Maximum Entropy (FRAME): Towards a Unified Theory for Texture Modeling
This article presents a statistical theory for texture modeling. This theory combines filtering theory and Markov random field modeling through the maximum entropy principle, and it interprets and clarifies many previous concepts and methods for texture analysis and synthesis from a unified point of view. Our theory characterizes the ensemble of images with the same texture appearance by a probability distribution f(I) on a random field, and the objective of texture modeling is to make inferences about f(I) given a set of observed texture examples. In our theory, texture modeling consists of two steps. (1) A set of filters is selected from a general filter bank to capture features of the texture; these filters are applied to the observed texture images, and the histograms of the filtered images are extracted. These histograms are estimates of the marginal distributions of f(I). This step is called feature extraction. (2) The maximum entropy principle is employed to derive a distribution p(I), which is restricted to have the same marginal distributions as those in (1). This p(I) is considered an estimate of f(I). This step is called feature fusion. A stepwise algorithm is proposed to choose filters from a general filter bank. The resulting model, called FRAME (Filters, Random fields And Maximum Entropy), is a Markov random field (MRF) model, but with a much enriched vocabulary and hence much stronger descriptive power than previous MRF models used for texture modeling. A Gibbs sampler is adopted to synthesize texture images by drawing typical samples from p(I); the model is thus verified by checking whether the synthesized texture images have visual appearances similar to the texture images being modeled. Experiments on a variety of 1D and 2D textures are described to illustrate the theory and to show the performance of the algorithms. These experiments demonstrate that many textures previously considered to belong to different categories can be modeled and synthesized in a common framework.
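The feature-extraction step — convolve with each filter in the bank, then take the marginal histogram of each filtered image — can be sketched directly. This is a minimal illustration of the step, not the paper's implementation; the function name, bin count, and value range are assumptions.

```python
import numpy as np

def filter_response_histograms(image, filters, bins=15, vrange=(-1.0, 1.0)):
    """FRAME-style feature extraction: convolve `image` with each filter
    and return the normalised histogram of each filtered image. These
    histograms estimate the marginal distributions that the maximum-
    entropy model is then constrained to reproduce."""
    feats = []
    for k in filters:
        kh, kw = k.shape
        H, W = image.shape
        # 'Valid' 2-D correlation via explicit sliding windows.
        resp = np.zeros((H - kh + 1, W - kw + 1))
        for i in range(resp.shape[0]):
            for j in range(resp.shape[1]):
                resp[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
        hist, _ = np.histogram(resp, bins=bins, range=vrange)
        feats.append(hist / hist.sum())
    return feats
```

In the full FRAME pipeline, the stepwise filter-selection algorithm would repeatedly call something like this on both the observed and synthesized textures, adding whichever filter's histograms disagree most.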
Texture Synthesis Through Convolutional Neural Networks and Spectrum Constraints
This paper presents a significant improvement for the synthesis of texture
images using convolutional neural networks (CNNs), making use of constraints on
the Fourier spectrum of the results. More precisely, the texture synthesis is
regarded as a constrained optimization problem, with constraints conditioning
both the Fourier spectrum and statistical features learned by CNNs. In contrast
with existing methods, the presented method inherits from previous CNN
approaches the ability to depict local structures and fine scale details, and
at the same time yields coherent large scale structures, even in the case of
quasi-periodic images. This is achieved at no extra computational cost. Synthesis
experiments on various images show a clear improvement over a recent
state-of-the-art method relying on CNN constraints only.
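The core of a Fourier spectrum constraint is a projection: keep the current image's phase but swap in a reference magnitude spectrum, so the result exactly satisfies the spectrum constraint while staying as close as possible to the input. The sketch below shows that projection in isolation, assuming real-valued images; the function name and `eps` regularizer are assumptions, and the full method alternates this with the CNN feature loss.

```python
import numpy as np

def project_spectrum(image, ref_magnitude, eps=1e-12):
    """Project `image` onto the set of images whose Fourier magnitude
    equals `ref_magnitude`: keep the image's phase, impose the
    reference magnitude, and transform back."""
    F = np.fft.fft2(image)
    phase = F / (np.abs(F) + eps)   # unit-modulus phase factor
    return np.real(np.fft.ifft2(ref_magnitude * phase))
```

Because the reference magnitude comes from a real image (so it is conjugate-symmetric), the inverse transform is real up to rounding, and the output's spectrum matches the reference by construction.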