In this paper, we introduce a novel approach for generating texture images of
infinite resolution using Generative Adversarial Networks (GANs) based on a
patch-by-patch paradigm. Existing texture synthesis techniques often generate a
large-scale texture in a single forward pass through the generative model,
which limits the scalability and flexibility of the generated images. In
contrast, the proposed approach trains GAN models on a single texture image to
generate relatively small patches that are locally correlated and can be
seamlessly concatenated to form a larger image while using a constant GPU
memory footprint. Our method learns the local texture structure and can
generate textures of arbitrary size while maintaining coherence and
diversity. The proposed method relies on local padding in the generator to
ensure consistency between patches and utilizes spatial stochastic modulation
to allow for local variations and diversity within the large-scale image.
Experimental results demonstrate superior scalability compared to existing
approaches while maintaining the visual coherence of the generated textures.
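As a rough illustration of the patch-by-patch idea summarized above, the following minimal Python sketch (not the paper's implementation; the toy generator, the border-reuse scheme, and all names such as generate_patch are assumptions for illustration only) assembles an arbitrarily large texture from small patches, reusing each previously generated neighbor's edge as a crude stand-in for local padding and drawing a per-patch latent as a stand-in for spatial stochastic modulation:

import numpy as np

PATCH = 64   # spatial size of one generated patch
PAD = 4      # width of the border reused from already-generated neighbors

rng = np.random.default_rng(0)

def generate_patch(z, left_border, top_border):
    # Toy stand-in for one GAN generator forward pass on a single patch.
    # left_border / top_border play the role of "local padding": context
    # taken from previously generated neighbors so adjacent patches agree.
    patch = rng.normal(size=(PATCH, PATCH)) * 0.1 + z
    if left_border is not None:
        # blend with the left neighbor's right edge to keep the seam consistent
        patch[:, :PAD] = 0.5 * (patch[:, :PAD] + left_border)
    if top_border is not None:
        # blend with the top neighbor's bottom edge
        patch[:PAD, :] = 0.5 * (patch[:PAD, :] + top_border)
    return patch

def generate_large_texture(rows, cols):
    # Patches are produced one at a time, so memory per step stays constant
    # no matter how large the final canvas grows.
    canvas = np.zeros((rows * PATCH, cols * PATCH))
    for r in range(rows):
        for c in range(cols):
            z = rng.normal()  # per-patch latent: stand-in for spatial stochastic modulation
            left = canvas[r*PATCH:(r+1)*PATCH, c*PATCH-PAD:c*PATCH] if c > 0 else None
            top = canvas[r*PATCH-PAD:r*PATCH, c*PATCH:(c+1)*PATCH] if r > 0 else None
            canvas[r*PATCH:(r+1)*PATCH, c*PATCH:(c+1)*PATCH] = generate_patch(z, left, top)
    return canvas

texture = generate_large_texture(rows=8, cols=8)  # 512x512 canvas from 64x64 patches
print(texture.shape)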