DrawGAN: Multi-view Generative Model Inspired By The Artist's Drawing Method

Abstract

We present a novel approach to modeling an artist's drawing process using an architecture that combines an unconditional generative adversarial network (GAN) with a multi-view generator and multiple discriminators. Our method excels at synthesizing several kinds of drawing, including line drawing, shading, and color drawing, with high quality and robustness, and it surpasses existing state-of-the-art unconditional GANs. The key novelty lies in the architecture design, which closely mirrors the typical sequence of an artist's drawing process and thereby significantly improves image quality. Experiments on few-shot datasets demonstrate the potential of a multi-view generative model to enrich feature knowledge and modulate the image-generation process. The proposed method holds great promise for advancing AI in the visual arts and opens new avenues for research and creative practice.
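The abstract describes a generator that produces multiple views of a drawing (e.g., line drawing, shading, color), each judged by its own discriminator. The sketch below is a hypothetical, simplified illustration of that multi-view/multi-discriminator layout, not the paper's actual implementation: all layer sizes, the shared-trunk design, and the per-view loss summation are assumptions for illustration only.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): a shared latent code is mapped
# to three "views" mirroring an artist's workflow -- line drawing, shading,
# and color -- and each view is scored by its own discriminator.

rng = np.random.default_rng(0)
LATENT, H, W = 64, 16, 16

def linear(n_in, n_out):
    # Tiny random linear layer standing in for a real network.
    return rng.standard_normal((n_in, n_out)) * 0.02

# One shared trunk, one output head per view (assumed layout).
W_trunk = linear(LATENT, 128)
heads = {v: linear(128, H * W) for v in ("line", "shading", "color")}
# One discriminator per view.
discs = {v: linear(H * W, 1) for v in heads}

def generate(z):
    # Shared features, then a separate head per drawing stage.
    h = np.tanh(z @ W_trunk)
    return {v: np.tanh(h @ Wv).reshape(-1, H, W) for v, Wv in heads.items()}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

z = rng.standard_normal((4, LATENT))
views = generate(z)

# Each discriminator scores only its own view; a simple adversarial loss
# for the generator sums the per-view terms.
scores = {v: sigmoid(views[v].reshape(4, -1) @ discs[v]) for v in views}
gen_loss = -sum(np.log(s + 1e-8).mean() for s in scores.values())
```

A real system would use convolutional networks and alternate generator/discriminator updates; the point here is only the one-generator, many-discriminators structure, with one discriminator per drawing stage.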
