5 research outputs found

    Human-Centric Deep Generative Models: The Blessing and The Curse

    Over the past years, deep neural networks have achieved significant progress in a wide range of real-world applications. In particular, my research focuses on deep generative models, a neural network solution that has proved effective for visual (re)creation. But is generative modeling a niche topic that should be researched on its own? My answer is a critical no. In this thesis, I present the two sides of deep generative models: their blessing and their curse to human beings. Regarding what deep generative models can do for us, I demonstrate improvements in the performance and steerability of visual (re)creation. Regarding what we can do for deep generative models, my answer is to mitigate the security concerns of DeepFakes and to improve the minority inclusion of deep generative models. For the performance of deep generative models, I investigate applying attention modules and a dual contrastive loss to generative adversarial networks (GANs), which pushes photorealistic image generation to a new state of the art. For steerability, I introduce Texture Mixer, a simple yet effective approach to steerable texture synthesis and blending. For security, my research spans a series of GAN fingerprinting solutions that enable the detection and attribution of GAN-generated image misuse. For inclusion, I investigate the biased behavior of generative models and present my solution for enhancing the minority inclusion of GAN models over underrepresented image attributes. All in all, I propose to project actionable insights onto the applications of deep generative models, and ultimately to contribute to human-generator interaction.

    Steerable Texture Synthesis

    Texture synthesis is typically concerned with the creation of an arbitrarily sized texture from a small sample, where the pattern of the generated texture should be perceived as resembling the example. Most current work follows a Markov model approach: the texture is generated by finding the best-matching pixels or patches in the sample and copying them to the target. Here we extend this concept to incorporate arbitrary filters acting on the sample before matching and transferring. The filters may vary over the generated texture. Steering the filters with properties connected to the output image allows a variety of effects to be generated.
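    The best-match copying step described above can be sketched in code. The following is a minimal, illustrative pixel-based variant, not the paper's actual method: the function name, the causal left/above neighbourhood, and all parameters are assumptions made only to show the Markov-model idea of copying the sample pixel whose neighbourhood best matches the pixels already generated.

```python
import numpy as np

def synthesize(sample, out_h, out_w, seed=0):
    """Hypothetical sketch of Markov-model texture synthesis: each output
    pixel is copied from the sample position whose causal neighbours
    (left and above) best match the already-generated output."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    sh, sw = sample.shape
    out = np.empty((out_h, out_w))
    # Seed the first pixel with a random sample pixel.
    out[0, 0] = sample[rng.integers(sh), rng.integers(sw)]
    # Candidate sample positions that have valid left/above neighbours.
    ys, xs = np.meshgrid(np.arange(1, sh), np.arange(1, sw), indexing="ij")
    ys, xs = ys.ravel(), xs.ravel()
    left = sample[ys, xs - 1]
    up = sample[ys - 1, xs]
    for i in range(out_h):
        for j in range(out_w):
            if i == 0 and j == 0:
                continue
            # Squared distance between each candidate's neighbourhood and
            # the output's; missing neighbours (first row/col) contribute 0.
            d = np.zeros(len(ys))
            if j > 0:
                d += (left - out[i, j - 1]) ** 2
            if i > 0:
                d += (up - out[i - 1, j]) ** 2
            k = int(np.argmin(d))
            out[i, j] = sample[ys[k], xs[k]]  # copy the best-matching pixel
    return out
```

    Because every output value is copied verbatim from the sample, the synthesized texture can only contain intensities that already occur in the example, which is the defining property of this copy-based family of methods.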

    Steerable texture synthesis

    Synthesis of textures is a popular and active area of research, with varied and significant applications and areas of interest. In recent years, much work has been done to optimize the synthesis process, speeding up methods and minimizing processing errors. Recent efforts in this area concentrate on producing flexible algorithms and on introducing tools that augment the texture with artistic effects. In this work we present a novel approach to flexible texture synthesis: starting from an input sample, the process generates an output image of arbitrary resolution. The characteristics of this output texture are user-defined, as the user can freely choose a force field, determine color variations, and add further features. The synthesis process is fully automatic and requires no additional intervention. The resulting outputs can be interpreted as filtered versions of a texture or as textures obtained through transfer functions.

    Steerable texture synthesis for vector field visualization

    In this work, a novel approach to scientific visualization and steerable texture synthesis is presented. Vector field visualization and synthesis techniques for controlled, field-driven texture generation are proposed, discussed, and extended to allow more control and more degrees of freedom in image creation. Concepts from perception and cognition, as well as statistical theory for standard texture synthesis, were investigated and used to motivate and improve the proposed techniques. The approach proves general, flexible, and open to further extensions and integrations. Vector fields can be visualized in a straightforward manner by setting the color of output pixels based on computed similarity functions. The proposed method is texture-based: it locally adapts and transforms a chosen basic pattern of an anisotropic texture to represent the features and the variation of the field. In general, it allows any procedural or manual way to define a mapping from vector space to example image space, offering arbitrary degrees of freedom in representing the appearance of the resulting field. This approach to visualizing vectorial data sets can be interpreted as a hybrid algorithm, combining features of direct intuitive visualization, such as simplicity, intuitiveness, and generality, with features of dense visualization techniques, such as powerful information encoding, locality of calculation, and accuracy. The variety of tasks and the differing levels of expertise and experience among users motivate and strongly require such versatility. The steerable generation of non-homogeneous textures offers several degrees of freedom in the synthesis process, allowing a variety of effects for appealing output images. Textural elements used as primitives provide a rich set of visual dimensions for arbitrary variation.
In general, the proposed techniques bring together concepts from human vision and perception, statistics and texture synthesis, and visualization, in an interesting interdisciplinary line of research that promises encouraging results for several fields of application.
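    The idea of setting output pixel colors from a computed similarity function can be illustrated with a small sketch. Everything here is an assumption made for illustration, not the paper's method: the function name is invented, and cosine similarity against a reference direction stands in for whatever similarity functions the work actually uses.

```python
import numpy as np

def field_similarity_image(vx, vy, ref=(1.0, 0.0)):
    """Hypothetical sketch of similarity-based field coloring: per-pixel
    intensity encodes how closely the local field vector aligns with a
    reference direction (cosine similarity remapped from [-1, 1] to
    [0, 1]). Zero-length vectors map to the neutral value 0.5."""
    vx = np.asarray(vx, dtype=float)
    vy = np.asarray(vy, dtype=float)
    rx, ry = ref
    # Product of vector magnitudes; guard against division by zero.
    norm = np.hypot(vx, vy) * np.hypot(rx, ry)
    safe = np.where(norm == 0, 1.0, norm)
    cos = np.where(norm == 0, 0.0, (vx * rx + vy * ry) / safe)
    return 0.5 * (cos + 1.0)
```

    A uniform field parallel to the reference renders as a bright image, an anti-parallel field as a dark one, and an orthogonal field as mid-gray, so local orientation changes in the field show up directly as intensity variation.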