Intuitive, Interactive Beard and Hair Synthesis with Generative Models
We present an interactive approach to synthesizing realistic variations in
facial hair in images, ranging from subtle edits to existing hair to the
addition of complex and challenging hair in images of clean-shaven subjects. To
circumvent the tedious and computationally expensive tasks of modeling,
rendering and compositing the 3D geometry of the target hairstyle using the
traditional graphics pipeline, we employ a neural network pipeline that
synthesizes realistic and detailed images of facial hair directly in the target
image in under one second. The synthesis is controlled by simple and sparse
guide strokes from the user defining the general structural and color
properties of the target hairstyle. We qualitatively and quantitatively
evaluate our chosen method against several alternative approaches. We show
compelling interactive editing results with a prototype user interface that
allows novice users to progressively refine the generated image to match their
desired hairstyle, and demonstrate that our approach also allows for flexible
and high-fidelity scalp hair synthesis.
Comment: To be presented at the 2020 Conference on Computer Vision and Pattern Recognition (CVPR 2020, Oral Presentation). Supplementary video can be seen at: https://www.youtube.com/watch?v=v4qOtBATrv
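The abstract does not detail the network itself, but the core idea of stroke-conditioned synthesis can be illustrated with a minimal PyTorch sketch. Everything below (the architecture, channel counts, and names such as StrokeConditionedGenerator) is an illustrative assumption, not the paper's model: the generator simply consumes the target photo and a sparse guide-stroke map as extra input channels and emits the edited image in one forward pass.

```python
# Hypothetical sketch of stroke-conditioned image synthesis.
# Architecture and names are assumptions, not the paper's actual model.
import torch
import torch.nn as nn

class StrokeConditionedGenerator(nn.Module):
    """Maps a target photo plus a sparse guide-stroke map to an edited image."""
    def __init__(self, base=32):
        super().__init__()
        # Input: 3 RGB channels + 3 stroke-map channels (structure/color guides).
        self.encoder = nn.Sequential(
            nn.Conv2d(6, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(base, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, image, strokes):
        # Concatenate the photo and user strokes channel-wise, then encode/decode.
        x = torch.cat([image, strokes], dim=1)
        return self.decoder(self.encoder(x))

gen = StrokeConditionedGenerator()
photo = torch.randn(1, 3, 256, 256)    # input image of a clean-shaven subject
strokes = torch.zeros(1, 3, 256, 256)  # sparse user guide strokes
edited = gen(photo, strokes)           # synthesized result at the same resolution
print(edited.shape)                    # torch.Size([1, 3, 256, 256])
```

A single feed-forward pass like this is what makes sub-second, progressively refinable editing plausible: each new user stroke only changes the conditioning input, so re-synthesis is one more forward pass rather than a re-render of 3D geometry.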
A Collaborative, Interactive and Context-Aware Drawing Agent for Co-Creative Design
Recent advances in text-conditioned generative models have provided us with
neural networks capable of creating images of astonishing quality, be they
realistic, abstract, or even creative. These models have in common that they
all aim, more or less explicitly, to produce a high-quality one-off output
under given conditions, and as a result they are not well suited to a creative
collaboration framework. Drawing on theories from cognitive science that model
how professional designers and artists think, we show how this collaborative
setting differs from one-off generation and introduce CICADA: a Collaborative, Interactive
Context-Aware Drawing Agent. CICADA uses a vector-based
synthesis-by-optimisation method to take a partial sketch (such as might be
provided by a user) and develop it towards a goal by adding and/or sensibly
modifying traces. Because this topic has been scarcely explored, we also
propose a diversity measure as a way to evaluate desired characteristics of a
model in this context. We show that CICADA produces sketches of quality
comparable to a human user's, with greater diversity, and, most importantly,
that it copes with change by continuing the sketch flexibly while respecting
the user's contributions.
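The abstract does not define the diversity measure; as a purely illustrative stand-in, the sketch below scores a set of outputs by their mean pairwise distance in some embedding space. The embedding (here random features, for demonstration only) and the function name diversity are assumptions, not CICADA's actual metric.

```python
# Hypothetical sketch of a set-diversity score: mean pairwise distance
# between feature embeddings of generated sketches. Not CICADA's measure.
import torch

def diversity(embeddings: torch.Tensor) -> torch.Tensor:
    """Mean pairwise Euclidean distance over N embeddings of shape (N, D)."""
    dists = torch.cdist(embeddings, embeddings)  # (N, N) distances, zero diagonal
    n = embeddings.shape[0]
    return dists.sum() / (n * (n - 1))           # average over off-diagonal pairs

# Example: 8 generated sketches embedded into a 512-d feature space.
feats = torch.randn(8, 512)
print(diversity(feats))  # larger values indicate a more varied set of outputs
```

Measures of this family reward a model that can take one partial sketch in many plausible directions, which is the property a co-creative agent needs beyond raw one-off output quality.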