Controlling Perceptual Factors in Neural Style Transfer
Neural Style Transfer has shown very exciting results enabling new forms of
image manipulation. Here we extend the existing method to introduce control
over spatial location, colour information and across spatial scale. We
demonstrate how this enhances the method by allowing high-resolution controlled
stylisation and helps to alleviate common failure cases such as applying ground
textures to sky regions. Furthermore, by decomposing style into these
perceptual factors we enable the combination of style information from multiple
sources to generate new, perceptually appealing styles from existing ones. We
also describe how these methods can be used to more efficiently produce large
size, high-quality stylisation. Finally we show how the introduced control
measures can be applied in recent methods for Fast Neural Style Transfer. Comment: Accepted at CVPR 2017
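As a rough illustration of the colour-control idea mentioned in this abstract, the sketch below performs luminance-only stylisation: only the luminance channel is stylised, and the content image's chrominance is kept. The `stylize` callable and the choice of the YIQ colour space are assumptions made for illustration, not the paper's exact procedure.

```python
import numpy as np

# Standard NTSC RGB <-> YIQ conversion matrices.
_RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                     [0.596, -0.274, -0.322],
                     [0.211, -0.523,  0.312]])
_YIQ2RGB = np.linalg.inv(_RGB2YIQ)

def luminance_only_transfer(content_rgb, style_rgb, stylize):
    """Colour-preserving stylisation: stylise the luminance channel only,
    then recombine it with the content image's chrominance channels.

    `stylize` is a placeholder for any single-channel style-transfer
    routine (e.g. a wrapped Gatys-style optimisation). Images are float
    arrays in [0, 1] with shape (H, W, 3).
    """
    content_yiq = content_rgb @ _RGB2YIQ.T
    style_yiq = style_rgb @ _RGB2YIQ.T

    # Stylise only the Y (luminance) channel.
    stylized_y = stylize(content_yiq[..., 0], style_yiq[..., 0])

    # Keep the content image's I/Q (chrominance) channels unchanged.
    out_yiq = np.dstack([stylized_y, content_yiq[..., 1], content_yiq[..., 2]])
    return np.clip(out_yiq @ _YIQ2RGB.T, 0.0, 1.0)
```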
Portrait Stylization: Artistic Style Transfer with Auxiliary Networks for Human Face Stylization
Today's image style transfer methods have difficulty retaining the individual features of human faces through the stylization process. This occurs because features such as face geometry and facial expressions are not captured by general-purpose image classifiers such as pre-trained VGG-19 models. This paper proposes using embeddings from an auxiliary pre-trained face recognition model to encourage the algorithm to propagate human face features from the content image to the final stylized result. Comment: 12 pages, 12 figures
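A minimal sketch of how an auxiliary face-recognition embedding could be folded into a style-transfer objective, as this abstract suggests. The layer names, loss weights, and the `vgg_features`/`face_embedder` callables are assumptions for illustration, not details taken from the paper.

```python
import torch.nn.functional as F

def stylization_loss(stylized, content, style_targets,
                     vgg_features, face_embedder,
                     w_content=1.0, w_style=1e3, w_face=10.0):
    """Hypothetical objective: standard content and Gram-matrix style terms
    plus an auxiliary face-identity term from a frozen pre-trained
    face-recognition network (`face_embedder`). `vgg_features` is assumed
    to map an image tensor to a dict of named feature maps.
    """
    feats_s = vgg_features(stylized)
    feats_c = vgg_features(content)

    # Content loss on a mid-level VGG layer (layer name is illustrative).
    content_loss = F.mse_loss(feats_s["relu4_2"], feats_c["relu4_2"])

    # Style loss: match Gram matrices against precomputed style targets.
    style_loss = 0.0
    for layer, target_gram in style_targets.items():
        f = feats_s[layer]
        b, c, h, w = f.shape
        f = f.reshape(b, c, h * w)
        gram = f @ f.transpose(1, 2) / (c * h * w)
        style_loss = style_loss + F.mse_loss(gram, target_gram)

    # Auxiliary face loss: keep the identity embedding close to the content's.
    face_loss = F.mse_loss(face_embedder(stylized), face_embedder(content))

    return w_content * content_loss + w_style * style_loss + w_face * face_loss
```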
MindSpaces: Art-driven Adaptive Outdoors and Indoors Design
MindSpaces provides solutions for creating functionally and emotionally appealing architectural designs in urban spaces. Social media services, physiological sensing devices and video cameras provide data from sensing environments. State-of-the-art technology, including VR, 3D design tools, emotion extraction, visual behaviour analysis, and textual analysis, will be incorporated into the MindSpaces platform for analysing data and adapting the design of spaces.
Diversified Texture Synthesis with Feed-forward Networks
Recent progress on deep discriminative and generative modeling has shown
promising results on texture synthesis. However, existing feed-forward
methods trade generality for efficiency and suffer from several issues:
limited generality (i.e., one network must be built per texture), lack of
diversity (i.e., visually identical outputs are always produced) and
suboptimality (i.e., less satisfying visual effects). In this work, we focus on
solving these issues for improved texture synthesis. We propose a deep
generative feed-forward network which enables efficient synthesis of multiple
textures within one single network and meaningful interpolation between them.
Meanwhile, a suite of important techniques is introduced to achieve better
convergence and diversity. With extensive experiments, we demonstrate the
effectiveness of the proposed model and techniques for synthesizing a large
number of textures and show its application to stylization. Comment: Accepted at CVPR 2017
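A minimal sketch of the idea of synthesising several textures with one feed-forward network and interpolating between them via a learned per-texture embedding; the architecture, dimensions, and names below are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class MultiTextureGenerator(nn.Module):
    """Single feed-forward generator for several textures. A learned
    per-texture embedding is mixed by a weight vector, so a one-hot
    vector selects one texture and soft weights interpolate between them.
    """
    def __init__(self, num_textures, embed_dim=32, noise_ch=8):
        super().__init__()
        self.embedding = nn.Embedding(num_textures, embed_dim)
        self.net = nn.Sequential(
            nn.Conv2d(noise_ch + embed_dim, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, noise, weights):
        # `weights`: (B, num_textures) mixing weights over texture embeddings.
        code = weights @ self.embedding.weight              # (B, embed_dim)
        b, _, h, w = noise.shape
        code_map = code[:, :, None, None].expand(-1, -1, h, w)
        return self.net(torch.cat([noise, code_map], dim=1))

# Example: interpolate halfway between texture 0 and texture 2.
gen = MultiTextureGenerator(num_textures=4)
noise = torch.randn(1, 8, 128, 128)
weights = torch.tensor([[0.5, 0.0, 0.5, 0.0]])
sample = gen(noise, weights)  # (1, 3, 128, 128) synthesized texture
```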