Enhancing Perceptual Attributes with Bayesian Style Generation
Deep learning has brought unprecedented progress in computer vision, and
significant advances have been made in predicting subjective properties
inherent to visual data (e.g., memorability, aesthetic quality, evoked
emotions). Recently, some works have even proposed deep learning
approaches for modifying images so as to alter these properties.
Following this research line, this paper introduces a novel deep learning
framework for synthesizing images in order to enhance a predefined perceptual
attribute. Our approach takes as input a natural image and exploits recent
models for deep style transfer and generative adversarial networks to change
its style in order to modify a specific high-level attribute. Unlike
previous works that focus on enhancing a single property of a visual content,
we propose a general framework and demonstrate its effectiveness in two use
cases, i.e., increasing image memorability and generating scary pictures. We
evaluate the proposed approach on publicly available benchmarks, demonstrating
its advantages over state-of-the-art methods.

Comment: ACCV-201
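The selection step described in this abstract, an attribute predictor guiding the choice of style, can be caricatured with a simple search loop. The paper's actual method uses a learned Bayesian model over styles; the sketch below (all names hypothetical, not the authors' API) replaces that with exhaustive scoring of a candidate style set, just to show the shape of the pipeline: stylize, score the target attribute, keep the best.

```python
import numpy as np

def enhance_attribute(image, styles, stylize, score):
    """Hypothetical sketch: pick the style whose stylized output
    maximizes a predicted perceptual attribute (e.g. memorability).
    `stylize` and `score` stand in for a style-transfer network and
    an attribute predictor, respectively."""
    best_style, best_score = None, -np.inf
    for s in styles:
        candidate = stylize(image, s)   # apply candidate style
        val = score(candidate)          # predicted attribute value
        if val > best_score:
            best_style, best_score = s, val
    return best_style, best_score
```

With toy stand-ins (a scalar "image", additive "styles", and a score that prefers outputs near zero), the loop returns the style that best increases the target score.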
Unsupervised Learning of Artistic Styles with Archetypal Style Analysis
In this paper, we introduce an unsupervised learning approach to
automatically discover, summarize, and manipulate artistic styles from large
collections of paintings. Our method is based on archetypal analysis, which is
an unsupervised learning technique akin to sparse coding with a geometric
interpretation. When applied to deep image representations from a collection of
artworks, it learns a dictionary of archetypal styles, which can be easily
visualized. After training the model, the style of a new image, which is
characterized by local statistics of deep visual features, is approximated by a
sparse convex combination of archetypes. This enables us to interpret which
archetypal styles are present in the input image, and in which proportion.
Finally, our approach allows us to manipulate the coefficients of the latent
archetypal decomposition, and achieve various special effects such as style
enhancement, transfer, and interpolation between multiple archetypes.

Comment: Accepted at NIPS 2018, Montréal, Canad
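The decomposition this abstract describes, approximating a style by a sparse convex combination of archetypes, can be sketched numerically. The snippet below is not the authors' code, just a minimal NumPy sketch: it recovers simplex-constrained coefficients by projected gradient descent, where `project_simplex` is the standard Euclidean projection onto the probability simplex (coefficients nonnegative and summing to one).

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def archetypal_coefficients(x, archetypes, steps=500):
    """Approximate feature vector x as a convex combination of the
    rows of `archetypes`: minimize ||x - w @ A||^2 subject to
    w >= 0, sum(w) = 1, via projected gradient descent."""
    A = archetypes
    w = np.full(A.shape[0], 1.0 / A.shape[0])  # uniform start
    lr = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1e-8)  # step from Lipschitz bound
    for _ in range(steps):
        grad = (w @ A - x) @ A.T
        w = project_simplex(w - lr * grad)
    return w
```

The returned weights can then be read off directly to see which archetypes are present in the input and in what proportion, as the abstract describes.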
Transport-Based Neural Style Transfer for Smoke Simulations
Artistically controlling fluids has always been a challenging task.
Optimization techniques rely on approximating simulation states towards target
velocity or density field configurations, which are often handcrafted by
artists to indirectly control smoke dynamics. Patch synthesis techniques
transfer image textures or simulation features to a target flow field. However,
these are either limited to adding structural patterns or augmenting coarse
flows with turbulent structures, and hence cannot capture the full spectrum of
different styles and semantically complex structures. In this paper, we propose
the first Transport-based Neural Style Transfer (TNST) algorithm for volumetric
smoke data. Our method is able to transfer features from natural images to
smoke simulations, enabling general content-aware manipulations ranging from
simple patterns to intricate motifs. The proposed algorithm is physically
inspired, since it computes the density transport from a source input smoke to
a desired target configuration. Our transport-based approach allows direct
control over the divergence of the stylization velocity field by optimizing
incompressible and irrotational potentials that transport smoke towards
stylization. Temporal consistency is ensured by transporting and aligning
subsequent stylized velocities, and 3D reconstructions are computed by
seamlessly merging stylizations from different camera viewpoints.

Comment: ACM Transactions on Graphics (SIGGRAPH ASIA 2019), additional
materials: http://www.byungsoo.me/project/neural-flow-styl
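The divergence control mentioned above, optimizing incompressible and irrotational potentials, mirrors a Helmholtz-style split of the velocity field. Below is a minimal 2-D sketch, assuming a periodic grid with central differences (not the paper's 3-D solver): the part built from a stream function is divergence-free by construction, while the scalar-potential part is irrotational.

```python
import numpy as np

def velocity_from_potentials(phi, psi, h=1.0):
    """Build a 2-D velocity field from a scalar potential `phi`
    (irrotational part, grad phi) and a stream function `psi`
    (incompressible part, (d psi/dy, -d psi/dx)), using central
    differences on a periodic grid with spacing h."""
    def dx(f): return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * h)
    def dy(f): return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * h)
    u = dx(phi) + dy(psi)
    v = dy(phi) - dx(psi)
    return u, v

def divergence(u, v, h=1.0):
    """Discrete divergence du/dx + dv/dy with the same periodic stencils."""
    def dx(f): return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * h)
    def dy(f): return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * h)
    return dx(u) + dy(v)
```

Because the two roll-based difference operators commute exactly, the stream-function component contributes zero discrete divergence, which is the property the paper exploits to control compressibility of the stylization velocity.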