    Unsupervised Learning of Artistic Styles with Archetypal Style Analysis

    In this paper, we introduce an unsupervised learning approach to automatically discover, summarize, and manipulate artistic styles from large collections of paintings. Our method is based on archetypal analysis, an unsupervised learning technique akin to sparse coding with a geometric interpretation. When applied to deep image representations from a collection of artworks, it learns a dictionary of archetypal styles, which can be easily visualized. After training the model, the style of a new image, which is characterized by local statistics of deep visual features, is approximated by a sparse convex combination of archetypes. This enables us to interpret which archetypal styles are present in the input image, and in which proportions. Finally, our approach allows us to manipulate the coefficients of the latent archetypal decomposition and achieve various special effects such as style enhancement, transfer, and interpolation between multiple archetypes.
    Comment: Accepted at NIPS 2018, Montréal, Canada.
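The core operation the abstract describes, approximating a style feature vector as a sparse convex combination of learned archetypes, can be sketched as a small constrained least-squares problem. The solver below (projected gradient with a simplex projection) is a hypothetical, minimal stand-in for the efficient archetypal-analysis solvers the paper builds on; `Z`, `x`, and the function names are illustrative, not from the paper.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {a : a >= 0, sum(a) = 1} (standard sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def archetypal_code(x, Z, n_iter=2000):
    """Approximate feature vector x as a convex combination of the
    archetypes stored as columns of Z:
        minimize ||x - Z a||^2  subject to  a >= 0, sum(a) = 1.
    Solved here by projected gradient descent; the nonnegativity and
    sum-to-one constraints are what make the coefficients interpretable
    as style proportions."""
    k = Z.shape[1]
    a = np.full(k, 1.0 / k)                      # start at the simplex center
    step = 1.0 / (np.linalg.norm(Z, 2) ** 2)     # step from the Lipschitz constant
    for _ in range(n_iter):
        grad = Z.T @ (Z @ a - x)
        a = project_simplex(a - step * grad)
    return a
```

The returned coefficients sum to one and are nonnegative, so each entry can be read directly as the proportion of the corresponding archetypal style present in the input.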

    Instant Neural Radiance Fields Stylization

    We present Instant Neural Radiance Fields Stylization, a novel approach to multi-view image stylization for 3D scenes. Our approach models a neural radiance field based on neural graphics primitives, which use a hash table-based position encoder for position embedding. We split the position encoder into two parts, the content and style sub-branches, and train the network for normal novel view image synthesis with the content and style targets. In the inference stage, we apply AdaIN to the output features of the position encoder, with content and style voxel grid features as reference. With the adjusted features, stylized novel view images can be obtained. Our method extends the style target from style images to image sets of scenes and does not require additional network training for stylization. Given a set of images of a 3D scene and a style target (a style image or another set of 3D scene images), our method can generate stylized novel views with a consistent appearance at various view angles in less than 10 minutes on modern GPU hardware. Extensive experimental results demonstrate the validity and superiority of our method.
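The AdaIN step mentioned above re-normalizes content features so their per-channel statistics match the style features. A minimal sketch, assuming `(C, H, W)` NumPy feature maps rather than the paper's voxel-grid features:

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive Instance Normalization: shift and scale the content
    features so each channel's mean/std matches the style features.
    Inputs are (C, H, W) arrays; statistics are taken per channel."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True) + eps
    # Normalize content to zero mean / unit std, then re-color with style stats.
    return s_std * (content_feat - c_mean) / c_std + s_mean
```

Because AdaIN is a closed-form statistic swap with no learned parameters, it can be applied at inference time without retraining, which is what lets the method above stylize a trained radiance field instantly.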

    Divide and Conquer in Neural Style Transfer for Video

    Neural Style Transfer is a class of neural algorithms designed to redraw a given image in the style of another image, traditionally a famous painting, while preserving the underlying details. Applying this process to a video requires stylizing each of its component frames, and the stylized frames must have temporal consistency between them to prevent flickering and other undesirable artifacts. Current algorithms accommodate these constraints at the expense of speed. We propose an algorithm called Distributed Artistic Videos and demonstrate its capacity to produce stylized videos over ten times faster than the current state of the art with no reduction in output quality. Through the use of an 8-node computing cluster, we reduce the average time required to stylize a video by 92%, from hours to minutes, compared to the most recent algorithm of this kind on the same equipment and input. This allows the stylization of videos that are longer and higher-resolution than previously feasible.
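The divide-and-conquer idea above amounts to partitioning a video's frames across workers and stylizing the chunks in parallel. A minimal single-machine sketch, with a hypothetical placeholder in place of a real feed-forward style-transfer network (and ignoring the cross-chunk temporal-consistency handling the paper needs):

```python
from concurrent.futures import ThreadPoolExecutor

def stylize_frame(frame):
    """Placeholder per-frame stylization (hypothetical): invert pixel
    values. A real pipeline would run a style-transfer network here."""
    return [255 - p for p in frame]

def stylize_video(frames, n_workers=8):
    """Divide the frame list across n_workers and stylize in parallel,
    mirroring the 8-node cluster setup described above. Map preserves
    frame order, so the output video's frame sequence is unchanged."""
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        return list(ex.map(stylize_frame, frames))
```

Since each frame is stylized independently in this sketch, the speedup scales with the worker count; the paper's contribution is keeping temporal consistency across chunk boundaries while still parallelizing this way.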