
    Photographic style transfer

    © 2018, The Author(s). Image style transfer has attracted much attention in recent years, yet results produced by existing works still contain many distortions. This paper investigates CNN-based artistic style transfer specifically and finds that the distortion arises from two sources: the loss of the content image's spatial structure during the content-preserving process, and unexpected geometric matching introduced by the style transformation process. To tackle this problem, the paper proposes a novel approach consisting of a dual-stream deep convolutional network as the loss network and edge-preserving filters as the style fusion model. Our key contribution is an additional similarity loss function that constrains both the detail reconstruction and the style transfer procedures. Qualitative evaluation shows that our approach successfully suppresses distortions and obtains faithful stylized results compared to state-of-the-art methods.
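    A minimal sketch of how such an additional similarity loss might sit alongside the usual content and style terms, assuming PyTorch feature maps extracted from a loss network; the layer choices, weights, and the exact form of the similarity term are illustrative assumptions, not the paper's formulation.

        import torch
        import torch.nn.functional as F

        def gram_matrix(feat):
            # feat: (B, C, H, W) -> normalized Gram matrix (B, C, C)
            b, c, h, w = feat.shape
            f = feat.view(b, c, h * w)
            return f @ f.transpose(1, 2) / (c * h * w)

        def total_loss(out_feats, content_feats, style_feats,
                       w_content=1.0, w_style=1e4, w_sim=10.0):
            # Content loss on a deep layer preserves overall structure
            loss_c = F.mse_loss(out_feats[-1], content_feats[-1])
            # Style loss via Gram matrices over all selected layers
            loss_s = sum(F.mse_loss(gram_matrix(o), gram_matrix(s))
                         for o, s in zip(out_feats, style_feats))
            # Assumed similarity term: keep shallow (detail) responses of the
            # output close to those of the content image
            loss_sim = F.mse_loss(out_feats[0], content_feats[0])
            return w_content * loss_c + w_style * loss_s + w_sim * loss_sim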

    Deep Photo Style Transfer

    This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.
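    A locally affine colorspace constraint of this kind can be expressed as a quadratic penalty built from a matting Laplacian of the input photograph. Below is a minimal sketch, assuming a precomputed sparse Laplacian matrix L (its construction is outside this sketch); the names and shapes are illustrative assumptions.

        import torch

        def photorealism_loss(output, L):
            # output: (3, H, W) stylized image; L: sparse (H*W, H*W) matting
            # Laplacian of the input photo. The penalty is small only when each
            # output channel is locally an affine function of the input colors.
            loss = output.new_zeros(())
            for c in range(output.shape[0]):
                v = output[c].reshape(-1, 1)               # vectorized channel
                loss = loss + (v.t() @ torch.sparse.mm(L, v)).squeeze()
            return loss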

    Fast photographic style transfer based on convolutional neural networks

    © 2018 ACM. Techniques for photographic style transfer, which seek effective ways to transfer the style of a reference photo onto another content photograph, have been researched for a long time. Recent works based on convolutional neural networks present an effective solution for style transfer, especially for paintings. The artistic transformation results are visually appealing; however, photorealism is lost because of content mismatching and distortions, even when both input images are photographic. To tackle this challenge, this paper introduces a similarity loss function and a refinement method into the style transfer network. The similarity loss function solves the content-mismatching problem, but distortion and noise artefacts may still remain in the stylized results due to the content-style trade-off; hence, we add a post-processing refinement step to reduce these artefacts. The robustness and effectiveness of our approach have been evaluated through extensive experiments, which show that our method obtains finer content details and fewer artefacts than state-of-the-art methods while transferring style faithfully. In addition, our approach can process photographic style transfer in almost real time, which makes it a potential solution for video style transfer.
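    A minimal sketch of the two-stage inference pipeline implied above: a trained feed-forward stylization network followed by an edge-preserving refinement step. Using a guided-filter-style refiner here is an assumption for illustration; the paper's exact refinement method may differ.

        import torch.nn as nn

        class StylizePipeline(nn.Module):
            def __init__(self, transform_net, refiner):
                super().__init__()
                self.transform_net = transform_net  # fast feed-forward stylizer
                self.refiner = refiner              # edge-preserving post-process,
                                                    # e.g. a guided filter module

            def forward(self, content):
                coarse = self.transform_net(content)   # stylized but possibly noisy
                return self.refiner(content, coarse)   # refine, guided by content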

    Structure Preserving regularizer for Neural Style Transfer

    The aim of this project is to generate an image in the style of a work by a well-known artist. The experiments use artificial neural networks to transfer the style of one image onto another: in computer vision terms, capturing the content-invariant component of an image, i.e. its style, and applying it to the content of another image. The algorithm first extracts the required feature tensors from the content and style images; it then optimizes an input image, initialized with noise, to minimize both the loss against the content image and the loss against the style image, thereby combining the essence of the two images into one. Images generated by the traditional style transfer method have an artistic effect: the model successfully captures the style of the reference image but does not preserve the structural content of the content image. The proposed method uses segmented versions of the images to faithfully transfer style between semantically similar content, and modifies the loss function with a regularizer term that helps avoid style spill-over and yields photographic results, as in the sketch below.
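    A minimal sketch of a segmentation-aware style loss consistent with the description above, assuming binary masks that mark semantically matching regions in the output and style images; the mask handling and normalization are illustrative assumptions.

        import torch
        import torch.nn.functional as F

        def masked_gram(feat, mask):
            # feat: (C, H, W) feature map; mask: (H, W) with values in [0, 1]
            m = F.interpolate(mask[None, None], size=feat.shape[1:], mode='nearest')[0]
            f = (feat * m).reshape(feat.shape[0], -1)
            return f @ f.t() / (m.sum() * feat.shape[0] + 1e-8)

        def segmented_style_loss(out_feat, style_feat, out_masks, style_masks):
            # Match style statistics only between regions with the same semantic
            # label, limiting style spill-over across segment boundaries.
            return sum(F.mse_loss(masked_gram(out_feat, mo),
                                  masked_gram(style_feat, ms))
                       for mo, ms in zip(out_masks, style_masks))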

    Deep Bilateral Learning for Real-Time Image Enhancement

    Performance is a critical challenge in mobile image processing. Given a reference imaging pipeline, or even human-adjusted pairs of images, we seek to reproduce the enhancements and enable real-time evaluation. For this, we introduce a new neural network architecture inspired by bilateral grid processing and local affine color transforms. Using pairs of input/output images, we train a convolutional neural network to predict the coefficients of a locally-affine model in bilateral space. Our architecture learns to make local, global, and content-dependent decisions to approximate the desired image transformation. At runtime, the neural network consumes a low-resolution version of the input image, produces a set of affine transformations in bilateral space, upsamples those transformations in an edge-preserving fashion using a new slicing node, and then applies those upsampled transformations to the full-resolution image. Our algorithm processes high-resolution images on a smartphone in milliseconds, provides a real-time viewfinder at 1080p resolution, and matches the quality of state-of-the-art approximation techniques on a large class of image operators. Unlike previous work, our model is trained off-line from data and therefore does not require access to the original operator at runtime. This allows our model to learn complex, scene-dependent transformations for which no reference implementation is available, such as the photographic edits of a human retoucher. Comment: 12 pages, 14 figures, SIGGRAPH 2017.
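    A minimal sketch of the final step described above, in which per-pixel affine color transforms are applied to the full-resolution image; it assumes the edge-preserving slicing stage has already produced a coefficient map coeffs, and the tensor layout is an illustrative assumption.

        import torch

        def apply_affine_coeffs(image, coeffs):
            # image:  (B, 3, H, W) full-resolution input
            # coeffs: (B, 12, H, W), i.e. a 3x4 affine color matrix per pixel
            b, _, h, w = image.shape
            A = coeffs.view(b, 3, 4, h, w)
            out = A[:, :, 3]                                # per-pixel bias term
            for i in range(3):
                out = out + A[:, :, i] * image[:, i:i + 1]  # 3x3 color mixing
            return out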