3,759 research outputs found

    Stochastic Modeling and Resolution-Free Rendering of Film Grain

    Get PDF
    The realistic synthesis and rendering of film grain is a crucial goal for many amateur and professional photographers and film-makers whose artistic works require the authentic feel of analog photography. The objective of this work is to propose an algorithm that reproduces the visual aspect of film grain texture on any digital image. Previous approaches to this problem either propose unrealistic models or simply blend scanned images of film grain with the digital image, in which case the result is inevitably limited by the quality and resolution of the initial scan. In this work, we introduce a stochastic model to approximate the physical reality of film grain, and propose a resolution-free rendering algorithm to simulate realistic film grain for any digital input image. By varying the parameters of this model, we can achieve a wide range of grain types. We demonstrate this by comparing our results with film grain examples from dedicated software, and show that our rendering results closely resemble those of real film emulsions. In addition to realistic grain rendering, our resolution-free algorithm allows for any desired zoom factor, even down to the scale of the microscopic grains themselves.
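    As a rough illustration of the approach the abstract describes, the sketch below renders grain as a Boolean model: grain centres follow a Poisson process whose intensity is chosen so that the expected disk coverage matches the local grey level, and each output pixel is a Monte Carlo estimate of coverage at continuous coordinates, which is what makes the rendering resolution-free. The function name, the square sampling window, and all parameter choices are our assumptions, not the paper's exact algorithm.

    import numpy as np

    def render_film_grain(u, zoom=2.0, r=0.08, n_mc=64, seed=0):
        # u: input image as floats in [0, 1); r: grain radius in input-pixel units.
        # Hypothetical sketch of Boolean-model grain rendering, not the paper's code.
        rng = np.random.default_rng(seed)
        H, W = u.shape
        out = np.zeros((int(H * zoom), int(W * zoom)))
        area = np.pi * r * r
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                y, x = (i + 0.5) / zoom, (j + 0.5) / zoom      # continuous input coords
                gray = u[min(int(y), H - 1), min(int(x), W - 1)]
                # intensity chosen so that 1 - exp(-lam * pi * r^2) == gray
                lam = -np.log(max(1.0 - gray, 1e-6)) / area
                hits = 0
                for _ in range(n_mc):
                    py = y + (rng.random() - 0.5) / zoom       # jitter in pixel footprint
                    px = x + (rng.random() - 0.5) / zoom
                    n = rng.poisson(lam * (2 * r) ** 2)        # centres within reach
                    cy = py + (rng.random(n) - 0.5) * 2 * r
                    cx = px + (rng.random(n) - 0.5) * 2 * r
                    hits += np.any((cy - py) ** 2 + (cx - px) ** 2 <= r * r)
                out[i, j] = hits / n_mc                        # coverage estimate
        return out

    Because evaluation happens at arbitrary continuous coordinates, any zoom factor works without resampling a stored texture; the per-pixel loops are deliberately naive for clarity.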

    Optimizing Image Compression via Joint Learning with Denoising

    Full text link
    High levels of noise are common in images captured today due to the relatively small sensors in smartphone cameras, and this noise poses extra challenges for lossy image compression algorithms. Unable to tell the difference between image detail and noise, general image compression methods allocate additional bits to explicitly store the undesired noise during compression and then restore the unpleasantly noisy image during decompression. Based on these observations, we make the compression algorithm noise-aware, performing joint denoising and compression to resolve this bit-misallocation problem. The key is to transform the original noisy images into noise-free bits by eliminating the undesired noise during compression, so that the bits are later decompressed into clean images. Specifically, we propose a novel two-branch, weight-sharing architecture with plug-in feature denoisers that realizes this goal simply and effectively, at little computational cost. Experimental results show that our method gains a significant improvement over existing baseline methods on both synthetic and real-world datasets. Our source code is available at https://github.com/felixcheng97/DenoiseCompression. Comment: Accepted to ECCV 2022.
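    The architecture sentence above is concrete enough to sketch. Below is a hedged PyTorch illustration of a two-branch, weight-sharing encoder with plug-in feature denoisers: the noisy branch runs the shared analysis convolutions with residual denoisers inserted between stages, while the clean branch reuses the same convolutions to supply target latents. The module names and the latent-alignment loss are our guesses at the general idea; the authors' actual implementation is in the linked repository.

    import torch
    import torch.nn as nn

    class FeatureDenoiser(nn.Module):
        # lightweight plug-in module that cleans intermediate features
        def __init__(self, ch):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1))
        def forward(self, x):
            return x + self.body(x)   # residual cleanup

    class TwoBranchEncoder(nn.Module):
        # One shared analysis transform, used by both branches (weight sharing).
        # Noisy branch: shared convs + denoisers; clean branch: shared convs only.
        def __init__(self, ch=64):
            super().__init__()
            self.stage1 = nn.Conv2d(3, ch, 5, stride=2, padding=2)
            self.stage2 = nn.Conv2d(ch, ch, 5, stride=2, padding=2)
            self.den1 = FeatureDenoiser(ch)
            self.den2 = FeatureDenoiser(ch)

        def forward(self, noisy, clean=None):
            # noisy branch: denoise features between the shared stages
            y = self.den2(self.stage2(self.den1(self.stage1(noisy))))
            if clean is None:
                return y
            # clean branch reuses the *same* weights; gradients stopped here is
            # one plausible choice, not necessarily the authors'
            with torch.no_grad():
                y_clean = self.stage2(self.stage1(clean))
            # push noisy-branch latents toward "noise-free bits"
            aux_loss = nn.functional.mse_loss(y, y_clean)
            return y, aux_loss

    Sharing the analysis weights keeps the added cost to the small denoiser modules, which matches the abstract's claim of little computational overhead.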

    Encoding in the Dark Grand Challenge: An Overview

    Get PDF
    A big part of the video content we consume from video providers consists of genres featuring low-light aesthetics. Low-light sequences have special characteristics, such as spatio-temporally varying acquisition noise and light flickering, that make the encoding process challenging. To cope with this spatio-temporally incoherent noise, higher bitrates are used to achieve high objective quality. Additionally, quality assessment metrics and methods have not been designed, trained or tested for this type of content. This inspired us to trigger research in the area and propose a Grand Challenge on encoding low-light video sequences. In this paper, we present an overview of the proposed challenge and test state-of-the-art methods that will form part of the benchmark when assessing participants' deliverables. From this exploration, our results show that VVC already achieves high performance compared with simply denoising the video source prior to encoding. Moreover, the quality of the video streams can be further improved by employing a post-processing image enhancement method.
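    To make the comparison in the abstract concrete, here is a minimal sketch of the two pipelines being weighed against each other: direct encoding of the noisy source versus denoising prior to encoding, followed by a crude objective check. The file names are hypothetical, libx265 merely stands in for a VVC encoder (the challenge concerns VVC, e.g. VTM or vvenc), and ffmpeg's hqdn3d is just one off-the-shelf spatio-temporal denoiser.

    import subprocess

    SRC = "lowlight_source.mp4"   # hypothetical noisy low-light sequence

    def run(cmd):
        subprocess.run(cmd, check=True)

    # Pipeline A: encode the noisy source directly
    run(["ffmpeg", "-y", "-i", SRC,
         "-c:v", "libx265", "-crf", "28", "encoded_direct.mp4"])

    # Pipeline B: denoise first, then encode with the same quality setting
    run(["ffmpeg", "-y", "-i", SRC, "-vf", "hqdn3d=4:3:6:4.5",
         "-c:v", "libx265", "-crf", "28", "encoded_denoised.mp4"])

    # Compare each encode against the source (PSNR is printed to stderr).
    # Note: PSNR against the *noisy* source penalizes denoising, which is one
    # reason the challenge also relies on assessment beyond plain metrics.
    for enc in ("encoded_direct.mp4", "encoded_denoised.mp4"):
        run(["ffmpeg", "-i", enc, "-i", SRC, "-lavfi", "psnr", "-f", "null", "-"])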

    Artificial Intelligence in the Creative Industries: A Review

    Full text link
    This paper reviews the current state of the art in Artificial Intelligence (AI) technologies and applications in the context of the creative industries. A brief background of AI, and specifically Machine Learning (ML) algorithms, is provided, including Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), Recurrent Neural Networks (RNNs) and Deep Reinforcement Learning (DRL). We categorise creative applications into five groups related to how AI technologies are used: i) content creation, ii) information analysis, iii) content enhancement and post-production workflows, iv) information extraction and enhancement, and v) data compression. We critically examine the successes and limitations of this rapidly advancing technology in each of these areas. We further differentiate between the use of AI as a creative tool and its potential as a creator in its own right. We foresee that, in the near future, machine learning-based AI will be adopted widely as a tool or collaborative assistant for creativity. In contrast, we observe that the successes of machine learning in domains with fewer constraints, where AI is the `creator', remain modest. The potential of AI (or its developers) to win awards for its original creations in competition with human creatives is also limited, based on contemporary technologies. We therefore conclude that, in the context of creative industries, maximum benefit from AI will be derived where its focus is human-centric -- where it is designed to augment, rather than replace, human creativity.

    Signal processing for high-definition television

    Get PDF
    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Mathematics, 1995. Includes bibliographical references (p. 60-62). By Peter Monta.

    Index to NASA Tech Briefs, January - June 1967

    Get PDF
    Technological innovations for January - June 1967, with abstracts and subject index.

    Digital watermarking and novel security devices

    Get PDF
    EThOS - Electronic Theses Online Service, GB, United Kingdom.