19 research outputs found

    RecolorNeRF: Layer Decomposed Radiance Fields for Efficient Color Editing of 3D Scenes

    Radiance fields have gradually become a mainstream representation of media. Although appearance editing of radiance fields has been studied, how to achieve view-consistent recoloring efficiently remains underexplored. We present RecolorNeRF, a novel user-friendly color editing approach for neural radiance fields. Our key idea is to decompose the scene into a set of pure-colored layers, forming a palette. By this means, color manipulation can be conducted by directly altering the color components of the palette. To support efficient palette-based editing, the color of each layer needs to be as representative as possible. The problem is ultimately formulated as an optimization in which the layers and their blending weights are jointly optimized with the NeRF itself. Extensive experiments show that our jointly optimized layer decomposition works with multiple backbones and produces photo-realistic recolored novel-view renderings. We demonstrate that RecolorNeRF outperforms baseline methods both quantitatively and qualitatively for color editing, even in complex real-world scenes. Comment: To appear in ACM Multimedia 2023. Project website is accessible at https://sites.google.com/view/recolorner
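    The core recoloring idea described in this abstract is that each rendered color is a weighted mix of a small set of pure palette colors, so editing one palette entry recolors the whole scene consistently. Below is a minimal, hedged sketch of that blending step only; the function names, array shapes, and toy data are illustrative assumptions, not RecolorNeRF's actual implementation, which optimizes the weights jointly with the NeRF.

    ```python
    import numpy as np

    def blend_with_palette(weights, palette):
        """Compose per-pixel colors as a weighted mix of pure-colored layers.

        weights: (H, W, K) non-negative blending weights per pixel
                 (in the paper these come from the optimized radiance field).
        palette: (K, 3) RGB colors, one per layer.
        """
        return weights @ palette  # (H, W, 3)

    def recolor(weights, palette, layer_index, new_rgb):
        """Recolor by editing a single palette entry; the weights stay fixed."""
        edited = palette.copy()
        edited[layer_index] = new_rgb
        return blend_with_palette(weights, edited)

    # Toy usage: 2 layers, a 1x2 "image"
    palette = np.array([[0.9, 0.1, 0.1],   # reddish layer
                        [0.1, 0.2, 0.8]])  # bluish layer
    weights = np.array([[[0.7, 0.3], [0.2, 0.8]]])
    original = blend_with_palette(weights, palette)
    recolored = recolor(weights, palette, layer_index=0, new_rgb=[0.1, 0.8, 0.2])
    ```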

    Color Recommendation for Vector Graphic Documents based on Multi-Palette Representation

    Vector graphic documents present multiple visual elements, such as images, shapes, and texts. Choosing appropriate colors for these elements is a difficult but crucial task for both amateurs and professional designers. Instead of creating a single color palette for all elements, we extract a color palette from each visual element in a graphic document and then combine the palettes into a color sequence. We propose a masked color model for color sequence completion, which recommends colors for specified positions with high probability based on the color context across the multiple palettes. We train the model and build a color recommendation system on a large-scale dataset of vector graphic documents. The proposed color recommendation method outperformed other state-of-the-art methods in both quantitative and qualitative evaluations of color prediction, and our color recommendation system received positive feedback from professional designers in an interview study. Comment: Accepted to WACV 202
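    The abstract describes a masked-prediction setup over a color sequence: per-element palettes are concatenated into one sequence, a position is masked, and the model fills it in from the surrounding color context. The sketch below is a toy stand-in for that idea; the class name, tiny transformer architecture, and dimensions are assumptions for illustration and not the paper's model.

    ```python
    import torch
    import torch.nn as nn

    class MaskedColorModel(nn.Module):
        """Toy masked color model: predicts RGB values at masked positions of a
        color sequence built by concatenating per-element palettes.
        (Placeholder architecture, not the paper's exact model.)"""
        def __init__(self, d_model=64, nhead=4, num_layers=2):
            super().__init__()
            self.embed = nn.Linear(3, d_model)          # RGB -> token embedding
            self.mask_token = nn.Parameter(torch.zeros(d_model))
            enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                                   batch_first=True)
            self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
            self.head = nn.Linear(d_model, 3)           # back to RGB

        def forward(self, colors, mask):
            # colors: (B, L, 3) in [0, 1]; mask: (B, L) bool, True = hidden color
            x = self.embed(colors)
            x = torch.where(mask.unsqueeze(-1), self.mask_token, x)
            return torch.sigmoid(self.head(self.encoder(x)))

    # Usage: palettes from several document elements, flattened into one sequence
    seq = torch.rand(1, 9, 3)                 # e.g. three 3-color palettes
    mask = torch.zeros(1, 9, dtype=torch.bool)
    mask[0, 4] = True                         # ask for a recommendation at slot 4
    pred = MaskedColorModel()(seq, mask)      # (1, 9, 3); read off pred[0, 4]
    ```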

    Text-guided Image-and-Shape Editing and Generation: A Short Survey

    Image and shape editing are ubiquitous among digital artworks. Graphics algorithms enable artists and designers to achieve their desired editing intents without tedious manual retouching. With recent advances in machine learning, artists' editing intents can even be driven by text, using a variety of well-trained neural networks. These approaches have achieved considerable success in tasks such as generating photorealistic images, artworks, and human poses, stylizing meshes from text, and auto-completion given image and shape priors. In this short survey, we provide an overview of over 50 papers on state-of-the-art (text-guided) image-and-shape generation techniques. We start with an overview of recent editing algorithms in the introduction. Then, we provide a comprehensive review of text-guided editing techniques for 2D and 3D independently, where each sub-section begins with a brief background introduction. We also contextualize editing algorithms under recent implicit neural representations. Finally, we conclude the survey with a discussion of existing methods and potential research ideas. Comment: 10 pages

    Image Color Correction, Enhancement, and Editing

    This thesis presents methods and approaches to image color correction, color enhancement, and color editing. To begin, we study the color correction problem from the standpoint of the camera's image signal processor (ISP). A camera's ISP is hardware that applies a series of in-camera image processing and color manipulation steps, many of which are nonlinear in nature, to render the initial sensor image into its final photo-finished representation saved in the 8-bit standard RGB (sRGB) color space. As white balance (WB) is one of the major procedures the ISP applies for color correction, this thesis presents two different methods for ISP white balancing. Afterwards, we discuss another scenario of correcting and editing image colors, presenting a set of methods to correct and edit WB settings for images that have been improperly white-balanced by the ISP. Then, we explore another factor that has a significant impact on the quality of camera-rendered colors, namely exposure, and outline two different methods to correct exposure errors in camera-rendered images. Lastly, we discuss post-capture auto color editing and manipulation. In particular, we propose auto image recoloring methods to generate different realistic versions of the same camera-rendered image with new colors. Through extensive evaluations, we demonstrate that our methods provide superior solutions compared to existing alternatives targeting color correction, color enhancement, and color editing.
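    For context on the white-balance step the abstract refers to, ISP white balancing is conventionally modeled as a per-channel (diagonal, von Kries) gain applied in the linear sensor space. The sketch below illustrates that conventional model with a simple gray-world illuminant estimate; it is a generic textbook example under those assumptions, not one of the thesis's proposed methods.

    ```python
    import numpy as np

    def gray_world_white_balance(raw_rgb):
        """Diagonal (von Kries) white balance using the gray-world assumption:
        scale each channel so the image's mean color becomes achromatic.
        Illustrative only; real ISPs estimate the illuminant far more carefully.

        raw_rgb: (H, W, 3) linear sensor image with values in [0, 1].
        """
        channel_means = raw_rgb.reshape(-1, 3).mean(axis=0)
        gains = channel_means.mean() / np.maximum(channel_means, 1e-6)
        return np.clip(raw_rgb * gains, 0.0, 1.0)

    # A bluish color cast: the blue channel reads high everywhere
    img = np.random.rand(64, 64, 3) * np.array([0.8, 0.8, 1.0])
    balanced = gray_world_white_balance(img)
    ```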

    27th Annual European Symposium on Algorithms: ESA 2019, September 9-11, 2019, Munich/Garching, Germany


    Parallel iterative solvers for real-time elastic deformations

    Physics-based animation of elastic materials allows us to simulate dynamic deformable objects such as fabrics, human tissue, and hair. Due to their complex inner mechanical behaviour, it is difficult to replicate their motions interactively and accurately at the same time. This course introduces students and practitioners to several parallel iterative techniques that tackle this problem and achieve elastic deformations in real time. We focus on techniques for applications such as video games and interactive design, with fixed and small hard time budgets available for physically-based animation, and where responsiveness and stability are often more important than accuracy, as long as the results are believable. The course focuses on solvers able to fully exploit the computational capabilities of modern GPU architectures, effectively solving systems of hundreds of thousands of nonlinear equations in a matter of a few milliseconds. The course introduces the basic concepts concerning physics-based elastic objects and provides an overview of the different types of numerical solvers available in the literature. Then, we show how some variants of traditional solvers can address real-time animation and assess them in terms of accuracy, robustness, and performance. Practical examples are provided throughout the course, in particular on how to apply the described solvers to Projective Dynamics and Position-Based Dynamics, two recent and popular physics models for elastic materials.
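    As a hedged illustration of why this family of solvers maps well to GPUs, here is a plain Jacobi iteration for a linear system A x = b: within one sweep every unknown updates independently of the others, so an iteration parallelizes trivially (one thread per unknown). The course's solvers for Projective Dynamics and Position-Based Dynamics are more elaborate variants of this idea; the code below is a minimal textbook sketch, not material from the course.

    ```python
    import numpy as np

    def jacobi(A, b, iterations=100, x0=None):
        """Jacobi iteration for A x = b. Every component of x is updated from
        the previous iterate only, so each sweep is embarrassingly parallel."""
        D = np.diag(A)                 # diagonal entries of A
        R = A - np.diagflat(D)         # off-diagonal part
        x = np.zeros_like(b) if x0 is None else x0.copy()
        for _ in range(iterations):
            x = (b - R @ x) / D        # independent per-unknown update
        return x

    # Small diagonally dominant system (Jacobi is guaranteed to converge here)
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 5.0, 2.0],
                  [0.0, 2.0, 6.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(jacobi(A, b))                # close to np.linalg.solve(A, b)
    ```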

    Synthetic image generation and the use of virtual environments for image enhancement tasks

    Deep learning networks are often difficult to train when there are insufficient image samples, and gathering real-world images tailored to a specific task requires considerable effort. This dissertation explores techniques for synthetic image generation and virtual environments for various image enhancement/correction/restoration tasks, specifically distortion correction, dehazing, shadow removal, and intrinsic image decomposition. First, given various image formation equations, such as those used in distortion correction and dehazing, synthetic image samples can be produced, provided that the equation is well-posed. Second, virtual environments can be used to train various image models by simulating real-world effects, such as haze and shadows, that are otherwise difficult to gather or replicate. Given synthetic images, one cannot train a network directly on them, as there is a possible gap between the synthetic and real domains. We have devised several techniques for generating synthetic images and formulated domain adaptation methods where our trained deep-learning networks perform competitively in distortion correction, dehazing, and shadow removal. Additional studies and directions are provided for the intrinsic image decomposition problem and the exploration of procedural content generation, where a virtual Philippine city was created as an initial prototype.
    Keywords: image generation, image correction, image dehazing, shadow removal, intrinsic image decomposition, computer graphics, rendering, machine learning, neural networks, domain adaptation, procedural content generation
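    As one concrete example of generating samples from a well-posed image formation equation, the standard atmospheric scattering model I = J * t + A * (1 - t), with transmission t = exp(-beta * depth), can synthesize hazy images from clean images and depth maps. The sketch below illustrates that generic model under stated assumptions; it is not claimed to be the dissertation's specific pipeline, and the parameter values are arbitrary.

    ```python
    import numpy as np

    def synthesize_haze(clean, depth, beta=1.0, airlight=0.9):
        """Render a hazy training sample with the atmospheric scattering model
            I = J * t + A * (1 - t),   t = exp(-beta * depth)
        clean:    (H, W, 3) haze-free image in [0, 1]
        depth:    (H, W) scene depth (e.g. a virtual environment's z-buffer)
        beta:     scattering coefficient (haze density)
        airlight: global atmospheric light A
        """
        t = np.exp(-beta * depth)[..., None]        # per-pixel transmission map
        return clean * t + airlight * (1.0 - t)

    # Toy usage: a flat gray image and a linear depth ramp
    clean = np.full((4, 8, 3), 0.5)
    depth = np.tile(np.linspace(0.0, 5.0, 8), (4, 1))
    hazy = synthesize_haze(clean, depth, beta=0.6)
    ```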