5,747 research outputs found

    Semantic portrait color transfer with internet images

    We present a novel color transfer method for portraits by exploring their high-level semantic information. First, a database is set up that consists of a collection of portrait images downloaded from the Internet, each of which is manually segmented using image matting as a preprocessing step. Second, we search the database using Face++ to find images with poses similar to a given source portrait image, and choose one satisfactory image from the results as the target. Third, we extract the portrait foregrounds from both the source and target images. Then, the system extracts semantic regions, such as faces, eyes, eyebrows, lips, and teeth, from the extracted foreground of the source using image matting algorithms. After that, we perform color transfer between corresponding parts that share the same semantic information. We obtain the final transferred result by seamlessly compositing the different parts together using alpha blending. Experimental results show that our semantics-driven approach generates better color transfer results for portraits than previous methods and provides users with a new means to retouch their portraits.
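
    A minimal sketch of the per-part idea, assuming the semantic masks and alpha mattes have already been extracted. The statistics-matching step below follows the classic Reinhard et al. Lab-space transfer that pipelines of this kind typically build on; the paper's exact transfer operator may differ, and all function names here are illustrative:

    ```python
    # Hedged sketch: per-region statistics color transfer in Lab space,
    # followed by alpha-blended compositing. Masks/mattes are assumed given.
    import cv2
    import numpy as np

    def transfer_region(src_lab, tgt_lab, src_mask, tgt_mask):
        """Match the mean/std of one semantic part in the source to the
        corresponding part in the target (Reinhard-style transfer)."""
        out = src_lab.copy()
        s = src_lab[src_mask]                      # (N, 3) pixels of this part
        t = tgt_lab[tgt_mask]
        mu_s, sd_s = s.mean(axis=0), s.std(axis=0) + 1e-6
        mu_t, sd_t = t.mean(axis=0), t.std(axis=0)
        out[src_mask] = (s - mu_s) / sd_s * sd_t + mu_t
        return out

    def semantic_color_transfer(src_bgr, tgt_bgr, part_masks):
        """part_masks: list of (src_mask, tgt_mask) boolean HxW arrays,
        one pair per semantic part (face, eyes, lips, ...)."""
        src = cv2.cvtColor(src_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        tgt = cv2.cvtColor(tgt_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        for src_mask, tgt_mask in part_masks:
            src = transfer_region(src, tgt, src_mask, tgt_mask)
        src = np.clip(src, 0, 255).astype(np.uint8)
        return cv2.cvtColor(src, cv2.COLOR_LAB2BGR)

    def alpha_composite(fg, bg, alpha):
        """Final compositing step; alpha is an HxWx1 matte in [0, 1]."""
        return (alpha * fg + (1 - alpha) * bg).astype(np.uint8)
    ```

    Transferring each part against its semantically matching counterpart, rather than matching global image statistics, is what keeps skin, lips, and teeth from bleeding into each other's color distributions.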

    Image-to-Image Translation with Conditional Adversarial Networks

    We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
    Comment: Website: https://phillipi.github.io/pix2pix/, CVPR 2017
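
    The paper's objective combines a conditional GAN loss with an L1 reconstruction term, G* = arg min_G max_D L_cGAN(G, D) + λ L_L1(G). Below is a minimal PyTorch sketch of one training step under that objective; the network definitions and optimizers are assumed, and λ = 100 follows the paper's setting:

    ```python
    # Hedged sketch of one pix2pix-style training step. G maps input x to
    # y_hat; D scores channel-concatenated (input, output) pairs.
    import torch
    import torch.nn.functional as F

    def train_step(G, D, opt_g, opt_d, x, y, lambda_l1=100.0):
        # --- discriminator: push real pairs toward 1, fake pairs toward 0 ---
        y_hat = G(x)
        d_real = D(torch.cat([x, y], dim=1))
        d_fake = D(torch.cat([x, y_hat.detach()], dim=1))
        loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
                  + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # --- generator: fool D, plus L1 to stay close to the ground truth ---
        d_fake = D(torch.cat([x, y_hat], dim=1))
        loss_g = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
                  + lambda_l1 * F.l1_loss(y_hat, y))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
        return loss_d.item(), loss_g.item()
    ```

    The L1 term captures low-frequency structure while the learned adversarial loss supplies the high-frequency detail; this is the sense in which the loss, not just the mapping, is learned.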

    Hierarchy Composition GAN for High-fidelity Image Synthesis

    Despite the rapid progress of generative adversarial networks (GANs) in image synthesis in recent years, existing image synthesis approaches work in either the geometry domain or the appearance domain alone, which often introduces various synthesis artifacts. This paper presents an innovative Hierarchy Composition GAN (HIC-GAN) that incorporates image synthesis in the geometry and appearance domains into an end-to-end trainable network and achieves superior synthesis realism in both domains simultaneously. We design a hierarchical composition mechanism that learns realistic composition geometry and handles occlusions when multiple foreground objects are involved in image composition. In addition, we introduce a novel attention mask mechanism that guides the adaptation of foreground object appearance and also provides a better training reference for learning in the geometry domain. Extensive experiments on scene text image synthesis, portrait editing and indoor rendering tasks show that the proposed HIC-GAN achieves superior synthesis performance both qualitatively and quantitatively.
    Comment: 11 pages, 8 figures
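
    To make the two-domain split concrete, here is an illustrative PyTorch sketch (not the paper's actual architecture): a geometry branch predicts an affine warp that places the foreground, and an appearance branch predicts a soft attention mask that gates the blend. GeoNet and AttnNet are hypothetical modules standing in for the paper's unspecified networks:

    ```python
    # Illustrative sketch of geometry-plus-appearance composition, assuming
    # geo_net outputs a flattened 2x3 affine matrix and attn_net outputs a
    # one-channel logit map. Both modules are hypothetical placeholders.
    import torch
    import torch.nn.functional as F

    def compose(fg, bg, geo_net, attn_net):
        """fg, bg: (B, 3, H, W) foreground object and background scene."""
        # Geometry domain: predict an affine warp and place the foreground.
        theta = geo_net(torch.cat([fg, bg], dim=1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, fg.size(), align_corners=False)
        fg_warped = F.grid_sample(fg, grid, align_corners=False)

        # Appearance domain: a soft attention mask in [0, 1] gates where
        # (and how strongly) the warped foreground replaces the background.
        mask = torch.sigmoid(attn_net(torch.cat([fg_warped, bg], dim=1)))
        return mask * fg_warped + (1 - mask) * bg
    ```

    Because warping, masking, and blending are all differentiable, a realism loss on the composite can propagate gradients to both branches at once, which is what makes joint end-to-end training of the two domains possible.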