
    Markov Weight Fields for face sketch synthesis

    Posters 1C - Vision for Graphics, Sensors, Medical, Vision for Robotics, Applications

    Great progress has been made in face sketch synthesis in recent years. State-of-the-art methods commonly apply a Markov Random Field (MRF) model to select local sketch patches from a set of training data. Such methods, however, have two major drawbacks. First, the MRF model used cannot synthesize new sketch patches. Second, the optimization problem in solving the MRF is NP-hard. In this paper, we propose a novel Markov Weight Fields (MWF) model that is capable of synthesizing new sketch patches. We formulate our model as a convex quadratic programming (QP) problem for which the optimal solution is guaranteed. Based on the Markov property of our model, we further propose a cascade decomposition method (CDM) for solving such a large-scale QP problem efficiently. Experimental results on the CUHK face sketch database and celebrity photos show that our model outperforms the common MRF model used in other state-of-the-art methods. © 2012 IEEE.

    The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, 16-21 June 2012. In IEEE Conference on Computer Vision and Pattern Recognition Proceedings, 2012, p. 1091-1098.
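    The MWF formulation lends itself to a compact illustration. Below is a minimal, hypothetical Python sketch of the per-patch data term only: combination weights over candidate training patches are estimated in the photo domain as a nonnegative least-squares problem (a convex QP) and transferred to the paired sketch patches, which is what lets the model blend candidates into new sketch patches rather than select a single one as an MRF would. The names (synthesize_patch, P_photo, P_sketch) are illustrative, and the paper's neighborhood smoothness term and cascade decomposition are omitted.

    # A minimal sketch of the MWF idea for a single patch, assuming candidate
    # training photo patches are stacked as the columns of P_photo. The paper
    # additionally couples neighboring patches with a smoothness term and
    # solves the full convex QP with its cascade decomposition; only the
    # per-patch data term is kept here.
    import numpy as np
    from scipy.optimize import nnls

    def synthesize_patch(P_photo, P_sketch, x):
        """Estimate nonnegative combination weights in the photo domain,
        then transfer them to the paired sketch patches.

        P_photo  : (d, K) candidate photo patches (columns)
        P_sketch : (d, K) paired sketch patches (columns)
        x        : (d,)   test photo patch
        """
        # min_w ||P_photo @ w - x||^2  subject to  w >= 0   (a convex QP)
        w, _residual = nnls(P_photo, x)
        # Unlike MRF patch selection (w forced to be a one-hot indicator),
        # the weights can blend several candidates, i.e. synthesize a new
        # sketch patch that never appears in the training set.
        return P_sketch @ w

    # Toy usage with random data standing in for training patches.
    rng = np.random.default_rng(0)
    P_photo, P_sketch = rng.random((64, 10)), rng.random((64, 10))
    x = rng.random(64)
    patch = synthesize_patch(P_photo, P_sketch, x)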

    High-Quality Facial Photo-Sketch Synthesis Using Multi-Adversarial Networks

    Synthesizing face sketches from real photos, and the inverse task of recovering photos from sketches, has many applications. Photo/sketch synthesis remains a challenging problem, however, because photos and sketches have very different characteristics. In this work, we treat the task as an image-to-image translation problem and explore the recently popular generative adversarial networks (GANs) to generate high-quality realistic photos from sketches and sketches from photos. Recent GAN-based methods have shown promising results on image-to-image translation problems, and on photo-to-sketch synthesis in particular; however, they are known to have limited ability to generate high-resolution realistic images. To this end, we propose a novel synthesis framework, Photo-Sketch Synthesis using Multi-Adversarial Networks (PS2-MAN), that iteratively generates images from low resolution to high resolution in an adversarial way. The hidden layers of the generator are supervised to first generate lower-resolution images, followed by implicit refinement in the network to generate higher-resolution images. Furthermore, since photo-sketch synthesis is a coupled/paired translation problem, we leverage the pair information using the CycleGAN framework. Both Image Quality Assessment (IQA) and photo-sketch matching experiments are conducted to demonstrate the superior performance of our framework in comparison to existing state-of-the-art solutions. Code is available at https://github.com/lidan1/PhotoSketchMAN.

    Comment: Accepted by the 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018) (Oral)
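    As a rough illustration of the multi-adversarial idea, the hypothetical PyTorch sketch below supervises a toy generator at two resolutions, giving each output scale its own discriminator and summing a least-squares adversarial loss over scales. The actual PS2-MAN architecture, its CycleGAN pairing, and its full loss are richer than this; all names here (MultiScaleG, d_block) are made up for the example.

    # Minimal multi-adversarial sketch: the decoder emits an image at every
    # stage via a 1x1 conv head, and each resolution gets its own critic.
    import torch
    import torch.nn as nn

    class MultiScaleG(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc = nn.Conv2d(3, 32, 3, stride=2, padding=1)            # 64 -> 32
            self.up1 = nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1)  # 32 -> 64
            self.up2 = nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1)  # 64 -> 128
            self.to_img = nn.ModuleList([nn.Conv2d(32, 3, 1) for _ in range(2)])

        def forward(self, x):
            h = torch.relu(self.enc(x))
            outs = []
            for up, head in zip([self.up1, self.up2], self.to_img):
                h = torch.relu(up(h))
                outs.append(torch.tanh(head(h)))   # image supervised at this scale
            return outs                            # [low-res image, high-res image]

    def d_block():  # one tiny PatchGAN-style critic per output resolution
        return nn.Sequential(nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
                             nn.Conv2d(32, 1, 4, 2, 1))

    G = MultiScaleG()
    Ds = [d_block() for _ in range(2)]
    photo = torch.randn(1, 3, 64, 64)
    fakes = G(photo)
    # Sum a least-squares adversarial loss over scales; each scale's
    # discriminator pushes that resolution toward realism.
    adv = sum(torch.mean((D(f) - 1) ** 2) for D, f in zip(Ds, fakes))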

    r-BTN: Cross-domain Face Composite and Synthesis from Limited Facial Patches

    We start by asking an interesting yet challenging question: "If an eyewitness can only recall the eye features of a suspect, so that the forensic artist can only produce a sketch of the eyes (e.g., the top-left sketch shown in Fig. 1), can advanced computer vision techniques help generate the whole face image?" A more general question is whether, when a large proportion (e.g., more than 50%) of the face/sketch is missing, a realistic whole face sketch/image can still be estimated. Existing face completion and generation methods either do not perform domain transfer learning or cannot handle large missing areas. For example, inpainting approaches tend to blur the generated region when the missing area is large (i.e., more than 50%). In this paper, we exploit the potential of deep learning networks to fill large missing regions (e.g., as much as 95% missing) and to generate realistic faces with high fidelity across domains. We propose recursive generation by bidirectional transformation networks (r-BTN), which recursively generates a whole face/sketch from a small sketch/face patch. The large missing area and the cross-domain challenge make it difficult to generate satisfactory results with a unidirectional cross-domain learning structure. In contrast, forward and backward bidirectional learning between the face and sketch domains enables recursive estimation of the missing region in an incremental manner (Fig. 1) and yields appealing results. r-BTN also adopts an adversarial constraint to encourage the generation of realistic faces/sketches. Extensive experiments demonstrate the superior performance of r-BTN compared to existing potential solutions.

    Comment: Accepted by AAAI 2018
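    The recursive bidirectional idea can be sketched as a loop that alternates the two translation directions while pinning down the observed patch, so that only the missing region evolves from pass to pass. The toy networks and names below (toy_translator, recursive_estimate, G_s2p, G_p2s) are illustrative assumptions, not the authors' architecture, and the adversarial constraint is omitted.

    # Minimal sketch of the recursive bidirectional loop: two toy translators
    # (sketch->photo and photo->sketch) and a mask marking the known region;
    # each pass re-imposes the observed patch and lets the network pair
    # extend the estimate a little further.
    import torch
    import torch.nn as nn

    def toy_translator():
        return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())

    G_s2p, G_p2s = toy_translator(), toy_translator()   # sketch <-> photo

    def recursive_estimate(sketch_patch, mask, steps=4):
        """sketch_patch: (1,3,H,W), zeros outside the known region.
        mask: (1,1,H,W) binary, 1 where the sketch is observed."""
        sketch = sketch_patch.clone()
        for _ in range(steps):
            photo = G_s2p(sketch)              # forward: sketch -> photo
            sketch = G_p2s(photo)              # backward: photo -> sketch
            # Keep observed pixels fixed; only the missing area is free to
            # evolve across iterations (the incremental estimation idea).
            sketch = mask * sketch_patch + (1 - mask) * sketch
        return sketch, photo

    # Toy usage: only a small corner patch (e.g., the eye region) is known.
    patch = torch.zeros(1, 3, 64, 64)
    mask = torch.zeros(1, 1, 64, 64)
    mask[..., :16, :16] = 1
    patch[:, :, :16, :16] = torch.randn(1, 3, 16, 16).clamp(-1, 1)
    sketch_full, photo_full = recursive_estimate(patch, mask)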