
    High-Quality Facial Photo-Sketch Synthesis Using Multi-Adversarial Networks

    Synthesizing face sketches from real photos, and the inverse, has many applications. However, photo/sketch synthesis remains a challenging problem because photos and sketches have different characteristics. In this work, we treat this task as an image-to-image translation problem and explore the recently popular generative adversarial networks (GANs) to generate high-quality realistic photos from sketches and sketches from photos. Recent GAN-based methods have shown promising results on image-to-image translation problems, and on photo-to-sketch synthesis in particular; however, they are known to have limited ability to generate high-resolution realistic images. To this end, we propose a novel synthesis framework called Photo-Sketch Synthesis using Multi-Adversarial Networks (PS2-MAN) that iteratively generates images from low resolution to high resolution in an adversarial way. The hidden layers of the generator are supervised to first generate lower-resolution images, followed by implicit refinement in the network to generate higher-resolution images. Furthermore, since photo-sketch synthesis is a coupled/paired translation problem, we leverage the pair information using the CycleGAN framework. Both Image Quality Assessment (IQA) and photo-sketch matching experiments are conducted to demonstrate the superior performance of our framework in comparison to existing state-of-the-art solutions. Code available at: https://github.com/lidan1/PhotoSketchMAN. Comment: Accepted by the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018) (Oral).
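
    The multi-adversarial idea can be sketched in a few lines: the generator exposes RGB outputs at several resolutions so that each can be supervised adversarially. The PyTorch code below is a minimal illustration under assumed layer widths and depths; it is not the authors' released architecture (see their repository), and it omits the per-scale discriminators and the CycleGAN cycle-consistency losses that the full framework uses.

```python
# Minimal sketch of multi-scale adversarial supervision in the spirit of PS2-MAN.
# Layer sizes are illustrative assumptions, not the reference implementation.
import torch
import torch.nn as nn

class MultiScaleGenerator(nn.Module):
    """Encoder-decoder that emits RGB outputs at 64x64, 128x128 and 256x256."""
    def __init__(self, ch=64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),            # 256 -> 128
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),       # 128 -> 64
            nn.Conv2d(ch * 2, ch * 4, 4, stride=2, padding=1), nn.ReLU(inplace=True),   # 64 -> 32
        )
        self.up1 = nn.Sequential(nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1), nn.ReLU(inplace=True))  # 32 -> 64
        self.up2 = nn.Sequential(nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.ReLU(inplace=True))      # 64 -> 128
        self.up3 = nn.Sequential(nn.ConvTranspose2d(ch, ch, 4, 2, 1), nn.ReLU(inplace=True))          # 128 -> 256
        # 1x1 convolutions turn intermediate features into RGB images that
        # per-scale discriminators could criticise during training.
        self.to_rgb_64 = nn.Conv2d(ch * 2, 3, 1)
        self.to_rgb_128 = nn.Conv2d(ch, 3, 1)
        self.to_rgb_256 = nn.Conv2d(ch, 3, 1)

    def forward(self, x):
        h = self.encode(x)
        h64 = self.up1(h)
        h128 = self.up2(h64)
        h256 = self.up3(h128)
        return self.to_rgb_64(h64), self.to_rgb_128(h128), self.to_rgb_256(h256)

if __name__ == "__main__":
    g = MultiScaleGenerator()
    outs = g(torch.randn(1, 3, 256, 256))
    print([tuple(o.shape) for o in outs])  # outputs at 64x64, 128x128, 256x256
```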

    Fast Preprocessing for Robust Face Sketch Synthesis

    Exemplar-based face sketch synthesis methods usually face the challenge that input photos are captured under lighting conditions different from those of the training photos. The critical step that causes failure is the search for similar patch candidates for an input photo patch. Conventional illumination-invariant patch distances are adopted rather than relying directly on pixel intensity differences, but they fail when the local contrast within a patch changes. In this paper, we propose a fast preprocessing method named Bidirectional Luminance Remapping (BLR), which interactively adjusts the lighting of the training and input photos. Our method can be directly integrated into state-of-the-art exemplar-based methods to improve their robustness at negligible computational cost. Comment: IJCAI 2017. Project page: http://www.cs.cityu.edu.hk/~yibisong/ijcai17_sketch/index.htm
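
    The abstract does not spell out how BLR remaps luminance, so the snippet below is only a hedged stand-in: a plain mean/std match of the luma channel of one photo toward another, illustrating the kind of lighting adjustment such preprocessing performs. The actual bidirectional, interactive remapping is defined in the paper.

```python
# Hedged sketch of luminance remapping as a preprocessing step (not BLR itself).
import numpy as np

def luma(rgb):
    """Rec. 601 luma from an RGB image with values in [0, 1]."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def remap_luminance(src_rgb, ref_rgb, eps=1e-6):
    """Scale/shift the luminance of src_rgb so its mean and std match ref_rgb."""
    src_y, ref_y = luma(src_rgb), luma(ref_rgb)
    gain = (ref_y.std() + eps) / (src_y.std() + eps)
    target_y = (src_y - src_y.mean()) * gain + ref_y.mean()
    # Apply the per-pixel luminance ratio to all three channels.
    ratio = (target_y + eps) / (src_y + eps)
    return np.clip(src_rgb * ratio[..., None], 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    photo = rng.uniform(0.2, 0.6, size=(8, 8, 3))   # dim "input" photo
    train = rng.uniform(0.5, 0.9, size=(8, 8, 3))   # brighter "training" photo
    adjusted = remap_luminance(photo, train)
    print(luma(photo).mean(), luma(adjusted).mean(), luma(train).mean())
```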

    Sketch Plus Colorization Deep Convolutional Neural Networks for Photos Generation from Sketches

    In this paper, we introduce a method to generate photos from sketches using deep convolutional neural networks (DCNNs). This research proposes a method that combines a network that inverts sketches into photos (sketch inversion net) with a network that predicts color given grayscale images (colorization net). With this method, the quality of the generated photos is expected to be closer to the actual photos. We first artificially constructed uncontrolled conditions for the dataset. The dataset, which consists of hand-drawn sketches and their corresponding photos, was pre-processed using several data augmentation techniques to train the models to address issues of rotation, scaling, shape, noise, and positioning. Validation was measured using two types of similarity measurements: pixel-difference based metrics and human visual system (HVS) metrics, which mimic human perception in evaluating image quality. The pixel-difference based metrics consist of Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR), while the HVS metrics consist of the Universal Image Quality Index (UIQI) and Structural Similarity (SSIM). Our method gives the best quality of generated photos across all measures (844.04 for MSE, 19.06 for PSNR, 0.47 for UIQI, and 0.66 for SSIM).
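
    The two pixel-difference metrics quoted above are straightforward to compute; a minimal NumPy sketch of MSE and PSNR for 8-bit images follows. SSIM and UIQI are usually taken from a library such as scikit-image (skimage.metrics.structural_similarity) rather than re-implemented.

```python
# Pixel-difference metrics for 8-bit images: MSE and PSNR.
import numpy as np

def mse(a, b):
    """Mean squared error between two images of equal shape."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    return np.mean((a - b) ** 2)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    err = mse(a, b)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / err)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
    noisy = np.clip(gt + rng.normal(0, 10, gt.shape), 0, 255).astype(np.uint8)
    print(f"MSE={mse(gt, noisy):.2f}  PSNR={psnr(gt, noisy):.2f} dB")
```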

    Improved Sketch-to-Photo Generation Using Filter Aided Generative Adversarial Network

    Generating a photographic face image from a given input sketch is a challenging task in computer vision. Sketches drawn by sketch artists are mainly used for human identification, so sketch-to-photo synthesis has important applications in law enforcement, as well as in character design and educational training. In recent years, generative adversarial networks (GANs) have shown excellent performance on the sketch-to-photo synthesis problem. The quality of hand-drawn sketches affects the quality of the generated photo. While handling hand-drawn sketches, accidental contact between the user's hand and the pencil sketch, or similar activities, can introduce noise into the sketch. Likewise, differences in style, such as shading and the darkness of the pencil used by the sketch artist, may introduce unwanted noise. Many sketch-to-photo synthesis methods have been proposed in recent years, but they focus mainly on network architecture to obtain better performance. In this paper, we propose a Filter-aided GAN framework to remove such noise while synthesizing photo images from hand-drawn sketches. We implement and compare different filtering methods in combination with the GAN. Quantitative and qualitative results show that the proposed Filter-aided GAN generates photo images that are visually pleasing and closer to the ground-truth image.
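
    The filtering stage can be illustrated with standard OpenCV smoothing filters applied to a grayscale sketch before it is passed to the generator. The sketch below is a hedged example of this kind of preprocessing, not the specific filter/GAN pairing evaluated in the paper.

```python
# Common OpenCV smoothing filters applied to a grayscale sketch as preprocessing.
import cv2
import numpy as np

def filter_sketch(sketch_gray, method="median"):
    """Return a denoised copy of a grayscale sketch (uint8 HxW array)."""
    if method == "median":
        return cv2.medianBlur(sketch_gray, 3)               # removes speck-like noise
    if method == "gaussian":
        return cv2.GaussianBlur(sketch_gray, (3, 3), 0)     # mild overall smoothing
    if method == "bilateral":
        return cv2.bilateralFilter(sketch_gray, 5, 50, 50)  # smooths shading, keeps edges
    raise ValueError(f"unknown filter: {method}")

if __name__ == "__main__":
    sketch = np.full((128, 128), 255, dtype=np.uint8)
    cv2.line(sketch, (20, 20), (108, 108), 0, 2)            # a synthetic pencil stroke
    noisy = sketch.copy()
    noisy[np.random.default_rng(0).random(sketch.shape) < 0.02] = 0  # smudge-like specks
    for m in ("median", "gaussian", "bilateral"):
        cleaned = filter_sketch(noisy, m)
        print(m, float(np.mean(np.abs(cleaned.astype(int) - sketch.astype(int)))))
```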

    Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective

    This paper takes a problem-oriented perspective and presents a comprehensive review of transfer learning methods, both shallow and deep, for cross-dataset visual recognition. Specifically, it categorises cross-dataset recognition into seventeen problems based on a set of carefully chosen data and label attributes. Such a problem-oriented taxonomy has allowed us to examine how different transfer learning approaches tackle each problem and how well each problem has been researched to date. This comprehensive problem-oriented review of advances in transfer learning has revealed not only the challenges in transfer learning for visual recognition, but also the problems (e.g. eight of the seventeen) that have been scarcely studied. The survey not only presents an up-to-date technical review for researchers, but also offers a systematic approach and a reference for machine learning practitioners to categorise a real problem and look up a possible solution accordingly.