
    Exploiting artistic cues to obtain line labels for free-hand sketches

    Artistic cues help designers to communicate design intent in sketches. In this paper, we show how these artistic cues may be used to obtain a line labelling interpretation of freehand sketches, using a cue-based genetic algorithm to obtain a labelling solution that matches design intent. We also show how this can be achieved from offline, paper-based sketches, thereby allowing designers greater flexibility in the choice of sketching medium. (Peer reviewed)
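    The abstract above outlines a genetic-algorithm search over per-edge line labels guided by artistic cues. Below is a minimal illustrative sketch of that idea, assuming a toy set of edge labels and hypothetical per-edge cue scores; the paper's actual cue extraction and junction-consistency checks are not reproduced here.

```python
# Minimal sketch of a cue-based genetic algorithm for line labelling.
# The label set, cue scores, and fitness weighting are illustrative only.
import random

LABELS = ["+", "-", "->"]          # convex, concave, occluding edge labels

def fitness(labelling, cue_scores):
    """Reward labels that agree with per-edge artistic-cue evidence.

    cue_scores[i][label] is an (assumed) score in [0, 1] expressing how
    strongly the sketched cues (e.g. shading, line weight) support 'label'
    for edge i.
    """
    return sum(cue_scores[i][lab] for i, lab in enumerate(labelling))

def evolve(cue_scores, pop_size=50, generations=200, mutation_rate=0.05):
    n_edges = len(cue_scores)
    population = [[random.choice(LABELS) for _ in range(n_edges)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda ind: fitness(ind, cue_scores), reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_edges)              # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_edges):                        # point mutation
                if random.random() < mutation_rate:
                    child[i] = random.choice(LABELS)
            children.append(child)
        population = survivors + children
    return max(population, key=lambda ind: fitness(ind, cue_scores))

# Example: three edges whose cues suggest convex, occluding, and concave labels.
cues = [{"+": 0.9, "-": 0.1, "->": 0.2},
        {"+": 0.1, "-": 0.2, "->": 0.8},
        {"+": 0.2, "-": 0.9, "->": 0.1}]
print(evolve(cues))
```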

    An evolutionary approach to determining hidden lines from a natural sketch

    This paper focuses on the identification of hidden lines and junctions from natural sketches of drawings that exhibit an extended-trihedral geometry. Identification of hidden lines and junctions is essential in the creation of a complete 3D model of the sketched object, allowing the interpretation algorithms to infer what the unsketched back of the object should look like. This approach first labels the sketched visible edges of the object with a geometric edge label, obtaining a labelled junction at each of the visible junctions of the object. Using a dictionary of junctions with visible and hidden edges, these labelled visible junctions are then used to deduce the edge interpretation and orientation of some of the hidden edges. A genetic algorithm is used to combine these hidden edges into hidden junctions, evolving the representation of the hidden edges and junctions until a feasible hidden view representation of the object is obtained. (Peer reviewed)
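    The deduction step described above, from labelled visible junctions to candidate hidden edges via a junction dictionary, can be illustrated with a small lookup sketch. The junction types and hidden-edge entries below are toy assumptions, not the paper's extended-trihedral dictionary; in the paper, a genetic algorithm then combines such hypotheses into a consistent set of hidden junctions.

```python
# Illustrative sketch of the junction-dictionary idea: given a labelled
# visible junction, look up which hidden edges it may imply. The entries
# below are toy values for demonstration only.
VISIBLE_TO_HIDDEN = {
    # (junction type, tuple of visible edge labels) -> candidate hidden edges
    ("L", ("+", "->")): [{"label": "-", "orientation": "away"}],
    ("W", ("+", "+", "-")): [{"label": "->", "orientation": "down"}],
    ("Y", ("+", "+", "+")): [],       # fully visible corner: nothing hidden
}

def hidden_edge_hypotheses(visible_junctions):
    """Collect candidate hidden edges for each labelled visible junction."""
    hypotheses = []
    for jtype, labels in visible_junctions:
        hypotheses.append(VISIBLE_TO_HIDDEN.get((jtype, tuple(labels)), []))
    return hypotheses

print(hidden_edge_hypotheses([("L", ["+", "->"]), ("Y", ["+", "+", "+"])]))
```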

    A circle-based vectorization algorithm for drawings with shadows

    This work is funded by the University of Malta, under the research grant SCERP02-03. Vectorization algorithms described in the literature assume that the drawings being vectorized are either binary images or have a clear white background. Sketches of artistic objects, however, also contain shadows which help the artist to portray intent, particularly in potentially ambiguous sketches. Such sketches are difficult to binarise since the shading strokes make these sketches non-bimodal. For this reason, we describe a circle-based vectorization algorithm that uses signatures obtained from sample points on the line strokes to identify and vectorize the line strokes in the sketch. We show that the proposed algorithm performs as well as other vectorization techniques described in the literature, despite the shadows present in the sketch. (Peer reviewed)
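    The signature idea described above can be illustrated as follows: sample grey levels on a circle around a point on a stroke, and read off the directions in which dark arcs appear. The radius, threshold, and synthetic image below are assumptions made for this sketch, not the parameters used in the paper.

```python
# A minimal sketch of a circle-based signature: sample grey levels on a
# circle around a point on a stroke; dark arcs in the signature indicate
# the directions in which the stroke continues.
import numpy as np

def circular_signature(image, cx, cy, radius=5, samples=36):
    """Return grey-level samples taken on a circle centred at (cx, cy)."""
    angles = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    xs = np.clip(np.round(cx + radius * np.cos(angles)).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(cy + radius * np.sin(angles)).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs]

def stroke_directions(signature, dark_threshold=128):
    """Angles (in degrees) at which the circle crosses dark stroke pixels."""
    dark = np.flatnonzero(signature < dark_threshold)
    return dark * (360 // len(signature))

# Synthetic 21x21 image: white background with a dark horizontal stroke.
img = np.full((21, 21), 255, dtype=np.uint8)
img[10, :] = 0
sig = circular_signature(img, cx=10, cy=10)
print(stroke_directions(sig))   # roughly 0 and 180 degrees: the stroke runs left-right
```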

    Sketch-a-Net: A Deep Neural Network that Beats Humans

    This project received support from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement #640891, and the Royal Society and Natural Science Foundation of China (NSFC) Joint Grants #IE141387 and #61511130081. We gratefully acknowledge the support of NVIDIA Corporation for the donation of the GPUs used for this research.

    Face Hallucination via Deep Neural Networks.

    We first address aligned low-resolution (LR) face images (i.e. 16×16 pixels) by designing a discriminative generative network, named URDGN. URDGN is composed of two networks: a generative model and a discriminative model. We introduce a pixel-wise L2 regularization term to the generative model and exploit the feedback of the discriminative network to make the upsampled face images more similar to real ones. We present an end-to-end transformative discriminative neural network (TDN) devised for super-resolving unaligned tiny face images. TDN embeds spatial transformation layers to enforce local receptive fields to line up with similar spatial supports. To upsample noisy unaligned LR face images, we propose decoder-encoder-decoder networks. A transformative discriminative decoder network is employed to upsample and denoise LR inputs simultaneously. Then we project the intermediate HR faces to aligned and noise-free LR faces by a transformative encoder network. Finally, high-quality hallucinated HR images are generated by our second decoder. Furthermore, we present an end-to-end multiscale transformative discriminative neural network (MTDN) to super-resolve unaligned LR face images of different resolutions in a unified framework. We propose a method that explicitly incorporates structural information of faces into the face super-resolution process by using a multi-task convolutional neural network (CNN). Our method not only uses low-level information (i.e. intensity similarity), but also middle-level information (i.e. face structure) to further explore spatial constraints of facial components from LR input images. We demonstrate that supplementing residual images or feature maps with additional facial attribute information can significantly reduce the ambiguity in face super-resolution. To explore this idea, we develop an attribute-embedded upsampling network. In this manner, our method is able to super-resolve LR faces by a large upscaling factor while reducing the uncertainty of one-to-many mappings remarkably. We further push the boundaries of hallucinating a tiny, non-frontal face image to understand how much of this is possible by leveraging the availability of large datasets and deep networks. To this end, we introduce a novel Transformative Adversarial Neural Network (TANN) to jointly frontalize very LR out-of-plane rotated face images (including profile views) and aggressively super-resolve them by 8×, regardless of their original poses and without using any 3D information. Besides recovering an HR face image from an LR version, this thesis also addresses the task of restoring realistic faces from stylized portrait images, which can also be regarded as face hallucination.
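    Below is a hedged PyTorch sketch of the URDGN-style training signal described above: a pixel-wise L2 term on the generator combined with feedback from a discriminator. The tiny network architectures and the adversarial weight are illustrative assumptions, not the thesis's actual models; in practice the discriminator is trained in alternation with real and generated faces.

```python
# Sketch of a generator loss combining pixel-wise L2 with discriminator feedback.
# Architectures, sizes, and the 0.01 adversarial weight are illustrative only.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Upsamples a 16x16 LR face to 64x64 (stand-in for the real generator)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Scores whether a 64x64 face looks real."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=4), nn.ReLU(),
            nn.Flatten(), nn.Linear(16 * 16 * 16, 1),
        )
    def forward(self, x):
        return self.net(x)

G, D = TinyGenerator(), TinyDiscriminator()
bce = nn.BCEWithLogitsLoss()
lr_faces = torch.rand(8, 3, 16, 16)      # batch of aligned 16x16 LR inputs
hr_faces = torch.rand(8, 3, 64, 64)      # corresponding HR ground truth

sr = G(lr_faces)
pixel_l2 = nn.functional.mse_loss(sr, hr_faces)   # pixel-wise L2 term
adv = bce(D(sr), torch.ones(8, 1))                # feedback from the discriminator
g_loss = pixel_l2 + 0.01 * adv                    # combined generator objective
print(float(g_loss))
```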

    Deep Learning for Free-Hand Sketch: A Survey

    Free-hand sketches are highly illustrative, and have been widely used by humans to depict objects or stories from ancient times to the present. The recent prevalence of touchscreen devices has made sketch creation a much easier task than ever and consequently made sketch-oriented applications increasingly popular. The progress of deep learning has immensely benefited free-hand sketch research and applications. This paper presents a comprehensive survey of the deep learning techniques oriented at free-hand sketch data, and the applications that they enable. The main contents of this survey include: (i) A discussion of the intrinsic traits and unique challenges of free-hand sketch, to highlight the essential differences between sketch data and other data modalities, e.g., natural photos. (ii) A review of the developments of free-hand sketch research in the deep learning era, by surveying existing datasets, research topics, and the state-of-the-art methods through a detailed taxonomy and experimental evaluation. (iii) Promotion of future work via a discussion of bottlenecks, open problems, and potential research directions for the community. (Accepted by IEEE TPAMI.)