2,323 research outputs found

    Deep Shape Matching

    Full text link
    We cast shape matching as metric learning with convolutional networks, breaking the end-to-end process of learning an image representation into two parts. First, well-established, efficient methods turn the images into edge maps. Second, the network is trained on edge maps of landmark images, which are obtained automatically by a structure-from-motion pipeline. The learned representation is evaluated on a range of tasks, providing improvements on challenging cases of domain generalization, generic sketch-based image retrieval, and its fine-grained counterpart. In contrast to other methods that learn a different model per task, object category, or domain, we use the same network throughout all our experiments, achieving state-of-the-art results on multiple benchmarks. Comment: ECCV 2018.
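    The two-stage pipeline described above (an off-the-shelf edge extractor followed by a learned representation compared by similarity) can be sketched minimally. In this illustration a Sobel filter stands in for the edge-detection step and a trivial flatten-and-normalize embedding stands in for the trained CNN descriptor; both are assumptions for the sketch, not the paper's actual components.

```python
import numpy as np

def edge_map(img):
    """Sobel gradient magnitude, standing in for the paper's
    off-the-shelf edge detector."""
    p = np.pad(img.astype(float), 1, mode="edge")
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    return np.hypot(gx, gy)

def embed(edges):
    """Toy descriptor: flatten and L2-normalize the edge map
    (a placeholder for the trained CNN embedding)."""
    v = edges.ravel()
    return v / (np.linalg.norm(v) + 1e-12)

rng = np.random.default_rng(0)
query = rng.random((32, 32))
db = [query + 0.05 * rng.random((32, 32)),  # near-duplicate of the query
      rng.random((32, 32))]                 # unrelated image

q = embed(edge_map(query))
scores = [float(q @ embed(edge_map(d))) for d in db]
best = int(np.argmax(scores))  # the near-duplicate should rank first
```

    Because matching happens in edge-map space, the same descriptor can compare a photo to a sketch; here both "images" are random stand-ins, so only the ranking behaviour is meaningful.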

    Deep Learning for Free-Hand Sketch: A Survey

    Get PDF
    Free-hand sketches are highly illustrative and have been widely used by humans to depict objects or stories from ancient times to the present. The recent prevalence of touchscreen devices has made sketch creation far easier than ever and has consequently made sketch-oriented applications increasingly popular. The progress of deep learning has immensely benefited free-hand sketch research and applications. This paper presents a comprehensive survey of deep learning techniques oriented toward free-hand sketch data and the applications they enable. The main contents of this survey include: (i) a discussion of the intrinsic traits and unique challenges of free-hand sketch, highlighting the essential differences between sketch data and other data modalities, e.g., natural photos; (ii) a review of the developments of free-hand sketch research in the deep learning era, surveying existing datasets, research topics, and state-of-the-art methods through a detailed taxonomy and experimental evaluation; and (iii) promotion of future work via a discussion of bottlenecks, open problems, and potential research directions for the community. Comment: Accepted by IEEE TPAMI.

    Sketch Me That Shoe

    Get PDF
    This project received support from the European Union’s Horizon 2020 research and innovation programme under grant agreement #640891, the Royal Society and Natural Science Foundation of China (NSFC) joint grants #IE141387 and #61511130081, and the China Scholarship Council (CSC). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPUs used for this research.

    Asymmetric Feature Maps with Application to Sketch Based Retrieval

    Full text link
    We propose a novel concept of asymmetric feature maps (AFM), which makes it possible to evaluate multiple kernels between a query and database entries without increasing the memory requirements. To demonstrate the advantages of AFM, we derive a short-vector image representation that, thanks to the asymmetric feature maps, supports efficient scale- and translation-invariant sketch-based image retrieval. Unlike most short-code-based retrieval systems, the proposed method provides query localization in the retrieved image. The efficiency of the search is boosted by approximating a 2D translation search with trigonometric polynomials of scores over 1D projections; the projections are a special case of AFM. An order-of-magnitude speed-up is achieved compared to traditional trigonometric polynomials. The results are further boosted by an image-based average query expansion, significantly exceeding the state of the art on standard benchmarks. Comment: CVPR 2017.
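    The asymmetry idea can be illustrated on a single shift-invariant kernel. The sketch below (an illustration, not the paper's implementation) truncates the cosine series of an even, periodic kernel and splits each term cos(n(x − y)) = cos(nx)cos(ny) + sin(nx)sin(ny) asymmetrically: the kernel coefficients live only on the database side, so one coefficient-free query map can be scored against maps built for different kernels.

```python
import numpy as np

N = 8  # truncation order of the trigonometric polynomial

def fourier_coeffs(f, n_terms, n_samples=1024):
    """Cosine-series coefficients of an even, 2*pi-periodic function f."""
    d = np.linspace(-np.pi, np.pi, n_samples, endpoint=False)
    vals = f(d)
    a = [vals.mean()]
    for n in range(1, n_terms + 1):
        a.append(2 * (vals * np.cos(n * d)).mean())
    return np.array(a)

def query_map(x, n_terms):
    """Coefficient-free query-side map: [1, cos(nx), sin(nx)]."""
    n = np.arange(1, n_terms + 1)
    return np.concatenate(([1.0], np.cos(n * x), np.sin(n * x)))

def db_map(y, coeffs):
    """Database-side map carrying the kernel coefficients, so several
    kernels can be evaluated against the same stored query map."""
    n = np.arange(1, len(coeffs))
    return np.concatenate(([coeffs[0]],
                           coeffs[1:] * np.cos(n * y),
                           coeffs[1:] * np.sin(n * y)))

# Two example shift-invariant kernels (hypothetical choices).
kernel1 = lambda d: np.exp(-2 * (1 - np.cos(d)))   # periodic Gaussian-like
kernel2 = lambda d: 0.5 * (1 + np.cos(d))          # raised cosine
a1 = fourier_coeffs(kernel1, N)
a2 = fourier_coeffs(kernel2, N)

x, y = 0.3, 1.1
# One query map, two kernels: the inner product recovers k(x - y).
approx1 = query_map(x, N) @ db_map(y, a1)
approx2 = query_map(x, N) @ db_map(y, a2)
exact1 = kernel1(x - y)
exact2 = kernel2(x - y)
```

    Both inner products approximate the corresponding kernel value at the offset x − y, while the query representation is computed once and never depends on the kernel.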

    SketchZooms: Deep Multi-view Descriptors for Matching Line Drawings

    Get PDF
    Finding point-wise correspondences between images is a long-standing problem in image analysis. This becomes particularly challenging for sketch images, due to the varying nature of human drawing styles, projection distortions, and viewport changes. In this paper, we present the first attempt to obtain a learned descriptor for dense registration in line drawings. Building on recent deep learning techniques for corresponding photographs, we designed descriptors to locally match image pairs where the object of interest belongs to the same semantic category, yet still differs drastically in shape, form, and projection angle. To this end, we specifically crafted a data set of synthetic sketches using non-photorealistic rendering over a large collection of part-based registered 3D models. After training, a neural network generates descriptors for every pixel in an input image, which are shown to generalize correctly to unseen sketches hand-drawn by humans. We evaluate our method against a baseline of correspondence data collected from expert designers, in addition to comparisons with other descriptors that have proven effective on sketches. Code, data, and further resources will be publicly released by the time of publication.
    Affiliations: Jose Pablo Navarro (CONICET, Centro Nacional Patagónico, Instituto Patagónico de Ciencias Sociales y Humanas; Universidad Nacional de la Patagonia "San Juan Bosco", Departamento de Informática, Argentina); José Ignacio Orlando (CONICET, Centro Científico Tecnológico Tandil; Universidad Nacional del Centro de la Provincia de Buenos Aires, Facultad de Ciencias Exactas, Argentina); Claudio Augusto Delrieux (CONICET, Centro Científico Tecnológico Bahía Blanca; Universidad Nacional del Sur, Departamento de Ingeniería Eléctrica y de Computadoras, Argentina); Emmanuel Iarussi (CONICET; Universidad Tecnológica Nacional, Facultad Regional Buenos Aires, Argentina).
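    Once per-pixel descriptors exist, dense correspondence reduces to nearest-neighbour search between the two descriptor sets. The sketch below assumes nothing from the paper beyond that: mutual nearest-neighbour matching under cosine similarity, with perturbed random unit vectors standing in for network-produced descriptors of two sketches of the same object.

```python
import numpy as np

def dense_match(desc_a, desc_b):
    """Mutual nearest-neighbour matching between per-pixel descriptors.

    desc_a (Na, D) and desc_b (Nb, D) are L2-normalized descriptor rows.
    Returns index pairs (i, j) that are each other's nearest neighbour
    under cosine similarity, a common filter for spurious matches.
    """
    sim = desc_a @ desc_b.T      # cosine similarity matrix
    ab = sim.argmax(axis=1)      # best match in B for each row of A
    ba = sim.argmax(axis=0)      # best match in A for each row of B
    return [(i, int(ab[i])) for i in range(len(ab)) if ba[ab[i]] == i]

rng = np.random.default_rng(1)
D = 32
base = rng.standard_normal((20, D))
base /= np.linalg.norm(base, axis=1, keepdims=True)

# "Second sketch": the same descriptors, mildly perturbed, re-normalized.
noisy = base + 0.05 * rng.standard_normal((20, D))
noisy /= np.linalg.norm(noisy, axis=1, keepdims=True)

matches = dense_match(base, noisy)
# With mild noise, most pixels should match their own counterpart.
```

    Mutual (two-way) nearest neighbours are stricter than one-way matching: a pair survives only if each descriptor picks the other, which discards most ambiguous correspondences at no extra cost beyond a second argmax.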