
    Intelligent computational sketching support for conceptual design

    Sketches, with their flexibility and suggestiveness, are in many ways ideal for expressing emerging design concepts; representing early designs with free-hand drawings dates back as far as the early 15th century [1]. CAD systems, on the other hand, have become widely accepted as an essential design tool in recent years, not least because they provide a base on which design analysis can be carried out. Efficient transfer of sketches into a CAD representation is therefore a powerful addition to the designers' armoury. It has been pointed out by many that a pen-on-paper system is the best tool for sketching, and one of the crucial requirements of a computer-aided sketching system is its ability to recognise and interpret the elements of sketches. 'Sketch recognition', as it has come to be known, has been widely studied in fields ranging from artificial intelligence to human-computer interaction and robotic vision. Despite continuing efforts towards appropriate conceptual design modelling, completely accurate recognition of sketches remains difficult to achieve, because sketches usually convey vague information and because idiosyncratic expression and interpretation differ from designer to designer.

    Structured Knowledge Representation for Image Retrieval

    We propose a structured approach to the problem of retrieval of images by content and present a description logic that has been devised for the semantic indexing and retrieval of images containing complex objects. As other approaches do, we start from low-level features extracted with image analysis to detect and characterize regions in an image. However, in contrast with feature-based approaches, we provide a syntax to describe segmented regions as basic objects and complex objects as compositions of basic ones. We then introduce a companion extensional semantics for defining reasoning services, such as retrieval, classification, and subsumption. These services can be used for both exact and approximate matching, using similarity measures. Using our logical approach as a formal specification, we implemented a complete client-server image retrieval system, which allows a user to pose both queries by sketch and queries by example. A set of experiments has been carried out on a testbed of images to assess the retrieval capabilities of the system in comparison with expert users' rankings. Results are presented adopting a well-established measure of quality borrowed from textual information retrieval.
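
    A toy sketch of the composition idea described above, not the paper's actual description logic or its formal semantics: basic objects carry low-level region features, complex objects are compositions of parts, and exact versus approximate matching are illustrated with a label-based subsumption test and a simple overlap score. All class and function names here are hypothetical.

```python
# Hypothetical illustration of composing basic objects into complex objects
# and matching them exactly (subsumption) or approximately (similarity).
from dataclasses import dataclass, field

@dataclass
class BasicObject:
    label: str        # e.g. a detected region class such as "wheel"
    features: dict    # low-level region features (colour, shape, texture, ...)

@dataclass
class ComplexObject:
    label: str
    parts: list = field(default_factory=list)   # list of BasicObject

def subsumes(query: ComplexObject, candidate: ComplexObject) -> bool:
    """Exact matching: every part the query requires appears (by label)
    in the candidate's description."""
    candidate_labels = {p.label for p in candidate.parts}
    return all(p.label in candidate_labels for p in query.parts)

def similarity(query: ComplexObject, candidate: ComplexObject) -> float:
    """Approximate matching: fraction of query parts found in the candidate."""
    if not query.parts:
        return 1.0
    candidate_labels = {p.label for p in candidate.parts}
    hits = sum(1 for p in query.parts if p.label in candidate_labels)
    return hits / len(query.parts)
```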

    Data-Driven Shape Analysis and Processing

    Data-driven methods play an increasingly important role in discovering geometric, structural, and semantic relationships between 3D shapes in collections, and in applying this analysis to support intelligent modeling, editing, and visualization of geometric data. In contrast to traditional approaches, a key feature of data-driven approaches is that they aggregate information from a collection of shapes to improve the analysis and processing of individual shapes. In addition, they are able to learn models that reason about properties and relationships of shapes without relying on hard-coded rules or explicitly programmed instructions. We provide an overview of the main concepts and components of these techniques, and discuss their application to shape classification, segmentation, matching, reconstruction, modeling and exploration, as well as scene analysis and synthesis, through reviewing the literature and relating the existing works with both qualitative and numerical comparisons. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing. Comment: 10 pages, 19 figures.
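
    A minimal illustration of the "aggregate over a collection" idea, assuming each shape in a collection has already been summarised by a fixed-length descriptor: a new shape borrows its class label from its nearest neighbours in the collection. The descriptor format and the majority-vote transfer are illustrative assumptions, not a specific method from the survey.

```python
# Hypothetical k-NN label transfer over a shape collection.
import numpy as np

def knn_label_transfer(query_desc, collection_desc, collection_labels, k=5):
    """query_desc: (d,) descriptor of the new shape;
    collection_desc: (n, d) descriptors; collection_labels: (n,) array."""
    dists = np.linalg.norm(collection_desc - query_desc, axis=1)
    nearest = np.argsort(dists)[:k]                 # k most similar shapes
    values, counts = np.unique(collection_labels[nearest], return_counts=True)
    return values[np.argmax(counts)]                # majority vote among them
```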

    Where and Who? Automatic Semantic-Aware Person Composition

    Image compositing is a method used to generate realistic yet fake imagery by inserting content from one image into another. Previous work in compositing has focused on improving the appearance compatibility of a user-selected foreground segment and a background image (i.e. color and illumination consistency). In this work, we instead develop a fully automated compositing model that additionally learns to select and transform compatible foreground segments from a large collection given only an input background image. To simplify the task, we restrict our problem to human instance composition, because human segments exhibit strong correlations with their background and because large annotated data is available. We develop a novel branching Convolutional Neural Network (CNN) that jointly predicts candidate person locations given a background image. We then use pre-trained deep feature representations to retrieve person instances from a large segment database. Experimental results show that our model can generate composite images that look visually convincing. We also develop a user interface to demonstrate the potential application of our method. Comment: 10 pages, 9 figures.
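
    A hedged sketch of the retrieval step only (the branching CNN that predicts candidate locations is omitted): given a deep feature describing the query context, the most compatible person segments are ranked by cosine similarity against a precomputed feature database. Array names and the choice of cosine similarity are assumptions for illustration.

```python
# Hypothetical deep-feature retrieval of person segments by cosine similarity.
import numpy as np

def retrieve_segments(query_feat, db_feats, top_k=5):
    """query_feat: (d,) deep feature of the query context;
    db_feats: (n, d) precomputed deep features of candidate segments."""
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    scores = db @ q                       # cosine similarity to each segment
    return np.argsort(-scores)[:top_k]    # indices of the best candidates
```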

    Learning visual similarities robust to bias


    Matching software-generated sketches to face photographs with a very deep CNN, morphed faces, and transfer learning

    Sketches obtained from eyewitness descriptions of criminals have proven to be useful in apprehending criminals, particularly when there is a lack of evidence. Automated methods to identify subjects depicted in sketches have been proposed in the literature, but their performance is still unsatisfactory when using software-generated sketches and when tested on extensive galleries containing a large number of subjects. Despite the success of deep learning in several applications including face recognition, little work has been done in applying it to face photograph-sketch recognition. This is mainly a consequence of the need for a large number of images to ensure robust training of deep networks, yet only limited quantities are publicly available. Moreover, most algorithms have not been designed to operate on software-generated face composite sketches, which are used by numerous law enforcement agencies worldwide. This paper aims to tackle these issues with the following contributions: 1) a very deep convolutional neural network is utilised to determine the identity of a subject in a composite sketch by comparing it to face photographs, and is trained by applying transfer learning to a state-of-the-art model pretrained for face photograph recognition; 2) a 3-D morphable model is used to synthesise both photographs and sketches to augment the available training data, an approach that is shown to significantly aid performance; and 3) the UoM-SGFS database is extended to contain twice the number of subjects, now having 1200 sketches of 600 subjects. An extensive evaluation of popular and state-of-the-art algorithms is also performed due to the lack of such information in the literature, and it is demonstrated that the proposed approach comprehensively outperforms state-of-the-art methods on all publicly available composite sketch datasets.
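
    An illustrative transfer-learning sketch, not the paper's exact setup: the authors fine-tune a very deep model pretrained for face photograph recognition, whereas here a torchvision ImageNet ResNet-50 stands in for that pretrained backbone, its weights are frozen, and only a new identity-classification head is trained. The number of identities and the hyperparameters are placeholders.

```python
# Hedged transfer-learning sketch: frozen pretrained backbone, new identity head.
import torch
import torch.nn as nn
from torchvision import models

num_identities = 600   # placeholder: e.g. subjects in the training split
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

for param in model.parameters():          # freeze the pretrained weights
    param.requires_grad = False

# Replace the final layer; only this head is trained on photo/sketch images.
model.fc = nn.Linear(model.fc.in_features, num_identities)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: batch of photographs and composite sketches; labels: identities."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```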

    Barcode Annotations for Medical Image Retrieval: A Preliminary Investigation

    This paper proposes generating and using barcodes to annotate medical images and/or their regions of interest, such as organs, tumors and tissue types. A multitude of efficient feature-based image retrieval methods already exist that can assign a query image to a certain image class. Visual annotations may help to increase the retrieval accuracy if combined with existing feature-based classification paradigms. Whereas annotations usually mean textual descriptions, in this paper barcode annotations are proposed. In particular, Radon barcodes (RBC) are introduced. In addition, local binary patterns (LBP) and local Radon binary patterns (LRBP) are implemented as barcodes. The IRMA x-ray dataset with 12,677 training images and 1,733 test images is used to verify how barcodes could facilitate image retrieval. Comment: To be published in proceedings of The IEEE International Conference on Image Processing (ICIP 2015), September 27-30, 2015, Quebec City, Canada.
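
    A minimal sketch of a Radon barcode under stated assumptions: the image is downsized, projected at a handful of angles with scikit-image's Radon transform, and each projection is binarised at the median of its non-zero values. The resize size, angle count, and threshold rule are illustrative choices, not necessarily the paper's exact parameters.

```python
# Hedged Radon barcode (RBC) sketch using scikit-image.
import numpy as np
from skimage.transform import radon, resize

def radon_barcode(image, size=32, n_angles=8):
    """image: 2-D grayscale array; returns a flat binary barcode."""
    small = resize(image, (size, size), anti_aliasing=True)
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    projections = radon(small, theta=angles, circle=False)  # one column per angle
    code = []
    for col in projections.T:                                # binarise each projection
        threshold = np.median(col[col > 0]) if np.any(col > 0) else 0.0
        code.append((col >= threshold).astype(np.uint8))
    return np.concatenate(code)
```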