
    Image preprocessing for artistic robotic painting

    Artistic robotic painting implies creating a picture on canvas according to a brushstroke map computed beforehand from a source image. To make the painting look closer to a human artwork, the source image should be preprocessed to render the effects usually created by artists. In this paper, we consider three preprocessing effects: aerial perspective, gamut compression, and brushstroke coherence. We propose an algorithm for aerial perspective amplification based on principles of light scattering using a depth map, an algorithm for gamut compression using a nonlinear hue transformation, and an algorithm for image gradient filtering that yields a well-coherent brushstroke map with the reduced number of brushstrokes required for practical robotic painting. The described algorithms allow interactive image correction and make the final rendering look closer to a manually painted artwork. To illustrate our proposals, we render several test images on a computer and paint a monochromatic image on canvas with a painting robot.
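    The aerial-perspective step can be pictured with the standard single-scattering (Koschmieder) haze model: radiance decays exponentially with depth while an atmosphere colour is blended in. The sketch below is a minimal illustration under that assumption; the function name, parameters, and atmosphere colour are illustrative choices, not the authors' algorithm.

    ```python
    import numpy as np

    def amplify_aerial_perspective(img, depth, atmosphere=(0.78, 0.84, 0.92), beta=1.2):
        """Push distant regions toward a hazy atmosphere colour.

        img   : (H, W, 3) float RGB image in [0, 1]
        depth : (H, W) float depth map in [0, 1], 1 = farthest
        beta  : scattering coefficient; larger values build haze faster
        """
        # Transmittance t = exp(-beta * d): distant pixels keep little of
        # their original radiance and gain more of the airlight colour.
        t = np.exp(-beta * depth)[..., None]          # (H, W, 1)
        A = np.asarray(atmosphere, dtype=np.float64)  # airlight colour
        return img * t + A * (1.0 - t)
    ```

    Interactive correction of the effect then amounts to exposing beta (and optionally the atmosphere colour) as user controls.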

    Painterly rendering techniques: A state-of-the-art review of current approaches

    In this publication we look at the methods presented over the past few decades that attempt to recreate digital paintings. While previous surveys concentrate on the broader subject of non-photorealistic rendering, the focus of this paper is firmly placed on painterly rendering techniques. We compare methods used to produce different output painting styles such as abstract, colour pencil, watercolour, oriental, oil and pastel. Whereas some methods demand a high level of interaction from a skilled artist, others require simple parameters provided by a user with little or no artistic experience. Many methods attempt to provide more automation through varying forms of reference data, ranging from still photographs and video to 3D polygonal meshes and even 3D point clouds. The techniques presented here endeavour to provide tools and styles that are not traditionally available to an artist.

    Intuitive, Interactive Beard and Hair Synthesis with Generative Models

    We present an interactive approach to synthesizing realistic variations in facial hair in images, ranging from subtle edits of existing hair to the addition of complex and challenging hair in images of clean-shaven subjects. To circumvent the tedious and computationally expensive tasks of modeling, rendering and compositing the 3D geometry of the target hairstyle with the traditional graphics pipeline, we employ a neural network pipeline that synthesizes realistic and detailed images of facial hair directly in the target image in under one second. The synthesis is controlled by simple and sparse guide strokes from the user, defining the general structural and color properties of the target hairstyle. We qualitatively and quantitatively evaluate our chosen method against several alternative approaches. We show compelling interactive editing results with a prototype user interface that allows novice users to progressively refine the generated image to match their desired hairstyle, and demonstrate that our approach also allows for flexible and high-fidelity scalp hair synthesis. Comment: To be presented at the 2020 Conference on Computer Vision and Pattern Recognition (CVPR 2020, oral presentation). Supplementary video: https://www.youtube.com/watch?v=v4qOtBATrv
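    The stroke-based control can be pictured as rasterizing the sparse user strokes into a guide image that is stacked with the photo and a hair mask before being passed to a conditional generator. The sketch below illustrates only that conditioning step; the paper's actual network inputs may differ, and all names here are assumptions.

    ```python
    import numpy as np
    import cv2

    def build_guide_input(photo, strokes, hair_mask):
        """Assemble a conditioning volume from sparse user guide strokes.

        photo     : (H, W, 3) uint8 target image
        strokes   : list of (points, colour) polylines drawn by the user
        hair_mask : (H, W) uint8 region where hair should be synthesized
        """
        guide = np.zeros_like(photo)
        for pts, colour in strokes:
            # Each stroke sketches local hair structure and colour.
            cv2.polylines(guide, [np.int32(pts)], False, colour, thickness=3)
        # Stack photo, strokes, and mask into one (H, W, 7) input volume;
        # a conditional generator would map this to the edited image.
        stacked = np.dstack([photo, guide, hair_mask[..., None]])
        return stacked.astype(np.float32) / 255.0
    ```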

    CSISE: cloud-based semantic image search engine

    Thesis (M.S.), School of Computing and Engineering, University of Missouri--Kansas City, 2013. Advisor: Yugyung Lee. Includes bibliographical references (pages 53-56).
    Due to the rapid exponential growth of data, two challenges we face today are how to handle big data and how to analyze large data sets. An IBM study showed that 90% of the data in the world today was created in the last two years alone. We have especially seen exponential growth of images on the Web, e.g., more than 6 billion images in Flickr, 1.5 billion in the Google image engine, and more than 1 billion in Instagram [1]. Since big data is a matter not only of size but also of heterogeneous types and sources, image searching over big data may not be scalable in practical settings. We envision Cloud computing as a way to transform the big data challenge into a great opportunity. In this thesis, we perform efficient and accurate classification of a large collection of images using Cloud computing, which in turn supports semantic image searching. A novel approach with enhanced accuracy is proposed that utilizes semantic technology to classify images by analyzing both metadata and image data. A two-level classification model was designed: (i) semantic classification performed on image metadata using TF-IDF, and (ii) image classification performed using a hybrid image-processing model combining Euclidean distance and SURF/FLANN measurements. A Cloud-based Semantic Image Search Engine (CSISE) was also developed to search images using the proposed semantic model over a dynamic image repository that connects to online image search engines including Google Image Search, Flickr, and Picasa. A series of experiments was performed in a large-scale Hadoop environment on IBM's cloud, over half a million logo images of 76 types. The results show that the CSISE engine performs comparably to popular online image search engines and achieves higher accuracy (average precision of 71%) than existing approaches.
    Contents: Introduction; Related work; Cloud-based semantic image search engine model; CSISE implementation; Experimental results and evaluation; Conclusion and future work.
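    The image-level matching stage can be approximated with OpenCV's SURF descriptors and a FLANN matcher. This is a rough sketch, assuming the usual Lowe ratio test; SURF requires the opencv-contrib build, and the thresholds here are assumptions rather than the thesis's settings.

    ```python
    import cv2

    def surf_flann_matches(path_a, path_b, ratio=0.7):
        """Count distinctive SURF keypoint matches between two images.

        Requires opencv-contrib-python: SURF is patented and absent
        from the default OpenCV build.
        """
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
        _, desc_a = surf.detectAndCompute(img_a, None)
        _, desc_b = surf.detectAndCompute(img_b, None)

        # FLANN with KD-trees suits SURF's floating-point descriptors.
        flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5),  # KDTREE
                                      dict(checks=50))
        matches = flann.knnMatch(desc_a, desc_b, k=2)

        # Lowe's ratio test keeps only clearly distinctive matches.
        return sum(1 for pair in matches
                   if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)
    ```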

    Personalized Food Printing for Portrait Images

    The recent development of 3D printing techniques enables novel applications in customized food fabrication. Based on a tailor-made 3D food printer, we present a novel personalized food printing framework driven by portrait images. Unlike common 3D printers equipped with materials such as ABS, Nylon and SLA, our printer utilizes edible materials such as maltose, chocolate syrup, and jam to print customized patterns. Our framework automatically converts an arbitrary input image into an optimized printable path to facilitate food printing while preserving the prominent features of the image. This is achieved in two key stages. First, we apply image abstraction techniques to extract salient image features. Robust face detection and sketch synthesis are optionally involved to enhance face features in portrait images. Second, we present a novel path optimization algorithm to generate a printing path for efficient and feature-preserving food printing. We demonstrate the efficiency and efficacy of our framework on a variety of images and through a comparison with non-optimized results.
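    The path-generation idea can be pictured as ordering extracted feature pixels into one continuous toolpath so the print head rarely lifts or travels idle. The greedy nearest-neighbour sketch below is a simplification of such path planning, not the paper's optimization algorithm.

    ```python
    import numpy as np

    def greedy_print_path(points):
        """Order 2D feature points into a single continuous printing path.

        points : (N, 2) array of feature-pixel coordinates.
        Greedily hops to the nearest unvisited point, shortening travel
        moves; real planners refine this further (e.g. with 2-opt).
        """
        pts = np.asarray(points, dtype=np.float64)
        visited = np.zeros(len(pts), dtype=bool)
        order = [0]
        visited[0] = True
        for _ in range(len(pts) - 1):
            d = np.linalg.norm(pts - pts[order[-1]], axis=1)
            d[visited] = np.inf          # never revisit a point
            nxt = int(np.argmin(d))
            order.append(nxt)
            visited[nxt] = True
        return pts[order]
    ```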

    Abstract Art by Shape Classification


    Temporally Coherent Video Stylization

    The transformation of video clips into stylized animations remains an active research topic in Computer Graphics. A key challenge is to reproduce the look of traditional artistic styles whilst minimizing distracting flickering and sliding artifacts, i.e. with temporal coherence. This chapter surveys the spectrum of available video stylization techniques, focusing on algorithms that encourage the temporally coherent placement of rendering marks, and discusses the trade-offs necessary to achieve coherence. We begin with flow-based adaptations of stroke-based rendering (SBR) and texture advection capable of painting video. We then chart the development of the field, and its fusion with Computer Vision, to deliver coherent mid-level scene representations. These representations enable the rotoscoping of rendering marks onto temporally coherent video regions, enhancing the diversity and temporal coherence of stylization. In discussing coherence, we formalize the problem of temporal coherence in terms of three defined criteria, and compare and contrast video stylization techniques against them.
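    The flow-based family of techniques can be illustrated by advecting stroke seed positions along dense optical flow, so that rendering marks follow scene motion instead of being regenerated (and hence flickering) every frame. A minimal sketch using Farnebäck flow; the parameter values are conventional defaults, not those of any surveyed system.

    ```python
    import cv2
    import numpy as np

    def advect_strokes(prev_gray, next_gray, seeds):
        """Move stroke seed points along dense optical flow.

        prev_gray, next_gray : consecutive greyscale frames (uint8)
        seeds : (N, 2) float array of (x, y) stroke positions
        """
        # Farnebaeck dense flow; args: pyr_scale, levels, winsize,
        # iterations, poly_n, poly_sigma, flags.
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, next_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        xs = np.clip(seeds[:, 0].astype(int), 0, flow.shape[1] - 1)
        ys = np.clip(seeds[:, 1].astype(int), 0, flow.shape[0] - 1)
        return seeds + flow[ys, xs]   # per-seed (dx, dy) displacement
    ```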

    Edge-enhancing image smoothing.

    Xu, Yi. Thesis (M.Phil.), Chinese University of Hong Kong, 2011. Includes bibliographical references (p. 62-69). Abstracts in English and Chinese.
    Contents: 1 Introduction (1.1 Organization); 2 Background and Motivation (2.1 1D Mondrian Smoothing; 2.2 2D Formulation); 3 Solver (3.1 More Analysis); 4 Edge Extraction (4.1 Related Work; 4.2 Method and Results; 4.3 Summary); 5 Image Abstraction and Pencil Sketching (5.1 Related Work; 5.2 Method and Results; 5.3 Summary); 6 Clip-Art Compression Artifact Removal (6.1 Related Work; 6.2 Method and Results; 6.3 Summary); 7 Layer-Based Contrast Manipulation (7.1 Related Work; 7.2 Method and Results: 7.2.1 Edge Adjustment; 7.2.2 Detail Magnification; 7.2.3 Tone Mapping; 7.3 Summary); 8 Conclusion and Discussion; Bibliography.