180 research outputs found

    Exploring a Parameterized Portrait Painting Space

    We overview our interdisciplinary work building parameterized knowledge domains and the authoring tools that allow expression systems to move through a space of painterly portraiture. New computational systems make it possible to conceptually dance, compose and paint in higher-level conceptual spaces. We are interested in building art systems that support exploring these spaces, and in particular we report on our software-based artistic toolkit and the resulting experiments using parameter spaces in face-based new-media portraiture. This system allows us to parameterize the open, cognitive, vision-based methodology that human artists have intuitively evolved over centuries into a domain toolkit for exploring aesthetic realizations and interdisciplinary questions about the act of portrait painting, as well as the general creative process. These experiments and questions can be explored by traditional and new-media artists, art historians, cognitive scientists and other scholars.

    Knowledge based approach to modeling portrait painting methodology

    Traditional portrait artists use a specific but open human-vision methodology to create a painterly portrait of a live or photographed sitter. Portrait artists attempt to simplify, compose and leave out what is irrelevant, emphasizing what is important. While seemingly a qualitative pursuit, artists use known but open techniques to filter and emphasize, such as relying on source tone over colour to map indirectly into a colour-temperature model, using "sharpness" to create a centre of interest, and using edges to move the viewer's gaze. Our interdisciplinary work attempts to compile and incorporate this portrait-painter knowledge into a multi-space parameterized system that can create an array of painterly rendering output.
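    One of the techniques the abstract mentions, using "sharpness" to create a centre of interest, can be sketched very simply: keep detail near a chosen focal point and soften the image progressively with distance from it. The sketch below is a minimal NumPy illustration of that idea, not the authors' actual system; the function names and the box-blur choice are assumptions made for brevity.

    ```python
    import numpy as np

    def box_blur(img, r=4):
        """Separable box blur of radius r (pure NumPy, edge-padded)."""
        p = np.pad(img, r, mode="edge")
        k = 2 * r + 1
        out = sum(p[i:i + img.shape[0], :] for i in range(k)) / k      # vertical pass
        out = sum(out[:, j:j + img.shape[1]] for j in range(k)) / k    # horizontal pass
        return out

    def emphasize_focus(tone, focus, r=4):
        """Keep the region near `focus` (row, col) sharp and soften the
        rest, one way to create a painterly centre of interest."""
        blurred = box_blur(tone, r)
        ys, xs = np.mgrid[0:tone.shape[0], 0:tone.shape[1]]
        dist = np.hypot(ys - focus[0], xs - focus[1])
        w = dist / dist.max()            # 0 at the focus, 1 furthest away
        return (1.0 - w) * tone + w * blurred
    ```

    A real painterly system would drive the blend with an artist-authored or saliency-derived map rather than plain distance, but the per-pixel interpolation between a sharp and a softened layer is the same.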

    Higher level techniques for the artistic rendering of images and video

    EThOS - Electronic Theses Online Service, United Kingdom

    Genetic Paint: A Search for Salient Paintings


    The Evaluation of Stylized Facial Expressions

    Stylized rendering aims to abstract information in an image, making it useful not only for artistic but also for visualization purposes. Recent advances in computer graphics techniques have made it possible to render many varieties of stylized imagery efficiently. So far, however, few attempts have been made to characterize the perceptual impact and effectiveness of stylization. In this paper, we report several experiments that evaluate three different stylization techniques in the context of dynamic facial expressions. Going beyond the usual questionnaire approach, the experiments compare the techniques according to several criteria, ranging from introspective measures (subjective preference) to task-dependent measures (recognizability, intensity). Our results shed light on how stylization of image contents affects the perception and subjective evaluation of facial expressions.

    Supervised Genetic Search for Parameter Selection in Painterly Rendering


    Importance-Driven Composition of Multiple Rendering Styles

    We introduce a non-uniform composition that integrates multiple rendering styles in a picture, driven by an importance map. This map, either issued from saliency estimation or designed by a user, is used both in the creation of the multiple styles and in the final composition. Our approach accommodates a variety of stylization techniques, such as color desaturation, line drawing, blurring, edge-preserving smoothing and enhancement. We illustrate the versatility of the proposed approach and the variety of rendering styles on different applications such as images, videos, 3D scenes and even mixed reality. We also demonstrate that such an approach may help in directing user attention.
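    The core composition step described here, an importance map selecting between styles per pixel, reduces in its simplest form to a weighted blend of pre-stylized layers. The snippet below is a minimal two-style linear composition under that assumption; the paper's actual operator also feeds the map into the creation of each style, which is not modelled here.

    ```python
    import numpy as np

    def compose_styles(style_detail, style_abstract, importance):
        """Per-pixel blend of two pre-stylized layers (H x W x C arrays):
        high-importance pixels take the detailed style, low-importance
        pixels the abstracted one. `importance` is an H x W map in [0, 1]."""
        a = np.clip(importance, 0.0, 1.0)[..., None]   # broadcast over channels
        return a * style_detail + (1.0 - a) * style_abstract
    ```

    With more than two styles, the same idea generalizes to a partition-of-unity weighting, one map per style, with the weights summing to one at every pixel.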

    Colour videos with depth: acquisition, processing and evaluation

    The human visual system lets us perceive the world around us in three dimensions by integrating evidence from depth cues into a coherent visual model of the world. The equivalent in computer vision and computer graphics are geometric models, which provide a wealth of information about represented objects, such as depth and surface normals. Videos do not contain this information, but only provide per-pixel colour information. In this dissertation, I hence investigate a combination of videos and geometric models: videos with per-pixel depth (also known as RGBZ videos). I consider the full life cycle of these videos: from their acquisition, via filtering and processing, to stereoscopic display.

    I propose two approaches to capture videos with depth. The first is a spatiotemporal stereo matching approach based on the dual-cross-bilateral grid, a novel real-time technique derived by accelerating a reformulation of an existing stereo matching approach. This is the basis for an extension which incorporates temporal evidence in real time, resulting in increased temporal coherence of disparity maps, particularly in the presence of image noise. The second acquisition approach is a sensor fusion system which combines data from a noisy, low-resolution time-of-flight camera and a high-resolution colour video camera into a coherent, noise-free video with depth. The system consists of a three-step pipeline that aligns the video streams, efficiently removes and fills invalid and noisy geometry, and finally uses a spatiotemporal filter to increase the spatial resolution of the depth data and strongly reduce depth measurement noise.

    I show that these videos with depth empower a range of video processing effects that are not achievable using colour video alone. These effects critically rely on the geometric information, like a proposed video relighting technique which requires high-quality surface normals to produce plausible results. In addition, I demonstrate enhanced non-photorealistic rendering techniques and the ability to synthesise stereoscopic videos, which allows these effects to be applied stereoscopically.

    These stereoscopic renderings inspired me to study stereoscopic viewing discomfort. The result of this is a surprisingly simple computational model that predicts the visual comfort of stereoscopic images. I validated this model using a perceptual study, which showed that it correlates strongly with human comfort ratings. This makes it ideal for automatic comfort assessment, without the need for costly and lengthy perceptual studies.
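    The final filtering step of the sensor-fusion pipeline, raising the resolution of noisy time-of-flight depth using the high-resolution colour camera, builds on the classic joint bilateral upsampling idea (Kopf et al.): each high-resolution pixel averages nearby low-resolution depth samples, weighted by spatial distance and by similarity in the high-resolution guide image. The sketch below is a minimal single-frame, grayscale-guide version of that idea, not the dissertation's spatiotemporal filter; function name and parameters are illustrative.

    ```python
    import numpy as np

    def jbu(depth_lo, guide_hi, factor, sigma_s=1.0, sigma_r=0.1):
        """Joint bilateral upsampling sketch: upsample a low-res depth map
        using a high-res grayscale guide. Weights combine spatial distance
        (in low-res coordinates) and guide-intensity similarity."""
        H, W = guide_hi.shape
        h, w = depth_lo.shape
        out = np.zeros((H, W))
        r = 2  # low-res neighbourhood radius
        for y in range(H):
            for x in range(W):
                cy, cx = y / factor, x / factor          # position in low-res grid
                y0, x0 = int(cy), int(cx)
                wsum = dsum = 0.0
                for j in range(max(0, y0 - r), min(h, y0 + r + 1)):
                    for i in range(max(0, x0 - r), min(w, x0 + r + 1)):
                        # spatial weight in low-res coordinates
                        ws = np.exp(-((j - cy) ** 2 + (i - cx) ** 2) / (2 * sigma_s ** 2))
                        # range weight from the high-res guide
                        gy, gx = min(H - 1, j * factor), min(W - 1, i * factor)
                        wr = np.exp(-((guide_hi[y, x] - guide_hi[gy, gx]) ** 2)
                                    / (2 * sigma_r ** 2))
                        wsum += ws * wr
                        dsum += ws * wr * depth_lo[j, i]
                out[y, x] = dsum / wsum
        return out
    ```

    The guide-driven range weight is what keeps upsampled depth edges aligned with colour edges instead of smearing across them, which is the property the noise-removal step relies on.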