88 research outputs found

    A study of how Chinese ink painting features can be applied to 3D scenes and models in real-time rendering

    Past research has produced mature techniques for non-photorealistic rendering, but little work addresses efficient methods for simulating Chinese ink painting features when rendering 3D scenes. Given that Chinese ink painting has earned wide international recognition, the potential to develop 3D animations and games in this style effectively and automatically points to a need for appropriate technology for the future market. The goal of this research is to render 3D meshes in a Chinese ink painting style that is both appealing and realistic. Specifically: how can the output image be made to resemble a hand-drawn Chinese ink painting, and how efficient must the rendering pipeline be to produce a real-time scene? For this study the researcher designed two rendering pipelines, one for static objects and one for moving objects in the final scene. The overall rendering process includes interior shading, silhouette extraction, texture integration, and background rendering. The methodology uses silhouette detection, multiple rendering passes, Gaussian blur for anti-aliasing, smoothstep functions, and noise textures to simulate ink textures. Based on the output of each pipeline, the rendering process of the scene that best captures the Chinese ink painting style is illustrated in detail. The speed of the proposed pipeline was tested: the frame rate of the final scenes was above 30 fps, a level considered real-time. One can conclude that the main objective of the study was met, although other methods for generating Chinese ink painting renderings are available and should be explored.
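    The pipeline steps named above (smoothstep-based interior shading, silhouette extraction, and noise textures) can be illustrated with a rough CPU-side sketch like the one below. The thesis implements these steps as GPU rendering passes; the thresholds, band edges, and array layout here are assumptions made purely for illustration.

        # Hypothetical CPU-side sketch of ink-style interior shading and silhouette
        # darkening; array names, thresholds, and band edges are illustrative only.
        import numpy as np

        def smoothstep(e0, e1, x):
            """Standard smoothstep: 0 below e0, 1 above e1, smooth in between."""
            t = np.clip((x - e0) / (e1 - e0), 0.0, 1.0)
            return t * t * (3.0 - 2.0 * t)

        def ink_shade(normals, light_dir, view_dir, noise, edge_band=(0.15, 0.35)):
            """normals: HxWx3 unit normals, light_dir/view_dir: unit 3-vectors,
            noise: HxW values in [0,1] standing in for a scanned ink/paper texture."""
            n_dot_l = np.clip(np.einsum('ijk,k->ij', normals, light_dir), 0.0, 1.0)
            # Soft two-tone interior wash instead of continuous Lambert shading.
            tone = 0.35 + 0.65 * smoothstep(0.25, 0.6, n_dot_l)
            # Silhouette: facets nearly perpendicular to the view direction go dark.
            n_dot_v = np.abs(np.einsum('ijk,k->ij', normals, view_dir))
            edge = 1.0 - smoothstep(edge_band[0], edge_band[1], n_dot_v)
            # Noise breaks up the flat wash so it reads like diffused ink on paper.
            ink = tone * (0.85 + 0.3 * (noise - 0.5))
            return np.clip(ink * (1.0 - 0.9 * edge), 0.0, 1.0)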

    Art Directed Shader for Real Time Rendering - Interactive 3D Painting

    In this work, I develop an approach to include Global Illumination (GI) effects in non-photorealistic real-time rendering; real-time rendering is one of the main areas of focus in the gaming industry and in the booming virtual reality (VR) and augmented reality (AR) industries. My approach is based on adapting the Barycentric shader to create a wide variety of painting effects. This shader helps achieve the look of a 2D painting in an interactively rendered 3D scene, and it supports robust computation of artistic reflection and refraction. My contributions can be summarized as follows: development of a generalized Barycentric shader that provides artistic control, integration of this generalized Barycentric shader into an interactive ray tracer, and interactive rendering of a 3D scene that closely represents the reference painting.
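    The barycentric idea behind such a shader can be sketched as shading each point with a convex (barycentric) combination of artist-chosen control colors, with weights derived from standard shading terms. The particular weighting below is an assumption for illustration, not the thesis's exact formulation.

        # Minimal sketch of a barycentric shader: the final color is a convex
        # combination of artist-chosen control colors, with weights derived from
        # shading terms. The weight choices below are illustrative assumptions.
        import numpy as np

        def barycentric_shade(n_dot_l, n_dot_v, dark, mid, lit):
            """n_dot_l, n_dot_v: scalars in [0,1]; dark/mid/lit: RGB control colors
            chosen by the artist (e.g. sampled from a reference painting)."""
            w_lit = n_dot_l                              # lit regions pull toward the bright color
            w_dark = (1.0 - n_dot_l) * (1.0 - n_dot_v)   # shadowed, grazing regions go dark
            w_mid = 1.0 - w_lit - w_dark                 # remainder; weights sum to 1 by construction
            weights = np.array([w_dark, w_mid, w_lit])
            return (weights[0] * np.asarray(dark)
                    + weights[1] * np.asarray(mid)
                    + weights[2] * np.asarray(lit))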

    Exploiting the GPU power for intensive geometric and imaging data computation.

    Wang Jianqing. Thesis (M.Phil.)--Chinese University of Hong Kong, 2004. Includes bibliographical references (leaves 81-86). Abstracts in English and Chinese. Table of contents:
    Chapter 1 --- Introduction --- p.1 (1.1 Overview, p.1; 1.2 Thesis, p.3; 1.3 Contributions, p.4; 1.4 Organization, p.6)
    Chapter 2 --- Programmable Graphics Hardware --- p.8 (2.1 Introduction, p.8; 2.2 Why Use GPU?, p.9; 2.3 Programmable Graphics Hardware Architecture, p.11; 2.4 Previous Work on GPU Computation, p.15)
    Chapter 3 --- Multilingual Virtual Performer --- p.17 (3.1 Overview, p.17; 3.2 Previous Work, p.18; 3.3 System Overview, p.20; 3.4 Facial Animation, p.22; 3.4.1 Facial Animation using Face Space, p.23; 3.4.2 Face Set Selection for Lip Synchronization, p.27; 3.4.3 The Blending Weight Function Generation and Coarticulation, p.33; 3.4.4 Expression Overlay, p.38; 3.4.5 GPU Algorithm, p.39; 3.5 Character Animation, p.44; 3.5.1 Skeletal Animation Primer, p.44; 3.5.2 Mathematics of Kinematics, p.46; 3.5.3 Animating with Motion Capture Data, p.48; 3.5.4 Skeletal Subspace Deformation, p.49; 3.5.5 GPU Algorithm, p.50; 3.6 Integration of Skeletal and Facial Animation, p.52; 3.7 Result, p.53; 3.7.1 Summary, p.58)
    Chapter 4 --- Discrete Wavelet Transform On GPU --- p.60 (4.1 Introduction, p.60; 4.1.1 Previous Works, p.61; 4.1.2 Our Solution, p.61; 4.2 Multiresolution Analysis with Wavelets, p.62; 4.3 Fragment Processor for Pixel Processing, p.64; 4.4 DWT Pipeline, p.65; 4.4.1 Convolution Versus Lifting, p.65; 4.4.2 DWT Pipeline, p.67; 4.5 Forward DWT, p.68; 4.6 Inverse DWT, p.71; 4.7 Results and Applications, p.73; 4.7.1 Geometric Deformation in Wavelet Domain, p.73; 4.7.2 Stylish Image Processing and Texture-illuminance Decoupling, p.73; 4.7.3 Hardware-Accelerated JPEG2000 Encoding, p.75; 4.8 Web Information, p.78)
    Chapter 5 --- Conclusion --- p.79
    Bibliography --- p.8
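    The outline above contrasts convolution with lifting for the GPU-based discrete wavelet transform. As a purely illustrative aside, the following is a minimal NumPy sketch of the 1-D Haar transform in lifting form; it only demonstrates the lifting idea and is not the thesis's fragment-processor implementation.

        # Minimal 1-D Haar wavelet transform via the lifting scheme, shown in NumPy
        # purely to illustrate the convolution-versus-lifting distinction; the thesis
        # maps such passes onto the GPU fragment processor instead.
        import numpy as np

        def haar_lift_forward(x):
            """x: 1-D array of even length -> (approximation, detail) coefficients."""
            even, odd = x[0::2].astype(float), x[1::2].astype(float)
            detail = odd - even            # predict step: odd samples from even ones
            approx = even + 0.5 * detail   # update step: preserve the running average
            return approx, detail

        def haar_lift_inverse(approx, detail):
            """Exactly inverts haar_lift_forward (lifting steps are trivially reversible)."""
            even = approx - 0.5 * detail
            odd = detail + even
            out = np.empty(even.size + odd.size)
            out[0::2], out[1::2] = even, odd
            return out

        # Round-trip check on a small signal.
        sig = np.array([4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 8.0, 6.0])
        a, d = haar_lift_forward(sig)
        assert np.allclose(haar_lift_inverse(a, d), sig)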

    Painting Many Pasts: Synthesizing Time Lapse Videos of Paintings

    We introduce a new video synthesis task: synthesizing time lapse videos depicting how a given painting might have been created. Artists paint using unique combinations of brushes, strokes, and colors. There are often many possible ways to create a given painting. Our goal is to learn to capture this rich range of possibilities. Creating distributions of long-term videos is a challenge for learning-based video synthesis methods. We present a probabilistic model that, given a single image of a completed painting, recurrently synthesizes steps of the painting process. We implement this model as a convolutional neural network, and introduce a novel training scheme to enable learning from a limited dataset of painting time lapses. We demonstrate that this model can be used to sample many time steps, enabling long-term stochastic video synthesis. We evaluate our method on digital and watercolor paintings collected from video websites, and show that human raters find our synthetic videos to be similar to time lapse videos produced by real artists. Our code is available at https://xamyzhao.github.io/timecraft. Comment: 10 pages, CVPR 202
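    A highly simplified sketch of the recurrent, stochastic synthesis the abstract describes might look as follows: a small convolutional network takes the finished painting, the current canvas, and a sampled latent code, predicts the next canvas, and is rolled forward from a blank canvas. The architecture and names below are illustrative assumptions, not the paper's actual model; the real code is at the project page linked above.

        # Illustrative sketch only: recurrent, stochastic prediction of painting steps.
        import torch
        import torch.nn as nn

        class StepPredictor(nn.Module):
            def __init__(self, latent_dim=8):
                super().__init__()
                # Input: finished painting (3ch) + current canvas (3ch) + broadcast latent.
                self.net = nn.Sequential(
                    nn.Conv2d(6 + latent_dim, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # next canvas in [0,1]
                )
                self.latent_dim = latent_dim

            def forward(self, finished, canvas, z):
                z_map = z[:, :, None, None].expand(-1, -1, *finished.shape[-2:])
                return self.net(torch.cat([finished, canvas, z_map], dim=1))

        def sample_time_lapse(model, finished, steps=10):
            """Recurrently roll the predictor forward from a blank (white) canvas."""
            canvas = torch.ones_like(finished)
            frames = []
            for _ in range(steps):
                z = torch.randn(finished.size(0), model.latent_dim)  # stochastic latent
                canvas = model(finished, canvas, z)
                frames.append(canvas)
            return frames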

    THE REALISM OF ALGORITHMIC HUMAN FIGURES: A Study of Selected Examples 1964 to 2001

    It is more than forty years since the first wireframe images of the Boeing Man revealed a stylized human pilot in a simulated pilot's cabin. Since then, it has almost become standard for Hollywood movies to include scenes with virtual human actors. A trait particularly recognizable in the games industry world-wide is the eagerness to render athletic muscular young men, and young women with hour-glass body shapes, who traverse dangerous cyberworlds as invincible heroic figures. Tremendous effort in algorithmic modeling, animation, and rendering is spent to produce a realistic and believable appearance for these algorithmic humans. This thesis develops two main strands of research by interpreting a selection of examples. Firstly, in the computer graphics context, it documents the development over those forty years of the naturalistic appearance of images (usually called photorealism). In particular, it describes and reviews the impact of key algorithms in the course of the journey of algorithmic human figures towards realism. Secondly, taking a historical perspective, this work provides an analysis of computer graphics in relation to the concept of realism. A comparison of realistic images of human figures throughout history with their algorithmically generated counterparts shows that computer graphics has learned from previous and contemporary art movements such as photorealism, but has also taken elements, symbols, and properties from these art movements out of context with a questionable naivety. Therefore, this work also offers a critique of the justification for their typical conceptualization in computer graphics. Although the astounding technical achievements in the field of algorithmically generated human figures are paralleled by an equally astounding disregard for the history of visual culture, from the beginnings in 1964 to the breakthrough in 2001, in the period of the digital information processing machine, a new approach has emerged to meet the apparently incessant desire of humans to create artificial counterparts of themselves. Conversely, the theories of traditional realism have to be extended to include the new problems that these active algorithmic human figures present.

    Computer-Assisted Interactive Documentary and Performance Arts in Illimitable Space

    The major component of the research described in this thesis is 3D computer graphics, specifically realistic physics-based softbody simulation and haptic responsive environments. Minor components include advanced human-computer interaction environments, non-linear documentary storytelling, and theatre performance. The journey of this research has been unusual because it requires a researcher with solid knowledge and background in multiple disciplines, who also has to be creative and sensitive in order to combine these areas into a new research direction. [...] It focuses on advanced computer graphics and emerges from experimental cinematic works and theatrical artistic practices. Development content and installations were completed to demonstrate and evaluate the described concepts and to make them convincing. [...] To summarize, the resulting work involves not only artistic creativity, but also solving technological hurdles in motion tracking, pattern recognition, force-feedback control, etc., combining them with the available documentary footage on film, video, or images, and text via a variety of devices [....] and programming and installing all the needed interfaces so that it all works in real time. Thus, the contribution to knowledge lies in solving these interfacing problems and the real-time aspects of the interaction, which have uses in the film industry, the fashion industry, new-age interactive theatre, computer games, and web-based technologies and services for entertainment and education. It also includes building on this experience to integrate Kinect- and haptic-based interaction, artistic scenery rendering, and other forms of control. This research connects seemingly disjoint fields of research, such as computer graphics, documentary film, interactive media, and theatre performance. Comment: PhD thesis copy; 272 pages, 83 figures, 6 algorithms

    Human-Art: A Versatile Human-Centric Dataset Bridging Natural and Artificial Scenes

    Humans have long been recorded in a variety of forms since antiquity. For example, sculptures and paintings were the primary media for depicting human beings before the invention of cameras. However, most current human-centric computer vision tasks like human pose estimation and human image generation focus exclusively on natural images in the real world. Artificial humans, such as those in sculptures, paintings, and cartoons, are commonly neglected, making existing models fail in these scenarios. As an abstraction of life, art incorporates humans in both natural and artificial scenes. We take advantage of this and introduce the Human-Art dataset to bridge related tasks in natural and artificial scenarios. Specifically, Human-Art contains 50k high-quality images with over 123k person instances from 5 natural and 15 artificial scenarios, which are annotated with bounding boxes, keypoints, self-contact points, and text information for humans represented in both 2D and 3D. It is, therefore, comprehensive and versatile for various downstream tasks. We also provide a rich set of baseline results and detailed analyses for related tasks, including human detection, 2D and 3D human pose estimation, image generation, and motion transfer. As a challenging dataset, we hope Human-Art can provide insights for relevant research and open up new research questions. Comment: CVPR202
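    To make the annotation types concrete, the following is a hypothetical sketch of what a single annotated person instance might carry; it is not the actual Human-Art schema, only an illustration of the fields the abstract lists (bounding box, keypoints, self-contact points, text, and a scenario tag).

        # Hypothetical record layout, NOT the dataset's real schema; field names
        # and types are assumptions made purely for illustration.
        from dataclasses import dataclass, field
        from typing import List, Tuple

        @dataclass
        class PersonInstance:
            bbox: Tuple[float, float, float, float]   # (x, y, width, height)
            keypoints: List[Tuple[float, float, int]] = field(default_factory=list)  # (x, y, visibility)
            self_contact: List[Tuple[float, float]] = field(default_factory=list)    # contact point coordinates
            description: str = ""                     # free-text annotation

        @dataclass
        class AnnotatedImage:
            file_name: str
            scenario: str                             # e.g. "oil_painting" or "sculpture" (natural or artificial)
            persons: List[PersonInstance] = field(default_factory=list)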

    Atmospheric cloud modeling methods in computer graphics: A review, trends, taxonomy, and future directions

    The modeling of atmospheric clouds is one of the crucial elements in a natural phenomena visualization system. Over the years, a wide range of approaches has been proposed on this topic to deal with the challenging issues of visual realism and performance. However, the lack of recent review papers on the atmospheric cloud modeling methods available in computer graphics makes it difficult for researchers and practitioners to understand and choose well-suited solutions for developing an atmospheric cloud visualization system. Hence, we conducted a comprehensive review to identify, analyze, classify, and summarize the existing atmospheric cloud modeling solutions. We selected 113 research studies from recognized data sources and analyzed the research trends on this topic. We defined a taxonomy by categorizing the atmospheric cloud modeling methods based on their shared characteristics and summarized each method. Finally, we underlined several research issues and directions for potential future work. The review results provide an overview and general picture of atmospheric cloud modeling methods that should be beneficial for researchers and practitioners.

    Fast, Accurate and Automatic Brushstroke Extraction

    Brushstrokes are viewed as the artist’s “handwriting” in a painting. In many applications, such as style learning and transfer, painting imitation, and painting authentication, it is highly desirable to quantitatively and accurately identify brushstroke characteristics from old masters’ pieces using computer programs. However, because hundreds or thousands of brushstrokes intermingle in a painting, this remains challenging. This article proposes an efficient algorithm for brush Stroke extraction based on a Deep neural network, i.e., DStroke. Compared to the state of the art, the main merit of the proposed DStroke is that it automatically and rapidly extracts brushstrokes from a painting without manual annotation, while accurately approximating the real brushstrokes with high reliability. Notably, recovering the faithful soft transitions between brushstrokes is often ignored by other methods. In fact, the details of brushstrokes in a masterpiece (e.g., shapes, colors, texture, overlaps) are highly desired by artists, since they hold promise to enhance and extend artists’ powers, just as microscopes extend biologists’ powers. To demonstrate the high efficiency of the proposed DStroke, we apply it to a set of real scans of paintings and a set of synthetic paintings, respectively. Experiments show that the proposed DStroke is noticeably faster and more accurate at identifying and extracting brushstrokes, outperforming the other methods.
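    As a rough illustration of what brushstroke extraction output might involve, the sketch below post-processes a per-pixel stroke-probability map from some generic segmentation network into labeled stroke regions, while keeping a soft alpha for the transitions the abstract emphasizes. This is not DStroke's actual pipeline; the thresholds and structure are assumptions.

        # Generic post-processing sketch: probability map -> labeled stroke regions
        # plus a soft alpha; thresholds and the overall pipeline are assumptions.
        import numpy as np
        from scipy import ndimage

        def extract_strokes(prob_map, hard_thresh=0.5, soft_band=0.2, min_area=30):
            """prob_map: HxW array in [0,1]. Returns (labels, soft_alpha)."""
            # Soft alpha ramps across the probability band so stroke edges stay feathered.
            lo, hi = hard_thresh - soft_band, hard_thresh + soft_band
            soft_alpha = np.clip((prob_map - lo) / (hi - lo), 0.0, 1.0)
            # Hard mask -> connected components, one label per candidate brushstroke.
            labels, n = ndimage.label(prob_map >= hard_thresh)
            # Drop tiny components that are more likely noise than strokes.
            sizes = ndimage.sum(np.ones_like(labels), labels, index=np.arange(1, n + 1))
            for idx, size in enumerate(sizes, start=1):
                if size < min_area:
                    labels[labels == idx] = 0
            return labels, soft_alpha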