525 research outputs found

    Computer-assisted animation creation techniques for hair animation and shade, highlight, and shadow

    Degree system: new ; Report number: Kō 3062 ; Degree type: Doctor of Engineering ; Conferral date: 2010/2/25 ; Waseda University degree number: Shin 532

    Towards sketch-based exploration of terrain: a feasibility study

    CISRG discussion paper ; 1

    A Van Gogh inspired 3D Shader Methodology

    This study develops an approach to surface shading for computer-generated 3D head models that adapts aesthetics from the post-impressionist portrait painting style of Vincent Van Gogh. This research attempts to reconcile a 2D expressionist painting style with 3D digital computer-generated imagery. The focus of this research is on developing a surface-shading methodology for creating 3D impasto painterly renderings informed by Van Gogh’s self-portrait paintings. Visual analysis of several of Van Gogh’s self-portraits reveals the characteristics of his overall rendering style that are essential in designing methods for shading and texturing 3D head models. A method for shading is proposed that uses existing surfacing and rendering tools to create 3D digital heads rendered in Van Gogh’s style. The designed shading methodology describes procedures that generate brushstroke patterns. User controls for brushstroke profile, size, color, and direction are provided to allow variations in the brushstroke patterns. These patterns are used to define thick oil-paint surface properties for 3D digital models. A discussion of the range of results achieved using the designed shading methodology reveals the variations in rendering style that can be achieved, reflecting a wide range of expressive 3D portrait rendering styles. This study is therefore useful in understanding Van Gogh’s expressive portrait painting style and in applying the essence of his work to synthesized 3D portraits.
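    The brushstroke controls the abstract describes (profile, size, color, direction) can be pictured as a small parameter set driving a procedural pattern generator. The Python sketch below is purely illustrative, not the author's tool; every class, parameter, and default value is a hypothetical stand-in:

    ```python
    import random

    # Hypothetical user controls mirroring the abstract's brushstroke
    # parameters: size, color, and direction, each with a jitter amount.
    class BrushControls:
        def __init__(self, size=0.05, size_jitter=0.3,
                     color=(0.85, 0.60, 0.20), color_jitter=0.1,
                     direction=0.0, direction_jitter=0.4):
            self.size = size                    # base stroke length (UV units)
            self.size_jitter = size_jitter      # relative size variation
            self.color = color                  # base oil-paint RGB
            self.color_jitter = color_jitter    # per-stroke color variation
            self.direction = direction          # base stroke angle (radians)
            self.direction_jitter = direction_jitter

    def generate_strokes(controls, n=500, seed=1):
        """Scatter n brushstrokes over a unit UV patch, varying each
        stroke's size, color, and direction around the user controls."""
        rng = random.Random(seed)
        strokes = []
        for _ in range(n):
            u, v = rng.random(), rng.random()
            size = controls.size * (1 + controls.size_jitter * (2 * rng.random() - 1))
            angle = controls.direction + controls.direction_jitter * (2 * rng.random() - 1)
            color = tuple(min(1.0, max(0.0, c + controls.color_jitter * (2 * rng.random() - 1)))
                          for c in controls.color)
            strokes.append({"uv": (u, v), "size": size, "angle": angle, "color": color})
        return strokes

    strokes = generate_strokes(BrushControls())
    print(len(strokes), "strokes; first:", strokes[0])
    ```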

    Volumetric cloud generation using a Chinese brush calligraphy style

    Clouds are an important feature of any real or simulated environment in which the sky is visible. Their amorphous, ever-changing, and illuminated features make the sky vivid and beautiful. However, these same features increase the complexity of both real-time rendering and modelling. It is difficult to design and build volumetric clouds in an easy and intuitive way, particularly if the interface is intended for artists rather than programmers. We propose a novel modelling system motivated by an ancient painting style, Chinese Landscape Painting, to address this problem. With the use of only one brush and one colour, an artist can paint a vivid and detailed landscape efficiently. In this research, we develop three emulations of a Chinese brush: a skeleton-based brush, a 2D texture footprint, and a dynamic 3D footprint, all driven by the motion and pressure of a stylus pen. We propose a hybrid mapping to generate both the body and the surface of volumetric clouds from the brush footprints. Our interface integrates these components, along with 3D canvas control and GPU-based volumetric rendering, into an interactive cloud modelling system. Our cloud modelling system is able to create various types of clouds occurring in nature. User tests indicate that our brush calligraphy approach is preferred to conventional volumetric cloud modelling and that it produces convincing 3D cloud formations in an intuitive and interactive fashion. While traditional modelling systems focus on surface generation of 3D objects, our brush calligraphy technique constructs the interior structure. This forms the basis of a new modelling style for objects with amorphous shape.
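    The core idea of painting cloud density with a pressure-driven 3D brush footprint can be caricatured in a few lines. The sketch below is an assumption-laden toy, not the thesis system: it stamps a soft Gaussian footprint into a voxel grid at sample points along a stroke, widening with stylus pressure:

    ```python
    import numpy as np

    def deposit_stroke(grid, points, pressures, base_radius=3.0):
        """Stamp a pressure-modulated spherical footprint into a voxel
        grid at each sample point along a brush stroke (voxel units).
        A toy analogue of painting cloud density with a 3D footprint."""
        xx, yy, zz = np.indices(grid.shape)
        for (px, py, pz), pressure in zip(points, pressures):
            r = base_radius * pressure            # heavier pressure -> wider footprint
            d2 = (xx - px) ** 2 + (yy - py) ** 2 + (zz - pz) ** 2
            grid += np.exp(-d2 / (2 * r * r))     # soft Gaussian falloff
        np.clip(grid, 0.0, 1.0, out=grid)
        return grid

    grid = np.zeros((32, 32, 32))
    stroke = [(8, 16, 16), (16, 16, 16), (24, 16, 16)]   # a straight stroke
    pressures = [0.4, 1.0, 0.4]                          # press harder mid-stroke
    deposit_stroke(grid, stroke, pressures)
    print("filled voxels:", int((grid > 0.1).sum()))
    ```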

    Controllable Neural Synthesis for Natural Images and Vector Art

    Neural image synthesis approaches have become increasingly popular in recent years due to their ability to generate photorealistic images useful for several applications, such as digital entertainment, mixed reality, synthetic dataset creation, and computer art. Despite this progress, current approaches have two important shortcomings: (a) they often fail to capture long-range interactions in the image and, as a result, fail to generate scenes with complex dependencies between their different objects or parts; (b) they often ignore the underlying 3D geometry of the shape or scene in the image and, as a result, frequently lose coherency and detail. My thesis proposes novel solutions to the above problems. First, I propose a neural transformer architecture that captures long-range interactions and context for image synthesis at high resolutions, leading to the synthesis of interesting phenomena in scenes, such as reflections of landscapes onto water or flora consistent with the rest of the landscape, that previous ConvNet- and transformer-based approaches could not generate reliably. The key idea of the architecture is to sparsify the transformer's attention matrix at high resolutions, guided by dense attention extracted at lower image resolution. I present qualitative and quantitative results, along with user studies, demonstrating the effectiveness of the method and its superiority over the state of the art. Second, I propose a method that generates artistic images with the guidance of input 3D shapes. In contrast to previous methods, the use of a geometric representation of 3D shape enables the synthesis of more precise stylized drawings with fewer artifacts. My method outputs the synthesized images in a vector representation, enabling richer downstream analysis or editing in interactive applications. I also show that the method produces substantially better results than existing image-based methods, both in predicting artists’ drawings and in user evaluation of the results.
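    The sparsification idea, dense attention computed at low resolution guiding a sparse attention pattern at high resolution, can be illustrated with a minimal NumPy sketch. This is not the thesis architecture; the block mapping, top-k selection, and all names are assumptions made for illustration:

    ```python
    import numpy as np

    def guided_sparse_attention_mask(q_lo, k_lo, hi_len, keep=8):
        """Build a high-resolution attention sparsity mask from dense
        attention computed at low resolution. Each low-res position
        stands in for a block of hi_len // lo_len high-res positions."""
        lo_len, d = q_lo.shape
        block = hi_len // lo_len
        # Dense low-res attention (softmax over scaled dot products).
        scores = q_lo @ k_lo.T / np.sqrt(d)
        attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
        attn /= attn.sum(axis=-1, keepdims=True)
        # Keep only the top-k key blocks per query block.
        top = np.argsort(attn, axis=-1)[:, -keep:]
        mask = np.zeros((hi_len, hi_len), dtype=bool)
        for qi in range(lo_len):
            for kj in top[qi]:
                mask[qi * block:(qi + 1) * block, kj * block:(kj + 1) * block] = True
        return mask

    rng = np.random.default_rng(0)
    q_lo = rng.normal(size=(16, 32))
    k_lo = rng.normal(size=(16, 32))
    mask = guided_sparse_attention_mask(q_lo, k_lo, hi_len=256, keep=4)
    print("attention density:", mask.mean())   # fraction of entries kept
    ```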

    A workflow for designing stylized shading effects

    In this report, we describe a workflow for designing stylized shading effects on a 3D object, targeted at technical artists. Shading design, the process of making the illumination of an object in a 3D scene match an artist's vision, is usually a time-consuming task because of the complex interactions between materials, geometry, and the lighting environment. Physically based methods tend to provide an intuitive and coherent workflow for artists, but they are of limited use in the context of non-photorealistic shading styles. On the other hand, existing stylized shading techniques are either too specialized or require considerable hand-tuning of unintuitive parameters to give a satisfactory result. Our contribution is to separate the design process of an individual shading effect into three independent stages: control of its global behavior on the object, addition of procedural details, and colorization. Inspired by the formulation of existing shading models, we expose different shading behaviors to the artist through parametrizations that have a meaningful visual interpretation. Multiple shading effects can then be composited to obtain complex dynamic appearances. The proposed workflow is fully interactive, with real-time feedback, and allows the intuitive exploration of stylized shading effects while keeping coherence under varying viewpoints and light configurations. Furthermore, our method makes use of the deferred shading technique, making it easy to integrate into existing rendering pipelines.
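    The three-stage decomposition (global behavior, procedural detail, colorization) can be mimicked for a single shaded point in a few lines of Python. This toy is only a reading aid; the Lambertian parametrization, sinusoidal detail function, and two-tone ramp below are stand-ins, not the report's actual effects:

    ```python
    import math

    def behavior_lambert(normal, light_dir):
        """Stage 1: global shading behavior via a parametrization.
        Here, a clamped Lambertian term in [0, 1]."""
        d = sum(n * l for n, l in zip(normal, light_dir))
        return max(0.0, d)

    def add_detail(t, u, v, amount=0.15):
        """Stage 2: procedural detail, e.g. a cheap sinusoidal grain
        perturbing the shading parameter in texture space."""
        return min(1.0, max(0.0, t + amount * math.sin(40 * u) * math.sin(40 * v)))

    def colorize(t, shadow=(0.2, 0.1, 0.3), lit=(1.0, 0.9, 0.7)):
        """Stage 3: colorization, mapping the shading parameter through
        a two-tone ramp (a discrete 'toon' ramp would work equally well)."""
        return tuple(s + t * (l - s) for s, l in zip(shadow, lit))

    # Composite one effect for a single shaded point.
    t = behavior_lambert(normal=(0.0, 0.0, 1.0), light_dir=(0.0, 0.6, 0.8))
    t = add_detail(t, u=0.3, v=0.7)
    print(colorize(t))
    ```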

    TangiPaint: Interactive tangible media

    Currently, there is a wide disconnect between the real and virtual worlds in computer graphics. Art created with textured paint on canvas has visual effects that naturally supplement simple color. Real paint exhibits shadows and highlights, which change in response to viewing and lighting directions. The colors interact with this environment and can produce very noticeable effects. Additionally, the traditional means of human-computer interaction, the keyboard and mouse, is unnatural and inefficient: gestures and actions are not performed on the objects themselves. These visual effects and natural interactions are missing from digital media in the virtual world, and their absence disconnects users from their content. Our research looks into simulating these missing pieces and reconnecting users. TangiPaint is an interactive, tangible application for creating and exploring digital media. It gives the experience of working with real materials, such as oil paints and textured canvases, on a digital display. TangiPaint implements natural gestures and allows users to directly interact with their work. The Tangible Display technology allows users to tilt and reorient the device and screen to see the subtle gloss, shadow, and impasto lighting effects of the simulated surface. To simulate realistic lighting effects, we use a Ward BRDF illumination model, implemented as an OpenGL shader program. Our system tracks the texture and relief of a piece of art by saving topographical information, using height fields, normal vectors, and parameter maps to store it. These textures are passed to the lighting model, which renders the final image. TangiPaint builds on previous work and applications in this area but is the first to integrate these aspects into a single software application. The system is entirely self-contained and implemented on Apple's iOS platforms: iPhone, iPad, and iPod Touch. No additional hardware is required, and the interface is easy to learn and use. TangiPaint is a step in the direction of interactive digital art media that looks and behaves like real materials.
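    The Ward BRDF mentioned above has a standard published isotropic form: a diffuse term plus a Gaussian-like specular lobe around the half vector, whose peak shifts as viewing and lighting directions change. A plain-Python evaluation of that formula follows; the app itself runs the model as an OpenGL shader, and the parameter values here are arbitrary:

    ```python
    import numpy as np

    def ward_isotropic(n, l, v, rho_d=0.5, rho_s=0.2, alpha=0.15):
        """Isotropic Ward BRDF: diffuse term plus a Gaussian-like
        specular lobe around the half vector. alpha sets gloss width."""
        n, l, v = (x / np.linalg.norm(x) for x in (n, l, v))
        h = (l + v) / np.linalg.norm(l + v)        # half vector
        cos_i, cos_o = n @ l, n @ v
        if cos_i <= 0 or cos_o <= 0:
            return 0.0                              # light or view below surface
        cos_h = n @ h
        tan2_h = (1.0 - cos_h ** 2) / cos_h ** 2
        spec = rho_s * np.exp(-tan2_h / alpha ** 2) \
               / (4 * np.pi * alpha ** 2 * np.sqrt(cos_i * cos_o))
        return rho_d / np.pi + spec

    n = np.array([0.0, 0.0, 1.0])
    l = np.array([0.0, 0.5, 1.0])     # tilting the device moves light...
    v = np.array([0.0, -0.5, 1.0])    # ...and view, shifting the highlight
    print(ward_isotropic(n, l, v))
    ```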