47 research outputs found

    Stroke Pattern Analysis and Synthesis

    We present a synthesis technique that can automatically generate stroke patterns based on a user-specified reference pattern. Our method is an extension of texture synthesis techniques to vector-based patterns. Such an extension requires (a) an analysis of the pattern properties to extract meaningful pattern elements (defined as clusters of strokes) and (b) a synthesis algorithm based on similarities in the detected stroke clusters. Our method is based on results from human vision research concerning perceptual organization. The resulting synthesized patterns effectively reproduce the properties of the input patterns, and can be used to fill both 1D paths and 2D regions.
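
    The two stages the abstract names, element extraction and similarity-based synthesis, can be sketched in a toy form. The clustering rule, spacing, and coordinates below are illustrative assumptions, not the paper's method:

```python
# A minimal sketch of the two stages the abstract describes:
# (a) grouping strokes into pattern elements by proximity, and
# (b) synthesizing a new pattern along a 1D path by reusing the
# reference elements at a fixed spacing. All values are illustrative.
import math

def cluster_strokes(strokes, radius=1.0):
    """Greedy proximity clustering: each stroke joins the first
    cluster whose centroid lies within `radius`."""
    clusters = []
    for x, y in strokes:
        for c in clusters:
            cx = sum(p[0] for p in c) / len(c)
            cy = sum(p[1] for p in c) / len(c)
            if math.hypot(x - cx, y - cy) <= radius:
                c.append((x, y))
                break
        else:
            clusters.append([(x, y)])
    return clusters

def synthesize_along_path(clusters, path_length, spacing):
    """Place copies of the reference elements at regular arc-length
    positions along a straight 1D path of the given length."""
    placements = []
    t, i = 0.0, 0
    while t <= path_length:
        placements.append((t, clusters[i % len(clusters)]))
        t += spacing
        i += 1
    return placements

# Reference pattern: two tight groups of strokes.
ref = [(0.0, 0.0), (0.2, 0.1), (3.0, 0.0), (3.1, -0.1)]
clusters = cluster_strokes(ref, radius=1.0)
print(len(clusters))   # -> 2 pattern elements detected
print(len(synthesize_along_path(clusters, path_length=10.0, spacing=3.0)))  # -> 4
```

    A real implementation would compare cluster similarity (shape, orientation, spacing statistics) rather than simply repeating elements in order, but the pipeline shape is the same.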

    Abstract of “Art-based Modeling and Rendering for Computer Graphics” by Lee Markosian

    Over the centuries, artists and illustrators have developed techniques to effectively convey visual information. In this dissertation we develop the idea that we can apply these techniques to increase the expressive power of 3D computer graphics. This leads us to seek to build a unified free-form modeling system with which a designer can amplify her skills with pencil and paper to model both the geometry and stylized look of virtual scenes. In Part I we first develop algorithms for rendering finely tessellated smooth surfaces in the style of simple line drawings, at interactive rates. We next develop a procedural texture framework that lets us divide a model into distinct regions, with each rendered according to what it represents (bricks on the walls of a castle, say, but wood planks on the drawbridge). We then use this framework to develop two new classes of rendering algorithms: one class performs simple hatched shading; the other adopts techniques of the children’s book illustrator Dr. Seuss (and others) to render fur, grass, and trees in a stylized manner. In Part II we focus on the problem of modeling a scene’s geometry through an interface that leverages an artist’s 2D drawing skills. We begin with a new technique for constructing 3D curves from 2D input: the user draws a curve and its shadow as both would appear from a given viewpoint, and the system computes the corresponding 3D curve. We next describe a new algorithm for computing a free-form surface that smoothly fits over a collection of “primitives” such as generalized cylinders or other “swept” objects. Our intention is to integrate the two techniques so that an artist can quickly sketch such primitives with the help of the first technique, then oversketch them with the second technique to produce the desired free-form surface. The natural next step is to integrate the various parts into a single system for sketching both 3D shapes and the stylized rendering algorithms used to depict them.
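
    The curve-plus-shadow idea above can be sketched under simplifying assumptions that are not from the dissertation: a pinhole camera at the origin with image plane z = 1, a ground plane y = -1, and a light that casts shadows straight down. The shadow sample's image height then fixes the depth of each curve point:

```python
# Illustrative sketch (assumed setup, not the dissertation's system):
# given matched image samples of a drawn curve and its shadow on the
# plane y = ground_y, recover the 3D curve. Camera is a pinhole at the
# origin with image plane z = 1; the light points straight down, so a
# 3D point (X, Y, Z) has shadow (X, ground_y, Z).
def curve_from_shadow(curve_pts, shadow_pts, ground_y=-1.0):
    """For each pair of image samples (curve point, shadow point),
    the shadow's image height sy = ground_y / Z fixes the depth Z;
    the curve sample (px, py) then gives X = px*Z and Y = py*Z."""
    pts3d = []
    for (px, py), (_sx, sy) in zip(curve_pts, shadow_pts):
        z = ground_y / sy          # shadow row fixes depth
        pts3d.append((px * z, py * z, z))
    return pts3d

# One sample: true point (1, 2, 4) projects to image (0.25, 0.5);
# its shadow (1, -1, 4) projects to image (0.25, -0.25).
print(curve_from_shadow([(0.25, 0.5)], [(0.25, -0.25)]))  # -> [(1.0, 2.0, 4.0)]
```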

    Detail Control in Line Drawings of 3D Meshes (Kyuman Jeong, POSTECH)

    Figure caption (truncated): “… and suggestive contours at fine scales appear as many small specks. Middle: the same model rendered with controlled level of detail (approximately 70,000 polygons). Right: additional detail is generated when the camera zooms in.”

    We address the problem of rendering a 3D mesh in the style of a line drawing, in which little or no shading is used and instead shape cues are provided by silhouettes and suggestive contours. Our specific goal is to depict shape features at a desired scale. For example, when mesh triangles project into the image plane at sub-pixel sizes, both suggestive contours and silhouettes may form dense networks that convey shape poorly. The solution we propose is to convert the input mesh to a multi-resolution representation (specifically, a progressive mesh), then view-dependently refine or coarsen the mesh to control the size of its triangles in image space. We thereby control the scale of shape features that are depicted via silhouettes and suggestive contours. We propose a novel refinement criterion that achieves this goal, and we address the problem of maintaining temporal coherence of silhouette and suggestive contours when extracting them from a changing mesh.
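
    The refine-or-coarsen decision described above can be sketched as a screen-space size test. The projection model, focal length, and target size below are illustrative assumptions, not the paper's actual refinement criterion:

```python
# Illustrative sketch: pick mesh resolution so triangles project to a
# target size in image space. Focal length, target, and slack are
# assumed values, not the paper's.
def projected_size(edge_length, distance, focal_px=800.0):
    """Approximate screen-space size (pixels) of a world-space edge
    under a simple perspective projection."""
    return focal_px * edge_length / distance

def refine_decision(edge_length, distance, target_px=4.0, slack=2.0):
    """Return 'refine' when triangles are too large on screen,
    'coarsen' when too small, else 'keep'."""
    s = projected_size(edge_length, distance)
    if s > target_px * slack:
        return 'refine'
    if s < target_px / slack:
        return 'coarsen'
    return 'keep'

print(refine_decision(0.05, 2.0))    # 20 px on screen -> 'refine'
print(refine_decision(0.05, 40.0))   #  1 px on screen -> 'coarsen'
print(refine_decision(0.05, 10.0))   #  4 px on screen -> 'keep'
```

    Applied per progressive-mesh vertex split, a test of this shape keeps depicted silhouettes and suggestive contours at a roughly constant image-space scale as the camera moves.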

    Line drawings via abstracted shading

    We describe a GPU-based algorithm for rendering a 3D model as a line drawing, based on the insight that a line drawing can be understood as an abstraction of a shaded image. We thus render lines along tone boundaries or thin dark areas in the shaded image. We extend this notion to the dual: we render highlight lines along thin bright areas and tone boundaries. We combine the lines with toon shading to capture broad regions of tone. The resulting line drawings effectively convey both shape and material cues. The lines produced by the method can include silhouettes, creases, and ridges, along with a generalization of suggestive contours that responds to lighting as well as viewing changes. The method supports automatic level of abstraction, where the size of depicted shape features adjusts appropriately as the camera zooms in or out. Animated models can be rendered in real time because costly mesh curvature calculations are not needed.
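
    The core idea, rendering lines along thin dark areas of the shaded image, can be illustrated with a 1D toy version (an assumed sketch, not the paper's GPU algorithm): mark pixels that are noticeably darker than their blurred neighborhood.

```python
# 1D toy sketch of "draw a line in thin dark areas": a pixel whose
# tone is well below the local average tone gets a line. Blur radius
# and threshold are illustrative assumptions.
def box_blur(tone, radius=2):
    """Simple box blur over a 1D row of tone values."""
    n = len(tone)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(tone[lo:hi]) / (hi - lo))
    return out

def dark_line_mask(tone, threshold=0.2):
    """1 where the pixel is darker than its blurred neighborhood by
    more than `threshold` (a thin dark area), else 0."""
    blurred = box_blur(tone)
    return [1 if b - t > threshold else 0 for t, b in zip(tone, blurred)]

# A bright row with one thin dark crease in the middle.
row = [0.8, 0.8, 0.8, 0.1, 0.8, 0.8, 0.8]
print(dark_line_mask(row))  # -> [0, 0, 0, 1, 0, 0, 0]
```

    The dual in the abstract (highlight lines along thin bright areas) is the same test with the sign flipped.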

    Multi-Scale Line Drawings from 3D Meshes


    Artistic silhouettes: a hybrid approach

    We present a new algorithm for rendering silhouette outlines of 3D polygonal meshes with stylized strokes. Rather than use silhouette edges of the model directly as the basis for drawing strokes, we first process the edges in image space to create long, connected paths corresponding to visible portions of silhouettes. The resulting paths have the precision of object-space edges, but avoid the unwanted zig-zagging and inconsistent visibility of raw silhouette edges. Our hybrid screen/object-space approach thus allows us to apply stylizations to strokes that follow the visual silhouettes of an object. We describe details of our OpenGL-based stylized strokes, which can resemble natural media yet render at interactive rates. We demonstrate our technique with accompanying still images and animations.
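
    The linking step described above can be sketched as endpoint chaining; the greedy strategy and tolerance below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of linking visible silhouette edge segments into
# long connected paths suitable for stylized strokes: greedily chain
# segments whose endpoints coincide (within a tolerance).
def link_segments(segments, eps=1e-6):
    """Chain segments (pairs of 2D endpoints) that share endpoints,
    returning a list of polyline paths."""
    def same(p, q):
        return abs(p[0] - q[0]) <= eps and abs(p[1] - q[1]) <= eps
    remaining = list(segments)
    paths = []
    while remaining:
        a, b = remaining.pop(0)
        path = [a, b]
        grew = True
        while grew:
            grew = False
            for i, (p, q) in enumerate(remaining):
                if same(path[-1], p):
                    path.append(q); remaining.pop(i); grew = True; break
                if same(path[-1], q):
                    path.append(p); remaining.pop(i); grew = True; break
        paths.append(path)
    return paths

# Two segments join into one path; the third is a separate silhouette.
segs = [((0, 0), (1, 0)), ((1, 0), (2, 1)), ((5, 5), (6, 5))]
paths = link_segments(segs)
print(len(paths))      # -> 2 connected paths
print(len(paths[0]))   # -> first path has 3 vertices
```

    A stroke stylization pass can then parameterize each polyline by arc length and texture it, which is what makes the long connected paths valuable.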