
    Embedded Implicit Stand-ins for Animated Meshes: a Case of Hybrid Modelling

    In this paper we address shape modelling problems encountered in computer animation and computer games development that are difficult to solve using polygonal meshes alone. Our approach is based on a hybrid modelling concept that combines polygonal meshes with implicit surfaces. A hybrid model consists of an animated polygonal mesh and an approximation of this mesh by a convolution-surface stand-in that is embedded within it or attached to it. The motions of both objects are synchronised using a rigging skeleton. This approach is used to model the interaction between an animated mesh object and a viscoelastic substance, normally modelled in implicit form. The adhesive behaviour of the viscous object is modelled using geometric blending operations on the corresponding implicit surfaces. Another application of this approach is the creation of metamorphosing implicit surface parts attached to an animated mesh. A prototype implementation of the proposed approach and several examples of modelling and animation with near real-time preview times are presented.
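    The geometric blending mentioned above can be illustrated with a minimal sketch. The formulas below follow the standard R-function blending union for implicit surfaces (positive inside, zero on the surface); the particular spheres, parameter values, and function names are illustrative assumptions, not the paper's implementation.

```python
import math

def sphere(cx, cy, cz, r):
    """Implicit sphere: positive inside, zero on the surface, negative outside."""
    return lambda x, y, z: r * r - ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2)

def blend_union(f1, f2, a0=0.5, a1=1.0, a2=1.0):
    """Blending union of two implicit functions (R-function style).

    The base term is the R-function union; the bounded displacement term
    adds material where both surfaces are close, producing the smooth
    'adhesive' bulge between the two shapes.  a0 controls the amount of
    added material, a1 and a2 its extent along each surface (illustrative
    defaults)."""
    def f(x, y, z):
        v1, v2 = f1(x, y, z), f2(x, y, z)
        base = v1 + v2 + math.sqrt(v1 * v1 + v2 * v2)      # R-function union
        disp = a0 / (1.0 + (v1 / a1) ** 2 + (v2 / a2) ** 2)  # bounded blend term
        return base + disp
    return f
```

    Evaluating the blended function near the gap between two spheres yields positive values where neither plain sphere would, which is the smooth bridge that models adhesion.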

    Painterly rendering techniques: A state-of-the-art review of current approaches

    In this publication we will look at the different methods presented over the past few decades which attempt to recreate digital paintings. While previous surveys concentrate on the broader subject of non-photorealistic rendering, the focus of this paper is firmly placed on painterly rendering techniques. We compare different methods used to produce different output painting styles such as abstract, colour pencil, watercolour, oriental, oil and pastel. Whereas some methods demand a high level of interaction from a skilled artist, others require simple parameters provided by a user with little or no artistic experience. Many methods attempt to provide more automation with the use of varying forms of reference data. This reference data can range from still photographs and video to 3D polygonal meshes or even 3D point clouds. The techniques presented here endeavour to provide tools and styles that are not traditionally available to an artist. Copyright © 2012 John Wiley & Sons, Ltd.
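    The simplest family of automated painterly methods surveyed here seeds brush strokes from a reference photograph. A minimal sketch in that spirit (in the style of Haeberli's classic stroke-sampling approach, not any specific method from the survey; the function names and the dict-based canvas are illustrative assumptions):

```python
import random

def paint_strokes(image, w, h, n_strokes, brush):
    """Stroke-sampling painterly sketch: pick random positions, sample the
    source colour there, and stamp a square brush stroke onto the canvas.

    `image` is a function (x, y) -> colour; the canvas is a dict mapping
    pixel coordinates to the colour of the most recent stroke covering them."""
    canvas = {}
    for _ in range(n_strokes):
        x, y = random.randrange(w), random.randrange(h)
        colour = image(x, y)                      # colour sampled under the stroke
        for dx in range(-brush, brush + 1):       # stamp a (2*brush+1)^2 stroke
            for dy in range(-brush, brush + 1):
                px, py = x + dx, y + dy
                if 0 <= px < w and 0 <= py < h:
                    canvas[(px, py)] = colour
    return canvas
```

    Varying the stroke size, shape, and placement order is what distinguishes many of the styles (oil, pastel, watercolour) that the surveyed methods target.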

    Feature based volumes for implicit intersections

    The automatic generation of volumes bounding the intersection of two implicit surfaces (isosurfaces of real functions of 3D point coordinates), or feature based volumes (FBVs), is presented. Such FBVs are defined by constructive operations, function normalization and offsetting. By applying various offset operations to the intersection of two surfaces, we can obtain variations in the shape of an FBV. The resulting volume can be used as a boundary for blending operations applied to two corresponding volumes, and also for visualization of feature curves and modeling of surface based structures including microstructures.
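    One simple way to build such a bounding volume, sketched under the assumption that both defining functions are reasonably normalized (so their values approximate distance near their zero sets), is to offset the combined deviation from both surfaces. The function name and the specific offset form are illustrative, not the paper's exact construction:

```python
import math

def feature_volume(f1, f2, r):
    """Implicit volume (positive inside) bounding the curve where f1 = f2 = 0.

    sqrt(f1^2 + f2^2) vanishes exactly on the intersection curve of the two
    isosurfaces; offsetting it by r yields a tube of roughly radius r around
    that curve (the exact thickness depends on the functions' gradients)."""
    return lambda x, y, z: r - math.sqrt(f1(x, y, z) ** 2 + f2(x, y, z) ** 2)
```

    For example, with the two planes f1 = x and f2 = y, whose intersection is the z-axis, the resulting volume is a cylinder of radius r around that axis.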

    A representational framework and user-interface for an image understanding workstation

    Problems in image understanding involve a wide variety of data (e.g., image arrays, edge maps, 3-D shape models) and processes or algorithms (e.g., convolution, feature extraction, rendering). The underlying structure of an Image Understanding Workstation, designed to support multiple levels and types of representations for both data and processes, is described, together with its user-interface. The Image Understanding Workstation consists of two parts: the Image Understanding (IU) Framework and the user-interface. The IU Framework is the set of data and process representations. It includes multiple levels of representation for data such as images (2-D), sketches (2-D), surfaces (2 1/2-D), and models (3-D). The representation scheme for processes characterizes their inputs, outputs, and parameters. Data and processes may reside on different classes of machines. The user-interface to the IU Workstation gives the user convenient access for creating, manipulating, transforming, and displaying image data. The user-interface follows the structure of the IU Framework and gives the user control over multiple types of data and processes. Both the IU Framework and the user-interface are implemented on a LISP machine.

    A note on the depth-from-defocus mechanism of jumping spiders

    Jumping spiders are capable of estimating the distance to their prey relying only on the information from one of their main eyes. Recently, it has been shown that jumping spiders perform this estimation based on image defocus cues. In order to gain insight into the mechanisms involved in this blur-to-distance mapping as performed by the spider, and to judge whether inspirations can be drawn from spider vision for depth-from-defocus computer vision algorithms, we constructed a three-dimensional (3D) model of the anterior median eye of Metaphidippus aeneolus, a well-studied species of jumping spider. We were able to study images of the environment as the spider would see them and to measure the performance of a well-known depth-from-defocus algorithm on this dataset. We found that the algorithm performs best when using images that are averaged over the considerable thickness of the spider's receptor layers, thus pointing towards a possible functional role of the receptor thickness for the spider's depth estimation capabilities.
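    The blur-to-distance mapping underlying depth from defocus can be made concrete with the thin-lens model. The sketch below inverts the standard blur-circle relation c = A·(s − v)/v for an object beyond the focused plane; the function name and the assumption of a measured blur diameter are illustrative, not the algorithm evaluated in the paper:

```python
def depth_from_blur(blur_diameter, aperture, focal_length, sensor_dist):
    """Thin-lens depth from defocus (minimal sketch).

    With the sensor at distance `sensor_dist` behind the lens, a point at
    depth u images at v = 1/(1/f - 1/u), and the blur-circle diameter is
    c = A * (s - v) / v for an object beyond the focused plane.  Solving
    for v from the observed c and then for u gives the object depth."""
    v = sensor_dist / (1.0 + blur_diameter / aperture)   # image distance
    return 1.0 / (1.0 / focal_length - 1.0 / v)          # thin-lens depth
```

    Round-tripping the model (compute the blur for a known depth, then recover the depth from that blur) confirms the inversion is consistent.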