9 research outputs found

    Analysis of Design Principles and Requirements for Procedural Rigging of Bipeds and Quadrupeds Characters with Custom Manipulators for Animation

    Character rigging is the process of endowing a character with a set of custom manipulators and controls that make it easy for animators to animate. These controls consist of simple joints, handles, or even separate character selection windows. This research paper presents an automated rigging system for quadruped characters with custom controls and manipulators for animation. The full character rigging mechanism is procedurally driven, based on various principles and requirements used by riggers and animators. Automation is achieved by first creating widgets according to the character type. These widgets can then be customized by the rigger according to the character's shape, height, and proportions. Joint locations for each body part are then calculated, and the widgets are replaced programmatically. Finally, a complete and fully operational, procedurally generated character control rig is created and attached to the underlying skeletal joints. The functionality and feasibility of the rig were analyzed against various sources of actual character motion, and the requirements criteria were met. The final rigged character provides an efficient, easy-to-manipulate control rig that runs without lag at a high frame rate. Comment: 21 pages, 24 figures, 4 algorithms, Journal Paper
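    The widget-to-joint step described in the abstract can be illustrated with a minimal sketch. All names and the interpolation scheme here are our own assumptions, not the paper's: the rigger places widgets, and intermediate joints are derived from them programmatically.

    ```python
    # Hypothetical sketch of the procedural pipeline described above: widgets
    # parameterize the character, and joint positions are derived from them.
    from dataclasses import dataclass

    @dataclass
    class Widget:
        name: str
        position: tuple  # (x, y, z), placed and adjusted by the rigger

    def derive_joints(widgets, spine_count=3):
        """Compute joint locations from rigger-adjusted widgets.

        Intermediate spine joints are interpolated between the hip and chest
        widgets, mirroring the idea of replacing widgets with programmatically
        computed joints. The linear interpolation is an illustrative choice.
        """
        by_name = {w.name: w.position for w in widgets}
        hip, chest = by_name["hip"], by_name["chest"]
        joints = {"hip": hip, "chest": chest}
        for i in range(1, spine_count + 1):
            t = i / (spine_count + 1)
            joints[f"spine_{i}"] = tuple(
                h + t * (c - h) for h, c in zip(hip, chest)
            )
        return joints

    rig = derive_joints([Widget("hip", (0.0, 1.0, 0.0)),
                         Widget("chest", (0.0, 1.8, 0.0))])
    ```

    A real system would derive many more joint chains (limbs, neck, tail) and attach control curves to each joint; this fragment only shows the widget-driven placement idea.
    
    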

    A Novel Skeleton Extraction Algorithm for 3d Wireless Sensor Networks

    Wireless sensor network design is critical, and resource allocation is a major problem that remains to be solved satisfactorily. The discrete nature of sensor networks renders existing skeleton extraction algorithms inapplicable. This paper considers 3D topologies of sensor networks for practical scenarios and presents research on skeleton extraction for three-dimensional wireless sensor networks. A skeleton extraction algorithm applicable to complex 3D sensor network spaces is introduced and represented in the form of a graph. Skeletal links are identified on the basis of a novel energy utilization function computed for the transmissions carried out through the network. A frequency-based weight assignment function is introduced to identify the root node of the skeleton graph. Topological clustering is used to construct layered topological sets that preserve the nature of the topology in the skeleton graph. The skeleton graph is constructed with the help of these layered topological sets, and the experimental results demonstrate the robustness of the proposed skeleton extraction algorithm. Provisioning additional resources to skeletal nodes enhances sensor network performance by 20%, as shown by the results presented in this paper.
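    Two of the ingredients above (a frequency-based choice of root node and layered topological sets) can be sketched in a few lines. This is an illustrative reading, not the paper's actual functions: we pick as root the node that appears most often in a transmission log, then group nodes by hop distance from it.

    ```python
    # Illustrative sketch: frequency-based root selection plus BFS layers
    # standing in for the paper's "layered topological sets".
    def choose_root(transmissions):
        """Root = node appearing most often in the transmission log."""
        freq = {}
        for src, dst in transmissions:
            for n in (src, dst):
                freq[n] = freq.get(n, 0) + 1
        return max(freq, key=freq.get)

    def layered_sets(adj, root):
        """Group nodes by hop distance from the root (BFS layers)."""
        layers, seen, frontier = [], {root}, [root]
        while frontier:
            layers.append(sorted(frontier))
            nxt = []
            for u in frontier:
                for v in adj.get(u, ()):
                    if v not in seen:
                        seen.add(v)
                        nxt.append(v)
            frontier = nxt
        return layers

    adj = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}
    root = choose_root([("a", "b"), ("a", "c"), ("b", "d"), ("c", "a")])
    layers = layered_sets(adj, root)  # node "a" relays the most traffic
    ```

    The paper's actual weight function also accounts for energy utilization per transmission; raw message counts are used here only to keep the sketch self-contained.
    
    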

    Automatic skeletonization and skin attachment for realistic character animation.

    The realism of character animation depends on a number of tasks, ranging from modelling, skin deformation, and motion generation to rendering. In this research we are concerned with two of them: skeletonization and weight assignment for skin deformation. The former generates a skeleton that is placed within the character model and links the motion data to the skin shape of the character. The latter assists the modelling of realistic skin shape when a character is in motion. In current animation production practice, skeletonization is primarily undertaken by hand, i.e. the animator produces an appropriate skeleton and binds it to the skin model of a character. This is inevitably time-consuming and labour-intensive. To address this issue, this thesis presents an automatic skeletonization framework. It aims to produce high-quality animatable skeletons without heavy human involvement, while allowing the animator to maintain overall control of the process. In the literature, the term skeletonization can have different meanings. Most existing research on skeletonization lies in the remit of CAD (Computer-Aided Design). Although that research is of significant reference value to animation, its downside is that the generated skeleton is either inappropriate for the particular needs of animation or computationally expensive to produce. Although some purpose-built animation skeleton generation techniques exist, they unfortunately rely on complicated post-processing procedures, such as thinning and pruning, which again can be undesirable. The proposed skeletonization framework makes use of a new geometric entity, the 3D silhouette: an ordinary silhouette with its depth information recorded. We extract a curve skeleton from two 3D silhouettes of a character detected from its two perpendicular projections.
    The skeletal joints are identified by downsampling the curve skeleton, leading to the generation of the final animation skeleton. Efficiency and quality are the major performance indicators in animation skeleton generation. Our framework achieves the former by providing a 2D solution to the 3D skeletonization problem; the reduction in dimensionality brings much faster performance. Experiments and comparisons are carried out to demonstrate the computational simplicity, and they also verify its accuracy. To link a skeleton to the skin, we present a skin attachment framework aiming at automatic and reasonable weight distribution. It differs from conventional algorithms by taking topological information into account during weight computation. An effective range is defined for each joint; skin vertices located outside the effective range are not affected by that joint. By this means, we remove the influence of a topologically distant, and hence most likely irrelevant, joint on a vertex. A user-defined parameter is also provided in this algorithm, allowing different deformation effects to be obtained according to the user's needs. Experiments and comparisons show that the presented framework yields weight distributions of good quality, freeing animators from tedious manual weight editing. Furthermore, it is flexible enough to be used with various deformation algorithms.
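    The effective-range idea in the skin attachment framework can be sketched minimally. Function and parameter names are ours, and Euclidean distance stands in for the thesis's topological measure: a joint influences only vertices within a cutoff, and surviving influences are normalized.

    ```python
    # Minimal sketch (illustrative names): joints beyond the effective
    # range contribute zero weight to a vertex; the rest are normalized.
    import math

    def skin_weights(vertices, joints, effective_range=1.0, falloff=2.0):
        """Per-vertex joint weights with an effective-range cutoff.

        `falloff` plays the role of the user-defined parameter that shapes
        the deformation: higher values localize influence more sharply.
        """
        weights = []
        for v in vertices:
            raw = {}
            for name, j in joints.items():
                d = math.dist(v, j)
                if d <= effective_range:        # outside range -> no influence
                    raw[name] = 1.0 / (1e-6 + d) ** falloff
            total = sum(raw.values())
            weights.append({k: w / total for k, w in raw.items()})
        return weights

    w = skin_weights([(0.0, 0.1, 0.0)],
                     {"elbow": (0.0, 0.0, 0.0), "ankle": (0.0, 5.0, 0.0)})
    # the ankle lies outside the effective range, so only the elbow remains
    ```

    The thesis computes the range over mesh topology rather than straight-line distance, which is what lets it discard a joint that is spatially close but topologically far (e.g. a hand joint near the hip).
    
    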

    Automatic Animation Skeleton Construction Using Repulsive Force Field

    A method is proposed in this paper to automatically generate the animation skeleton of a model such that the model can be manipulated according to the skeleton. With our method, users can construct the skeleton in a short time and make a static model dynamic and alive. The primary steps of our method are finding skeleton joints, connecting the joints to form an animation skeleton, and binding skin vertices to the skeleton. Initially, a repulsive force field is constructed inside a given model, and a set of points with locally minimal force magnitude is found based on the force field. Then a modified thinning algorithm is applied to generate an initial skeleton, which is further refined to become the final result. When skeleton construction completes, skin vertices are anchored to the skeleton joints according to the distances between the vertices and the joints. To build the repulsive force field, hundreds of rays are shot radially from positions inside the model, so the force field computation takes most of the execution time; an octree structure is therefore used to accelerate this process. Currently, generating a skeleton from a typical 3D model with 1,000 to 10,000 polygons takes less than 2 minutes on an Intel Pentium 4 2.4 GHz PC.
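    The core of the force-field step can be illustrated with a brute-force 2D toy (the paper works in 3D with ray casting and an octree; everything here is a simplification of ours): boundary samples repel interior points, and a point where the net force magnitude is locally minimal is a skeleton candidate.

    ```python
    # Rough 2D sketch of the repulsive-force idea: boundary samples push on
    # interior points; the point of minimal net force is a joint candidate.
    import math

    def force(p, boundary):
        """Magnitude of the summed inverse-square repulsions at point p."""
        fx = fy = 0.0
        for b in boundary:
            dx, dy = p[0] - b[0], p[1] - b[1]
            r2 = dx * dx + dy * dy
            fx += dx / r2 ** 1.5
            fy += dy / r2 ** 1.5
        return math.hypot(fx, fy)

    # Sampled boundary of a unit square; by symmetry the forces cancel at
    # the centre, so it should have the smallest force magnitude.
    boundary = [(i / 10, y) for i in range(11) for y in (0.0, 1.0)] + \
               [(x, i / 10) for i in range(1, 10) for x in (0.0, 1.0)]
    candidates = [(x / 10, y / 10) for x in range(2, 9) for y in range(2, 9)]
    best = min(candidates, key=lambda p: force(p, boundary))
    ```

    In the paper this evaluation is the bottleneck (hundreds of radial rays per sample), which is exactly why an octree is used to accelerate it.
    
    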

    Sketching-based Skeleton Extraction

    Articulated character animation can be performed by manually creating and rigging a skeleton into an unfolded 3D mesh model. Such tasks are not trivial, as they require a substantial amount of training and practice. Although methods have been proposed to automate the extraction of a skeleton structure, they cannot guarantee that the resulting skeleton helps produce animations that follow user manipulation. We present a sketching-based skeleton extraction method to create a user-desired skeleton structure for use in 3D model animation. This method takes user sketching as input and, based on the mesh segmentation of a 3D mesh model, generates a skeleton for articulated character animation. In our system, we assume that a user sketches bones properly, roughly following the mesh model structure, and sketches independently on different regions of the mesh model to create separate bones. Each sketched stroke is projected into the mesh model so that it becomes the medial axis of its corresponding mesh region from the current viewing perspective. We call this projected stroke a "sketched bone". After pre-processing the user-sketched bones, we cluster them into groups. This process is critical because sketching can be done from any orientation of the mesh model: to specify the topology of different mesh parts, a user may sketch strokes from several orientations, which can produce duplicate strokes for the same part. The clustering process merges similar sketched bones into one bone, which we call a "reference bone", based on three criteria: orientation, overlap, and locality. Given the reference bones as input, we adopt a mesh segmentation process to assist our skeleton extraction method.
    Specifically, we apply the reference bones and seed triangles to segment the input mesh model into meaningful segments using a multiple-region growing mechanism. The seed triangles, collected from the reference bones, serve as the initial seeds in the segmentation, and we have designed a new segmentation metric [1] to form a better segmentation criterion. We then compute Level Set Diagrams (LSDs) on each mesh part to extract bones and joints. To construct the final skeleton, we connect the bones extracted from all mesh parts into a single structure in three major steps: optimizing and smoothing bones, generating joints, and forming the skeleton structure. After constructing the skeleton, we propose a new method that combines the Linear Blend Skinning (LBS) technique with Laplacian mesh deformation to perform skeleton-driven animation. Traditional LBS can suffer from self-intersections in regions around segmentation boundaries; Laplacian mesh deformation preserves local surface details, which eliminates this problem. We therefore use the LBS result as the positional constraint of a Laplacian mesh deformation, maintaining surface details in segmentation boundary regions. This thesis outlines a novel approach to constructing a 3D skeleton model interactively, which can also be used in 3D animation and 3D model matching. The work is motivated by the observation that most existing automatic skeleton extraction methods lack well-positioned joint specification, while manual methods require too much professional training to create a good skeleton structure. Our approach creates a 3D model skeleton from user sketching that specifies an articulated skeleton with joints.
    The experimental results show that our method produces better skeletons in terms of joint positions and topological structure.
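    The skinning step whose result becomes the positional constraint above is standard Linear Blend Skinning. A minimal 2D, rotation-plus-translation sketch (our own simplification, not the thesis's code):

    ```python
    # Plain Linear Blend Skinning: each vertex is the weight-blended result
    # of applying every influencing bone's rigid transform to it.
    import math

    def lbs(vertex, weights, transforms):
        """Blend per-bone rigid 2D transforms by the vertex's bone weights.

        transforms maps bone name -> (rotation angle, (tx, ty)).
        """
        x = y = 0.0
        for bone, w in weights.items():
            angle, (tx, ty) = transforms[bone]
            c, s = math.cos(angle), math.sin(angle)
            bx = c * vertex[0] - s * vertex[1] + tx
            by = s * vertex[0] + c * vertex[1] + ty
            x += w * bx
            y += w * by
        return (x, y)

    # A vertex influenced equally by a static bone and a 90-degree-rotated one.
    transforms = {"upper": (0.0, (0.0, 0.0)),
                  "lower": (math.pi / 2, (0.0, 0.0))}
    p = lbs((1.0, 0.0), {"upper": 0.5, "lower": 0.5}, transforms)
    ```

    Averaging positions this way is exactly what causes the boundary artifacts the thesis mentions; using the LBS output only as a soft constraint in a Laplacian solve lets the surface details win near segmentation boundaries.
    
    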

    Hybrid sketching : a new middle ground between 2- and 3-D

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Architecture, 2005. Includes bibliographical references (leaves 124-133). This thesis investigates the geometric representation of ideas during the early stages of design. When a designer's ideas are still in gestation, the exploration of form is more important than its precise specification. Digital modelers facilitate such exploration, but only for forms built with discrete collections of high-level geometric primitives; we introduce techniques that operate on designers' medium of choice, 2-D sketches. Designers' explorations also shift between 2-D and 3-D, yet 3-D form must also be specified with these high-level primitives, requiring an entirely different mindset from 2-D sketching. We introduce a new approach to transform existing 2-D sketches directly into a new kind of sketch-like 3-D model. Finally, we present a novel sketching technique that removes the distinction between 2-D and 3-D altogether. This thesis makes five contributions: point-dragging and curve-drawing techniques for editing sketches; two techniques to help designers bring 2-D sketches to 3-D; and a sketching interface that dissolves the boundaries between 2-D and 3-D representation. The first two contributions introduce smooth exploration techniques that work on sketched form composed of strokes, in 2-D or 3-D. First, we present a technique, inspired by classical painting practices, whereby the designer can explore a range of curves with a single stroke. As the user draws near an existing curve, our technique automatically and interactively replaces sections of the old curve with the new one. Second, we present a method to enable smooth exploration of sketched form by point-dragging. The user constructs a high-level "proxy" description that can be used, somewhat like a skeleton, to deform a sketch independent of the internal stroke description.
    Next, we leverage the proxy deformation capability to help the designer move directly from existing 2-D sketches to 3-D models. Our reconstruction techniques generate a novel kind of 3-D model which maintains the appearance and stroke structure of the original 2-D sketch. One technique transforms a single sketch with help from annotations by the designer; the other combines two sketches. Since these interfaces are user-guided, they can operate on ambiguous sketches, relying on the designer to choose an interpretation. Finally, we present an interface to build an even sparser, more suggestive type of 3-D model, either from existing sketches or from scratch. "Camera planes" provide a complex 3-D scaffolding on which to hang sketches, which can still be drawn as rapidly and freely as before. A sparse set of 2-D sketches placed on planes provides a novel visualization of 3-D form, with enough information present to suggest 3-D shape, but enough missing that the designer can 'read into' the form, seeing multiple possibilities. This unspecified information--this empty space--can spur the designer on to new ideas. by John Alex. Ph.D.
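    The curve-oversketching idea (a new stroke replaces the nearby section of an existing curve) can be sketched with polylines. This is an illustrative reading with our own names and snapping rule, not the thesis's implementation:

    ```python
    # Oversketching sketch: if both endpoints of a new stroke snap onto an
    # existing polyline, splice the stroke in place of the spanned section.
    import math

    def oversketch(curve, stroke, snap_dist=0.3):
        """Splice `stroke` into `curve` when its endpoints snap to it."""
        def nearest(p):
            i = min(range(len(curve)), key=lambda k: math.dist(curve[k], p))
            return i if math.dist(curve[i], p) <= snap_dist else None
        a, b = nearest(stroke[0]), nearest(stroke[-1])
        if a is None or b is None:
            return curve                    # stroke too far away: no edit
        if a > b:
            a, b = b, a
            stroke = stroke[::-1]           # keep the curve's direction
        return curve[:a] + list(stroke) + curve[b + 1:]

    old = [(x / 4, 0.0) for x in range(5)]  # a straight horizontal line
    new = oversketch(old, [(0.25, 0.05), (0.5, 0.3), (0.75, 0.05)])
    # the middle of the line is replaced by the bump drawn over it
    ```

    An interactive version would blend the junctions smoothly rather than cutting at the nearest samples, which is part of what makes the thesis's technique feel like painting over a line.
    
    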

    Mesh modification using deformation gradients

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 117-131). Computer-generated character animation, where human or anthropomorphic characters are animated to tell a story, holds tremendous potential to enrich education, human communication, perception, and entertainment. However, current animation procedures rely on a time-consuming and difficult process that requires both artistic talent and technical expertise. Despite the tremendous amount of artistry, skill, and time dedicated to the animation process, there are few techniques to help with reuse. Although individual aspects of animation are well explored, there is little work that extends beyond the boundaries of any one area. As a consequence, the same procedure must be followed for each new character without the opportunity to generalize or reuse technical components. This dissertation describes techniques that ease the animation process by offering opportunities for reuse and a more intuitive animation formulation. A differential specification of arbitrary deformation provides a general representation for adapting deformation to different shapes, computing semantic correspondence between two shapes, and extrapolating natural deformation from a finite set of examples. Deformation transfer adds a general-purpose reuse mechanism to the animation pipeline by transferring any deformation of a source triangle mesh onto a different target mesh. The transfer system uses a correspondence algorithm to build a discrete many-to-many mapping between the source and target triangles that permits transfer between meshes of different topology.
    Results demonstrate retargeting of both kinematic poses and non-rigid deformations, as well as transfer between characters of different topological and anatomical structure. Mesh-based inverse kinematics extends the idea of traditional skeleton-based inverse kinematics to meshes by allowing the user to pose a mesh via direct manipulation. The user indicates the class of meaningful deformations by supplying examples that can be created automatically with deformation transfer, sculpted, scanned, or produced by any other means. This technique is distinguished from traditional animation methods because it avoids the expensive character setup stage, and from existing mesh editing algorithms because the user retains the freedom to specify the class of meaningful deformations. Results demonstrate an intuitive interface for posing meshes that requires only a small amount of user effort. by Robert Walker Sumner. Ph.D.
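    The deformation-gradient representation at the heart of deformation transfer can be shown in 2D with plain lists (the dissertation works per-triangle in 3D with a stabilizing fourth vertex; this stripped-down version is ours): extract the linear map taking a source triangle's rest edges to its deformed edges, then apply it to a corresponding target triangle.

    ```python
    # 2D sketch of a deformation gradient: Q maps rest edge vectors to
    # deformed edge vectors, and can then be reapplied to another triangle.
    def edge_matrix(tri):
        """Columns are the two edge vectors of triangle (v0, v1, v2)."""
        (x0, y0), (x1, y1), (x2, y2) = tri
        return [[x1 - x0, x2 - x0], [y1 - y0, y2 - y0]]

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    def inverse(m):
        det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
        return [[m[1][1] / det, -m[0][1] / det],
                [-m[1][0] / det, m[0][0] / det]]

    def deformation_gradient(rest, deformed):
        """Q such that Q applied to the rest edges yields the deformed edges."""
        return matmul(edge_matrix(deformed), inverse(edge_matrix(rest)))

    # Source triangle stretched 2x horizontally; transfer that to a target.
    Q = deformation_gradient(rest=[(0, 0), (1, 0), (0, 1)],
                             deformed=[(0, 0), (2, 0), (0, 1)])
    new_target_edges = matmul(Q, edge_matrix([(0, 0), (0.5, 0), (0, 2)]))
    ```

    In the full system, per-triangle gradients transferred this way generally disagree at shared vertices, so a least-squares solve stitches them back into a consistent target mesh.
    
    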