32 research outputs found

    Automatic skeletonization and skin attachment for realistic character animation.

    The realism of character animation depends on a number of tasks ranging from modelling, skin deformation and motion generation to rendering. In this research we are concerned with two of them: skeletonization and weight assignment for skin deformation. The former generates a skeleton, which is placed within the character model and links the motion data to the skin shape of the character. The latter assists the modelling of a realistic skin shape when a character is in motion. In current animation production practice, the task of skeletonization is primarily undertaken by hand, i.e. the animator produces an appropriate skeleton and binds it to the skin model of a character. This is inevitably time-consuming and labour-intensive. To address this, in this thesis we present an automatic skeletonization framework. It aims at producing high-quality animatable skeletons without heavy human involvement while allowing the animator to maintain overall control of the process. In the literature, the term skeletonization can have different meanings. Most existing research on skeletonization is in the remit of CAD (Computer Aided Design). Although such research is of significant reference value to animation, its downside is that the generated skeleton is either not appropriate for the particular needs of animation, or the methods are computationally expensive. Although some purpose-built animation skeleton generation techniques exist, they unfortunately rely on complicated post-processing procedures, such as thinning and pruning, which again can be undesirable. The proposed skeletonization framework makes use of a new geometric entity known as the 3D silhouette, an ordinary silhouette with its depth information recorded. We extract a curve skeleton from two 3D silhouettes of a character detected from its two perpendicular projections. The skeletal joints are identified by downsampling the curve skeleton, leading to the generation of the final animation skeleton. Efficiency and quality are the major performance indicators in animation skeleton generation. Our framework achieves the former by providing a 2D solution to the 3D skeletonization problem: the reduction in dimensionality brings a much faster performance. Experiments and comparisons are carried out to demonstrate the computational simplicity, and the accuracy of the framework is also verified through them. To link a skeleton to the skin, we present a skin attachment framework aiming at automatic and reasonable weight distribution. It differs from conventional algorithms in taking topological information into account during weight computation. An effective range is defined for each joint; skin vertices located outside the effective range are not affected by that joint. By this means, we remove the influence of a topologically distant, and hence most likely irrelevant, joint on a vertex. A user-defined parameter is also provided in this algorithm, which allows different deformation effects to be obtained according to the user's needs. Experiments and comparisons show that the presented framework produces weight distributions of good quality, freeing animators from tedious manual weight editing. Furthermore, it is flexible enough to be used with various deformation algorithms.
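
    To make the skin attachment idea above concrete, the following is a minimal Python sketch of an effective-range weighting scheme of the kind described: a joint influences only vertices within a bounded topological distance, and a user parameter controls the distance falloff. It is an illustration of the general idea, not the thesis's exact algorithm; the function name, the hop-count input and the falloff parameter are assumptions.

        # Illustrative sketch (not the thesis's exact algorithm): limit a joint's
        # influence to vertices within a topological "effective range", then
        # normalise the remaining inverse-distance weights.
        import numpy as np

        def skin_weights(vertices, joint_positions, vertex_joint_hops,
                         max_hops=2, falloff=2.0):
            """vertices: (V,3) array; joint_positions: (J,3) array;
            vertex_joint_hops: (V,J) topological distances (e.g. bone-graph hops)
            between each vertex's nearest bone and every joint (assumed input)."""
            V, J = vertex_joint_hops.shape
            weights = np.zeros((V, J))
            for j in range(J):
                d = np.linalg.norm(vertices - joint_positions[j], axis=1)
                w = 1.0 / np.power(d + 1e-8, falloff)        # distance falloff (user parameter)
                w[vertex_joint_hops[:, j] > max_hops] = 0.0  # outside effective range: no influence
                weights[:, j] = w
            # normalise so each vertex's weights sum to 1
            weights /= np.maximum(weights.sum(axis=1, keepdims=True), 1e-8)
            return weights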

    Improving automatic rigging for 3D humanoid characters

    In the field of computer animation, the process of creating an animated character is usually a long and tedious task. An animation character is usually defined by a 3D mesh (a set of triangles in space) that gives the character its external appearance or shape. It usually also has an inner structure, the skeleton. When a skeleton is associated with a character mesh, this association is called skeleton binding, and a skeleton bound to a character mesh is an animation rig. Rigging a character from scratch can be a very tedious process. The definition and creation of a centred skeleton, together with the 'painting' by an artist of the influence parameters between the skeleton and the mesh (the skinning), is the most demanding part of achieving acceptable behaviour for a character. This rigging process can be simplified and accelerated using an automatic rigging method. Automatic rigging methods take a 3D mesh as input, generate a skeleton based on the shape of the original model, bind the input mesh to the generated skeleton, and finally compute a set of parameters based on a chosen skinning method. The main objective of this thesis is to devise a method for rigging an arbitrary 3D model with minimum user interaction. This can be useful to people without experience in the animation field, or to experienced people who want to accelerate the rigging process from days to hours or minutes, depending on the quality needed. With this situation in mind, we have designed our method as a set of tools that can be applied to general input models defined by an artist. The contributions made in the development of this thesis can be summarized as:
    • Generation of an animation rig: Given an arbitrary closed mesh, we have implemented a thinning method to first create an unrefined geometric skeleton that captures the topology and pose of the input character. Using this geometric skeleton as a starting point, a refining method creates an adjusted logic skeleton based on a template, which may be predefined or defined by the user, that is compatible with current animation formats. The output logic skeleton is specific to each character, and it is bound to the input mesh to create an animation rig.
    • Skinning: Having defined an animation rig for an arbitrary mesh, we have developed an improved skinning method based on the Linear Blend Skinning (LBS) algorithm. Our contributions in the skinning field can be subdivided as follows:
    – We propose a segmentation method that works as the core element of a weight-assignment algorithm and a skinning algorithm; we have also developed an automatic algorithm to compute the skin weights for LBS skinning of a rigged polygonal mesh.
    – Our proposed skinning algorithm builds on the features of LBS skinning. The main purpose of the developed algorithm is to solve the well-known "candy wrapper" artifact, which produces a substantial loss of volume when a link of an animation skeleton is rotated about its own axis. We have compared our results with the most important methods in the skinning field, such as Dual Quaternion Skinning (DQS) and LBS, achieving better performance than DQS and an improvement in quality over LBS.
    • Animation tools: We have developed a set of Autodesk Maya commands that work together as a rigging tool, using our previously proposed methods.
    • Animation loader: Moreover, an animation loader tool has been implemented that allows the user to load animations from a skeleton with a different structure onto a rigged 3D model.
    The contributions described above have been published in three research papers: the first two were presented at international congresses and the third was accepted for publication in a JCR-indexed journal.
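
    For reference, the baseline that the improved skinning method above builds on is standard Linear Blend Skinning. Below is a minimal LBS sketch in Python (not the thesis's improved algorithm): each vertex is deformed by a weighted linear blend of per-bone transforms, which is exactly the blending that causes the "candy wrapper" collapse when a bone twists about its own axis. Array shapes and names are assumptions for illustration.

        # Minimal Linear Blend Skinning (LBS) sketch; the baseline discussed above,
        # not the thesis's improved method.
        import numpy as np

        def lbs(rest_vertices, bone_transforms, weights):
            """rest_vertices: (V,3); bone_transforms: (B,4,4) rest-to-posed matrices;
            weights: (V,B) skinning weights summing to 1 per vertex."""
            V = rest_vertices.shape[0]
            homo = np.hstack([rest_vertices, np.ones((V, 1))])            # (V,4) homogeneous coords
            blended = np.einsum('vb,bij->vij', weights, bone_transforms)  # per-vertex blended matrix
            posed = np.einsum('vij,vj->vi', blended, homo)                # apply blend to each vertex
            return posed[:, :3]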

    "Studio e implementazione di un algoritmo per lo skinning automatico di mesh poligonali deformabili"

    Three-dimensional animation is a topic of great interest in a variety of fields, ranging from pure entertainment to more serious simulations. There are numerous animation techniques that balance performance, quality of results, simplicity of approach and reusability in different ways. Within this scenario, the animation of humanoid figures, the so-called Virtual Humans (VHs), occupies a space of its own. For around 50 years, research has produced a large number of methods for handling VHs, suited to the most diverse requirements and concerned with reproducing the many facets of human movement. Among the many fields of application of human animation, some examples are: simulation for training in operations that are dangerous, difficult or otherwise costly; the analysis of human interaction with objects and environments; the creation of virtual actors for entertainment; the study of new methods of human-machine interaction; and the reconstruction of human activities for educational purposes or to reproduce and study the dynamics of events. Among the methods for modelling the movement of human figures, one of the most widespread paradigms is based on two elements: the first is an approximation of the skeleton, used to describe animations independently of the figure to be animated so that they can be reused with different models; the second is the model to be animated, seen as a surface called the skin, which deforms following the skeleton. An approach for making the skin deform according to the bones of the skeleton is called skinning. The most widespread skinning algorithm in real-time applications is known as Linear Blend Skinning (LBS). This approach associates with each vertex of the skin a weight between 0 and 1 for each bone, such that the more influence a bone has on the vertex, the closer that weight is to 1. The weights are then used in a linear blend of rigid transformations to deform the skin. The setting of the weights for each vertex is generally done manually with the help of dedicated tools integrated into modelling software. This operation nevertheless requires considerable effort from the modeller. Moreover, in some applications it would be preferable for the weights to be determined fully automatically and with good visual results, without the intervention of an expert modeller. This thesis analyses the best-known methods for the automatic computation of the weights to be used with LBS, so as to obtain a weight distribution that is as realistic as possible.
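
    As a concrete example of the kind of automatic weight computation surveyed in the thesis above, the following Python sketch assigns each vertex a weight per bone from its inverse distance to the bone segment and normalises the result so the weights lie in [0, 1] and sum to 1. It is a generic illustration under assumed inputs, not any specific published method; the power parameter and function names are assumptions.

        # Simple automatic LBS weighting sketch: inverse distance to each bone
        # segment, normalised per vertex. An illustration, not a specific method.
        import numpy as np

        def point_to_segment_distance(p, a, b):
            """Distance from point p to the bone segment with endpoints a and b."""
            ab = b - a
            t = np.clip(np.dot(p - a, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
            return np.linalg.norm(p - (a + t * ab))

        def automatic_weights(vertices, bones, power=4.0):
            """vertices: (V,3) array; bones: list of (head, tail) point pairs;
            power controls how sharply influence decays with distance (assumed)."""
            dists = np.array([[point_to_segment_distance(v, a, b) for a, b in bones]
                              for v in vertices])          # (V,B) vertex-to-bone distances
            w = 1.0 / np.power(dists + 1e-6, power)
            return w / w.sum(axis=1, keepdims=True)         # weights in [0,1], summing to 1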

    Proceedings of the 7th International Conference on Functional-Structural Plant Models, Saariselkä, Finland, 9 - 14 June 2013


    Real-time Immersive human-computer interaction based on tracking and recognition of dynamic hand gestures

    With the fast development and ever-growing use of computer-based technologies, human-computer interaction (HCI) plays an increasingly pivotal role. In virtual reality (VR), HCI technologies provide not only a better understanding of three-dimensional shapes and spaces, but also sensory immersion and physical interaction. With hand-based HCI being a key modality for object manipulation and gesture-based communication, providing users with a natural, intuitive, effortless, precise and real-time method of HCI based on dynamic hand gestures is challenging, due to the complexity of hand postures formed by multiple joints with high degrees of freedom, the speed of hand movements with highly variable trajectories and rapid direction changes, and the precision required for interaction between hands and objects in the virtual world. Presented in this thesis is the design and development of a novel real-time HCI system based on a unique combination of a pair of data gloves with fibre-optic curvature sensors to acquire finger joint angles, a hybrid tracking system based on inertia and ultrasound to capture hand position and orientation, and a stereoscopic display system to provide immersive visual feedback. The potential and effectiveness of the proposed system are demonstrated through a number of applications, namely hand-gesture-based virtual object manipulation and visualisation, hand-gesture-based direct sign writing, and hand-gesture-based finger spelling. For virtual object manipulation and visualisation, the system is shown to allow a user to select, translate, rotate, scale, release and visualise virtual objects (presented using graphics and volume data) in three-dimensional space using natural hand gestures in real time. For direct sign writing, the system is shown to immediately display the corresponding SignWriting symbols signed by a user using three different signing sequences and a range of complex hand gestures, which consist of various combinations of hand postures (with each finger open, half-bent or closed, and with adduction and abduction), eight hand orientations in the horizontal/vertical planes, three palm-facing directions, and various hand movements (which can take eight directions in the horizontal/vertical planes and can be repetitive, straight/curved, or clockwise/anti-clockwise). The development includes a special visual interface that gives not only a stereoscopic view of hand gestures and movements, but also structured visual feedback for each stage of the signing sequence. An excellent basis is therefore formed for developing a full HCI system based on all human gestures by integrating the proposed system with facial expression and body posture recognition methods. Furthermore, for finger spelling, the system is shown to be able to recognise five vowels signed with two hands using British Sign Language in real time.
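
    As an illustration of how the glove and tracker data described above might be combined in software, the following Python sketch classifies each finger as open, half-bent or closed from the glove's joint-angle readings and bundles the result with the tracked hand position and orientation. The thresholds, field names and data layout are assumptions, not the thesis's actual recogniser.

        # Illustrative sketch only: threshold-based finger-posture labels from
        # glove joint angles, combined with a tracked hand pose. All thresholds
        # and field names are assumptions.
        from dataclasses import dataclass
        from typing import List, Tuple

        def classify_finger(joint_angles_deg: List[float]) -> str:
            """joint_angles_deg: flexion angles of one finger's joints (degrees)."""
            bend = sum(joint_angles_deg)        # total flexion of the finger
            if bend < 60:
                return "open"
            elif bend < 150:
                return "half-bent"
            return "closed"

        @dataclass
        class HandState:
            position: Tuple[float, float, float]             # from the hybrid tracker
            orientation: Tuple[float, float, float, float]   # quaternion, same source
            finger_states: List[str]                         # one label per finger, from the glove

        def build_hand_state(tracker_pose, glove_angles) -> HandState:
            position, orientation = tracker_pose
            return HandState(position, orientation,
                             [classify_finger(f) for f in glove_angles])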

    Injury and Skeletal Biomechanics

    This book covers many aspects of injury and skeletal biomechanics. As the title suggests, force, motion, kinetics, kinematics, deformation, stress and strain are examined across a range of topics such as the human muscles and skeleton, gait, injury, and risk assessment in given situations. Topics range from image processing, articular cartilage biomechanical behaviour, gait behaviour under different scenarios, and training, to musculoskeletal and injury biomechanics modelling, risk assessment, and motion preservation. This book, together with "Human Musculoskeletal Biomechanics", is available as a free download to students and instructors, who may find it suitable for developing new graduate-level courses and undergraduate teaching in biomechanics.
