281 research outputs found

    Exporting Vector Muscles for Facial Animation

    In this paper we introduce a method for exporting vector muscles from one 3D face to another for facial animation. Starting from a 3D face equipped with an extended version of Waters' linear muscle system, we transfer the linear muscles to a target 3D face. We also transfer the region division, which is used both to improve the performance of the muscles and to control the animation. Human involvement is limited to selecting the faces that show the most natural facial expressions in the animator's view. The method allows the animation to be transferred to a new 3D model within a short time, and the transferred muscles can then be used to create new animations.
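
    As a rough illustration of the kind of Waters-style linear muscle the abstract refers to (the paper's own extended formulation is not reproduced here), a minimal numpy sketch of a single linear muscle acting on a face mesh might look like the following; the function name, falloff radii, and cosine falloff profile are illustrative assumptions.

        import numpy as np

        def apply_linear_muscle(vertices, head, tail, k=0.5,
                                influence_angle=np.pi / 4, r_start=0.3, r_finish=1.0):
            # vertices: (N, 3) face-mesh vertex positions
            # head:     fixed (bony) attachment point of the muscle
            # tail:     insertion point in the skin
            # k:        contraction factor in [0, 1]
            vertices = np.asarray(vertices, dtype=float)
            head, tail = np.asarray(head, dtype=float), np.asarray(tail, dtype=float)
            axis = (tail - head) / np.linalg.norm(tail - head)
            out = vertices.copy()
            for i, p in enumerate(vertices):
                to_p = p - head
                dist = np.linalg.norm(to_p)
                if dist < 1e-9 or dist > r_finish:
                    continue
                angle = np.arccos(np.clip(np.dot(to_p / dist, axis), -1.0, 1.0))
                if angle > influence_angle:
                    continue  # vertex lies outside the muscle's cone of influence
                angular = np.cos(angle / influence_angle * np.pi / 2)
                if dist <= r_start:
                    radial = 1.0
                else:
                    radial = np.cos((dist - r_start) / (r_finish - r_start) * np.pi / 2)
                # displace the vertex toward the fixed head of the muscle
                out[i] = p + k * angular * radial * (head - p) / dist
            return out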

    Media Data Processing

    This thesis describes the process of creating a cross-platform interactive system and demonstrates its usability on a prototype application that simulates a rehabilitation session. The work covers software design, 3D programming, and the production of animations using the Kinect depth sensor. The core of the application was built in the Unity 3D game engine and was deployed and tested on Linux Ubuntu, Windows 7, an Android mobile device, and the Unity Web Player. The first half of the document surveys state-of-the-art tools and resources suitable for building an interactive cross-platform application working with 3D content. Three game engines (Unity 3D, UDK, Unreal Engine 4) are described first as the main development environments suitable for the application. Data-storage options common to all of these engines are then discussed: integrated data stores, database systems, and serialization. Finally, the advantages and disadvantages of different approaches to creating 3D animations are described, covering 3D software intended for manual production as well as various motion-capture options. The second part of the document evaluates this survey and justifies the choice of tools used to build the prototype; it also refines the assignment and formulates the application's basic parameters. It then describes the software design, the production and processing of the 3D animations, and the implementation itself. Finally, the resulting prototype is evaluated and possible future extensions are outlined. As for the use of the application itself, this work presents ideas on how information technology can be applied in physiotherapy and healthcare in general. The goal is not to replace physiotherapists but to assist them and their patients. The prototype offers 3D visualization of exercises and of the interactive human muscular system; 3D animations showing how each exercise should be performed correctly can, among other things, make rehabilitation more pleasant and at the same time more effective.
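
    The prototype itself was written in C# inside Unity 3D, but as a language-neutral illustration of the serialization option the thesis weighs for storing captured Kinect motion data, a minimal Python sketch could look like this; the JointSample structure and the file layout are assumptions, not the thesis's actual format.

        import json
        from dataclasses import dataclass, asdict

        @dataclass
        class JointSample:
            name: str   # e.g. "ElbowLeft"; joint names here are illustrative
            x: float
            y: float
            z: float

        def save_clip(frames, path):
            # frames: list of frames, each frame a list of JointSample objects
            payload = [[asdict(j) for j in frame] for frame in frames]
            with open(path, "w", encoding="utf-8") as f:
                json.dump(payload, f, indent=2)

        def load_clip(path):
            with open(path, encoding="utf-8") as f:
                return [[JointSample(**j) for j in frame] for frame in json.load(f)]

        # usage (hypothetical file name):
        # save_clip(recorded_frames, "squat_exercise.json")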

    Analysis and Construction of Engaging Facial Forms and Expressions: Interdisciplinary Approaches from Art, Anatomy, Engineering, Cultural Studies, and Psychology

    The topic of this dissertation is the anatomical, psychological, and cultural examination of the human face in order to construct an effective, anatomy-driven 3D virtual face customization and action model. In order to gain a broad perspective on all aspects of the face, theories and methodology from the fields of art, engineering, anatomy, psychology, and cultural studies have been analyzed and implemented. The computer-generated facial customization and action models were designed based on the collected data. Using this customization system, a culturally specific attractive face in Korean popular culture, “kot-mi-nam (flower-like beautiful guy),” was modeled and analyzed as a case study. The “kot-mi-nam” phenomenon is reviewed in its textual, visual, and contextual aspects, which reveals the gender- and sexuality-fluidity of its masculinity. The analysis and the actual development of the model organically co-construct each other, requiring an interwoven process. Chapter 1 introduces anatomical studies of the human face, psychological theories of face recognition and facial attractiveness, and state-of-the-art face construction projects in various fields. Chapters 2 and 3 present the Bezier curve-based 3D facial customization (BCFC) and the Multi-layered Facial Action Model (MFAM), based on the analysis of human anatomy, to achieve cost-effective yet realistic facial animation without using 3D scanned data. In the experiments, results for facial customization by gender, race, fat, and age showed that BCFC achieved enhanced performance of 25.20% compared to the existing program Facegen, and 44.12% compared to Facial Studio. The experimental results also demonstrated the realistic quality and effectiveness of MFAM compared with the blend-shape technique, with improvements of 2.87% and 0.03% of facial area per second for happiness and anger expressions, respectively. In Chapter 4, according to the analysis based on BCFC, the 3D face of an average kot-mi-nam is close to gender neutral (male: 50.38%, female: 49.62%) and Caucasian (66.42-66.40%). Culturally specific images can be misinterpreted in different cultures, due to their different languages, histories, and contexts. This research demonstrates that facial images can be affected by the cultural tastes of their makers and can be interpreted differently by viewers in different cultures.
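
    The internals of BCFC are not given in the abstract; as a minimal sketch of the Bezier-curve evaluation such a customization system builds on, assuming de Casteljau evaluation of slider-driven handle points (function names and parameters are illustrative):

        import numpy as np

        def bezier_point(control_points, t):
            # Evaluate a Bezier curve at parameter t via de Casteljau's algorithm.
            # control_points: (n, 3) array of handles, e.g. along a jawline profile.
            pts = np.asarray(control_points, dtype=float)
            while len(pts) > 1:
                pts = (1.0 - t) * pts[:-1] + t * pts[1:]
            return pts[0]

        def sample_curve(control_points, n=50):
            # Densely sample the curve to rebuild a profile polyline.
            return np.array([bezier_point(control_points, t)
                             for t in np.linspace(0.0, 1.0, n)])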

    Generating anatomical substructures for physically-based facial animation.

    Physically-based facial animation techniques are capable of producing realistic facial deformations, but have failed to find meaningful use outside the academic community because they are notoriously difficult to create, reuse, and art-direct in comparison to other methods of facial animation. This thesis addresses these shortcomings and presents a series of methods for automatically generating a skull, the superficial musculoaponeurotic system (SMAS – a layer of fascia investing and interlinking the mimic muscle system), and mimic muscles for any given 3D face model, working toward a production-viable framework, or rig-builder, for physically-based facial animation. The workflow consists of three major steps. First, a generic skull is fitted to a given head model using thin-plate splines computed from the correspondence between landmarks placed on both models. Second, the SMAS is constructed as a variational implicit (radial basis function) surface in the interface between the head model and the generic skull fitted to it. Lastly, muscle fibres are generated as boundary-value straightest geodesics connecting muscle attachment regions defined on the surface of the SMAS. Each step of this workflow is developed with speed, realism, and reusability in mind.
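
    The first step of this workflow, fitting a generic skull via thin-plate splines computed from landmark correspondences, can be sketched as a standard 3D TPS warp. The code below is a generic textbook formulation (biharmonic kernel phi(r) = r, no regularization), not the thesis's implementation; function names are assumptions.

        import numpy as np

        def fit_tps(src, dst):
            # Fit a 3D thin-plate-spline warp mapping landmarks src -> dst.
            # src, dst: (n, 3) corresponding landmark positions.
            # Uses the 3D biharmonic kernel phi(r) = r.
            src, dst = np.asarray(src, float), np.asarray(dst, float)
            n = len(src)
            K = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
            P = np.hstack([np.ones((n, 1)), src])
            A = np.zeros((n + 4, n + 4))
            A[:n, :n] = K
            A[:n, n:] = P
            A[n:, :n] = P.T
            b = np.zeros((n + 4, 3))
            b[:n] = dst
            sol = np.linalg.solve(A, b)
            return sol[:n], sol[n:]   # RBF weights, affine part

        def warp(points, src, weights, affine):
            # Apply the fitted warp to arbitrary points (e.g. generic skull vertices).
            points, src = np.asarray(points, float), np.asarray(src, float)
            K = np.linalg.norm(points[:, None, :] - src[None, :, :], axis=-1)
            P = np.hstack([np.ones((len(points), 1)), points])
            return K @ weights + P @ affine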

    Application-driven visual computing towards Industry 4.0 (2018)

    245 p. The thesis gathers contributions in three fields: 1. Interactive Virtual Agents (IVAs): autonomous, modular, scalable, ubiquitous, and engaging to the user; these IVAs can interact with users in a natural way. 2. Immersive VR/AR environments: VR in production planning, product design, process simulation, testing, and verification; the Virtual Operator shows how VR and co-bots can work together in a safe environment, while in the Augmented Operator AR presents relevant information to the worker in a non-intrusive way. 3. Interactive management of 3D models: online management and visualization of multimedia CAD models through automatic conversion of CAD models to the Web; Web3D technology allows these models to be visualized and interacted with on low-power mobile devices. These contributions have also made it possible to analyse the challenges posed by Industry 4.0, and the thesis provides a proof of concept for some of them in human factors, simulation, visualization, and model integration.

    Spatio-temporal centroid based sign language facial expressions for animation synthesis in virtual environment

    Advisor: Eduardo Todt. Doctoral thesis - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defended: Curitiba, 20/02/2019. Includes references: p. 86-97. Area of concentration: Computer Science. Abstract: Formally recognized as the second official Brazilian language, BSL, or Libras, today has many computational applications that integrate the deaf community into daily activities, offering virtual interpreters represented by 3D avatars built with formal models that parameterize the specific characteristics of sign languages. These applications, however, still treat facial expressions as a background feature in a primarily gestural language, ignoring the importance that facial expressions and emotions imprint on the context of the transmitted message. In this work, in order to define a parameterized facial model for use in sign languages, a system for synthesizing facial expressions through a 3D avatar is proposed and a prototype implemented. A model of facial landmarks separated by regions is defined, along with a modeling of base expressions using the AKDEF and JAFEE facial databases as reference. With this system it is possible to represent complex expressions by interpolating intensity values in the geometric animation, in a simplified way, using control by centroids and displacement of independent regions of the 3D model. A spatio-temporal model is also proposed for the facial landmarks, with the objective of defining the behavior and relation of the centroids in the synthesis of the base expressions, pointing out which geometric landmarks are relevant in the process of interpolation and animation of the expressions. A system for exporting the facial data following the hierarchical format used by most 3D sign-language interpreter avatars is developed, encouraging integration into formal computational models already existing in the literature and allowing the adaptation and alteration of values and intensities in the representation of emotions. Thus, the models and concepts presented propose the integration of a facial model for representing expressions in sign synthesis, offering a simplified and optimized approach to applying these resources in 3D avatars. Keywords: 3D Avatar, Spatio-Temporal Data, BSL, Sign Language, Facial Expression.
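
    As a minimal sketch of the centroid-controlled, per-region interpolation described above, assuming a numpy landmark array and an illustrative region split (the thesis's actual landmark model and region indices are not reproduced here):

        import numpy as np

        def region_centroids(landmarks, regions):
            # landmarks: (n, 3) array; regions: dict name -> list of landmark indices
            return {name: landmarks[idx].mean(axis=0) for name, idx in regions.items()}

        def blend_expression(neutral, expression, intensity, regions):
            # Interpolate neutral -> base expression by shifting each region
            # along the displacement of its centroid; intensity in [0, 1].
            neutral = np.asarray(neutral, float)
            expression = np.asarray(expression, float)
            out = neutral.copy()
            c_neu = region_centroids(neutral, regions)
            c_exp = region_centroids(expression, regions)
            for name, idx in regions.items():
                offset = c_exp[name] - c_neu[name]
                out[idx] = neutral[idx] + intensity * offset
            return out

        # Illustrative region split; the real index groups depend on the landmark model used.
        regions = {"left_brow": [17, 18, 19, 20, 21], "mouth": list(range(48, 68))}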

    Muscle activation mapping of skeletal hand motion: an evolutionary approach.

    Creating controlled dynamic character animation consists of mathematical modelling of muscles and solving the activation dynamics that form the key to coordination. But biomechanical simulation and control is computationally expensive, involving complex differential equations, and is not suitable for real-time platforms like games; performing such computations at every time-step reduces the frame rate. Modern games use generic software packages called physics engines to perform a wide variety of in-game physical effects. These physics engines are optimized for gaming platforms. Therefore, a physics-engine-compatible model of anatomical muscles and an alternative control architecture are essential to create biomechanical characters in games. This thesis presents a system that generates muscle activations from captured motion by borrowing principles from biomechanics and neural control. A generic physics-engine-compliant muscle model primitive is also developed; it forms the motion actuator and is an integral part of the physical model used in the simulation. The thesis investigates a stochastic solution to create a controller that mimics the neural control system employed in the human body. The control system uses evolutionary neural networks that evolve their weights using genetic algorithms. Examples and guidance often act as templates in muscle training during all stages of human life; similarly, the neural controller attempts to learn muscle coordination from input motion samples. The thesis also explores the objective functions developed to aid the genetic evolution of the neural network. Character interaction with the game world is still a pre-animated behaviour in most current games. Physically-based procedural hand animation is a step towards autonomous interaction of game characters with the game world. The neural controller and the muscle primitive developed are used to animate a dynamic model of a human hand within a real-time physics engine environment.
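
    A minimal sketch of the neuroevolution idea described above, evolving the weights of a small fixed-topology network with a mutation-only genetic loop (the thesis's network topology, objective functions, and any use of crossover are not reproduced here; all names and hyperparameters are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)

        def forward(weights, x, n_in, n_hidden, n_out):
            # Tiny fixed-topology network; 'weights' is a flat genome vector.
            w1 = weights[:n_in * n_hidden].reshape(n_in, n_hidden)
            w2 = weights[n_in * n_hidden:].reshape(n_hidden, n_out)
            return np.tanh(np.tanh(x @ w1) @ w2)   # activations in [-1, 1]

        def evolve(samples, targets, n_in, n_hidden, n_out,
                   pop=60, gens=200, sigma=0.1):
            # Genetic search for weights mapping motion samples to muscle activations.
            genome_len = n_in * n_hidden + n_hidden * n_out
            population = rng.normal(0.0, 1.0, (pop, genome_len))

            def fitness(g):
                # objective: negative mean squared error over the training samples
                return -np.mean((forward(g, samples, n_in, n_hidden, n_out) - targets) ** 2)

            for _ in range(gens):
                scores = np.array([fitness(g) for g in population])
                parents = population[np.argsort(scores)[-pop // 2:]]          # selection
                children = parents[rng.integers(0, len(parents), pop - len(parents))]
                children = children + rng.normal(0.0, sigma, children.shape)  # mutation
                population = np.vstack([parents, children])
            return max(population, key=fitness)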

    Facial modelling and animation trends in the new millennium: a survey

    M.Sc. (Computer Science). Facial modelling and animation is considered one of the most challenging areas in the animation world. Since Parke and Waters's (1996) comprehensive book, no major work encompassing the entire field of facial animation has been published. This thesis covers Parke and Waters's work while also providing a survey of developments in the field since 1996. It describes, analyses, and compares (where applicable) the existing techniques and practices used to produce facial animation. Where applicable, related techniques are grouped in the same chapter and described in chronological order, outlining their differences as well as their advantages and disadvantages. The thesis concludes with exploratory work towards a talking head for Northern Sotho: facial animation and lip synchronisation of a fragment of Northern Sotho speech is carried out using software tools primarily designed for English.
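
    As a small illustration of the lip-synchronisation step mentioned above, a phoneme-to-viseme lookup of the kind such tools rely on might be sketched as follows; the phoneme groups and viseme keys are generic assumptions, not the mapping used in the thesis.

        # Illustrative phoneme-to-viseme lookup for driving mouth shapes on a
        # blend-shape face rig; the grouping below is a common simplification.
        VISEME_GROUPS = {
            "PP": {"p", "b", "m"},       # closed lips
            "FF": {"f", "v"},            # lower lip to upper teeth
            "TH": {"th"},
            "DD": {"t", "d", "n", "l"},
            "KK": {"k", "g"},
            "AA": {"a"}, "EE": {"e", "i"}, "OO": {"o", "u", "w"},
        }

        def phonemes_to_visemes(phonemes):
            # phonemes: timed sequence [(phoneme, start_seconds), ...]
            lookup = {p: v for v, group in VISEME_GROUPS.items() for p in group}
            return [(lookup.get(p, "REST"), t) for p, t in phonemes]

        # usage: phonemes_to_visemes([("m", 0.00), ("o", 0.08), ("th", 0.21)])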
