214 research outputs found
A 3D talking head for mobile devices based on unofficial iOS WebGL support
In this paper we present the implementation of a WebGL Talking Head for iOS mobile devices (Apple iPhone and iPad). It works on standard MPEG-4 Facial Animation Parameters (FAPs) and speaks with the Italian version of FESTIVAL TTS. It is totally based on true real human data: the 3D kinematics information is used to create a lips articulatory model and to drive the talking face directly, generating human facial movements. In the last year we developed the WebGL version of the avatar. WebGL, which brings 3D graphics to the web, is currently supported in the major web browsers for desktop computers. No official support has yet been given for the main mobile platforms, although the Firefox beta version enables it on Android phones. Starting from iOS 5, WebGL is enabled only for the advertisement library class (which is intended for placing ad banners in applications). We have been able to use this feature to visualize and animate our WebGL talking head.
LUCIA: An open source 3D expressive avatar for multimodal HMI
LUCIA is an MPEG-4 facial animation system developed at ISTC-CNR. It works on standard Facial Animation Parameters and speaks with the Italian version of FESTIVAL TTS. To achieve an emotive/expressive talking head, LUCIA was built from real human data physically extracted by the ELITE optotracking movement analyzer. LUCIA can copy a real human by reproducing the movements of passive markers positioned on his face and recorded by the ELITE device, or can be driven by an emotional XML-tagged input text, thus realizing a true audio/visual emotive/expressive synthesis. Synchronization between visual and audio data is very important in order to create the correct WAV and FAP files needed for the animation. LUCIA's voice is based on the ISTC Italian version of the FESTIVAL-MBROLA packages, modified by means of an appropriate APML/VSML tagged language. LUCIA is available in two different versions: an open source framework and the "work in progress" WebGL version
A FACIAL ANIMATION FRAMEWORK WITH EMOTIVE/EXPRESSIVE CAPABILITIES
LUCIA is an MPEG-4 facial animation system developed at ISTC-CNR. It works on standard Facial Animation Parameters and speaks with the Italian version of FESTIVAL TTS. To achieve an emotive/expressive talking head, LUCIA was built from real human data physically extracted by the ELITE optotracking movement analyzer. LUCIA can copy a real human by reproducing the movements of passive markers positioned on his face and recorded by the ELITE device, or can be driven by an emotional XML-tagged input text, thus realizing a true audio/visual emotive/expressive synthesis. Synchronization between visual and audio data is very important in order to create the correct WAV and FAP files needed for the animation. LUCIA's voice is based on the ISTC Italian version of the FESTIVAL-MBROLA packages, modified by means of an appropriate APML/VSML tagged language. LUCIA is available in two different versions: an open source framework and the "work in progress" WebGL version
Application-driven visual computing towards Industry 4.0 (2018)
245 p. This thesis makes contributions in three fields: 1. Interactive Virtual Agents: autonomous, modular, scalable, ubiquitous, and attractive to the user. These IVAs can interact with users in a natural way. 2. Immersive VR/AR Environments: VR in production planning, product design, process simulation, testing, and verification. The Virtual Operator shows how VR and co-bots can work together in a safe environment; in the Augmented Operator, AR presents relevant information to the worker in a non-intrusive way. 3. Interactive Management of 3D Models: online management and visualisation of multimedia CAD models through automatic conversion of CAD models to the Web. Web3D technology enables the visualisation of, and interaction with, these models on low-powered mobile devices. These contributions have also made it possible to analyse the challenges posed by Industry 4.0. The thesis has contributed a proof of concept for some of those challenges: in human factors, simulation, visualisation, and model integration
Animation of a speaking chatbot: The Deep Speaking Avatar project
This thesis combines scientific research into existing technologies for animating virtual avatars with the development of our own application for that purpose. The topic was constrained by compatibility with the other components of a chatbot pipeline, the available computer graphics techniques, and the user experience required of a Deep Speaking Avatar application. The other components in the project's pipeline were implemented by four other students as their own bachelor's theses. Two of those components are relevant to our application: one is face recognition, which guides the rotation of the avatar's head; the other is the output of the text-to-speech module, which times the lip movements of the avatar.
Available techniques for facial expression modeling and face-audio synchronization were first surveyed. Two of them were tested, but both proved too restricted to be animated and rotated in real time. Both techniques were competent in their intended aspects, but fulfilling all the requirements of the Deep Speaking Avatar meant lowering the standards of the avatar's realism and building our own implementation with general-purpose tools. The popular 3D rendering and development tool Unity offered a suitable degree of implementation freedom and technical capability. Another popular tool, Blender, was used to craft the 3D mesh model that is animated and rendered in Unity.
The main desired functionalities of the avatar were achieved successfully under the specific technical conditions used to test the implementation. The built avatar can turn its head approximately in the direction of the user's face, and the lips of the avatar move at a monotonic pace while the avatar produces speech. The relative locations of the camera, user, and screen should not differ much from the test setup, although ideas to improve the application's adaptivity to different physical setups are discussed. A Linux environment with the other avatar-pipeline components is required to run the Deep Speaking Avatar, and an integration script written by another student can be used to set up the pipeline. Substantial computational power is required to run the whole system, because many of the chatbot modules rely on heavy neural networks
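The head-turning behaviour described above reduces to mapping the detected face's position in the camera image to a rotation angle for the avatar. A minimal sketch of that geometric step follows; the field-of-view value and function name are assumptions for illustration, not details from the thesis:

```python
# Hypothetical sketch of the head-orientation step: given the horizontal
# position of a detected face in the camera image, compute the yaw angle
# the avatar's head should turn. The 60-degree field of view is an
# assumed camera parameter, not one taken from the project.

def yaw_towards_face(face_center_x, image_width, horizontal_fov_deg=60.0):
    """Map a face's horizontal pixel position to a yaw angle in degrees.

    A face at the image centre yields 0 degrees; faces at the left/right
    edges map to +/- half the horizontal field of view.
    """
    # Normalised offset from the image centre, in [-0.5, 0.5]
    offset = face_center_x / image_width - 0.5
    return offset * horizontal_fov_deg

print(yaw_towards_face(960, 1280))  # face right of centre: 15.0
```

In an engine such as Unity, the resulting angle would then be applied (typically smoothed over several frames) to the head bone's rotation, which matches the "approximately in the direction of the user's face" behaviour the thesis reports.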
USING SERIOUS GAMES DESIGNED THROUGH THE GAME ELC+ FRAMEWORK TO ENHANCE DEEP LEARNING IN HUMAN RESOURCES DEVELOPMENT
The traditional method of learning has been widely criticised for its limitations and its inflexibility when applied in non-educational settings. These observations about traditional modes of learning have prompted the search for new approaches that embrace technological tools to deliver better learning experiences. Hence, new technological innovations such as Serious Games (SGs) have been embraced as more effective means of achieving deep learning. The application of serious games has indeed gained traction in both formal education and human resource (HR) settings, especially for employee training and development. Thus, the core question of this PhD research is whether SGs are more effective at creating deep learning in adult learners than the more traditional teaching methods. To answer this question, the study examines the traditional and SG learning approaches in order to ascertain which is more effective at creating deep learning in adults, and at achieving human resource training and development. To guide the design and development of SGs that support adult DL, this research proposes a pedagogical framework, referred to as the Game ELC+ framework, that comprises four learning theories: the Game (Elements) within Yu-Kai Chou's Octalysis Framework; Bloom's Taxonomy Player (Learning) Levels; the (Cognitive) Theory of Multimedia Learning; and Ruskov's four evidences of Deep Learning (+). This framework provides the standard for measuring DL in the design of SGs.
The research instruments developed include a traditional andragogical test, which uses e-learning materials containing ten different learning scenarios in the context of workplace HR, and a digital Serious Game using exactly the same content and scenarios as the traditional andragogical test.
ANOVA was used to compare the mean scores of learners using serious games and the traditional e-learning platform. The study hypothesised that deep learning can be achieved through SGs and that they are more effective than traditional andragogy; it further asserted that participants who used the SGs would achieve higher learning outcomes than participants in the traditional process. Participant observation during the testing phase suggests that participants interacting with the SGs demonstrated a higher level of engagement and curiosity than participants who used the traditional e-learning platform. The study findings validate the hypotheses: by implication, SGs designed according to the Game ELC+ framework result in improved learning outcomes. In summary, the findings suggest that incorporating SG elements in HR training and development can improve professional practice and mitigate some of the challenges experienced by human resources in the traditional learning environment
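The ANOVA comparison described in this abstract can be illustrated with a small worked example. The scores below are invented for illustration (they are not the study's data); the sketch computes the one-way ANOVA F statistic by hand for a serious-game group versus a traditional e-learning group:

```python
# Illustrative one-way ANOVA on made-up scores (not the study's data):
# comparing mean learning scores of a serious-game group and a
# traditional e-learning group by computing the F statistic by hand.

def one_way_anova_f(*groups):
    """Return the F statistic for a one-way ANOVA across the given groups."""
    all_scores = [x for g in groups for x in g]
    n_total = len(all_scores)
    k = len(groups)
    grand_mean = sum(all_scores) / n_total

    # Between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of scores around their own group mean
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

    ms_between = ss_between / (k - 1)       # between-group mean square
    ms_within = ss_within / (n_total - k)   # within-group mean square
    return ms_between / ms_within

sg_scores = [78, 85, 82, 88, 80]    # hypothetical serious-game group
trad_scores = [70, 74, 68, 72, 75]  # hypothetical traditional group
print(round(one_way_anova_f(sg_scores, trad_scores), 2))  # 24.3
```

A large F (compared against the F distribution with k-1 and n-k degrees of freedom) indicates that the difference between group means is unlikely to be due to chance, which is the kind of evidence the study uses to support its hypothesis.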
Analysis of Visualisation and Interaction Tools
This document provides an in-depth analysis of the visualization and interaction tools employed in the context of Virtual Museums. This analysis is required to identify and design the tools and the different components that will be part of the Common Implementation Framework (CIF). The CIF will be the base of the web-based services and tools supporting the development of Virtual Museums, with particular attention to online Virtual Museums. The main goal is to provide stakeholders and developers with a useful platform to support and help them in the development of their projects, whatever the nature of the project itself. The design of the CIF is based on an analysis of the typical workflow of the V-MUST partners and their perceived limitations of current technologies. This document also draws on the results of the V-MUST technical questionnaire (presented in Deliverable 4.1). Based on these two sources of information, we have selected some important tools (mainly visualization tools) and services, and we elaborate initial guidelines and ideas for the design and development of the CIF, which shall provide a technological foundation for the V-MUST Platform, together with the V-MUST repository/repositories and the additional services defined in WP4. Two state-of-the-art reports, one on user interface design and one on visualization technologies, are also provided in this document