An investigation into the efficacy of avatar-based systems for student advice
Student support is an important function in all universities. Most students expect access to support 24/7, but support staff cannot be available at all times of day. This paper addresses this problem, describing the development of an avatar-based system to guide students through the materials provided by a university student employability service. First, students and staff were surveyed to establish the demand for such a system. The system was then constructed. Finally, the system was evaluated by students and staff, which led to a clearer understanding of the optimal role for avatar-based systems and consequent improvements to the system’s functionality.
Lip syncing method for realistic expressive 3D face model
Lip synchronization of a 3D face model is now being used in a multitude of important fields. It brings a more human, social and dramatic reality to computer games, films and interactive multimedia, and is growing in use and importance. A high level of realism is required in demanding applications such as computer games and cinema. Authoring lip syncing with complex and subtle expressions is still difficult and fraught with problems in terms of realism. This research proposed a lip-syncing method for a realistic, expressive 3D face model. Animating lips requires a 3D face model capable of representing the myriad shapes the human face takes on during speech, and a method to produce the correct lip shape at the correct time. The paper presented a 3D face model designed to support lip syncing that aligns with an input audio file. It deforms using a Raised Cosine Deformation (RCD) function that is grafted onto the input facial geometry. The face model was based on the MPEG-4 Facial Animation (FA) standard. This paper proposed a method to animate the 3D face model over time to create animated lip syncing, using a canonical set of visemes for all pairwise combinations of a reduced phoneme set called ProPhone. The proposed research integrated emotions, by considering the Ekman model and Plutchik’s wheel, with emotive eye movements by implementing the Emotional Eye Movements Markup Language (EEMML) to produce a realistic 3D face model. © 2017 Springer Science+Business Media New York
Lip syncing method for realistic expressive three-dimensional face model
Lip synchronization of a 3D face model is now being used in a multitude of important fields. It brings a more human and dramatic reality to computer games, films and interactive multimedia, and is growing in use and importance. A high level of realism is required in demanding applications such as computer games and cinema. Authoring lip syncing with complex and subtle expressions is still difficult and fraught with problems in terms of realism. Thus, this study proposes a lip-syncing method for a realistic, expressive 3D face model. Animated lips require a 3D face model capable of representing the movement of the facial muscles during speech, and a method to produce the correct lip shape at the correct time. The 3D face model is designed based on the MPEG-4 facial animation standard to support lip syncing that is aligned with an input audio file. It deforms using a Raised Cosine Deformation function that is grafted onto the input facial geometry. This study also proposes a method to animate the 3D face model over time to create animated lip syncing, using a canonical set of visemes for all pairwise combinations of a reduced phoneme set called ProPhone. Finally, this study integrates emotions, by considering both the Ekman model and Plutchik’s wheel, with emotive eye movements by implementing the Emotional Eye Movements Markup Language to produce a realistic 3D face model. The experimental results show that the proposed model can generate visually satisfactory animations, with a Mean Square Error of 0.0020 for the neutral expression, 0.0024 for happy, 0.0020 for angry, 0.0030 for fear, 0.0026 for surprise, 0.0010 for disgust, and 0.0030 for sad.
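The Raised Cosine Deformation mentioned above can be sketched as follows. This is a minimal illustration only: the function name, parameters, and the exact falloff form are assumptions based on the abstract, not the authors' actual implementation. The idea is that vertices within a radius of a control point are displaced with a smooth cosine-shaped weight that is 1 at the center and falls to 0 at the radius.

```python
import numpy as np

def raised_cosine_deform(vertices, center, direction, radius, amplitude):
    """Displace mesh vertices with a raised-cosine falloff around a control point.

    vertices  : (N, 3) array of mesh vertex positions
    center    : (3,) control point on the face (e.g. a lip landmark)
    direction : (3,) unit displacement direction
    radius    : influence radius; vertices farther than this are untouched
    amplitude : maximum displacement, applied at the center
    """
    d = np.linalg.norm(vertices - center, axis=1)
    # Raised-cosine weight: 1 at d=0, smoothly 0 at d=radius, 0 beyond.
    w = np.where(d < radius, 0.5 * (1.0 + np.cos(np.pi * d / radius)), 0.0)
    return vertices + amplitude * w[:, None] * direction
```

In a lip-syncing pipeline, a deformation like this would be driven per frame by the viseme schedule derived from the input audio, moving lip vertices toward each target mouth shape.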
Enhanced facial expression using oxygenation absorption of facial skin
Facial skin appearance is affected by the physical and physiological state of the skin. Facial expression, and skin appearance in particular, changes constantly and dynamically as humans behave, talk and experience stress. Skin color is considered one of the key indicators of these states, and is largely determined by the scattering and absorption of light within the skin layers; the concentrations of melanin and of oxygenated hemoglobin in the blood play a pivotal role. This work improves on a prior model to create a realistic textured three-dimensional (3D) facial model for animation. This thesis considers both surface and subsurface scattering, which together simulate the interaction of light with human skin. Six parameters are used in this research: the amounts of oxygenation, de-oxygenation, hemoglobin, melanin and oil, and a blend factor for different types of melanin in the skin, to generate a perfect match to specific skin types. The proposed model is combined with Blend Shape Interpolation and the Facial Action Coding System to create five basic facial emotional expressions, namely anger, happiness, neutrality, sadness and fear. Meanwhile, the correlation between blood oxygenation and changes in facial skin color for basic natural emotional expressions is measured using pulse oximetry and a 3D skin analyzer. Data from male and female subjects performing a range of partially extreme facial expressions are fed into the model for simulation. The multipole method for layered materials is used to calculate the spectral diffusion profiles of two-layered skin, which are then used to simulate the subsurface scattering of light within the skin. The subsurface scattering is further combined with the Torrance-Sparrow Bidirectional Reflectance Distribution Function (BRDF) model to simulate the interaction of light with an oily layer at the skin surface.
The result is validated by an evaluation procedure that measures the fidelity of the proposed model's expressions and skin color to those of a real human. The facial-expression evaluation calculates the Euclidean distance between corresponding facial markers of the real human and the avatar. The second assessment validates the skin color of the avatar's facial expressions by extracting Histogram Color Features and a Color Coherence Vector from each image and comparing them against the real human and previous work. The experimental results show an improvement of around 5.12 percent over the previous work. In achieving realistic facial expressions for a virtual human based on facial skin color, texture and hemoglobin oxygenation, the results demonstrate that the proposed model is beneficial to the development of virtual-reality and game environments in computer-aided graphics animation systems.
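The marker-based evaluation described above can be sketched as below. This is a hedged illustration only: the function name and the (N, 3) marker layout are assumptions, not the thesis's actual code. It simply averages the Euclidean distances between corresponding markers on the real face and the avatar, so a smaller value indicates a closer match.

```python
import numpy as np

def mean_marker_error(real_markers, avatar_markers):
    """Mean Euclidean distance between corresponding facial markers.

    Both inputs are (N, 3) arrays of marker coordinates captured on the
    real human and on the avatar for the same expression.
    """
    diffs = np.asarray(real_markers) - np.asarray(avatar_markers)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))
```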
Effects of an avatar's Emotional Design on interaction with Chatbots
In recent years there has been a high interest in chatbot
technology as it can, through artificial intelligence, interpret
and communicate with people in a similar way to a human.
For companies, the value of this type of technology is
enormous as it allows them to be in touch with their customers
around the world at all times and hold multiple conversations
simultaneously. For people, this technology can be equally
valuable as it enables things like access to information, task
support, entertainment, among others. However, the uptake of
this technology by people is far from what is expected. One of
the reasons for this lack of adhesion is the challenge that
chatbots face, which is the difficulty they have in understanding
and reproducing a natural speech that effectively resembles a
human. This and other factors lead to a lack of humanisation that
ends up deterring people from using this technology. Thus, this
study focused on understanding the impact of Emotional
Design strategies on increasing adherence to this technology,
since Emotional Design is a way to make technology more
human. To this end, we focused this study on the humanisation
of the chatbot through an avatar expressing an emotional state
(humour vs sadness), hypothesising that the inclusion of these
two elements in the design would increase adherence to
chatbots. To test this hypothesis, our study was divided into two
phases: a first one in which visual proposals were developed,
selected and evaluated regarding their degree of sadness and
humour. Finally, a second phase, where the two visual
proposals (sadness and humour) and an additional neutral one
were applied to a prototype of a simulated e-commerce
platform. The results of this study suggest that the application
of Emotional Design strategies, such as the ones we applied,
can be effective in increasing the adoption of the chatbot
technology.