2 research outputs found

    Understanding aesthetics in a virtual environment performance

    The virtual performance is a form of art that develops in step with information technology, as IT provides the flexibility to build sophisticated design systems for the artist. Moreover, the intrinsic relationship between art and technology is apparent from the concluding research results. This research aimed to investigate the aesthetic value of performances in virtual environments (VEs). The purpose of the study was to locate where aesthetics reside in VEs. A qualitative method was employed in order to control the investigated objective. A literature review was conducted out of the necessity to understand VE aesthetic phenomena in their entirety, so as to develop a complete picture of the research field. Case studies and observation were the main instruments, given the type of research conducted. The resulting findings were accepted or rejected through interviews with creators of virtual performances. The research took place in three stages. The first step was to determine the research aims and objectives. The second was to design the research plan, which was divided along three basic axes. The first refers to the historical review and development of the visual arts in order to determine the characteristics of the investigated art form. The second axis was the comprehension of the aesthetics produced via the determined characteristics; more specifically, these are interactivity, the interrupted flow of information, and audience participation. The third stage was the attempt to identify the elements that characterise a virtual performance: how the artist can handle the interactive element and create conditions of immersion for the audience. The manifesto of virtual performances was created through the course of the research and the analysis of the findings belonging to the third stage, which also includes the data analysis. Another element that emerged was the audience's interaction with the performance's development.
This element is in itself a product of aesthetics that strongly influences the progression of the thought processes of the audiences that interact with a virtual performance. The creator requires a spectator who is an active participant in order to develop the performance's plot. This does not mean that the creator can manipulate the audience as a tool, because each spectator has his own thoughts and critical evaluations. The spectator simply handles and combines, according to his choices, the elements that the artist offers, so that he can project and co-create the performance's plot. The more the spectator experiences virtual performances through his interaction, the more knowledge and freedom he will gain, which will allow virtual performances to offer a larger selection of more powerful experiences. Besides, this art form is still in its embryonic stage, and its maturity promises even greater developments.

    Spatio-temporal centroid based sign language facial expressions for animation synthesis in virtual environment

    Advisor: Eduardo Todt. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 20/02/2019. Includes references: p. 86-97. Area of concentration: Computer Science.
    Abstract: Formally recognized as the second official Brazilian language, BSL, or Libras, today has many computational applications that integrate the deaf community into daily activities, offering virtual interpreters represented by 3D avatars built using formal models that parameterize the specific characteristics of sign languages. These applications, however, still treat facial expressions as a background feature in a primarily gestural language, ignoring the importance that facial expressions and emotions imprint on the context of the transmitted message. In this work, in order to define a parameterized facial model for use in sign languages, a system for synthesizing facial expressions through a 3D avatar is proposed and a prototype implemented. To this end, a model of facial landmarks separated by regions is defined, along with a modeling of base expressions using the AKDEF and JAFFE facial databases as reference. With this system it is possible to represent complex expressions by interpolating intensity values in the geometric animation, in a simplified way, using centroid-based control and displacement of independent regions in the 3D model.
A spatio-temporal model is also proposed for the facial landmarks, with the objective of defining the behavior and relation of the centroids in the synthesis of the base expressions, pointing out which geometric landmarks are relevant in the process of interpolation and animation of the expressions. A system for exporting the facial data following the hierarchical format used in most 3D sign-language interpreter avatars is developed, encouraging integration into formal computational models already existing in the literature, while also allowing the adaptation and alteration of values and intensities in the representation of the emotions. Thus, the models and concepts presented propose the integration of a facial model for representing expressions in sign synthesis, offering a simplified and optimized proposal for applying these resources in 3D avatars. Keywords: 3D Avatar, Spatio-Temporal Data, BSL, Sign Language, Facial Expression.
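The abstract's core mechanism, per-region landmark control with centroids and intensity-scaled interpolation toward a base expression, can be sketched minimally as follows. This is an illustrative reconstruction, not the thesis's actual code: the region names, array shapes, and function names are assumptions, and real systems would animate many landmarks per region over time.

```python
import numpy as np

def region_centroid(landmarks):
    # Centroid of one facial region, given its landmarks as an (N, 3) array.
    return landmarks.mean(axis=0)

def interpolate_expression(neutral, target, intensity):
    # Linearly interpolate each region's landmarks from the neutral face
    # toward a target base expression, scaled by intensity in [0, 1].
    # Both inputs map region name -> (N, 3) landmark array; regions move
    # independently, as in the region-separated model described above.
    return {region: pts + intensity * (target[region] - pts)
            for region, pts in neutral.items()}

# Toy example: a hypothetical "mouth" region with two landmarks.
neutral = {"mouth": np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])}
smile   = {"mouth": np.array([[0.0, 0.5, 0.0], [1.0, 0.5, 0.0]])}
half_smile = interpolate_expression(neutral, smile, 0.5)
```

At intensity 0.5 each mouth landmark is displaced halfway toward the smile pose; tracking `region_centroid` across intensities is one way to observe which regions drive a given base expression, in the spirit of the spatio-temporal analysis the abstract describes.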