
    Natural User Interfaces for Virtual Character Full Body and Facial Animation in Immersive Virtual Worlds

    In recent years, networked virtual environments have steadily grown to become a frontier in social computing. Such virtual cyberspaces are usually accessed by multiple users through their 3D avatars. Recent scientific activity has resulted in the release of both hardware and software components that enable users at home to interact with their virtual persona through natural body and facial activity performance. Based on 3D computer graphics methods and vision-based motion tracking algorithms, these techniques aspire to reinforce the sense of autonomy and telepresence within the virtual world. In this paper we present two distinct frameworks for avatar animation through user natural motion input. We specifically target the full body avatar control case using a Kinect sensor via a simple, networked skeletal joint retargeting pipeline, as well as an intuitive user facial animation 3D reconstruction pipeline for rendering highly realistic user facial puppets. Furthermore, we present a common networked architecture to enable multiple remote clients to capture and render any number of 3D animated characters within a shared virtual environment
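
As a rough illustration of the kind of skeletal joint retargeting and networked pose distribution described above, the sketch below maps per-joint rotations keyed by Kinect joint names onto avatar bone names and broadcasts one frame as a JSON datagram. The joint-name mapping, the quaternion layout, and the UDP transport are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of a networked skeletal joint retargeting step, assuming
# joint rotations arrive as (x, y, z, w) quaternions keyed by Kinect joint
# names. The mapping and the UDP transport are illustrative assumptions.
import json
import socket

# Hypothetical mapping from Kinect joint names to avatar rig bone names.
KINECT_TO_AVATAR = {
    "SpineBase": "Hips",
    "SpineMid": "Spine",
    "Neck": "Neck",
    "Head": "Head",
    "ShoulderLeft": "LeftShoulder",
    "ElbowLeft": "LeftForeArm",
    "ShoulderRight": "RightShoulder",
    "ElbowRight": "RightForeArm",
    "HipLeft": "LeftUpLeg",
    "KneeLeft": "LeftLeg",
    "HipRight": "RightUpLeg",
    "KneeRight": "RightLeg",
}

def retarget(kinect_pose):
    """Copy per-joint quaternions onto the corresponding avatar bone names."""
    return {
        avatar_bone: kinect_pose[kinect_joint]
        for kinect_joint, avatar_bone in KINECT_TO_AVATAR.items()
        if kinect_joint in kinect_pose
    }

def broadcast_pose(avatar_id, avatar_pose, address=("127.0.0.1", 9000)):
    """Send one retargeted frame to a rendering client as a JSON datagram."""
    packet = json.dumps({"avatar": avatar_id, "pose": avatar_pose}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(packet, address)

if __name__ == "__main__":
    # Identity quaternions stand in for tracked joint rotations.
    fake_frame = {joint: (0.0, 0.0, 0.0, 1.0) for joint in KINECT_TO_AVATAR}
    broadcast_pose("user-1", retarget(fake_frame))
```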

    Application-driven visual computing towards industry 4.0 (2018)

    The thesis gathers contributions in three fields: 1. Interactive Virtual Agents: autonomous, modular, scalable, ubiquitous, and engaging for the user. These IVAs can interact with users in a natural way. 2. Immersive VR/AR Environments: VR in production planning, product design, process simulation, testing, and verification. The Virtual Operator shows how VR and co-bots can work together in a safe environment. In the Augmented Operator, AR presents relevant information to the worker in a non-intrusive way. 3. Interactive Management of 3D Models: online management and visualization of multimedia CAD models through automatic conversion of CAD models to the Web. Web3D technology enables visualization of and interaction with these models on low-power mobile devices. In addition, these contributions have made it possible to analyze the challenges posed by Industry 4.0. The thesis has contributed a proof of concept for some of those challenges in human factors, simulation, visualization, and model integration
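
As a minimal sketch of the kind of automatic CAD-to-Web3D conversion mentioned in the third contribution, the snippet below converts a tessellated CAD export (STL) to a glTF binary (GLB) that Web3D viewers on low-power mobile devices can stream. The use of the `trimesh` Python package and the file names are illustrative assumptions, not the thesis' actual toolchain.

```python
# Illustrative CAD-to-Web3D conversion sketch: assumes the CAD system can
# export STL and that the `trimesh` package is installed.
import trimesh

def cad_to_web3d(stl_path: str, glb_path: str) -> None:
    mesh = trimesh.load(stl_path)   # load the tessellated CAD geometry
    mesh.export(glb_path)           # output format inferred from the .glb suffix

if __name__ == "__main__":
    cad_to_web3d("part.stl", "part.glb")   # hypothetical file names
```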

    3D Virtual Worlds and the Metaverse: Current Status and Future Possibilities

    Moving from a set of independent virtual worlds to an integrated network of 3D virtual worlds or Metaverse rests on progress in four areas: immersive realism, ubiquity of access and identity, interoperability, and scalability. For each area, the current status and needed developments in order to achieve a functional Metaverse are described. Factors that support the formation of a viable Metaverse, such as institutional and popular interest and ongoing improvements in hardware performance, and factors that constrain the achievement of this goal, including limits in computational methods and unrealized collaboration among virtual world stakeholders and developers, are also considered

    Comparing and Evaluating Real Time Character Engines for Virtual Environments

    As animated characters increasingly become vital parts of virtual environments, the engines that drive these characters increasingly become vital parts of virtual environment software. This paper gives an overview of the state of the art in character engines and proposes a taxonomy of the features that are commonly found in them. This taxonomy can be used as a tool for comparing and evaluating different engines. To demonstrate this, we use it to compare three engines. The first is Cal3D, the most commonly used open-source engine. We also introduce two engines created by the authors, Piavca and HALCA. The paper ends with a brief discussion of some other popular engines
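
To make the idea of a feature taxonomy as a comparison tool concrete, the sketch below encodes a feature checklist and per-engine capability sets as plain data structures and prints a comparison table. The feature names and the capabilities assigned to Cal3D, Piavca, and HALCA are placeholders for illustration only, not the paper's actual taxonomy or findings.

```python
# Placeholder taxonomy and capability sets -- illustrative only.
FEATURES = ["skeletal animation", "morph targets", "animation blending",
            "procedural animation", "scripting interface"]

ENGINES = {
    "Cal3D":  {"skeletal animation", "morph targets", "animation blending"},
    "Piavca": {"skeletal animation", "animation blending", "procedural animation"},
    "HALCA":  {"skeletal animation", "morph targets", "scripting interface"},
}

def comparison_table(engines, features):
    """Yield one row per feature marking which engines support it."""
    for feature in features:
        yield feature, {name: feature in caps for name, caps in engines.items()}

if __name__ == "__main__":
    for feature, support in comparison_table(ENGINES, FEATURES):
        row = "  ".join(f"{name}:{'yes' if ok else 'no '}" for name, ok in support.items())
        print(f"{feature:<22} {row}")
```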

    Walking with virtual humans: understanding human response to virtual humanoids' appearance and behaviour while navigating in immersive VR

    In this thesis, we present a set of studies whose results have allowed us to analyze how to improve the realism, navigation, and behaviour of avatars in an immersive virtual reality environment. In our simulations, participants must perform a series of tasks while we analyze perceptual and behavioural data. The results of the studies have allowed us to deduce what improvements need to be incorporated into the original simulations in order to enhance the perception of realism, the navigation technique, the rendering of the avatars, their behaviour, or their animations. The most reliable technique for simulating avatars' behaviour in a virtual reality environment should be based on studying how humans behave within the environment. For this purpose, it is necessary to build virtual environments where participants can navigate safely and comfortably with a proper metaphor and, if the environment is populated with avatars, simulate their behaviour accurately. All these aspects together make participants behave in a way that is closer to how they would behave in the real world. In addition, the integration of these concepts could provide an ideal platform to develop different types of applications, with and without collaborative virtual reality, such as emergency simulations, teaching, architecture, or design. In the first contribution of this thesis, we carried out an experiment to study human decision making during an evacuation. We were interested in evaluating to what extent the behaviour of a virtual crowd can affect individuals' decisions. From the second contribution, in which we studied the perception of realism with bots and humans performing either just locomotion or varied animations, we conclude that combining human-like avatars with animation variety can increase the perceived overall realism of a crowd simulation, its trajectories, and its animations. The preliminary study presented in the third contribution of this thesis showed that realistic rendering of the environment and the avatars does not appear to increase the perception of realism in the participants, which is consistent with previously published work. The preliminary results of our walk-in-place contribution showed a seamless and natural transition between walk-in-place and normal walking. Our system provided a velocity mapping function that closely resembles natural walking. We observed through a pilot study that the system successfully reduces motion sickness and enhances immersion. Finally, the results of the contribution related to locomotion in collaborative virtual reality showed that animation synchronism and footstep sound of the avatars representing the participants do not seem to have a strong impact on presence and the feeling of avatar control. However, in our experiment, incorporating natural animations and footstep sound resulted in smaller clearance values in VR than those reported in previous work. The main objective of this thesis was to improve different factors of virtual reality experiences so that participants feel more comfortable in the virtual environment. These factors include the behaviour and appearance of the virtual avatars and the navigation through the simulated space. By increasing the realism of the avatars and facilitating navigation, high presence scores are achieved during the simulations.
This provides an ideal framework for developing collaborative virtual reality applications or emergency simulations that require participants to feel as if they were in real life.
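
The walk-in-place contribution above mentions a velocity mapping function that resembles natural walking. The following is a minimal sketch of one such mapping, driven by detected step cadence; the stride-length and speed-cap constants are chosen purely for illustration and are not values taken from the thesis.

```python
# Minimal walk-in-place velocity mapping sketch: step cadence detected from
# head-bob oscillation is mapped to a forward speed. Constants are illustrative.
def wip_velocity(step_frequency_hz: float,
                 stride_length_m: float = 0.7,
                 max_speed_m_s: float = 2.0) -> float:
    """Return forward speed (m/s) from the detected in-place step cadence."""
    speed = step_frequency_hz * stride_length_m   # cadence * assumed stride length
    return min(max(speed, 0.0), max_speed_m_s)    # clamp to a comfortable range

if __name__ == "__main__":
    # ~1.8 steps/s is a typical walking cadence; yields ~1.26 m/s here.
    print(f"{wip_velocity(1.8):.2f} m/s")
```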

    Comparing technologies for conveying emotions through realistic avatars in virtual reality-based metaverse experiences

    With the development of metaverse(s), industry and academia are searching for the best ways to represent users' avatars in shared Virtual Environments (VEs), where real-time communication between users is required. The expressiveness of avatars is crucial for transmitting emotions, which are key for social presence and user experience and are conveyed via verbal and non-verbal facial and body signals. In this paper, two real-time modalities for conveying expressions in Virtual Reality (VR) via realistic, full-body avatars are compared by means of a user study. The first modality uses dedicated hardware (i.e., eye and facial trackers) to map the user's facial expressions and eye movements onto the avatar model. The second modality relies on an algorithm that, starting from an audio clip, approximates the facial motion by generating plausible lip and eye movements. The participants were asked to observe, for both modalities, the avatar of an actor performing six scenes, each involving one of the six basic emotions. The evaluation considered mainly social presence and emotion conveyance. Results showed a clear superiority of facial tracking over lip sync in conveying sadness and disgust. The same was less evident for happiness and fear. No differences were observed for anger and surprise
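
The audio-driven modality above generates plausible lip motion from an audio clip. As a hedged illustration of that general idea, and not the compared system's actual algorithm, the sketch below maps short-window RMS energy of a speech signal to a jaw-open blendshape weight in [0, 1]; the window size and gain are arbitrary.

```python
# Amplitude-driven lip-sync sketch: per-window RMS energy -> jaw-open weight.
import numpy as np

def jaw_open_curve(samples: np.ndarray,
                   sample_rate: int,
                   window_ms: float = 20.0,
                   gain: float = 4.0) -> np.ndarray:
    """Return one jaw-open weight per analysis window (roughly 50 values/s)."""
    window = max(1, int(sample_rate * window_ms / 1000.0))
    n_frames = len(samples) // window
    frames = samples[: n_frames * window].reshape(n_frames, window)
    rms = np.sqrt(np.mean(frames.astype(np.float64) ** 2, axis=1))
    return np.clip(gain * rms / (np.max(rms) + 1e-9), 0.0, 1.0)

if __name__ == "__main__":
    sr = 16_000
    t = np.linspace(0.0, 1.0, sr, endpoint=False)
    # A 220 Hz tone modulated at 3 Hz stands in for a speech envelope.
    speech_like = np.sin(2 * np.pi * 3.0 * t) * np.sin(2 * np.pi * 220.0 * t)
    print(jaw_open_curve(speech_like, sr)[:10])
```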

    To Affinity and Beyond: Interactive Digital Humans as a Human Computer Interface

    The field of human-computer interaction is increasingly exploring the use of more natural, human-like user interfaces to build intelligent agents that aid in everyday life. This is coupled with a move towards people using ever more realistic avatars to represent themselves in their digital lives. As the ability to produce emotionally engaging digital human representations is only just now becoming technically possible, there is little research into how to approach such tasks, owing to both technical complexity and operational implementation cost. This is now changing as we reach a nexus point where new approaches, faster graphics processing, and enabling technologies in machine learning and computer vision become available. I articulate what is required for such digital humans to be considered successfully located on the far side of the phenomenon known as the Uncanny Valley. My results show that a complex mix of perceived and contextual aspects affects the sense-making about digital humans, and they highlight previously undocumented effects of interactivity on affinity. Users are willing to accept digital humans as a new form of user interface, and they react to them emotionally in previously unanticipated ways. My research shows that it is possible to build an effective interactive digital human that crosses the Uncanny Valley. As a primary research question, I directly explore what is required to build a visually realistic digital human, and I explore whether such a realistic face provides sufficient benefit to justify the challenges involved in building it. I conducted a Delphi study to inform the research approaches and then produced a complex digital human character based on these insights. This interactive and realistic digital human avatar represents a major technical undertaking involving multiple teams around the world. Finally, I explored a framework for examining the ethical implications and signposted future research areas

    Expressiveness of real-time motion captured avatars influences perceived animation realism and perceived quality of social interaction in virtual reality

    Using motion capture to enhance the realism of social interaction in virtual reality (VR) is growing in popularity. However, the impact of different levels of avatar expressiveness on the user experience is not well understood. In the present study we manipulated levels of face and body expressiveness of avatars while investigating participant perceptions of animation realism and interaction quality when disclosing positive and negative experiences in VR. Moderate positive associations were observed between perceptions of animation realism and interaction quality. Post-experiment questions revealed that many of our participants (approximately 40 %) indicated the avatar with the highest face and body expressiveness as having the most realistic face and body expressions. The same proportion also indicated the avatar with the highest face and body expressiveness as being the most comforting and enjoyable avatar to interact with. Our results suggest that higher levels of face and body expressiveness are important for enhancing perceptions of realism and interaction quality within a social interaction in VR using motion capture

    Machinima and Video-Based Soft Skills Training

    Multimedia training methods have traditionally relied heavily on video-based technologies, and significant research has shown these to be very effective training tools. However, producing video is time- and resource-intensive. Machinima (pronounced 'muh-sheen-eh-mah') technologies are based on video gaming technology: video game technology is manipulated into unique scenarios for entertainment or for training and practice applications, and machinima is the conversion of these unique scenarios into video vignettes that tell a story. These vignettes can be interconnected with branching points in much the same way that educational videos are interconnected as vignettes between decision points. This study addressed the effectiveness of machinima-based soft-skills education using avatar actors versus the traditional video teaching approach using human actors. This research also investigated the difference in presence reactions when using video vignettes produced with avatar actors as compared to vignettes produced with human actors. Results indicated that the difference in training and/or practice effectiveness is statistically non-significant for presence, interactivity, quality, and the skill of assertiveness. The skill of active listening presented a mixed result, indicating the need for careful attention to detail in situations where body language and facial expressions are critical to communication. This study demonstrates that a significant opportunity exists for the exploitation of avatar actors in video-based instruction

    Interactions in Virtual Worlds: Proceedings Twente Workshop on Language Technology 15
