3,522 research outputs found

    Further dimensions: text, typography and play in the metaverse

    Get PDF
    In this text I explore the creation of textual content and its visualization through typographic design within three-dimensional virtual worlds, known as the metaverse. I am particularly interested in how such environments can place text in a context that contradicts its traditional one, giving it an unexpected, novel purpose that departs markedly from the attribute with which text has always been associated: readability. In such environments readability, or even legibility, is often displaced by the use of text and typography as playful devices: as artifacts arranged in puzzle-like configurations, or as visual structures whose contents are meant to be understood through means other than straightforward reading. These uses bring about heightened engagement, wonder and ‘play’, whether through manipulation or simply through immersion in the spaces created by their very agency. I expand on this subject by discussing my own experiments with this material, and conclude by positing that further virtual dimensions can elicit exciting alternative uses of text and typography, foregrounding the allographic properties of text as an artistic and creative expressive medium that merits further scrutiny and exploration.

    Animation Fidelity in Self-Avatars: Impact on User Performance and Sense of Agency

    Full text link
    The use of self-avatars is gaining popularity thanks to affordable VR headsets. Unfortunately, mainstream VR devices often use a small number of trackers and provide low-accuracy animations. Previous studies have shown that the Sense of Embodiment, and in particular the Sense of Agency, depends on the extent to which the avatar's movements mimic the user's movements. However, few works study this effect for tasks requiring precise interaction with the environment, i.e., tasks that require accurate manipulation, precise foot stepping, or correct body poses. In these cases, users are likely to notice inconsistencies between their self-avatars and their actual pose. In this paper, we study the impact of the animation fidelity of the user avatar on a variety of tasks that focus on arm movement, leg movement and body posture. We compare three different animation techniques: two of them using Inverse Kinematics to reconstruct the pose from sparse input (6 trackers), and a third one using a professional motion capture system with 17 inertial sensors. We evaluate these animation techniques both quantitatively (completion time, unintentional collisions, pose accuracy) and qualitatively (Sense of Embodiment). Our results show that animation quality affects the Sense of Embodiment. Inertial-based MoCap performs significantly better in mimicking body poses. Surprisingly, the IK-based solutions using fewer sensors outperformed MoCap in tasks requiring accurate positioning, which we attribute to MoCap's higher latency and positional drift, which cause errors at the end-effectors that are most noticeable in contact areas such as the feet.
    Comment: Accepted in IEEE VR 202
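The IK-based techniques compared in this abstract reconstruct a full-body pose from only six trackers. As a toy illustration of the kind of computation involved, here is a minimal sketch of analytic two-bone IK (the standard law-of-cosines solution for a limb's mid-joint angle); it illustrates the general technique only and is not the paper's actual solver:

```python
import math

def two_bone_ik(upper_len, lower_len, target_dist):
    """Return the mid-joint (e.g. elbow) angle, in radians, that places the
    end-effector at target_dist from the root joint, via the law of cosines.
    pi means fully extended; smaller angles mean a more bent limb."""
    # Clamp the target distance to the reachable range of the two-bone chain.
    d = max(abs(upper_len - lower_len), min(upper_len + lower_len, target_dist))
    # Law of cosines: d^2 = a^2 + b^2 - 2*a*b*cos(theta)
    cos_theta = (upper_len**2 + lower_len**2 - d**2) / (2 * upper_len * lower_len)
    return math.acos(max(-1.0, min(1.0, cos_theta)))
```

A full solver additionally resolves the swivel (elbow/knee orientation) ambiguity and blends results with tracked end-effector rotations, which is where latency and drift in the input become visible.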

    Animation fidelity in self-avatars: impact on user performance and sense of agency

    Get PDF
    The use of self-avatars is gaining popularity thanks to affordable VR headsets. Unfortunately, mainstream VR devices often use a small number of trackers and provide low-accuracy animations. Previous studies have shown that the Sense of Embodiment, and in particular the Sense of Agency, depends on the extent to which the avatar's movements mimic the user's movements. However, few works study this effect for tasks requiring precise interaction with the environment, i.e., tasks that require accurate manipulation, precise foot stepping, or correct body poses. In these cases, users are likely to notice inconsistencies between their self-avatars and their actual pose. In this paper, we study the impact of the animation fidelity of the user avatar on a variety of tasks that focus on arm movement, leg movement and body posture. We compare three different animation techniques: two of them using Inverse Kinematics to reconstruct the pose from sparse input (6 trackers), and a third one using a professional motion capture system with 17 inertial sensors. We evaluate these animation techniques both quantitatively (completion time, unintentional collisions, pose accuracy) and qualitatively (Sense of Embodiment). Our results show that animation quality affects the Sense of Embodiment. Inertial-based MoCap performs significantly better in mimicking body poses. Surprisingly, the IK-based solutions using fewer sensors outperformed MoCap in tasks requiring accurate positioning, which we attribute to MoCap's higher latency and positional drift, which cause errors at the end-effectors that are most noticeable in contact areas such as the feet.
    This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 860768 (CLIPE project) and from MCIN/AEI/10.13039/501100011033/FEDER, UE (PID2021-122136OB-C21). Jose Luis Ponton was also funded by the Spanish Ministry of Universities (FPU21/01927).
    Peer Reviewed. Postprint (author's final draft)

    Fitted avatars: automatic skeleton adjustment for self-avatars in virtual reality

    Get PDF
    In the era of the metaverse, self-avatars are gaining popularity, as they can enhance presence and provide embodiment when a user is immersed in Virtual Reality. They are also very important in collaborative Virtual Reality to improve communication through gestures. Whether we are using a complex motion capture solution or a few trackers with inverse kinematics (IK), it is essential to have a good match in size between the avatar and the user, as otherwise mismatches in self-avatar posture could be noticeable to the user. To achieve such a correct match in dimensions, a manual process is often required, with the need for a second person to take measurements of body limbs and introduce them into the system. This process can be time-consuming and error-prone. In this paper, we propose an automatic measuring method that simply requires the user to do a small set of exercises while wearing a Head-Mounted Display (HMD), two hand controllers, and three trackers. Our work provides an affordable and quick method to automatically extract user measurements and adjust the virtual humanoid skeleton to the exact dimensions. Our results show that our method can reduce the misalignment produced by the IK system when compared to other solutions that simply apply a uniform scaling to an avatar based on the height of the HMD and make assumptions about the locations of joints with respect to the trackers.
    This work was funded by the Spanish Ministry of Science and Innovation (PID2021-122136OB-C21). Jose Luis Ponton was also funded by the Spanish Ministry of Universities (FPU21/01927).
    Peer Reviewed. Postprint (published version)
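The uniform-scaling baseline that this abstract compares against can be sketched in a few lines. Everything below (function names, the bone-length dictionary) is hypothetical and only illustrates why a single global factor preserves the avatar's default limb proportions rather than matching the user's actual ones:

```python
def uniform_scale_factor(hmd_height, avatar_eye_height):
    """Baseline approach: one global factor from the ratio of the user's
    standing HMD (eye) height to the avatar's default eye height."""
    return hmd_height / avatar_eye_height

def scale_bone_lengths(bone_lengths, factor):
    """Apply the same factor to every bone. Per-limb proportions stay
    those of the default avatar, which is exactly the mismatch that a
    per-limb measurement method can avoid."""
    return {bone: length * factor for bone, length in bone_lengths.items()}
```

A per-limb method would instead estimate each segment length (arm span, leg length, etc.) from tracker trajectories during calibration exercises and adjust the corresponding bones individually.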

    A Multi-Modal, Modified-Feedback and Self-Paced Brain-Computer Interface (BCI) to Control an Embodied Avatar's Gait

    Full text link
    Brain-computer interfaces (BCI) have been used to control the gait of a virtual self-avatar with the aim of being used in gait rehabilitation. A BCI decodes the brain signals representing a desire to do something and transforms them into commands for controlling external devices. The feelings described by participants when they control a self-avatar in an immersive virtual environment (VE) demonstrate that humans can be embodied in the surrogate body of an avatar (ownership illusion). It has recently been shown that inducing the ownership illusion and then manipulating the movements of one’s self-avatar can lead to compensatory motor control strategies. In order to maximize this effect, there is a need for a method that measures and monitors the embodiment levels of participants immersed in virtual reality (VR) to induce and maintain a strong ownership illusion. This is particularly true given that high BCI performance and strong embodiment are interconnected: to reach one, the other must be reached as well. Several limitations of many existing systems hinder their adoption for neurorehabilitation: (1) some use motor imagery (MI) of movements other than gait; (2) most systems allow the user to take single steps or to walk, but not both, which prevents users from progressing from steps to gait; (3) most function in a single BCI mode (cue-paced or self-paced), which prevents users from progressing from machine-dependent to machine-independent walking. These limitations can be overcome by combining different control modes and options in a single system. However, this would negatively impact BCI performance, diminishing its usefulness as a potential rehabilitation tool. In that case, BCI performance would need to be enhanced.
For this purpose, many techniques have been used in the literature, such as providing modified feedback (whereby the presented feedback is not consistent with the user’s MI) and sequential training (recalibrating the classifier as more data become available). This thesis was developed over three studies. The objective of study 1 was to investigate the possibility of measuring the level of embodiment of an immersive self-avatar, while performing, observing and imagining gait, using electroencephalography (EEG), by presenting visual feedback that conflicts with the desired movement of embodied participants. The objective of study 2 was to develop and validate a BCI to control single steps and forward walking of an immersive virtual reality (VR) self-avatar, using mental imagery of these actions, in cue-paced and self-paced modes. Different performance enhancement strategies were implemented to increase BCI performance. The data of these two studies were then used in study 3 to construct a generic classifier that could eliminate offline calibration for future users and shorten training time. Twenty healthy participants took part in studies 1 and 2. In study 1, participants wore an EEG cap and motion capture markers, with an avatar displayed in a head-mounted display (HMD) from a first-person perspective (1PP). They were cued to either perform, watch or imagine a single step forward or the initiation of walking on a treadmill. For some of the trials, the avatar took a step with the contralateral limb or stopped walking before the participant stopped (modified feedback). In study 2, participants completed a 4-day sequential training to control the gait of an avatar in both BCI modes. In cue-paced mode, they were cued to imagine a single step forward, using their right or left foot, or to walk forward.
In the self-paced mode, they were instructed to reach a target using the MI of multiple steps (switch control mode) or by maintaining the MI of forward walking (continuous control mode). The avatar moved in response to two calibrated regularized linear discriminant analysis (RLDA) classifiers that used the μ power spectral density (PSD) over the foot area of the motor cortex as features. The classifiers were retrained after every session. During the training, and for some of the trials, positive modified feedback was presented to half of the participants, whereby the avatar moved correctly regardless of the participant’s real performance. In both studies, the participants’ subjective experience was analyzed using a questionnaire. Results of study 1 show that subjective levels of embodiment correlate strongly with the differences in event-related synchronization (ERS) power within the μ frequency band, over the motor and pre-motor cortices, between modified and regular feedback trials. Results of study 2 show that all participants were able to operate the cue-paced BCI and the self-paced BCI in both modes. For the cue-paced BCI, the average offline performance (classification rate) was 67±6.1% on day 1 and 86±6.1% on day 3, showing that the recalibration of the classifiers enhanced the offline performance of the BCI (p < 0.01). The average online performance was 85.9±8.4% for the modified feedback group (77-97%) versus 75% for the non-modified feedback group. For the self-paced BCI, the average performance was 83% in switch control mode and 92% in continuous control mode, with a maximum of 12 seconds of control. Modified feedback enhanced BCI performance (p = 0.001). Finally, results of study 3 show that the constructed generic models performed as well as models obtained from participant-specific offline data.
The results show that it is possible to design a participant-independent zero-training BCI.
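The classifiers described above use μ-band (8-12 Hz) power spectral density over the foot area of the motor cortex as features. A deliberately simplified, hypothetical sketch of the feature-extraction step and a one-dimensional Fisher-style decision threshold follows; the actual system uses regularized LDA over multiple channels and sliding windows:

```python
import math

def mu_band_power(signal, fs, lo=8.0, hi=12.0):
    """Power of one EEG channel in the mu band (8-12 Hz), via a direct DFT
    over the whole window. A real system would use Welch's method."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

def lda_threshold(powers_rest, powers_mi):
    """With a single PSD feature and equal class variances, the LDA
    decision boundary reduces to the midpoint between class means."""
    m0 = sum(powers_rest) / len(powers_rest)
    m1 = sum(powers_mi) / len(powers_mi)
    return (m0 + m1) / 2.0
```

During motor imagery, μ-band power over the corresponding motor cortex area typically drops (event-related desynchronization), so trials below the threshold would be classified as imagery.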

    The Rocketbox Library and the Utility of Freely Available Rigged Avatars

    Get PDF
    As part of the open sourcing of the Microsoft Rocketbox avatar library for research and academic purposes, here we discuss the importance of rigged avatars for the Virtual and Augmented Reality (VR, AR) research community. Avatars, virtual representations of humans, are widely used in VR applications. Furthermore, many research areas ranging from crowd simulation to neuroscience, psychology, or sociology have used avatars to investigate new theories or to demonstrate how they influence human performance and interactions. We divide this paper into two main parts: the first gives an overview of the different methods available to create and animate avatars. We cover the current main alternatives for face and body animation and introduce upcoming capture methods. The second part presents the scientific evidence of the utility of rigged avatars for embodiment, but also for applications such as crowd simulation and entertainment. All in all, this paper attempts to convey why rigged avatars will be key to the future of VR and its wide adoption.

    All Hands on Deck: Choosing Virtual End Effector Representations to Improve Near Field Object Manipulation Interactions in Extended Reality

    Get PDF
    Extended reality, or XR, is the rapidly adopted umbrella term that collectively describes Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) technologies. Together, these technologies extend the reality we experience, either by creating a fully immersive experience as in VR, or by blending the virtual and real worlds as in AR and MR. The sustained success of XR in the workplace largely hinges on its ability to facilitate efficient user interactions. As when interacting with objects in the real world, users in XR typically interact with virtual elements such as objects, menus, windows, and information that together compose the overall experience. Most of these interactions involve near-field object manipulation, for which users are generally provided with visual representations of themselves, also called self-avatars. Representations that involve only the distal entity are called end-effector representations, and they shape how users perceive XR experiences. Through a series of investigations, this dissertation evaluates the effects of virtual end-effector representations on near-field object retrieval interactions in XR settings. Through studies conducted in virtual, augmented, and mixed reality, implications for the virtual representation of end-effectors are discussed, and inferences are drawn for the future of near-field interaction in XR. This body of research aids technologists and designers by detailing how to tailor the right end-effector representation to improve near-field interactions, collectively establishing knowledge that shapes the future of interactions in XR.

    Losing control in social situations: How the presence of others affects neural processes related to sense of agency

    Get PDF
    Social contexts substantially influence individual behavior, but little is known about how they affect cognitive processes related to voluntary action. Previously, it has been shown that social context reduces participants' sense of agency over the outcomes of their actions and outcome monitoring. In this fMRI study on human volunteers, we investigated the neural mechanisms by which social context alters sense of agency. Participants made costly actions to stop inflating a balloon before it burst. On "social" trials, another player could act in their stead, but we analyzed only trials in which the other player remained passive. We hypothesized that mentalizing processes during social trials would affect decision-making fluency and lead to a decreased sense of agency. In line with this hypothesis, we found increased activity in the bilateral temporo-parietal junction (TPJ), precuneus, and middle frontal gyrus during social trials compared with nonsocial trials. Activity in the precuneus was, in turn, negatively related to sense of agency at a single-trial level. We further found a double dissociation between TPJ and angular gyrus (AG): activity in the left AG was not sensitive to social context but was negatively related to sense of agency. In contrast, activity in the TPJ was modulated by social context but was not sensitive to sense of agency

    Seeking the Entanglement of Immersion and Emergence: Reflections from an Analysis of the State of IS Research on Virtual Worlds

    Get PDF
    This paper critically reviews the state of virtual world research within the Information Systems field, revealing areas of interest evident in research studies between 2007 and 2011, the methods employed to conduct such research, the theories and frameworks used to ground VW research, as well as recurring memes/concepts. We argue that virtual worlds are best interpreted as both an immersive and an emergent co-creative process, ‘performed’ by users’ actions and interactions both with other users and with artifacts such as virtual goods. Nevertheless, our analysis reveals a near neglect of the substantive nature of digital materiality and of the emergent nature of virtual worlds. We conclude that this ‘human-centric’ stance has drawn focus away from the unique nature of the virtual world artifact itself, and posit a research agenda that focuses on virtual world objects, as well as the immersive and emergent activities of ‘world-builders’, as necessary to advance virtual world research.

    Walking with virtual humans: understanding human response to virtual humanoids' appearance and behaviour while navigating in immersive VR

    Get PDF
    In this thesis, we present a set of studies whose results have allowed us to analyze how to improve the realism, navigation, and behaviour of avatars in an immersive virtual reality environment. In our simulations, participants must perform a series of tasks, and we have analyzed perceptual and behavioural data. The results of the studies have allowed us to deduce which improvements need to be incorporated into the original simulations in order to enhance the perception of realism, the navigation technique, the rendering of the avatars, their behaviour or their animations. The most reliable technique for simulating avatars' behaviour in a virtual reality environment should be based on the study of how humans behave within the environment. For this purpose, it is necessary to build virtual environments where participants can navigate safely and comfortably with a proper metaphor and, if the environment is populated with avatars, to simulate their behaviour accurately. Together, these aspects make participants behave in a way that is closer to how they would behave in the real world. Moreover, the integration of these concepts could provide an ideal platform for developing different types of applications, with and without collaborative virtual reality, such as emergency simulations, teaching, architecture, or design. In the first contribution of this thesis, we carried out an experiment to study human decision-making during an evacuation. We were interested in evaluating to what extent the behaviour of a virtual crowd can affect individuals' decisions. From the second contribution, in which we studied the perception of realism with bots and humans performing either locomotion alone or varied animations, we conclude that combining human-like avatars with animation variety can increase the overall realism of a crowd simulation, its trajectories and its animations.
The preliminary study presented in the third contribution of this thesis showed that realistic rendering of the environment and the avatars does not appear to increase the perception of realism in the participants, which is consistent with previously presented work. The preliminary results of our walk-in-place contribution showed a seamless and natural transition between walk-in-place and normal walking. Our system provided a velocity mapping function that closely resembles natural walking, and a pilot study showed that it successfully reduces motion sickness and enhances immersion. Finally, the results of the contribution on locomotion in collaborative virtual reality showed that the animation synchronism and footstep sounds of the avatars representing the participants do not seem to have a strong impact on presence or the feeling of avatar control. However, in our experiment, incorporating natural animations and footstep sounds resulted in smaller clearance values in VR than previous work in the literature. The main objective of this thesis was to improve different factors related to virtual reality experiences so that participants feel more comfortable in the virtual environment. These factors include the behaviour and appearance of the virtual avatars and navigation through the simulated space. By increasing the realism of the avatars and facilitating navigation, high presence scores are achieved during the simulations. This provides an ideal framework for developing collaborative virtual reality applications or emergency simulations that require participants to feel as if they were in real life.
    Postprint (published version)
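The walk-in-place contribution mentioned above maps in-place stepping to forward velocity. The mapping below is purely hypothetical (the thesis does not state its function here); it only illustrates the general idea of converting detected step cadence into a capped forward speed:

```python
def forward_speed(step_freq_hz, stride_len_m=0.7, max_speed_ms=2.0):
    """Hypothetical walk-in-place velocity mapping: forward speed grows
    with detected step cadence (e.g. from HMD vertical oscillation or
    foot trackers) and is capped at a comfortable maximum, in m/s."""
    return min(max_speed_ms, step_freq_hz * stride_len_m)
```

A mapping of this shape is one way to approximate natural walking speed while avoiding sudden accelerations, which are a known trigger of motion sickness.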