644 research outputs found
Animation Fidelity in Self-Avatars: Impact on User Performance and Sense of Agency
The use of self-avatars is gaining popularity thanks to affordable VR
headsets. Unfortunately, mainstream VR devices often use a small number of
trackers and provide low-accuracy animations. Previous studies have shown that
the Sense of Embodiment, and in particular the Sense of Agency, depends on the
extent to which the avatar's movements mimic the user's movements. However, few
works have studied this effect for tasks requiring precise interaction with the
environment, e.g., tasks that require accurate manipulation, precise foot
placement, or correct body poses. In these cases, users are likely to notice
inconsistencies between their self-avatars and their actual pose. In this
paper, we study the impact of the animation fidelity of the user avatar on a
variety of tasks that focus on arm movement, leg movement and body posture. We
compare three different animation techniques: two of them using Inverse
Kinematics to reconstruct the pose from sparse input (6 trackers), and a third
one using a professional motion capture system with 17 inertial sensors. We
evaluate these animation techniques both quantitatively (completion time,
unintentional collisions, pose accuracy) and qualitatively (Sense of
Embodiment). Our results show that the animation quality affects the Sense of
Embodiment. Inertial-based MoCap performs significantly better in mimicking
body poses. Surprisingly, IK-based solutions using fewer sensors outperformed
MoCap in tasks requiring accurate positioning. We attribute this to the inertial
system's higher latency and positional drift, which cause errors at the
end-effectors that are more noticeable in contact areas such as the feet.
Comment: Accepted in IEEE VR 202
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 860768 (CLIPE project) and from MCIN/AEI/10.13039/501100011033/FEDER, UE (PID2021-122136OB-C21). Jose Luis Ponton was also funded by the Spanish Ministry of Universities (FPU21/01927).
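The IK techniques compared above reconstruct a full-body pose from only six trackers. The paper does not spell out its solvers, but the core step of many such systems is an analytic two-bone solve (e.g. shoulder-elbow-wrist or hip-knee-ankle). A minimal generic sketch, not the authors' implementation:

```python
import math

def two_bone_ik(upper_len, lower_len, target_dist):
    """Analytic two-bone IK via the law of cosines: given the two segment
    lengths and the distance from the root joint to the end-effector
    target, return the mid-joint (elbow/knee) interior angle in radians.
    Generic illustration only, not the solver evaluated in the paper."""
    # Clamp the reach so the chain can neither over-extend nor over-fold.
    d = max(abs(upper_len - lower_len), min(upper_len + lower_len, target_dist))
    # Law of cosines: d^2 = a^2 + b^2 - 2ab*cos(theta)
    cos_theta = (upper_len**2 + lower_len**2 - d**2) / (2 * upper_len * lower_len)
    return math.acos(max(-1.0, min(1.0, cos_theta)))

# Target at full reach -> limb is straight (interior angle ~180 degrees).
print(round(math.degrees(two_bone_ik(0.3, 0.3, 0.6)), 1))  # 180.0
```

End-effector drift of the kind the abstract describes would enter here through `target_dist`: a small positional error at a foot tracker directly changes the solved knee angle, which is why contact areas make such errors so visible.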
Flexible Virtual Reality System for Neurorehabilitation and Quality of Life Improvement
As life expectancy increases, the incidence of many neurological
disorders also grows steadily. To improve the physical functions
affected by a neurological disorder, rehabilitation procedures are mandatory,
and they must be performed regularly. Unfortunately, neurorehabilitation
procedures suffer from high costs, limited accessibility, and a shortage of
therapists. This paper presents Immersive Neurorehabilitation Exercises Using
Virtual Reality (INREX-VR), our innovative immersive neurorehabilitation system
using virtual reality. The system is based on a thorough research methodology
and is able to capture real-time user movements and evaluate joint mobility for
both upper and lower limbs, record training sessions and save electromyography
data. The use of the first-person perspective increases immersion, and the
joint range of motion is calculated with the help of both the HTC Vive system
and inverse kinematics principles applied on skeleton rigs. Tutorial exercises
are demonstrated by a virtual therapist, recorded from real-life
physicians, and sessions can be monitored and configured through telemedicine.
Complex movements are practiced in gamified settings, encouraging
self-improvement and competition. Finally, we proposed a training plan and
preliminary tests which show promising results in terms of accuracy and user
feedback. As future developments, we plan to improve the system's accuracy and
investigate a wireless alternative based on neural networks.
Comment: 47 pages, 20 figures, 17 tables (including annexes), part of the MDPI Sensors Special Issue "Smart Sensors and Measurements Methods for Quality of Life and Ambient Assisted Living"
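INREX-VR evaluates joint mobility from tracked limb positions. The abstract does not give the computation, but a common building block for range-of-motion estimation is the angle at a joint formed by three tracked 3D positions. A minimal sketch under that assumption (the actual system combines HTC Vive tracking with IK on skeleton rigs, which this does not reproduce):

```python
import math

def joint_angle(parent, joint, child):
    """Interior angle (degrees) at `joint`, formed by the 3D positions of
    the adjacent markers, e.g. shoulder-elbow-wrist for elbow flexion.
    Illustrative only; the paper's pipeline is IK-based."""
    v1 = [p - j for p, j in zip(parent, joint)]   # joint -> parent segment
    v2 = [c - j for c, j in zip(child, joint)]    # joint -> child segment
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    cos_a = max(-1.0, min(1.0, dot / (n1 * n2)))  # guard rounding error
    return math.degrees(math.acos(cos_a))

# Collinear shoulder, elbow, and wrist markers -> a fully extended limb.
print(round(joint_angle((0, 1, 0), (0, 0.7, 0), (0, 0.4, 0)), 1))  # 180.0
```

Tracking the minimum and maximum of this angle over a recorded exercise session yields the joint's observed range of motion.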
Optimization of human tracking systems in virtual reality based on a neural network approach
The problem of determining the optimal number and location of tracking points on the human body to ensure the
necessary accuracy of reconstruction of kinematic parameters of human movements in virtual space is considered.
Optimization of the human tracking system in virtual reality has been performed to reduce the amount of transmitted
information, computational load and cost of motion capture systems by reducing the number of physical sensors. The
task of optimizing the number and location of tracking points on the human body, necessary for reconstructing a
virtual body model from a limited set of input points via numerical approximation of the regression function, is formulated.
An algorithm has been developed for collecting a large amount of data from a human body model in a virtual scene and
from a motion capture suit in the real world. The smallest number of human body tracking points and their location were
obtained using the proposed algorithm. Various neural network topologies have been trained and tested to approximate
the regression relationship between a vector of tracking points limited in size (from 3 to 13) and a vector of 18 virtual
points used for the complete reconstruction of the human body model. The necessary accuracy of reconstruction of
kinematic parameters of human movements is achieved with 5 and 7 input points. The proposed approach made it possible
to use 5 or 7 physical sensors to build a model of the human body and reconstruct the kinematic parameters of its movements
in virtual reality. The approach can be applied to solving inverse kinematics problems in order to reduce the number
of physical sensors placed on the surface of the object under study, to simplify the processing and transmission of
information. By combining data from both the motion capture suit and the virtual avatar, the process of collecting
information has been significantly accelerated, the volume of the training sample has been expanded, and various patterns
of user body movements have been modeled.
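The regression described above maps a small vector of tracked points (3 to 13) to the 18 virtual points of the full body model. The paper tests several network topologies; the one-hidden-layer MLP below is an illustrative assumption showing only the input/output dimensions implied by the abstract (5 tracked points and 18 reconstructed points, each with x, y, z coordinates), not the topology the authors selected:

```python
import numpy as np

# Assumed sizes: 5 tracked points in, 18 reconstructed points out (x, y, z each).
N_IN, N_OUT, HIDDEN = 5 * 3, 18 * 3, 128

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (N_IN, HIDDEN))   # input -> hidden weights
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, N_OUT))  # hidden -> output weights
b2 = np.zeros(N_OUT)

def reconstruct(tracked):
    """Map a batch of sparse tracker coordinates (batch, 15) to full
    body-point coordinates (batch, 54) with a one-hidden-layer MLP.
    Weights here are random; in practice they would be trained on the
    paired suit/avatar data the abstract describes."""
    h = np.tanh(tracked @ W1 + b1)  # nonlinear hidden layer
    return h @ W2 + b2              # linear readout of 18 x 3 coordinates

batch = rng.normal(size=(4, N_IN))  # four synthetic input poses
full_pose = reconstruct(batch)
print(full_pose.shape)              # (4, 54)
```

Shrinking `N_IN` from 13 points down to 5 or 7, as in the paper's experiments, trades sensor count against reconstruction accuracy while leaving the output dimension fixed.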
Multimodal agents for cooperative interaction
2020 Fall. Includes bibliographical references. Embodied virtual agents offer the potential to interact with a computer in a more natural manner, similar to how we interact with other people. To reach this potential requires multimodal interaction, including both speech and gesture. This project builds on earlier work at Colorado State University and Brandeis University on just such a multimodal system, referred to as Diana. I designed and developed a new software architecture to directly address some of the difficulties of the earlier system, particularly with regard to asynchronous communication, e.g., interrupting the agent after it has begun to act. Various other enhancements were made to the agent systems, including the model itself, as well as speech recognition, speech synthesis, motor control, and gaze control. Further refactoring and new code were developed to achieve software engineering goals that are not outwardly visible, but no less important: decoupling, testability, improved networking, and independence from a particular agent model. This work, combined with the effort of others in the lab, has produced a "version 2" Diana system that is well positioned to serve the lab's research needs in the future. In addition, in order to pursue new research opportunities related to developmental and intervention science, a "Faelyn Fox" agent was developed. This is a different model, with a simplified cognitive architecture, and a system for defining an experimental protocol (for example, a toy-sorting task) based on Unity's visual state machine editor. This version too lays a solid foundation for future research.