
    Dance-the-music : an educational platform for the modeling, recognition and audiovisual monitoring of dance steps using spatiotemporal motion templates

    In this article, a computational platform is presented, entitled “Dance-the-Music”, that can be used in a dance educational context to explore and learn the basics of dance steps. By introducing a method based on spatiotemporal motion templates, the platform makes it possible to train basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teachers’ models can be visualized from a first-person perspective to instruct students how to perform the specific dance steps in the correct manner. Moreover, recognition algorithms based on a template-matching method can determine the quality of a student’s performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that Dance-the-Music is effective in helping dance students master the basics of dance figures.
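    The spatiotemporal template matching described above can be sketched as follows. This is a minimal pure-Python illustration, not the authors' implementation: the linear resampling, the 2-D single-joint trajectory, and the similarity score are all assumptions made for the example.

```python
import math

def resample(seq, n):
    """Linearly resample a list of (x, y) frames to n frames,
    normalizing away differences in tempo and frame rate."""
    out = []
    for i in range(n):
        t = i * (len(seq) - 1) / (n - 1)
        j = min(int(t), len(seq) - 2)
        f = t - j
        a, b = seq[j], seq[j + 1]
        out.append(tuple(av + f * (bv - av) for av, bv in zip(a, b)))
    return out

def template_score(template, performance):
    """Return a 0..1 similarity between a teacher template and a student
    performance; 1.0 means identical trajectories after time normalization."""
    p = resample(performance, len(template))
    err = math.sqrt(sum((ta - pa) ** 2 for tf, pf in zip(template, p)
                        for ta, pa in zip(tf, pf)) / len(template))
    return 1.0 / (1.0 + err)

# Toy 2-D foot trajectory for one dance step.
teacher = [(i / 49, math.sin(math.pi * i / 49)) for i in range(50)]
good = teacher[::2]        # same step captured at half the frame rate
bad = [(0.0, 0.0)] * 50    # student stands still
assert template_score(teacher, good) > template_score(teacher, bad)
```

    A real system would compare full-body joint trajectories from the motion capture stream and evaluate the score continuously over a sliding window to give feedback during the performance.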

    Straddling the intersection

    Music technology straddles the intersection between art and science and presents those who choose to work within its sphere with many practical challenges as well as creative possibilities. The paper focuses on four main areas: secondary education, higher education, practice and research, and finally collaboration. The paper emphasises the importance of collaboration in tackling the challenges of interdisciplinarity and in influencing future technological developments.

    Distributed Learning System Design: A New Approach and an Agenda for Future Research

    This article presents a theoretical framework designed to guide distributed learning design, with the goal of enhancing the effectiveness of distributed learning systems. The authors begin with a review of the extant research on distributed learning design, and themes embedded in this literature are extracted and discussed to identify critical gaps that should be addressed by future work in this area. A conceptual framework that integrates instructional objectives, targeted competencies, instructional design considerations, and technological features is then developed to address the most pressing gaps in current research and practice. The rationale and logic underlying this framework are explicated. The framework is designed to help guide trainers and instructional designers through critical stages of the distributed learning system design process. In addition, it is intended to help researchers identify critical issues that should serve as the focus of future research efforts. Recommendations and future research directions are presented and discussed.

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices to monitor users’ physiological conditions. User satisfaction is the key to any product’s acceptance; computer applications and video games provide a unique opportunity to provide a tailored environment for each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature that suggests physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
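    A fuzzy-logic emotion estimator in this spirit can be sketched as follows. The rule set, membership functions, and game-event inputs below are invented for illustration and are far simpler than FLAME; a real model would cover many more events and emotions.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def estimate_frustration(damage_rate, kill_rate):
    """Estimate player frustration from normalized in-game event rates
    in [0, 1], using a tiny hand-written fuzzy rule base (an assumption,
    not FLAME's actual rules)."""
    high_damage = tri(damage_rate, 0.4, 1.0, 1.6)   # shoulder peaking at 1.0
    low_kills = tri(kill_rate, -0.6, 0.0, 0.6)
    high_kills = tri(kill_rate, 0.4, 1.0, 1.6)
    # Rule strengths (min = fuzzy AND), defuzzified by weighted average.
    r_frustrated = min(high_damage, low_kills)      # taking damage, no kills -> 1.0
    r_satisfied = min(high_damage, high_kills)      # trading blows          -> 0.4
    r_calm = 1.0 - high_damage                      # little damage          -> 0.1
    num = r_frustrated * 1.0 + r_satisfied * 0.4 + r_calm * 0.1
    den = r_frustrated + r_satisfied + r_calm
    return num / den if den else 0.0

# A player taking heavy damage with no kills reads as more frustrated
# than a player dominating the match.
assert estimate_frustration(1.0, 0.0) > estimate_frustration(0.0, 1.0)
```

    Running such rules once per game tick yields a continuous emotion trace from software events alone, which is the paper's claimed alternative to physiological sensors.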

    An Introduction to 3D User Interface Design

    3D user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of three-dimensional (3D) interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3D tasks and the use of traditional two-dimensional interaction styles in 3D environments. We divide most user interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques, but also practical guidelines for 3D interaction design and widely held myths. Finally, we briefly discuss two approaches to 3D interaction design, and some example applications with complex 3D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.
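    Ray-casting is one of the canonical selection techniques in this space: a ray is shot from the user's hand or pointer and the nearest intersected object is selected. A minimal sketch, with spheres standing in for selectable objects (the scene representation is a simplification for illustration):

```python
import math
from dataclasses import dataclass

@dataclass
class Sphere:
    name: str
    center: tuple   # (x, y, z) world position
    radius: float

def pick(origin, direction, objects):
    """Ray-casting selection: return the nearest object hit by a ray
    from the user's pointer. `direction` must be a unit vector."""
    best, best_t = None, math.inf
    for obj in objects:
        # Solve |origin + t*direction - center|^2 = radius^2 for t.
        oc = tuple(o - c for o, c in zip(origin, obj.center))
        b = 2.0 * sum(d * v for d, v in zip(direction, oc))
        c = sum(v * v for v in oc) - obj.radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0:
            continue                      # ray misses this object
        t = (-b - math.sqrt(disc)) / 2.0  # nearer intersection distance
        if 0 < t < best_t:
            best, best_t = obj, t
    return best

scene = [Sphere("near", (0, 0, 5), 1.0), Sphere("far", (0, 0, 20), 1.0)]
hit = pick((0, 0, 0), (0, 0, 1), scene)
assert hit.name == "near"   # occluding object wins, as users expect
```

    Keeping only the nearest positive hit is what makes the technique match user expectations: an object hidden behind another cannot be selected accidentally.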

    Shopping Using Gesture-Driven Interaction


    Text-based Editing of Talking-head Video

    Editing talking-head video to change the speech content or to remove filler words is challenging. We propose a novel method to edit talking-head video based on its transcript to produce a realistic output video in which the dialogue of the speaker has been modified, while maintaining a seamless audio-visual flow (i.e. no jump cuts). Our method automatically annotates an input talking-head video with phonemes, visemes, 3D face pose and geometry, reflectance, expression and scene illumination per frame. To edit a video, the user only has to edit the transcript, and an optimization strategy then chooses segments of the input corpus as base material. The annotated parameters corresponding to the selected segments are seamlessly stitched together and used to produce an intermediate video representation in which the lower half of the face is rendered with a parametric face model. Finally, a recurrent video generation network transforms this representation to a photorealistic video that matches the edited transcript. We demonstrate a large variety of edits, such as the addition, removal, and alteration of words, as well as convincing language translation and full sentence synthesis.
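    The segment-selection idea can be illustrated with a toy phoneme-to-viseme match: an inserted word can reuse mouth shapes from any corpus span whose viseme sequence matches. The viseme inventory and the exact-match search below are simplified assumptions, not the paper's actual optimization.

```python
# Hypothetical phoneme-to-viseme table; real systems use a much larger
# inventory, and several phonemes share one mouth shape (viseme).
PHONEME_TO_VISEME = {
    "P": "BMP", "B": "BMP", "M": "BMP",
    "F": "FV", "V": "FV",
    "AA": "open", "AE": "open",
    "IY": "spread", "S": "spread",
    "UW": "round", "OW": "round",
}

def visemes(phonemes):
    return [PHONEME_TO_VISEME[p] for p in phonemes]

def find_base_segment(corpus, target_phonemes):
    """Return (start, end) of the first corpus span whose viseme sequence
    matches the edited word's, i.e. mouth shapes reusable for synthesis."""
    target = visemes(target_phonemes)
    seq = visemes(corpus)
    for i in range(len(seq) - len(target) + 1):
        if seq[i:i + len(target)] == target:
            return (i, i + len(target))
    return None

corpus = ["M", "AA", "P", "IY", "B", "UW"]   # phonemes of the recorded speech
edit = ["B", "AA", "M"]                      # phonemes of an inserted word
print(find_base_segment(corpus, edit))       # → (0, 3): "M AA P" shares its visemes
```

    The paper's optimization additionally scores candidate spans for smoothness at the stitch boundaries; the exact-match search here only conveys why visually equivalent phonemes make reuse possible.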

    Virtual reality therapy for Alzheimer’s disease with speech instruction and real-time neurofeedback system

    Alzheimer’s disease (AD) is a degenerative brain disease that causes progressive memory loss, cognitive decline, and a gradual deterioration of a person’s ability to cope with the complexity and demands of the daily tasks necessary to live autonomously in today’s society. Current pharmacological treatments can slow the degradation process attributed to the disease, but they may also cause undesirable side effects. One non-pharmacological treatment that can effectively relieve symptoms is animal-assisted therapy (AAT). However, owing to limitations such as animal cost and hygiene issues, virtual animals are used instead in this field. Crudely animated virtual animals, rough picture quality, and a one-way interaction mode in which animals passively wait for the user’s instructions can hardly stimulate emotional exchange between the user and the virtual animals, which greatly weakens the therapeutic effect. This study explores the effectiveness of using virtual animals in place of their living counterparts and their impact on reducing negative emotions in the patient. This approach has been implemented in the Zoo Therapy project, which presents an immersive 3D virtual reality animal environment in which the impact on the patient’s emotions is measured in real time using electroencephalography (EEG). The static objects and virtual animals in Zoo Therapy are all presented using realistic 3D models. Specially developed animal movement, sound, and pathfinding systems support the simulated interactive behavior of the virtual animals.
    In addition, to make the user’s interaction experience more realistic, the novelty of this approach also lies in its communication mechanism, which implements bidirectional human-computer interaction supported by three interaction methods: menu panels, speech instructions, and neurofeedback. The most straightforward way to interact with the VR environment is the menu panel, i.e., interaction by clicking buttons on panels with the VR controller. However, it was difficult for some AD users to use the VR controller. To accommodate those who are not well suited to VR controllers, a speech instruction system can be used as an interface; it was received positively by the 5 participants who tried it. Even if the user chooses not to actively interact with the virtual animal through the two methods above, the neurofeedback system will guide the animal to actively interact with the user according to the user’s emotions. Mainstream neurofeedback systems use hand-crafted rules to give instructions; the limitation of this method is that it is inflexible and cannot take into account the relationships between a participant’s various emotions. To address these problems, this thesis presents a reinforcement learning (RL)-based method that gives instructions tailored to different people based on multiple emotions. In a simulation experiment on synthetic AD emotional data, the RL-based method is more sensitive to emotional changes than the rule-based method and can automatically learn latent rules to maximize the user’s positive emotions. Due to the Covid-19 pandemic, we were unable to conduct large-scale experiments. However, a follow-up project combined VR Zoo Therapy with gesture recognition and demonstrated its effectiveness by evaluating participants’ EEG emotion values.
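    The RL-based instruction method can be sketched as tabular Q-learning over discretized emotion states, where the "reward" is an improvement in the measured emotion. The action set, three-level state discretization, and synthetic patient response below are illustrative assumptions, not the thesis's actual setup.

```python
import random

random.seed(0)

ACTIONS = ["approach", "make_sound", "wait"]   # hypothetical animal behaviors
STATES = ["negative", "neutral", "positive"]   # discretized EEG emotion

def synthetic_response(state, action):
    """Toy simulator standing in for the patient's EEG reaction:
    'approach' reliably improves mood, 'make_sound' sometimes does,
    'wait' does nothing. An assumption for the sketch, not clinical data."""
    idx = STATES.index(state)
    if action == "approach":
        idx = min(idx + 1, 2)
    elif action == "make_sound" and random.random() < 0.5:
        idx = min(idx + 1, 2)
    return STATES[idx]

def train(episodes=3000, alpha=0.1, gamma=0.9, eps=0.2):
    """Epsilon-greedy tabular Q-learning on the synthetic simulator."""
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = random.choice(STATES)
        for _ in range(5):
            a = random.choice(ACTIONS) if random.random() < eps else \
                max(ACTIONS, key=lambda x: q[(s, x)])
            s2 = synthetic_response(s, a)
            r = STATES.index(s2) - STATES.index(s)   # reward = mood improvement
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, x)] for x in ACTIONS)
                                  - q[(s, a)])
            s = s2
    return q

q = train()
# For a patient in a negative state, the learned policy prefers the action
# that most reliably improves mood in this toy simulator.
best = max(ACTIONS, key=lambda a: q[("negative", a)])
assert best == "approach"
```

    Unlike a fixed rule table, the learned Q-values adapt to whatever response pattern the simulator (or, in deployment, the individual patient) exhibits, which is the flexibility the thesis argues for.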