13,145 research outputs found

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without devices that monitor users' physiological conditions. User satisfaction is key to any product's acceptance, and computer applications and video games offer a unique opportunity to tailor the environment to each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed; we show that a software-only method can estimate user emotion.
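A FLAME-style appraisal combines fuzzy assessments of game events into emotion intensities. A minimal sketch of that idea, in which the membership function, rule, and all names are illustrative assumptions rather than the authors' implementation:

```python
# Minimal sketch of a FLAME-style fuzzy emotion rule (illustrative
# assumption, not the paper's actual model or rule base).

def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def estimate_joy(goal_desirability, event_likelihood):
    """Fuzzy rule: joy is high when a desirable goal becomes likely."""
    desirable = triangular(goal_desirability, 0.3, 1.0, 1.7)
    likely = triangular(event_likelihood, 0.3, 1.0, 1.7)
    # min() acts as the fuzzy AND; the result is a joy intensity in [0, 1].
    return min(desirable, likely)

print(estimate_joy(0.9, 0.8))  # joy intensity in [0, 1]
```

A full model would maintain many such rules over in-game events (taking damage, scoring) and aggregate them into the player's estimated emotional state.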

    Towards Simulating Humans in Augmented Multi-party Interaction

    Human-computer interaction requires modeling of the user. A user profile typically contains preferences, interests, characteristics, and interaction behavior. However, in multimodal interaction with a smart environment the user also displays behavior that, not necessarily consciously, provides the environment with useful verbal and nonverbal input and feedback. Especially in ambient intelligence environments we encounter situations where the environment supports interaction between itself, smart objects (e.g., mobile robots, smart furniture), and human participants. It is therefore useful for the profile to contain a physical representation of the user obtained by multimodal capturing techniques. We discuss the modeling and simulation of interacting participants in the European AMI research project.
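The profile described above pairs the usual preference data with a captured physical representation. A sketch of such a structure, with field names that are assumptions for illustration, not the AMI project's actual schema:

```python
# Illustrative sketch of a multimodal user profile; all field names
# are assumptions, not the AMI project's schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class UserProfile:
    preferences: dict = field(default_factory=dict)
    interests: list = field(default_factory=list)
    characteristics: dict = field(default_factory=dict)
    interaction_history: list = field(default_factory=list)
    # Physical representation obtained by multimodal capturing,
    # e.g. body-pose keypoints and the current gaze target.
    body_pose: list = field(default_factory=list)
    gaze_target: Optional[str] = None

profile = UserProfile(interests=["meetings"], gaze_target="whiteboard")
```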

    Framework of controlling 3d virtual human emotional walking using BCI

    A Brain-Computer Interface (BCI) is a device that can read and acquire brain activity. The human body is controlled by brain signals, which act as its main controller; human emotions and thoughts are translated by the brain through these signals and expressed as mood. Brain signals are the key component of the electroencephalogram (EEG), and through signal processing the features representing human mood (behavior) can be extracted, with emotion as the major feature. This paper proposes a new framework for recognizing inner human emotions from EEG signals using a BCI device as controller. The framework has five steps: read and classify the brain signal to obtain the emotion, map the emotion, synchronize the animation of the 3D virtual human, then test and evaluate the work. To the best of our knowledge, no existing framework controls a 3D virtual human in this way. Implementing our framework will enhance the control of 3D virtual humans' emotional walking in games and make it more realistic. Commercial games and Augmented Reality systems are possible beneficiaries of this technique. © 2015 Penerbit UTM Press. All rights reserved
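The pipeline from signal to animation can be sketched as follows. Every function body, threshold, and animation name here is a stand-in, not a real BCI or engine API:

```python
# Hypothetical sketch of the framework's flow: classify an EEG window
# into an emotion, map it to a gait, and drive the avatar animation.
# Classifier, thresholds, and gait names are illustrative stand-ins.

def classify_emotion(eeg_window):
    """Steps 1-2: classify the EEG signal into a discrete emotion."""
    mean = sum(eeg_window) / len(eeg_window)  # trivial stand-in classifier
    return "happy" if mean > 0.0 else "sad"

# Step 3: map each emotion to an emotional walking style.
EMOTION_TO_GAIT = {"happy": "bouncy_walk", "sad": "slow_walk"}

def drive_avatar(eeg_window):
    """Step 4: synchronize the 3D virtual human's animation."""
    emotion = classify_emotion(eeg_window)
    return {"emotion": emotion, "animation": EMOTION_TO_GAIT[emotion]}

print(drive_avatar([0.2, 0.5, -0.1]))
```

Step 5, testing and evaluation, would compare the selected animations against the emotions users actually report.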

    Virtual environments promoting interaction

    Virtual reality (VR) has been widely researched in academia and is now breaking into industry. Regular companies do not have access to this technology as a collaboration tool because these solutions usually require specific devices that are not at hand for the common office user. Other collaboration platforms are based on video, speech, and text, but VR lets users share the same 3D space, to which functionalities or information impossible in a real-world environment can be added, something intrinsic to VR. This dissertation has produced a 3D framework that promotes nonverbal communication, which plays a fundamental role in human interaction and is mostly based on emotion. In academia, confusion is known to influence learning gains if it is properly managed. We designed a study to evaluate how lexical, syntactic, and n-gram features influence perceived confusion, and found results (not statistically significant) suggesting that a machine learning model can predict the level of confusion from these features. This model was used to manipulate the script of a given presentation; user feedback shows a trend that manipulating these features to theoretically lower the confusion in the text not only reduces reported confusion but also increases the reported sense of presence. Another contribution of this dissertation comes from the intrinsic features of a 3D environment, where one can carry out actions that are not possible in the real world. We designed an automatic adaptive lighting system that reacts to the perceived user engagement. This hypothesis was partially rejected, as the results go against what we hypothesized but lack statistical significance. Three lines of research may stem from this dissertation. First, more complex features, such as syntax trees, could be used to train the machine learning model. Second, in an Intelligent Tutoring System this model could adjust the avatar's speech in real time if fed by a real-time confusion detector; in a social scenario, the set of basic emotions is well adjusted and can enrich it, and facial emotion recognition could extend this effect to the avatar's body to fuel this synchronization and increase the sense of presence. Finally, we based this dissertation on the premise of using ubiquitous devices, but with the rapid evolution of technology we should consider that new devices will be present in offices, opening new possibilities for other modalities.
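A confusion predictor of the kind described needs lexical, syntactic, and n-gram features extracted from the script. A minimal sketch of such feature extraction; the specific features and their computation are assumptions for illustration, not the dissertation's actual feature set:

```python
# Illustrative extraction of lexical, syntactic-proxy, and n-gram
# features from a sentence; the feature set is an assumption.
from collections import Counter

def extract_features(sentence):
    words = sentence.lower().split()
    bigrams = list(zip(words, words[1:]))
    return {
        "avg_word_len": sum(len(w) for w in words) / len(words),  # lexical
        "sentence_len": len(words),                               # syntactic proxy
        "bigram_repetition": max(Counter(bigrams).values(), default=0),  # n-gram
    }

feats = extract_features("the model predicts the model output")
print(feats)
```

Feature vectors like this, paired with per-sentence confusion ratings, would feed a standard supervised learner.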

    An aesthetics of touch: investigating the language of design relating to form

    How well can designers communicate qualities of touch? This paper presents evidence that they have some capability to do so, much of which appears to have been learned, but that at present they make limited use of such language. Interviews with graduate designer-makers suggest that they are aware of and value the importance of touch and materiality in their work, but lack a vocabulary as detailed as their explanations of other aspects, such as their intent or selection of materials. We believe more attention should be paid to the verbal dialogue that happens in the design process, particularly as other researchers show that even making-based learning has a strong verbal element. However, verbal language alone does not appear adequate for a comprehensive language of touch: graduate designer-makers' descriptive practices combined non-verbal manipulation with verbal accounts. We thus argue that haptic vocabularies do not simply describe material qualities, but are situated competences that physically demonstrate the presence of haptic qualities, and that such competences are more important than verbal vocabularies in isolation. Design support for developing and extending haptic competences must take this wide range of considerations into account to comprehensively improve designers' capabilities.

    Towards higher sense of presence: a 3D virtual environment adaptable to confusion and engagement

    Virtual Reality scenarios where emitters convey information to receptors can be used as a tool for distance learning and to enable virtual visits to a company's physical headquarters. However, immersive Virtual Reality setups usually require visualization interfaces such as head-mounted displays, powerwalls, or CAVE systems, supported by interaction devices (Microsoft Kinect, Wii Motion, among others) that foster natural interaction but are often inaccessible to users. We propose a virtual presentation scenario, supported by a framework, that provides emotion-driven interaction through ubiquitous devices. An experiment with three conditions was designed: a control condition; a less confusing text script based on its lexical, syntactic, and bigram features; and a third condition where an adaptive lighting system acted dynamically on the user's engagement. Results show that users exposed to the less confusing script reported a higher sense of presence, albeit without statistical significance. Users from the last condition reported a lower sense of presence, which rejects our hypothesis, also without statistical significance. We theorize that, as the presentation was given orally and the adaptive lighting system impacts the visual channel, this conflict may have overloaded the users' cognitive capacity and thus reduced the resources available to process the presentation content.
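An adaptive lighting rule driven by engagement can be sketched in a few lines. The thresholds, step size, and brightness scale below are assumptions for illustration, not the system's actual parameters:

```python
# Minimal sketch of an engagement-driven adaptive lighting rule;
# thresholds and the [0, 1] brightness scale are assumptions.

def adapt_lighting(engagement, current_brightness):
    """Brighten when engagement drops, dim when it is high."""
    if engagement < 0.3:
        return min(1.0, current_brightness + 0.1)  # try to recapture attention
    if engagement > 0.7:
        return max(0.2, current_brightness - 0.1)  # avoid visual overload
    return current_brightness

print(adapt_lighting(0.2, 0.5))  # low engagement: brighter
```

Run each frame (or each engagement-estimate update), this closes the loop between the sensed user state and the visual channel of the virtual scene.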

    Facial and Bodily Expressions for Control and Adaptation of Games (ECAG 2008)

