
    Autonomous Motion Planning for Avatar Limbs

    In this work, a new algorithm for autonomous avatar motion is presented. The algorithm is based on the Rapidly-exploring Random Tree (RRT) and an appropriate ontology, and it uses a novel approach for planning the motion sequences of the different avatar limbs (legs or arms). First, the algorithm uses the information stored in the ontology about the avatar structure and its Degrees Of Freedom (DOFs) to obtain the basic actions for motion planning. Second, this information is used to drive the growth process of the RRT. The plans are then generated by a random search over possible motions that respect the structural restrictions of the avatar, which are grounded in kinesiology studies. To avoid searching a large configuration space, exploration, exploitation and hill climbing are combined to obtain the motion plans.
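    The abstract does not give implementation details, so the following is only a minimal sketch of the RRT growth loop it builds on: joint-space sampling within DOF limits stands in for the ontology-derived structure, a goal bias illustrates the exploration/exploitation balance, and the hill-climbing component is omitted. All names and numeric values are hypothetical.

```python
import math
import random

# Hypothetical DOF limits for a 3-DOF limb (radians); in the paper these
# would come from the avatar ontology.
DOF_LIMITS = [(-1.5, 1.5), (-0.5, 2.0), (-1.0, 1.0)]
STEP = 0.1        # growth step size per iteration
GOAL_BIAS = 0.1   # exploitation: fraction of samples drawn at the goal

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def sample(goal):
    # Exploration vs. exploitation: mostly random configurations,
    # occasionally the goal itself.
    if random.random() < GOAL_BIAS:
        return goal
    return tuple(random.uniform(lo, hi) for lo, hi in DOF_LIMITS)

def steer(src, dst):
    # Move from src toward dst by at most STEP, clamped to the DOF limits.
    d = distance(src, dst)
    t = min(1.0, STEP / d) if d > 0 else 0.0
    new = tuple(s + t * (g - s) for s, g in zip(src, dst))
    return tuple(min(max(v, lo), hi) for v, (lo, hi) in zip(new, DOF_LIMITS))

def rrt(start, goal, max_iters=5000, tol=0.05):
    parents = {start: None}
    for _ in range(max_iters):
        q_rand = sample(goal)
        q_near = min(parents, key=lambda q: distance(q, q_rand))
        q_new = steer(q_near, q_rand)
        parents[q_new] = q_near
        if distance(q_new, goal) < tol:
            # Reconstruct the motion plan by walking back to the root.
            plan, q = [], q_new
            while q is not None:
                plan.append(q)
                q = parents[q]
            return list(reversed(plan))
    return None

plan = rrt(start=(0.0, 0.0, 0.0), goal=(1.0, 1.5, -0.5))
```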

    Comparing and Evaluating Real Time Character Engines for Virtual Environments

    As animated characters increasingly become vital parts of virtual environments, the engines that drive these characters increasingly become vital parts of virtual environment software. This paper gives an overview of the state of the art in character engines and proposes a taxonomy of the features that are commonly found in them. This taxonomy can be used as a tool for comparing and evaluating different engines. To demonstrate this, we use it to compare three engines. The first is Cal3D, the most commonly used open-source engine. We also introduce two engines created by the authors, Piavca and HALCA. The paper ends with a brief discussion of some other popular engines.
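    The taxonomy itself is not reproduced in the abstract; the sketch below only illustrates, with invented feature names and placeholder engines, how such a taxonomy could serve as a comparison grid (in practice the rows would be the paper's features and the columns engines such as Cal3D, Piavca and HALCA).

```python
# Hypothetical feature taxonomy used as a comparison grid. The feature
# names and the yes/no values are placeholders, not the paper's findings.
TAXONOMY = ["skeletal animation", "morph targets", "animation blending",
            "procedural motion", "scripting interface"]

ENGINES = {
    "EngineA": {"skeletal animation", "morph targets", "animation blending"},
    "EngineB": {"skeletal animation", "animation blending",
                "procedural motion", "scripting interface"},
}

def compare(engines, taxonomy):
    # One row per taxonomy feature, one column per engine.
    names = list(engines)
    print("feature".ljust(22) + "  ".join(n.ljust(8) for n in names))
    for feature in taxonomy:
        cells = ("yes" if feature in engines[n] else "no" for n in names)
        print(feature.ljust(22) + "  ".join(c.ljust(8) for c in cells))

compare(ENGINES, TAXONOMY)
```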

    Smart Avatars in JackMOO

    Creation of compelling 3-dimensional, multi-user virtual worlds for education and training applications requires a high degree of realism in the appearance, interaction, and behavior of avatars within the scene. Our goal is to develop and/or adapt existing 3-dimensional technologies to provide training scenarios across the Internet in a form as close as possible to the appearance and interaction expected of live situations with human participants. We have produced a prototype system, JackMOO, which combines Jack, a virtual human system, and LambdaMOO, a multi-user, network-accessible, programmable, interactive server. Jack provides the visual realization of avatars and other objects. LambdaMOO provides the web-accessible communication, programmability, and persistent object database. The combined JackMOO allows us to store the richer semantic information necessitated by the scope and range of human actions that an avatar must portray, and to express those actions in the form of imperative sentences. This paper describes JackMOO, its components, and a prototype application with five virtual human agents.
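    JackMOO's actual verb syntax and Jack interface are not given in the abstract; the sketch below is a rough illustration, with hypothetical verbs and a placeholder send_to_jack hook, of how an imperative sentence might be mapped to an avatar action command.

```python
# Hypothetical mapping from imperative verbs to avatar actions; the real
# JackMOO verb set and Jack API are not described in the abstract.
VERBS = {"walk": "locomote", "grasp": "reach_and_grasp", "look": "gaze"}

def send_to_jack(avatar, action, target):
    # Placeholder for the call that would drive the Jack visualisation;
    # here we only print the resulting command.
    print(f"{avatar}: {action}({target})")

def interpret(avatar, sentence):
    """Parse an imperative sentence like 'walk to the door'."""
    words = sentence.lower().split()
    verb = words[0]
    if verb not in VERBS:
        raise ValueError(f"unknown verb: {verb}")
    # Everything after the verb (minus simple function words) is the target.
    target = " ".join(w for w in words[1:] if w not in {"to", "the", "at"})
    send_to_jack(avatar, VERBS[verb], target)

interpret("agent1", "walk to the door")
interpret("agent1", "grasp the cup")
```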

    A Parameterized Action Representation for Virtual Human Agents

    We describe a Parameterized Action Representation (PAR) designed to bridge the gap between natural language instructions and the virtual agents who are to carry them out. The PAR is therefore constructed jointly from the implemented motion capabilities of virtual human figures and the linguistic requirements of instruction interpretation. We illustrate the PAR and a real-time execution architecture that controls 3D animated virtual human avatars.
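    The full PAR schema is not spelled out in the abstract; the dataclass below is a minimal sketch, with assumed field names (verb, agent, objects, preconditions, duration), of how a parameterized action could be represented and checked before execution.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Minimal sketch of a Parameterized Action Representation (PAR) record.
# Field names are assumptions; the actual PAR schema is defined in the paper.
@dataclass
class PAR:
    verb: str                     # linguistic head of the instruction
    agent: str                    # virtual human who performs the action
    objects: List[str] = field(default_factory=list)  # participating objects
    preconditions: List[Callable[[], bool]] = field(default_factory=list)
    duration: float = 1.0         # seconds, passed to the motion generator

    def executable(self) -> bool:
        # An action is only scheduled once all preconditions hold.
        return all(check() for check in self.preconditions)

# Example: "walk to the table" grounded for the agent 'avatar1'.
walk = PAR(verb="walk", agent="avatar1", objects=["table"],
           preconditions=[lambda: True], duration=3.0)

if walk.executable():
    print(f"{walk.agent} -> {walk.verb} {walk.objects} ({walk.duration}s)")
```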

    A Telerehabilitation System for the Selection, Evaluation and Remote Management of Therapies

    Telerehabilitation systems that support physical therapy sessions anywhere can help save healthcare costs while also improving the quality of life of the users who need rehabilitation. The main contribution of this paper is to present, as a whole, all the features supported by the innovative Kinect-based Telerehabilitation System (KiReS). In addition to the functionalities provided by current systems, it handles two new ones that could be incorporated into them, taking a step forward towards a new generation of telerehabilitation systems. The knowledge extraction functionality uses knowledge about the physical therapy records of patients and treatment protocols, described in an ontology named TRHONT, to select adequate exercises for the rehabilitation of patients. The teleimmersion functionality provides a convenient, effective and user-friendly experience during telerehabilitation through two-way real-time multimedia communication. The ontology contains about 2300 classes and 100 properties, and the system allows reliable transmission of Kinect video, depth, audio and skeleton data, adapting to varying network conditions. Moreover, the system has been tested with patients who suffered from shoulder disorders or total hip replacement. This research was funded by the Spanish Ministry of Economy and Competitiveness, grant number FEDER/TIN2016-78011-C4-2R.
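    Neither the TRHONT queries nor the transport protocol are detailed in the abstract; the snippet below only illustrates one plausible adaptation policy, with invented bandwidth thresholds, for choosing which Kinect streams to transmit as network conditions change.

```python
# Hypothetical adaptation policy: which Kinect streams to transmit at a
# given measured bandwidth. Thresholds (kbit/s) are invented for
# illustration; the paper's actual policy is not given in the abstract.
def select_streams(bandwidth_kbps: float) -> list[str]:
    if bandwidth_kbps > 4000:
        return ["video", "depth", "skeleton", "audio"]
    if bandwidth_kbps > 1000:
        return ["video", "skeleton", "audio"]
    if bandwidth_kbps > 200:
        return ["skeleton", "audio"]
    return ["skeleton"]  # skeleton data is the minimum needed for feedback

for bw in (6000, 1500, 300, 64):
    print(bw, "kbit/s ->", select_streams(bw))
```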

    Exploring Virtual Reality and Doppelganger Avatars for the Treatment of Chronic Back Pain

    Cognitive-behavioral models of chronic pain assume that fear of pain and subsequent avoidance behavior contribute to pain chronicity and the maintenance of chronic pain. In chronic back pain (CBP), avoidance of movements often plays a major role in pain perseverance and interference with daily life activities. In treatment, avoidance is often addressed by teaching patients to reduce pain behaviors and increase healthy behaviors. The current project explored the use of personalized virtual characters (doppelganger avatars) in virtual reality (VR) to influence motor imitation and avoidance, fear of pain, and experienced pain in CBP. We developed a method to create virtual doppelgangers, to animate them with movements captured from real-world models, and to present them to participants in an immersive Cave Automatic Virtual Environment (CAVE) as autonomous movement models for imitation. Study 1 investigated interactions between model and observer characteristics in the imitation behavior of healthy participants. We tested the hypothesis that perceived affiliative characteristics of a virtual model, such as similarity to the observer and likeability, would facilitate observers’ engagement in voluntary motor imitation. In a within-subject design (N=33), participants were exposed to four virtual characters of different degrees of realism and observer similarity, ranging from an abstract stick person to a personalized doppelganger avatar created from 3D scans of the observer. The characters performed different trunk movements and participants were asked to imitate them. We defined functional ranges of motion (ROM) for spinal extension (bending backward, BB), lateral flexion (bending sideward, BS) and rotation in the horizontal plane (RH), based on shoulder marker trajectories, as behavioral indicators of imitation. Participants’ ratings of perceived avatar appearance were recorded with an Autonomous Avatar Questionnaire (AAQ) based on an exploratory factor analysis. Linear mixed effects models revealed that for lateral flexion (BS), a facilitating influence of avatar type on ROM was mediated by perceived identification with the avatar, including avatar likeability, avatar-observer similarity and other affiliative characteristics. These findings suggest that maximizing model-observer similarity may indeed be useful to stimulate observational modeling. Study 2 employed the techniques developed in Study 1 with participants who suffered from CBP and extended the setup with real-world elements, creating an immersive mixed reality. The research question was whether virtual doppelgangers could modify motor behaviors, pain expectancy and pain. In a randomized controlled between-subject design, participants observed and imitated an avatar (AVA, N=17) or a videotaped model (VID, N=16) over three sessions, during which the movements BS and RH as well as a new movement (moving a beverage crate) were shown. Again, self-reports and ROMs were used as measures. The AVA group reported reduced avoidance, with no significant group differences in ROM. Pain expectancy increased in AVA but not in VID over the sessions. Pain and limitations did not differ significantly. We observed a moderation effect of group, with prior pain expectancy predicting pain and avoidance in the VID but not in the AVA group. This can be interpreted as an effect of personalized movement models decoupling pain behavior from movement-related fear and pain expectancy by increasing pain tolerance and task persistence.
Our findings suggest that personalized virtual movement models can stimulate observational modeling in general, and that they can increase pain tolerance and persistence in chronic pain conditions. Thus, they may provide a tool for exposure and exercise treatments within cognitive-behavioral approaches to CBP.
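    The exact marker-based ROM definitions are not spelled out in the abstract; the snippet below is a rough sketch, using invented marker data and an assumed axis convention, of how a lateral-flexion (BS) range of motion could be estimated from shoulder trajectories.

```python
import numpy as np

# Hypothetical shoulder marker trajectories: shape (frames, 3), metres.
# In the study, ROM for bending sideward (BS) was derived from such
# trajectories; the exact definition here is an assumption.
left_shoulder = np.random.rand(500, 3)
right_shoulder = np.random.rand(500, 3)

def lateral_flexion_rom(left: np.ndarray, right: np.ndarray) -> float:
    """Range of the shoulder line's tilt angle in the frontal plane (degrees)."""
    # Vector from right to left shoulder per frame, projected onto the
    # frontal plane: x = lateral, z = vertical (assumed axes).
    v = left - right
    tilt = np.degrees(np.arctan2(v[:, 2], v[:, 0]))
    return float(tilt.max() - tilt.min())

print(f"BS range of motion: {lateral_flexion_rom(left_shoulder, right_shoulder):.1f} deg")
```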

    Computational Emotion Model for Virtual Characters


    Muscleless Motor Synergies and Actions without Movements: From Motor Neuroscience to Cognitive Robotics

    Emerging trends in neuroscience are providing converging evidence that cortical networks in predominantly motor areas are activated in several contexts related to ‘action’ that do not cause any overt movement. Indeed, for any complex body, human or embodied robot, inhabiting unstructured environments, the dual processes of shaping motor output during action execution and providing the self with information related to the feasibility, consequences and understanding of potential actions (of oneself or others) must seamlessly alternate during goal-oriented behaviors and social interactions. While prominent approaches like Optimal Control and Active Inference converge on the role of forward models, they diverge on the underlying computational basis. In this context, revisiting older ideas from motor control like the Equilibrium Point Hypothesis and synergy formation, this article offers an alternative perspective emphasizing the functional role of a ‘plastic, configurable’ internal representation of the body (body schema) as a critical link enabling the seamless continuum between motor control and motor imagery. With the central proposition that both “real and imagined” actions are consequences of an internal simulation process achieved through passive, goal-oriented animation of the body schema, the computational/neural basis of muscleless motor synergies (and the ensuing simulated actions without movements) is explored. The rationale behind this perspective is articulated in the context of several interdisciplinary studies in motor neuroscience (for example, intracranial depth recordings from the parietal cortex and fMRI studies highlighting a shared cortical basis for action ‘execution, imagination and understanding’), animal cognition (in particular, tool-use and neuro-rehabilitation experiments revealing how coordinated tools are incorporated as extensions to the body schema) and the pertinent challenges towards building cognitive robots that can seamlessly “act, interact, anticipate and understand” in unstructured natural living spaces.
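    The article's computational model is only described verbally here; the toy example below illustrates the underlying idea of passive, goal-oriented animation of a body schema: a virtual attractive force at the goal is projected into joint space through the Jacobian transpose, so the schema relaxes toward the target and produces joint rotations (a simulated action) without any muscle commands. Link lengths, gains and the goal are illustrative values.

```python
import numpy as np

# Toy 2-link planar arm as a stand-in for the body schema.
L1, L2 = 0.3, 0.25          # link lengths (m), illustrative
q = np.array([0.3, 0.5])    # joint angles (rad)
goal = np.array([0.35, 0.30])
GAIN, DT = 2.0, 0.01

def forward(q):
    # End-effector ("hand") position of the simulated body schema.
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

for _ in range(2000):
    force = GAIN * (goal - forward(q))   # virtual spring pulling toward the goal
    torque = jacobian(q).T @ force       # projected into joint space
    q = q + DT * torque                  # passive relaxation, no muscle commands

print("final hand position:", forward(q), "target:", goal)
```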