
    Data Driven Approach to Multi-Agent Low Level Behavior Generation in Medical Simulations

    A multi-agent scenario generation framework is designed, implemented and evaluated in the context of a preventive medicine education virtual reality system, with data collected from a sensor network at the University of Iowa Hospital. An agent in the framework is a virtual human representing a healthcare worker; it makes decisions based on the information it gathers from its surroundings in the virtual environment. Distributed sensor networks are increasingly common in public areas for safety and surveillance purposes, and the data they collect can be visualized in a multi-agent simulation. The components of the framework include the generation of unique agents from the sensor data and low-level behaviors such as path determination, directional traffic flows, collision avoidance and overtaking. The framework also includes a facility to prevent foot sliding while producing detailed animations as the agents travel. Preventive medicine education is the process of educating healthcare workers about procedures that could mitigate the spread of infections in a hospital. We built an application called the 5 Moments of Hand Hygiene that teaches healthcare workers the times they are supposed to wash their hands when dealing with a patient; its purpose is to increase compliance rates for this CDC-mandated preventive measure in hospitals across the nation. A user study was performed with 18 nursing students and 5 full-time nurses at the Clemson University School of Nursing to test the usability of the application and the realism of the scenario generation framework. The results of the study suggest that the behaviors generated by the framework are realistic and believable enough for use in preventive medicine education applications.
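
    The abstract leaves the low-level behaviors at a high level; the following is a minimal, hypothetical sketch of one common way to combine goal-seeking with collision avoidance in per-agent steering. The class, function, and parameter names are assumptions for illustration, not the framework's actual API.

```python
# Minimal, hypothetical per-agent steering sketch: goal attraction plus
# repulsion from nearby agents. Not the paper's actual framework.
import numpy as np

class Agent:
    def __init__(self, position, goal, max_speed=1.4):
        self.pos = np.asarray(position, dtype=float)
        self.goal = np.asarray(goal, dtype=float)
        self.vel = np.zeros(len(self.pos))
        self.max_speed = max_speed  # roughly average walking speed, m/s

    def steer(self, neighbours, avoid_radius=1.0, dt=0.1):
        # Attraction toward the goal (stand-in for path determination).
        to_goal = self.goal - self.pos
        desired = to_goal / (np.linalg.norm(to_goal) + 1e-9) * self.max_speed

        # Repulsion from nearby agents (collision avoidance).
        for other in neighbours:
            offset = self.pos - other.pos
            dist = np.linalg.norm(offset)
            if 0 < dist < avoid_radius:
                desired += offset / dist * (avoid_radius - dist) / avoid_radius

        # Clamp to maximum speed, then integrate forward one time step.
        speed = np.linalg.norm(desired)
        if speed > self.max_speed:
            desired *= self.max_speed / speed
        self.vel = desired
        self.pos += self.vel * dt

# Example: two agents walking toward opposite corners avoid each other.
a = Agent([0.0, 0.0], [5.0, 5.0])
b = Agent([5.0, 5.0], [0.0, 0.0])
for _ in range(50):
    a.steer([b])
    b.steer([a])
```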

    A study of human performance in recognizing expressive hand movements


    A Study Of The Effects Of Computer Animated Character Body Style On Perception Of Facial Expression

    This study examined whether viewer perception of computer-animated character facial expressions differs with character body style, specifically realistic versus stylized body styles. Participants viewed twenty clips of computer-animated characters expressing one of five emotions: sadness, happiness, anger, surprise and fear. They then named the emotion and rated the sincerity, intensity, and typicality of each clip. For recognition, participants were slightly more likely to identify the emotion on a stylized character, although the difference was not significant. Stylized characters were on average rated higher for sincerity and intensity, while realistic characters were on average rated higher for typicality. Significant rating differences were found for fear (within sincerity and typicality), where realistic characters were rated higher, and for happiness (within sincerity and intensity), where stylized characters were rated higher; for anger, stylized and realistic characters were each rated significantly higher once (realistic within typicality). Other differences were also noted within the dependent variables. Based on the data collected in this study, there was overall no significant difference in participant ratings between the two character styles.

    Data-driven techniques for animating virtual characters

    One of the key goals of current research in data-driven computer animation is the synthesis of new motion sequences from existing motion data. This thesis presents three novel techniques for synthesising the motion of a virtual character from existing motion data and develops a framework of solutions to key character animation problems. The first technique is based on the character’s locomotion composition process: it synthesises a variety of locomotion behaviours while easily specified constraints (footprints) are placed in three-dimensional space. This is achieved by analysing existing motion data and assigning the locomotion behaviour transition process to transition graphs that provide information about this process. However, virtual characters should also be able to animate with different style variations, so a second technique synthesises real-time style variations of a character’s motion. This novel technique exploits the correlation between two different motion styles and, by assigning the motion synthesis process to a parameterised maximum a posteriori (MAP) framework, retrieves the desired style content of the input motion in real time, enhancing the realism of the newly synthesised motion sequence. The third technique synthesises the motion of the character’s fingers either off-line or in real time during the performance capture process. The advantage of both variants is their ability to assign the motion searching process to motion features; the technique estimates and synthesises a valid motion of the character’s fingers, enhancing the realism of the input motion. To conclude, this thesis demonstrates that these three novel techniques combine into a framework that enables the realistic synthesis of virtual character movements, eliminating post-processing and enabling fast synthesis of the required motion.
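
    As a rough illustration of the transition-graph idea behind the first technique, the hypothetical sketch below picks the next locomotion clip whose end footprint best matches a user-placed footprint constraint. The graph, clip names, footprint values, and cost function are invented for the example; they are not the thesis's actual data structures.

```python
# Hypothetical locomotion transition graph: choose the successor clip
# whose end footprint lands closest to the desired footprint constraint.
import math

# clip -> clips it can transition into (assumed structure).
TRANSITIONS = {
    "walk": ["walk", "turn_left", "turn_right", "stop"],
    "turn_left": ["walk"],
    "turn_right": ["walk"],
    "stop": [],
}

# Where each clip leaves the supporting foot, in the clip's local
# frame (illustrative numbers only).
END_FOOTPRINT = {
    "walk": (0.0, 0.7),
    "turn_left": (-0.3, 0.5),
    "turn_right": (0.3, 0.5),
    "stop": (0.0, 0.1),
}

def next_clip(current, target_footprint):
    """Pick the successor whose end footprint is nearest the target."""
    best, best_cost = None, math.inf
    for clip in TRANSITIONS[current]:
        fx, fy = END_FOOTPRINT[clip]
        tx, ty = target_footprint
        cost = math.hypot(fx - tx, fy - ty)
        if cost < best_cost:
            best, best_cost = clip, cost
    return best

print(next_clip("walk", (-0.25, 0.55)))  # -> "turn_left"
```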

    Capture and generalisation of close interaction with objects

    Robust manipulation capture and retargeting has been a longstanding goal in both animation and robotics. In this thesis I describe a new approach to capturing both the geometry and the motion of interactions with objects, handling occlusion through the use of magnetic systems and reconstructing the geometry with an RGB-D sensor alongside visual markers. This ‘interaction capture’ allows the scene to be described in terms of the spatial relationships between the character and the object using novel topological representations such as the Electric Parameters, which parametrise the outer space of an object using properties of the object’s surface. I describe the properties of these representations for motion generalisation and discuss how they can be applied to the problems of human-like motion generation and programming by demonstration. These generalised interactions are shown to be valid by retargeting grasping and manipulation to robots with dissimilar kinematics and morphology using only local, gradient-based planning.
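
    The Electric Parameters are described here only at a high level; the sketch below conveys the general electrostatics-style idea of characterising a point in an object's outer space via a potential induced by charge spread over the object's surface. The uniform charge placement, sampling, and normalisation are my assumptions, not the thesis's formulation.

```python
# Hedged illustration of an electrostatics-style spatial descriptor:
# place unit charge on sampled surface points and evaluate the potential
# and field direction at a query point in the object's outer space.
import numpy as np

def electric_descriptor(surface_points, query):
    """Return (potential, field direction) at `query` for unit charges
    spread uniformly over `surface_points` (an N x 3 array)."""
    p = np.asarray(surface_points, dtype=float)
    q = np.asarray(query, dtype=float)
    diff = q - p                          # vectors from surface samples to query
    dist = np.linalg.norm(diff, axis=1)
    dist = np.maximum(dist, 1e-9)         # guard against query on the surface
    potential = np.mean(1.0 / dist)       # normalised sum of 1/r contributions
    field = np.mean(diff / dist[:, None] ** 3, axis=0)  # gradient of potential, up to sign
    return potential, field / (np.linalg.norm(field) + 1e-12)

# Example: descriptor of a fingertip position relative to a sampled sphere.
sphere = np.random.randn(500, 3)
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
print(electric_descriptor(sphere, [0.0, 0.0, 2.0]))
```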