
    Task-Oriented Computer Animation of Human Figures

    The effective computer animation of human figures is an endeavor with a relatively short history. The earliest attempts involved simple geometries and simple animation techniques, which failed to yield convincing motions. Within the last decade, both modeling and animation tools have evolved, producing more realistic figures and motions. A large software project has been under development at the University of Pennsylvania Computer Graphics Research Facility since 1982 to create an interactive system that assists an animator or human factors engineer in graphically simulating the task-oriented activities of several human agents. This paper outlines that system, TEMPUS, and its high-performance successor. Besides an anthropometric database, TEMPUS offers multiple constraint-based joint positioning, dynamic simulation, real-time motion playback, a flexible three-dimensional user interface, and hooks for artificial intelligence motion control methods, including hierarchical simulation and natural language specification of movements. The overall organization of this project and some specific components are discussed.
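
    Constraint-based joint positioning of the kind listed above is typically realized with inverse kinematics. The following is a minimal, generic sketch of a closed-form two-link planar IK solve in Python; it illustrates the general technique only, not TEMPUS's actual algorithm, and all names are illustrative.

        import math

        def two_link_ik(l1, l2, x, y):
            """Closed-form IK for a planar two-link chain: returns the
            (shoulder, elbow) angles in radians that place the end
            effector at (x, y), or None if the target is unreachable."""
            d2 = x * x + y * y
            d = math.sqrt(d2)
            if d > l1 + l2 or d < abs(l1 - l2):
                return None  # target lies outside the reachable annulus
            # Law of cosines gives the elbow bend.
            cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
            elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
            # Shoulder: direction to target, corrected for the bent elbow.
            shoulder = math.atan2(y, x) - math.atan2(
                l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
            return shoulder, elbow

        print(two_link_ik(1.0, 1.0, 1.2, 0.8))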

    Elckerlyc goes mobile: enabling technology for ECAs in mobile applications

    The fast growth of computational resources and speech technology available on mobile devices makes it possible for users of these devices to interact with service systems through natural dialogue. These systems are sometimes perceived as social agents and presented by means of an animated embodied conversational agent (ECA). To take full advantage of the power of ECAs in service systems, it is important to support real-time, online and responsive interaction with the system through the ECA. The design of responsive animated conversational agents is a daunting task. Elckerlyc is a model-based platform for the specification and animation of synchronised multimodal responsive animated agents. This paper presents a new light-weight PictureEngine that allows this platform to embed an ECA in the user interface of mobile applications. The ECA's behaviour can be specified using the Behavior Markup Language (BML). An application and user evaluations of Elckerlyc and the PictureEngine in a mobile embedded digital coach are presented.
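
    For context, BML scripts are XML documents that schedule multimodal behaviours relative to one another. The snippet below is a minimal BML 1.0-style sketch based on the public BML specification, not output taken from Elckerlyc; the element names follow the spec, and the Python parse step merely checks that the script is well-formed.

        import xml.etree.ElementTree as ET

        # Speech plus a gaze shift and a nod, synchronised to the
        # utterance via BML sync references (e.g. "speech1:start").
        BML_SCRIPT = """\
        <bml xmlns="http://www.bml-initiative.org/bml/bml-1.0" id="bml1">
          <speech id="speech1">
            <text>Good morning! Shall we review your goals?</text>
          </speech>
          <gaze id="gaze1" target="user" start="speech1:start"/>
          <head id="nod1" lexeme="NOD" start="speech1:end"/>
        </bml>
        """

        root = ET.fromstring(BML_SCRIPT)  # raises ParseError if malformed
        print(root.get("id"), "holds", len(root), "behaviours")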

    Elckerlyc goes mobile - Enabling natural interaction in mobile user interfaces

    The fast growth of computational resources and speech technology available on mobile devices makes it possible for users of these devices to have a natural dialogue with service systems. These systems are sometimes perceived as social agents, and this can be supported by presenting them on the interface by means of an animated embodied conversational agent. To take full advantage of the power of embodied conversational agents in service systems, it is important to support real-time, online and responsive interaction with the system through the embodied conversational agent. The design of responsive animated conversational agents is a daunting task. Elckerlyc is a model-based platform for the specification and animation of synchronised multi-modal responsive animated agents. This paper presents a new light-weight PictureEngine that allows this platform to run in mobile applications. We describe the integration of the PictureEngine in the user interface of two different coaching applications and discuss the findings from user evaluations. We also conducted a study to evaluate an editing tool for the specification of the agent's communicative behaviour. Twenty-one participants had to specify the behaviour of an embodied conversational agent using the PictureEngine. We may conclude that this new lightweight back-end engine for the Elckerlyc platform makes it easier to build embodied conversational interfaces for mobile devices.

    Direct Manipulation-like Tools for Designing Intelligent Virtual Agents

    If intelligent virtual agents (IVAs) are to become widely adopted, it is vital that they can be designed using the user-friendly graphical tools that are used in other areas of graphics. However, extending this sort of tool to autonomous, interactive behaviour, an area with more in common with artificial intelligence, is not trivial. This paper discusses the issues involved in creating user-friendly design tools for IVAs and proposes an extension of the direct manipulation methodology to IVAs. It also presents an initial implementation of this methodology.

    Elckerlyc in practice - on the integration of a BML Realizer in real applications

    Building a complete virtual human application from scratch is a daunting task, and it makes sense to rely on existing platforms for behavior generation. When building such an interactive application, one needs to be able to adapt and extend the capabilities of the virtual human offered by the platform, without having to make invasive modifications to the platform itself. This paper describes how Elckerlyc, a novel platform for controlling a virtual human, offers these possibilities.
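
    Non-invasive extension of this kind is commonly achieved with an extension-point (plugin) pattern: the platform exposes an interface, and the application registers its own implementations without touching platform code. The Python sketch below illustrates that general pattern only; the names are hypothetical and this is not Elckerlyc's actual API.

        from typing import Protocol

        class BehaviorPlanner(Protocol):
            """Extension point a platform might expose; applications
            plug in planners without modifying platform internals."""
            def plan(self, intent: str) -> str: ...

        class Realizer:
            def __init__(self):
                self._planners = []

            def register(self, planner):
                self._planners.append(planner)  # app code, platform untouched

            def realize(self, intent):
                for planner in self._planners:
                    print("scheduling:", planner.plan(intent))

        class NoddingPlanner:
            def plan(self, intent):
                return f'<head lexeme="NOD"/> for "{intent}"'

        realizer = Realizer()
        realizer.register(NoddingPlanner())
        realizer.realize("acknowledge")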

    Integrating Autonomous Behaviour and User Control for Believable Agents


    Animated virtual agents to cue user attention: comparison of static and dynamic deictic cues on gaze and touch responses

    This paper describes an experiment developed to study the performance of virtual agent animated cues within digital interfaces. Increasingly, agents are used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement, and so should only be routinely implemented if a clear benefit can be shown. Previous methods of assessing the effect of gaze-cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues on human-computer interfaces, measuring the efficiency of agent cues by analyzing participant responses by gaze and by touch, respectively. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing the participant's eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface. When user attention was directed using a fully animated agent cue, users responded 35% faster than with stepped 2-image agent cues, and 42% faster than with a static 1-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of touch inputs confirmed the results of the gaze experiment: the fully animated agent again yielded the fastest responses, though with slightly smaller differences between conditions. Responses to the fully animated agent were 17% and 20% faster than to the 2-image and 1-image cues, respectively. These results inform techniques aimed at engaging users' attention in complex scenes such as computer games and digital transactions within public or social interaction contexts by demonstrating the benefits of dynamic gaze and head cueing directly on users' eye movements and touch responses.
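
    For clarity, "N% faster" figures of this kind are conventionally the relative reduction in mean response time against the slower condition. The Python sketch below shows that computation on placeholder means chosen to reproduce the gaze-experiment percentages under this definition; the values are illustrative, not data from the study.

        def percent_faster(rt_cue, rt_baseline):
            """Relative response-time reduction versus a slower baseline,
            expressed in percent."""
            return 100.0 * (rt_baseline - rt_cue) / rt_baseline

        # Placeholder mean response times in seconds -- NOT study data.
        rt_animated, rt_two_image, rt_static = 0.65, 1.00, 1.12

        print(f"vs 2-image: {percent_faster(rt_animated, rt_two_image):.0f}% faster")
        print(f"vs static:  {percent_faster(rt_animated, rt_static):.0f}% faster")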