
    Towards responsive Sensitive Artificial Listeners

    This paper describes work in the recently started project SEMAINE, which aims to build a set of Sensitive Artificial Listeners – conversational agents designed to sustain an interaction with a human user despite limited verbal skills, through robust real-time recognition and generation of non-verbal behaviour both while the agent is speaking and while it is listening. We report on data collection and on the design of a system architecture geared towards real-time responsiveness.

    Building Parameterized Action Representations From Observation

    Virtual worlds may be inhabited by intelligent agents who interact by performing various simple and complex actions. If the agents are human-like (embodied), their actions may be generated from motion capture or procedural animation. In this thesis, we introduce the CaPAR interactive system, which combines both these approaches to generate agent-size-neutral representations of actions within a framework called Parameterized Action Representation (PAR). Just as a person may learn a new complex physical task by observing another person doing it, our system observes a single trial of a human performing some complex task that involves interaction with the self or other objects in the environment, and automatically generates semantically rich information about the action. This information can be used to generate similar constrained motions for agents of different sizes. Human movement is captured by electromagnetic sensors. By computing motion zero-crossings and geometric spatial proximities, we isolate significant events, abstract both spatial and visual constraints from an agent's action, and segment a given complex action into several simpler subactions. We analyze each subaction independently and build individual PARs for them. Several PARs can be combined into one complex PAR representing the original activity. Within each motion segment, semantic and style information is extracted. The style information is used to generate the same constrained motion in other, differently sized virtual agents by copying the end-effector velocity profile, by following a similar end-effector trajectory, or by scaling and mapping force interactions between the agent and an object. The semantic information is stored in a PAR. The extracted style and constraint information is stored in the corresponding agent and object models.
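
    The segmentation step above hinges on detecting zero-crossings in the captured motion. As a rough illustration (not the thesis's actual pipeline; the sampling rate, noise threshold, and function names are assumptions), a 1-D end-effector track could be split at velocity sign changes like this:

        import numpy as np

        def segment_by_zero_crossings(positions, dt=1.0 / 60.0, eps=1e-3):
            """Split a 1-D end-effector trajectory into subaction index ranges."""
            velocity = np.gradient(positions, dt)
            # Treat near-zero velocities as zero so sensor noise does not
            # produce spurious crossings.
            signs = np.sign(np.where(np.abs(velocity) < eps, 0.0, velocity))
            crossings = [i for i in range(1, len(signs))
                         if signs[i] != 0 and signs[i - 1] != 0
                         and signs[i] != signs[i - 1]]
            boundaries = [0] + crossings + [len(positions) - 1]
            return list(zip(boundaries[:-1], boundaries[1:]))

        # A reach-and-return gesture yields two segments split at the turnaround.
        track = np.concatenate([np.linspace(0.0, 0.5, 30), np.linspace(0.5, 0.0, 30)])
        print(segment_by_zero_crossings(track))  # [(0, 30), (30, 59)]

    Each returned index range would then be analyzed independently and wrapped in its own PAR, as the abstract describes.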

    Automated generation of geometrically-precise and semantically-informed virtual geographic environments populated with spatially-reasoning agents

    Multi-Agent Geo-Simulation (MAGS) is a paradigm for modeling and simulating dynamic phenomena in a variety of application domains such as transportation, telecommunications, and the environment. MAGS is used to study and analyze phenomena that involve a large number of simulated actors (implemented as agents) which evolve in, and interact with, an explicit representation of space called a Virtual Geographic Environment (VGE). In order to interact with its geographic environment, which may be dynamic, complex, and large-scale, an agent must first have a detailed representation of it. Classical VGEs are generally limited to a geometric representation of the real world, leaving aside the topological and semantic information that characterizes it. This results, on the one hand, in implausible multi-agent simulations and, on the other, in reduced spatial-reasoning capabilities for situated agents. Path planning is a typical example of the spatial reasoning an agent may need in a MAGS. Classical path-planning approaches are limited to computing an obstacle-free path between two positions in space; they take into account neither the characteristics of the environment (topological and semantic) nor those of the agents (types and capabilities). Situated agents therefore lack the means to acquire the knowledge about the virtual environment that they need in order to make informed spatial decisions. To address these limitations, we propose a new approach for automatically generating Informed Virtual Geographic Environments (IVGE) from Geographic Information System (GIS) data enriched with semantic information, in order to produce accurate and more realistic MAGS. In addition, we present a hierarchical path-planning algorithm that takes advantage of the enriched and optimized IVGE description to provide agents with a path that accounts both for the characteristics of their virtual environment and for their own types and capabilities. Finally, we propose an approach to managing knowledge about the virtual environment that aims to support informed decision-making and spatial reasoning by situated agents.
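
    As a minimal sketch of the kind of informed, capability-aware path planning described above (the graph, terrain labels, and capability sets are invented for the example; the thesis's hierarchical algorithm over an IVGE is considerably richer), a shortest-path search can simply refuse edges the agent cannot traverse:

        import heapq

        def informed_path(graph, start, goal, capabilities):
            """Dijkstra over {node: [(neighbor, cost, terrain), ...]},
            restricted to terrains this agent can traverse."""
            frontier = [(0.0, start, [start])]
            visited = set()
            while frontier:
                cost, node, path = heapq.heappop(frontier)
                if node == goal:
                    return cost, path
                if node in visited:
                    continue
                visited.add(node)
                for neighbor, step, terrain in graph.get(node, []):
                    if terrain in capabilities and neighbor not in visited:
                        heapq.heappush(frontier, (cost + step, neighbor, path + [neighbor]))
            return None  # no traversable route for this agent type

        g = {"A": [("B", 1.0, "footpath"), ("C", 2.0, "road")],
             "B": [("D", 1.0, "footpath")],
             "C": [("D", 1.0, "road")]}
        print(informed_path(g, "A", "D", {"footpath"}))  # pedestrian: (2.0, ['A', 'B', 'D'])
        print(informed_path(g, "A", "D", {"road"}))      # vehicle: (3.0, ['A', 'C', 'D'])

    Hierarchy would enter by running such a search first over coarse regions of the IVGE and then refining within each region, so that both path cost and traversability reflect the enriched GIS description.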

    Knowledge-based vision and simple visual machines

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong.

    Simulation and analysis of complex human tasks

    We discuss how the combination of a realistic human figure with a high-level behavioral control interface allows the construction of detailed simulations of humans performing manual tasks, from which inferences about human performance requirements can be made. The Jack human modeling environment facilitates the real-time simulation of humans performing sequences of tasks such as walking, lifting, reaching, and grasping in a complex simulated environment. Analysis capabilities include strength, reachability, and visibility; moreover, results from these tests can affect an unfolding simulation.
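
    To give a flavour of what such a reachability query involves (Jack's analyses use a full articulated body model; the geometry and numbers below are assumptions for illustration only), the crudest version is a reach-sphere check:

        import math

        def is_reachable(shoulder, target, arm_length):
            """Crude reachability: is the target inside the arm's reach sphere?"""
            return math.dist(shoulder, target) <= arm_length

        # A target about 0.70 m from the shoulder, against a 0.75 m functional reach.
        print(is_reachable((0.0, 1.4, 0.0), (0.5, 1.2, 0.45), 0.75))  # True

    In a system of the kind described, a failed test like this could feed back into the unfolding simulation, for example by inserting a step or a short walk before the reach.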

    Simulating Humans: Computer Graphics, Animation, and Control

    People are all around us. They inhabit our home, workplace, entertainment, and environment. Their presence and actions are noted or ignored, enjoyed or disdained, analyzed or prescribed. The very ubiquitousness of other people in our lives poses a tantalizing challenge to the computational modeler: people are at once the most common object of interest and yet the most structurally complex. Their everyday movements are amazingly fluid yet demanding to reproduce, with actions driven not just mechanically by muscles and bones but also cognitively by beliefs and intentions. Our motor systems manage to learn how to make us move without leaving us the burden or pleasure of knowing how we did it. Likewise, we learn how to describe the actions and behaviors of others without consciously struggling with the processes of perception, recognition, and language.

    Doing What You're Told: Following Task Instructions in Changing, but Hospitable Environments

    The AnimNL project (Animation from Natural Language) has as its goal the automatic creation of animated task simulations from natural-language instructions. The question addressed in this paper is how agents can perform tasks in environments about which they have only partial relevant knowledge. The solution we describe involves enabling such agents to:
    * develop expectations through instruction understanding and plan inference, and use those expectations in deciding how to act;
    * exploit generalized abilities in order to deal with novel geometric situations.
    The AnimNL project builds on an animation system, Jack™, that has been developed at the Computer Graphics Research Lab at the University of Pennsylvania, and draws upon a range of recent work in Natural Language semantics, planning and plan inference, philosophical studies of intention, reasoning about knowledge and action, and subsumption architectures for autonomous agents.
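
    A toy sketch of the expectation mechanism (the plan library, state names, and replanning policy here are invented; AnimNL's instruction understanding and plan inference are far more elaborate) might look like:

        PLAN_LIBRARY = {
            # instruction -> ordered (action, expected resulting state) pairs
            "open the door": [("walk_to_door", "at_door"),
                              ("grasp_handle", "holding_handle"),
                              ("pull", "door_open")],
        }

        def follow_instruction(instruction, execute, perceive):
            """Execute each step, checking perception against the expectation."""
            for action, expected in PLAN_LIBRARY[instruction]:
                execute(action)
                observed = perceive()
                if observed != expected:
                    # Expectation violated: the environment has changed, so this
                    # is where plan inference would be re-invoked.
                    return f"replan after {action}: expected {expected}, saw {observed}"
            return "done"

        # Stub world in which everything goes as expected.
        states = iter(["at_door", "holding_handle", "door_open"])
        print(follow_instruction("open the door",
                                 execute=lambda a: None,
                                 perceive=lambda: next(states)))  # done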

    Animation From Instructions

    We believe that computer animation in the form of narrated animated simulations can provide an engaging, effective, and flexible medium for instructing agents in the performance of tasks. However, we argue that the only way to achieve the kind of flexibility needed to instruct agents of varying capabilities to perform tasks with varying demands in workplaces of varying layout is to drive both animation and narration from a common representation that embodies the same conceptualization of tasks and actions as Natural Language itself. To this end, we are exploring the use of Natural Language instructions to drive animated simulations. In this paper, we discuss the relationship between instructions and behavior that underlies our work, and the overall structure of our system. We then describe in somewhat more detail three aspects of the system: the representation used by the Simulator, the operation of the Simulator, and the Motion Generators used in the system.
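
    The claim that one representation should drive both animation and narration can be made concrete with a small sketch (the fields, templates, and motion-command vocabulary are invented for illustration and are not the paper's Simulator representation):

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class TaskStep:
            agent: str
            action: str                       # base verb, e.g. "lift"
            patient: str                      # object acted upon
            destination: Optional[str] = None

            def narrate(self):
                # Naive verb inflection, sufficient for the illustration.
                text = f"{self.agent} {self.action}s the {self.patient}"
                return text + (f" onto the {self.destination}." if self.destination else ".")

            def to_motion_commands(self):
                cmds = [("reach", self.patient), ("grasp", self.patient)]
                if self.destination:
                    cmds += [("move", self.destination), ("release", self.patient)]
                return cmds

        step = TaskStep("The worker", "lift", "box", "shelf")
        print(step.narrate())             # The worker lifts the box onto the shelf.
        print(step.to_motion_commands())  # [('reach', 'box'), ('grasp', 'box'), ...]

    Because the narration and the motion commands are derived from the same object, the two cannot drift apart, which is the flexibility argument the abstract makes.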

    Design for manufacturability: a feature-based agent-driven approach
