943 research outputs found

    Human-Inspired Learning Methods for an Artificial Cognitive Tutor

    Get PDF
    Intelligent tutoring systems are regarded as a remarkable concentration of technologies that enable a learning process. These systems are capable of playing the role of assistants, or even of a human tutor. To achieve this, they must maintain and use an internal representation of the environment, which lets them take into account past and present events as well as certain sociocultural aspects. As the environment evolves dynamically, an ITS agent must evolve in parallel, modifying its structures and incorporating new phenomena. This substantial capacity for adaptation is observed in human tutors. Humans manage all of this complexity through attention and the mechanism of consciousness (Baars, B. J., 1983, 1988; Sloman, A. and Chrisley, R., 2003). However, reconstructing and implementing human capabilities in an artificial agent lies far beyond the current state of knowledge, and beyond even the most sophisticated machines. To achieve human-like behavior in a machine, or simply to better understand human adaptability and flexibility, we must develop a learning mechanism close to that of humans. This work describes several fundamental learning concepts implemented in an autonomous cognitive agent, named CTS (Conscious Tutoring System), developed at GDAC (Dubois, D., 2007). We propose a model that extends conscious and unconscious learning in order to increase the agent's autonomy in a changing environment and to improve its acuity. ______________________________________________________________________________ AUTHOR KEYWORDS: learning, consciousness, cognitive agent, codelet

    The use of emotions in the implementation of various types of learning in a cognitive agent

    Get PDF
    Professional human tutors are able to take past and present events into consideration and can adapt to social events. To be considered a worthwhile technology for improving human learning, an artificial cognitive agent should be able to do the same. Since dynamic environments are constantly changing, a cognitive agent must likewise evolve, adapting to structural modifications and to new phenomena. The ideal cognitive agent should therefore possess learning capabilities similar to those found in humans: emotional learning, episodic learning, procedural learning, and causal learning. This thesis contributes to the improvement of cognitive agent architectures. It proposes 1) a method for integrating emotions inspired by the functioning of the brain, and 2) a set of learning methods (episodic, causal, etc.) that take the emotional dimension into account. The proposed model, which we have named CELTS (Conscious Emotional Learning Tutoring System), is an extension of a conscious cognitive agent acting as an intelligent tutor. It includes an emotion-management module that assigns positive or negative emotional valences to each event the agent perceives. Two processing routes are provided: 1) a short route that allows the system to respond immediately to certain events without deep processing, and 2) a long route that is engaged by any event requiring volition. In this way, the emotional dimension is taken into account in the agent's cognitive processes for decision making and learning.
Episodic learning in CELTS is based on the Multiple Trace Memory consolidation theory, which holds that when an event is perceived, the hippocampus performs a first interpretation and a first round of learning; the acquired information is then distributed to the various cortices. According to this theory, memory reconsolidation always depends on the hippocampus. To simulate such a process, we used data mining techniques that search for frequent sequential patterns in the data generated during each cognitive cycle. Causal learning in CELTS builds on episodic memory: it identifies possible causes and effects among different events, and it is implemented using association-rule mining algorithms. The discovered associations are used to drive CELTS's tutoring interventions and, through the learner's responses, to evaluate the discovered causal rules. ______________________________________________________________________________ AUTHOR KEYWORDS: cognitive agents, emotions, episodic learning, causal learning
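The pipeline this abstract describes — mining frequent sequential patterns from cognitive-cycle traces, then treating ordered co-occurrences as candidate causal rules with support and confidence — can be sketched roughly as follows. The event names, thresholds, and the single-pass pair miner are illustrative assumptions, not CELTS's actual implementation.

```python
from collections import Counter

def mine_sequential_rules(sequences, min_support=0.5, min_confidence=0.6):
    """Find ordered event pairs (a -> b) frequent across sequences."""
    n = len(sequences)
    pair_counts = Counter()   # sequences containing a before b
    event_counts = Counter()  # sequences containing each event
    for seq in sequences:
        for e in set(seq):
            event_counts[e] += 1
        # count each ordered pair at most once per sequence
        pairs = set()
        for i, a in enumerate(seq):
            for b in seq[i + 1:]:
                if a != b:
                    pairs.add((a, b))
        for p in pairs:
            pair_counts[p] += 1
    rules = []
    for (a, b), c in pair_counts.items():
        support = c / n                  # fraction of sequences with a -> b
        confidence = c / event_counts[a] # of sequences with a, how many then see b
        if support >= min_support and confidence >= min_confidence:
            rules.append((a, b, support, confidence))
    return rules

# Toy cognitive-cycle traces: each list is one episode of perceived events.
episodes = [
    ["hint_shown", "attempt", "success"],
    ["hint_shown", "attempt", "failure"],
    ["attempt", "success"],
    ["hint_shown", "attempt", "success"],
]
for a, b, s, conf in sorted(mine_sequential_rules(episodes)):
    print(f"{a} -> {b}  support={s:.2f} confidence={conf:.2f}")
```

In this sketch a rule such as `hint_shown -> attempt` would surface as a causal candidate; in CELTS, the learner's subsequent responses would then be used to evaluate whether the discovered rule holds.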

    Flexibly Instructable Agents

    Full text link
    This paper presents an approach to learning from situated, interactive tutorial instruction within an ongoing agent. Tutorial instruction is a flexible (and thus powerful) paradigm for teaching tasks because it allows an instructor to communicate whatever types of knowledge an agent might need in whatever situations might arise. To support this flexibility, however, the agent must be able to learn multiple kinds of knowledge from a broad range of instructional interactions. Our approach, called situated explanation, achieves such learning through a combination of analytic and inductive techniques. It combines a form of explanation-based learning that is situated for each instruction with a full suite of contextually guided responses to incomplete explanations. The approach is implemented in an agent called Instructo-Soar that learns hierarchies of new tasks and other domain knowledge from interactive natural language instructions. Instructo-Soar meets three key requirements of flexible instructability that distinguish it from previous systems: (1) it can take known or unknown commands at any instruction point; (2) it can handle instructions that apply to either its current situation or to a hypothetical situation specified in language (as in, for instance, conditional instructions); and (3) it can learn, from instructions, each class of knowledge it uses to perform tasks.

    Systematic Review of Intelligent Tutoring Systems for Hard Skills Training in Virtual Reality Environments

    Get PDF
    Advances in immersive virtual reality (I-VR) technology have allowed for the development of I-VR learning environments (I-VRLEs) with increasing fidelity. When coupled with a sufficiently advanced computer tutor agent, such environments can facilitate asynchronous and self-regulated approaches to learning procedural skills in industrial settings. In this study, we performed a systematic review of published solutions involving the use of an intelligent tutoring system (ITS) to support hard skills training in an I-VRLE. For the seven solutions that qualified for the final analysis, we identified the learning context, the implemented system, as well as the perceptual, cognitive, and guidance features of the utilized tutoring agent. Generally, the I-VRLEs emulated realistic work environments or equipment. The solutions featured either embodied or embedded tutor agents. The agents’ perception was primarily based on either learner actions or learner progress. The agents’ guidance actions varied among the solutions, ranging from simple procedural hints to event interjections. Several agents were capable of answering certain specific questions. The cognition of the majority of agents represented variations on branched programming. A central limitation of all the solutions was that none of the reports detailed empirical studies conducted to compare the effectiveness of the developed training and tutoring solutions. Peer reviewed.

    Cognitive Modeling for Computer Animation: A Comparative Review

    Get PDF
    Cognitive modeling is a provocative new paradigm that paves the way towards intelligent graphical characters by providing them with logic and reasoning skills. Cognitively empowered self-animating characters will see in the near future a widespread use in the interactive game, multimedia, virtual reality and production animation industries. This review covers three recently-published papers from the field of cognitive modeling for computer animation. The approaches and techniques employed are very different. The cognition model in the first paper is built on top of Soar, which is intended as a general cognitive architecture for developing systems that exhibit intelligent behaviors. The second paper uses an active plan tree and a plan library to achieve fast and robust reactivity to environment changes. The third paper, based on an AI formalism known as the situation calculus, develops a cognitive modeling language called CML and uses it to specify a behavior outline or sketch plan to direct the characters in terms of goals. Instead of presenting each paper in isolation and then comparatively analyzing them, we take a top-down approach: we first classify the field into three categories and then place each paper into the proper category. In this way, we hope to provide a more cohesive, systematic view of the cognitive modeling approaches employed in computer animation.

    Towards Teachable Autonomous Agents

    Get PDF
    Autonomous discovery and direct instruction are two extreme sources of learning in children, but educational sciences have shown that intermediate approaches such as assisted discovery or guided play result in better acquisition of skills. When turning to Artificial Intelligence, the above dichotomy can be translated into the distinction between autonomous agents, which learn in isolation from their own signals, and interactive learning agents, which can be taught by social partners but generally lack autonomy. In between should stand teachable autonomous agents: agents that learn from both internal and teaching signals to benefit from the higher efficiency of assisted discovery processes. Designing such agents could result in progress in two ways. First, very concretely, it would offer non-expert users in the real world a way to drive the learning behavior of agents towards their expectations. Second, more fundamentally, it might be a key step towards endowing agents with the capabilities necessary to reach general intelligence. The purpose of this paper is to elucidate the key obstacles standing in the way of designing such agents. We proceed in four steps. First, we build on a seminal work of Bruner to extract relevant features of the assisted discovery processes happening between a child and a tutor. Second, we highlight how current research on intrinsically motivated agents is paving the way towards teachable and autonomous agents. In particular, we focus on autotelic agents, i.e. agents equipped with forms of intrinsic motivations that enable them to represent, self-generate and pursue their own goals. We argue that such autotelic capabilities on the learner's side are key in the discovery process. Third, we adopt a social learning perspective on the interaction between a tutor and a learner to highlight some components that are currently missing from these agents before they can be taught by ordinary people using natural pedagogy.
Finally, we provide a list of specific research questions that emerge from the perspective of extending these agents with assisted learning capabilities.
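A toy sketch of the core idea — an autotelic learner that self-generates goals from intrinsic motivation but can also accept a tutor's suggestion (assisted discovery) — might look like this. The goal space, the competence-based selection rule, and the update constant are all illustrative assumptions, not the paper's architecture.

```python
import random

class AutotelicAgent:
    """Minimal sketch: self-generates goals, but accepts tutor suggestions."""

    def __init__(self, goal_space, seed=0):
        self.goal_space = list(goal_space)
        # crude per-goal competence estimate, used as an intrinsic signal
        self.competence = {g: 0.0 for g in self.goal_space}
        self.rng = random.Random(seed)

    def select_goal(self, tutor_suggestion=None):
        # A teaching signal takes priority; otherwise self-generate a goal,
        # preferring goals with low estimated competence.
        if tutor_suggestion in self.competence:
            return tutor_suggestion
        return min(self.goal_space,
                   key=lambda g: (self.competence[g], self.rng.random()))

    def practice(self, goal, success):
        # Move the competence estimate toward the observed outcome.
        self.competence[goal] += 0.5 * (float(success) - self.competence[goal])

agent = AutotelicAgent(["stack_blocks", "open_door", "sort_shapes"])
g = agent.select_goal(tutor_suggestion="open_door")  # tutor-guided episode
agent.practice(g, success=True)
g2 = agent.select_goal()  # autonomous episode: picks a low-competence goal
```

The point of the sketch is the single interface: the same selection step serves both the "autonomous" regime (no suggestion) and the "taught" regime (suggestion honored), which is the in-between regime the paper argues for.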

    Towards machines that understand people

    Get PDF
    The ability to estimate the state of a human partner is an insufficient basis on which to build cooperative agents. Also needed is an ability to predict how people adapt their behavior in response to an agent's actions. We propose a new approach based on computational rationality, which models humans on the idea that predictions can be derived by computing policies that are approximately optimal given human-like bounds. Computational rationality brings together reinforcement learning and cognitive modeling in pursuit of this goal, facilitating machine understanding of humans.
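One common way to cash out "approximately optimal given human-like bounds" is a noisy-rational (softmax) policy over action utilities; the toy utilities and the inverse-temperature parameter below are illustrative assumptions, not the authors' model.

```python
import math

# Toy one-step task: assumed expected utilities of each action.
utilities = {"help_user": 1.0, "wait": 0.2, "interrupt": -0.5}

def noisy_rational_policy(utilities, beta=3.0):
    """Softmax over utilities; beta controls how close to optimal behavior is.

    beta -> infinity recovers the fully rational argmax policy;
    beta -> 0 recovers uniform random behavior.
    """
    z = sum(math.exp(beta * u) for u in utilities.values())
    return {a: math.exp(beta * u) / z for a, u in utilities.items()}

policy = noisy_rational_policy(utilities)
# Higher-utility actions get higher predicted probability, but never
# probability 1: the residual spread is the model's account of human bounds.
```

Under this reading, predicting a person's next action reduces to computing such a bounded-optimal policy for the task they face, which is where reinforcement learning machinery enters.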
