5 research outputs found

    Clashes in the Infosphere, General Intelligence, and Metacognition: Final project report

    Humans confront the unexpected every day, deal with it, and often learn from it. AI agents, on the other hand, are typically brittle: they tend to break down as soon as something happens that their creators did not anticipate. The central focus of our research project is this problem of brittleness, which may be the single most important problem facing AI research. Our approach to brittleness is to model a common method that humans use to deal with the unexpected, namely to note occurrences of the unexpected (i.e., anomalies), to assess any problem signaled by the anomaly, and then to guide a response or solution that resolves it. The result is the Note-Assess-Guide procedure of what we call the Metacognitive Loop (MCL). To this end, we have implemented MCL-based systems that enable agents to help themselves: they establish expectations and monitor them, note failed expectations, assess their causes, and then choose appropriate responses. Activities for this project have developed and refined a human-dialog agent and a robot navigation system to test the generality of this approach.
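    The Note-Assess-Guide cycle described in the abstract can be sketched as three small functions chained in a loop. This is an illustrative sketch only: the expectation names, the cause table, and the responses below are invented for the example and are not taken from the MCL implementation.

```python
# Hypothetical sketch of a Note-Assess-Guide cycle.
# All names (Expectation, mcl_step, the cause/response tables) are
# illustrative assumptions, not part of the actual MCL codebase.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Expectation:
    name: str
    check: Callable[[dict], bool]   # returns True while the expectation holds

def note(expectations, observation):
    """Note: detect anomalies as violated expectations."""
    return [e for e in expectations if not e.check(observation)]

def assess(anomalies):
    """Assess: map each anomaly to a candidate cause (toy lookup table)."""
    causes = {"battery_ok": "power_fault", "on_course": "sensor_drift"}
    return [causes.get(a.name, "unknown") for a in anomalies]

def guide(diagnoses):
    """Guide: choose a response that addresses each diagnosed cause."""
    responses = {"power_fault": "recharge", "sensor_drift": "recalibrate",
                 "unknown": "ask_for_help"}
    return [responses[d] for d in diagnoses]

def mcl_step(expectations, observation):
    """One pass of the metacognitive loop over the latest observation."""
    return guide(assess(note(expectations, observation)))

# A robot whose battery reading violates one of its expectations:
exps = [Expectation("battery_ok", lambda obs: obs["battery"] > 0.2),
        Expectation("on_course", lambda obs: obs["heading_err"] < 5)]
print(mcl_step(exps, {"battery": 0.1, "heading_err": 2}))  # ['recharge']
```

    The key design point the abstract emphasizes is that the agent monitors its own expectations rather than relying on its designers to enumerate every failure mode in advance.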

    A Metamemory Model for an Intelligent Tutoring System

    Metamemory refers to the processes involved in self-regulation or self-awareness of memory. In this paper we describe a novel rule-based metamemory architecture named M2-Acch. M2-Acch consists of a cycle of reasoning about events that occur in long-term memory (LTM) in an intelligent tutoring system. M2-Acch is composed of a three-layer structure: a static layer, a functional layer, and an information layer. The structural components of each layer are described using formal definitions. M2-Acch uses confidence judgments to recommend search strategies for adapting to changes in information retrieval constraints. An intelligent tutoring system named FUNPRO was implemented and validated. The results of the experimental tests show that M2-Acch can be used as a valid tool for adapting to such changes.
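    The idea of using confidence judgments to pick a search strategy can be illustrated with a minimal sketch. The thresholds and strategy names below are assumptions made for illustration; the paper's actual rules and layers are not reproduced here.

```python
# Illustrative sketch of confidence-driven strategy selection, loosely in
# the spirit of the M2-Acch cycle. Thresholds and strategy names are
# invented for the example, not taken from the paper.

def recommend_strategy(confidence: float) -> str:
    """Map a confidence judgment about an LTM retrieval to a search strategy."""
    if confidence >= 0.8:
        return "direct_retrieval"    # trust the stored answer as-is
    if confidence >= 0.4:
        return "cued_search"         # retry retrieval with additional cues
    return "exhaustive_search"       # fall back to a broad search

print(recommend_strategy(0.9))  # direct_retrieval
print(recommend_strategy(0.2))  # exhaustive_search
```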

    Analysis of models and metacognitive architectures in intelligent systems

    Recently, Intelligent Systems (IS) have greatly increased the autonomy of their decisions; this has been achieved by improving metacognitive skills. The term metacognition in Artificial Intelligence (AI) refers to the capability of an IS to monitor and control its own learning processes. This paper describes different models used to implement metacognition in IS. We then present a comparative analysis among the different models of metacognition, together with a discussion organized around the following categories of analysis: types of metacognition, architectural support of metacognition, components, architectural cores, and computational implementations.

    Goal Reasoning: Papers from the ACS workshop

    This technical report contains the 11 accepted papers presented at the Workshop on Goal Reasoning, which was held as part of the 2013 Conference on Advances in Cognitive Systems (ACS-13) in Baltimore, Maryland on 14 December 2013. This is the third in a series of workshops related to this topic, the first of which was the AAAI-10 Workshop on Goal-Directed Autonomy, while the second was the Self-Motivated Agents (SeMoA) Workshop, held at Lehigh University in November 2012. Our objective for holding this meeting was to encourage researchers to share information on the study, development, integration, evaluation, and application of techniques related to goal reasoning, which concerns the ability of an intelligent agent to reason about, formulate, select, and manage its goals/objectives. Goal reasoning differs from frameworks in which agents are told what goals to achieve (and possibly how goals can be decomposed into subgoals) but cannot dynamically and autonomously decide what goals they should pursue. That constraint can be limiting for agents that solve tasks in complex environments, where it is not feasible to manually engineer or encode complete knowledge of which goal(s) should be pursued in every conceivable state. Yet, in such environments, states can be reached in which actions can fail, opportunities can arise, and events can otherwise take place that strongly motivate changing the goal(s) that the agent is currently trying to achieve. This topic is not new; researchers in several areas have studied goal reasoning (e.g., in the context of cognitive architectures, automated planning, game AI, and robotics). However, it has infrequently been the focus of intensive study, and (to our knowledge) no other series of meetings has focused specifically on goal reasoning. As shown in these papers, providing an agent with the ability to reason about its goals can increase performance measures for some tasks.
Recent advances in hardware and software platforms (involving the availability of interesting/complex simulators or databases) have increasingly permitted the application of intelligent agents to tasks that involve partially observable and dynamically-updated states (e.g., due to unpredictable exogenous events), stochastic actions, multiple (cooperating, neutral, or adversarial) agents, and other complexities. Thus, this is an appropriate time to foster dialogue among researchers with interests in goal reasoning. Research on goal reasoning is still in its early stages; no mature application of it yet exists (e.g., for controlling autonomous unmanned vehicles or in a deployed decision aid). However, it appears to have a bright future. For example, leaders in the automated planning community have specifically acknowledged that goal reasoning has a prominent role among intelligent agents that act on their own plans, and it is gathering increasing attention from roboticists and cognitive systems researchers. In addition to a survey, the papers in this workshop relate to, among other topics, cognitive architectures and models, environment modeling, game AI, machine learning, meta-reasoning, planning, self-motivated systems, simulation, and vehicle control. The authors discuss a wide range of issues pertaining to goal reasoning, including representations and reasoning methods for dynamically revising goal priorities. We hope that readers will find this theme for enhancing agent autonomy appealing and relevant to their own interests, and that these papers will spur further investigation of this important yet (mostly) understudied topic.
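The contrast the report draws, between agents handed a fixed goal and agents that formulate, select, and manage their own goals, can be sketched with a toy priority-based goal manager. Everything here (the class, goal names, priorities, and the event rule) is a hypothetical illustration, not a system from the workshop papers.

```python
# Minimal, hypothetical illustration of goal reasoning: the agent keeps a
# prioritized goal set and can formulate a new, more urgent goal when an
# unexpected exogenous event occurs. All names and priorities are invented.
import heapq

class GoalReasoner:
    def __init__(self):
        self._heap = []  # entries are (negative priority, goal name)

    def formulate(self, goal: str, priority: float):
        """Formulate: add a goal with the given priority."""
        heapq.heappush(self._heap, (-priority, goal))

    def select(self) -> str:
        """Select: return the current highest-priority goal."""
        return self._heap[0][1]

    def on_event(self, event: str):
        """Manage: an exogenous event can motivate a new, more urgent goal."""
        if event == "intruder_detected":
            self.formulate("investigate_intruder", priority=10.0)

agent = GoalReasoner()
agent.formulate("patrol_area", priority=1.0)
print(agent.select())              # patrol_area
agent.on_event("intruder_detected")
print(agent.select())              # investigate_intruder
```

In a framework without goal reasoning, `on_event` would not exist: the agent would keep pursuing the goal it was told to achieve regardless of what happens in the environment.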

    Goal Reasoning: Papers from the ACS Workshop

    This technical report contains the 14 accepted papers presented at the Workshop on Goal Reasoning, which was held as part of the 2015 Conference on Advances in Cognitive Systems (ACS-15) in Atlanta, Georgia on 28 May 2015. This is the fourth in a series of workshops related to this topic, the first of which was the AAAI-10 Workshop on Goal-Directed Autonomy; the second was the Self-Motivated Agents (SeMoA) Workshop, held at Lehigh University in November 2012; and the third was the Goal Reasoning Workshop at ACS-13 in Baltimore, Maryland in December 2013.