Logic, self-awareness and self-improvement: The metacognitive loop and the problem of brittleness

By Dr. Michael L. Anderson and Prof. Donald R. Perlis


This essay describes a general approach to building perturbation-tolerant autonomous systems, based on the conviction that artificial agents should be able to notice when something is amiss, assess the anomaly, and guide a solution into place. We call this basic strategy of self-guided learning the metacognitive loop; it involves the system monitoring, reasoning about, and, when necessary, altering its own decision-making components. In this essay, we (a) argue that equipping agents with a metacognitive loop can help to overcome the brittleness problem, (b) detail the metacognitive loop and its relation to our ongoing work on time-sensitive commonsense reasoning, (c) describe specific, implemented systems whose perturbation tolerance was improved by adding a metacognitive loop, and (d) outline both short-term and long-term research agendas.
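The note-assess-guide cycle of the metacognitive loop can be sketched in code. The following is a minimal illustrative sketch only, not the authors' implementation: the class name, the expectation-violation representation, and the repair strategy are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of the metacognitive loop (MCL) described in the
# abstract: notice an anomaly, assess it, guide a response into place.
# All names and the expectation-based anomaly model are illustrative,
# not taken from the paper.

class MetacognitiveLoop:
    def __init__(self, expectations):
        # expectations: mapping from observation key to expected value
        self.expectations = expectations
        self.repairs = []

    def note(self, observations):
        """Notice anomalies: observations that violate an expectation."""
        return {k: v for k, v in observations.items()
                if k in self.expectations and self.expectations[k] != v}

    def assess(self, anomalies):
        """Assess each anomaly (here, trivially labeled by type)."""
        return [(k, "expectation_violation") for k in anomalies]

    def guide(self, assessments):
        """Guide a response: record a repair for each assessed anomaly."""
        for key, _kind in assessments:
            self.repairs.append(f"revise expectation for {key}")
        return self.repairs

    def step(self, observations):
        # One pass through the loop: note -> assess -> guide.
        return self.guide(self.assess(self.note(observations)))


mcl = MetacognitiveLoop({"sensor": 1})
repairs = mcl.step({"sensor": 0, "other": 5})
```

In this toy version the "decision-making component" being altered is just a table of expectations; in the systems the essay describes, the same monitoring-and-repair pattern is applied to richer components such as dialogue managers and reinforcement learners.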

Topics: Artificial Intelligence
Year: 2005