
    Self Monitoring Goal Driven Autonomy Agents

    The growing abundance of autonomous systems is driving the need for robust performance. Most current systems are not fully autonomous and often fail when placed in real environments. Via self-monitoring, agents can identify when their own, or externally given, boundaries are violated, thereby increasing their performance and reliability. Specifically, self-monitoring is the identification of unexpected situations that either (1) prohibit the agent from reaching its goal(s) or (2) result in the agent acting outside of its boundaries. Increasingly complex and open environments warrant the use of such robust autonomy (e.g., self-driving cars, delivery drones, and all types of future digital and physical assistants). The techniques presented herein advance the state of the art in self-monitoring, demonstrating improved performance in a variety of challenging domains. In such domains, it is impossible to plan for all possible situations. In many cases, not all aspects of a domain are known beforehand, and, even if they were, the cost of encoding them would be high. Self-monitoring agents are able to identify and then respond to previously unexpected, or never-before-encountered, situations. When dealing with unknown situations, one must start with expected behavior and use it to derive unexpected behavior. The representation of expectations will vary among domains: in a real-time strategy game like StarCraft, it could be logically inferred concepts; in a Mars rover domain, it could be an accumulation of actions' effects. Nonetheless, explicit expectations are necessary to identify the unexpected. This thesis lays the foundation for self-monitoring in goal-driven autonomy agents, both in rich and expressive domains and in partially observable domains. We introduce multiple techniques for handling such environments. We show how inferred expectations are needed to enable high-level planning in real-time strategy games. We show how a hierarchical structure of Goal-driven Autonomy (GDA) enables agents to operate within large state spaces. Within Hierarchical Task Network planning, we show how informed expectations identify states that are likely to prevent an agent from reaching its goals in dynamic domains. Finally, we give a model of expectations for self-monitoring at the metacognitive level, and empirical results of agents equipped with and without metacognitive expectations.
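The abstract's notion of expectations as "an accumulation of actions' effects" can be sketched concretely: the agent projects an expected state by applying each planned action's effects, then flags any mismatch with the observed state as a discrepancy. The action model, predicates, and plan below are hypothetical illustrations, not taken from the thesis; a minimal sketch, assuming STRIPS-style add/delete effects:

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    add: set = field(default_factory=set)      # facts the action makes true
    delete: set = field(default_factory=set)   # facts the action makes false

def expected_state(initial, plan):
    """Accumulate each action's effects to derive the expected final state."""
    state = set(initial)
    for act in plan:
        state -= act.delete
        state |= act.add
    return state

def detect_discrepancy(expected, observed):
    """Any difference between expectation and observation is unexpected."""
    return expected.symmetric_difference(observed)

# Hypothetical rover plan: move to a site, then collect a sample.
plan = [
    Action("move(a,b)", add={"at(b)"}, delete={"at(a)"}),
    Action("sample(b)", add={"holding(sample)"}),
]
exp = expected_state({"at(a)"}, plan)
obs = {"at(b)"}  # gripper failed: no sample was actually collected
print(detect_discrepancy(exp, obs))  # {'holding(sample)'}
```

A self-monitoring agent would treat a non-empty discrepancy set as the trigger for GDA goal reasoning (explain the anomaly, then formulate or revise goals), rather than blindly continuing the plan.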

    Guidance and social justice: an analysis of school-dropout risk and anticipatory thinking about the future in a group of young adolescents (Orientamento e giustizia sociale: analisi del rischio di dispersione scolastica e i pensieri anticipatori sul futuro in un gruppo di giovani adolescenti)

    This thesis analyzes the relationship between poverty, deviant behavior, and school dropout, and the influence of anticipatory thinking on adolescents' preparation for the future. The work draws on the Life Design perspective and on inclusive, sustainable career guidance. The sample consists of 79 upper-secondary-school students.

    Goal Operations for Cognitive Systems

    Cognitive agents operating in complex and dynamic domains benefit from significant goal management. Operations on goals include formulation, selection, change, monitoring, and delegation, in addition to goal achievement. Here we model these operations as transformations on goals. An agent may observe events that affect its ability to achieve its goals; goal transformations therefore allow unachievable goals to be converted into similar achievable goals. This paper examines an implementation of goal change within a cognitive architecture. We introduce goal transformation at the metacognitive level as well as in an automated planner, and discuss the costs and benefits of each approach. We evaluate goal change in the MIDCA architecture using a resource-restricted planning domain, demonstrating a performance benefit due to goal operations.
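The core idea, converting an unachievable goal into a similar achievable one, can be illustrated with a toy transformation loop. The goal representation, cost model, and relaxation rule below are hypothetical, chosen only to mirror the paper's resource-restricted setting; a minimal sketch:

```python
def achievable(goal, resources):
    """Simplified achievability test: resources must cover the goal's cost."""
    return goal["cost"] <= resources

def transform(goal):
    """Relax an unachievable goal into a similar, cheaper variant
    by reducing the requested quantity by one."""
    if goal["quantity"] > 1:
        return {**goal,
                "quantity": goal["quantity"] - 1,
                "cost": goal["cost"] - goal["unit_cost"]}
    return None  # no further relaxation possible

def select_goal(goal, resources):
    """Apply transformations until the goal is achievable or abandoned."""
    while goal is not None and not achievable(goal, resources):
        goal = transform(goal)
    return goal

g = {"predicate": "deliver", "quantity": 3, "unit_cost": 4, "cost": 12}
print(select_goal(g, resources=8))  # quantity relaxed from 3 to 2, cost 8
```

In an architecture like MIDCA, such a transformation could run either at the metacognitive level (reasoning about the goal before planning) or inside the planner itself when plan generation fails; the paper compares the costs and benefits of those two placements.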