11,791 research outputs found

    Contract as Deliberation

    Get PDF

    Building machines that learn and think about morality

    Get PDF
    Lake et al. propose three criteria which, they argue, will bring artificial intelligence (AI) systems closer to human cognitive abilities. In this paper, we explore the application of these criteria to a particular domain of human cognition: our capacity for moral reasoning. In doing so, we explore a set of considerations relevant to the development of AI moral decision-making. Our main focus is on the relation between dual-process accounts of moral reasoning and model-free/model-based forms of machine learning. We also discuss how work in embodied and situated cognition could provide a valuable perspective on future research
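    The model-free/model-based distinction that this abstract draws on can be made concrete with a small sketch. The following is a minimal, hypothetical illustration (not code from the paper): a model-free learner caches action values from sampled experience, while a model-based learner evaluates actions by planning over an explicit model of outcomes. The toy decision problem and parameter values are invented for illustration.

```python
import random

# Toy two-step decision: from state "s0", action "a" leads to outcome "s1"
# (reward 1.0) and action "b" leads to outcome "s2" (reward 0.0).
TRANSITIONS = {("s0", "a"): "s1", ("s0", "b"): "s2"}
REWARDS = {"s1": 1.0, "s2": 0.0}
ACTIONS = ["a", "b"]


def model_free_choice(episodes=500, alpha=0.1, epsilon=0.1):
    """Model-free learner: caches action values from sampled outcomes (habit-like)."""
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
        r = REWARDS[TRANSITIONS[("s0", a)]]
        q[a] += alpha * (r - q[a])  # incremental value update from experience
    return max(q, key=q.get)


def model_based_choice():
    """Model-based learner: evaluates actions by simulating an explicit world model."""
    return max(ACTIONS, key=lambda a: REWARDS[TRANSITIONS[("s0", a)]])


if __name__ == "__main__":
    print("model-free choice: ", model_free_choice())
    print("model-based choice:", model_based_choice())
```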

    Beliefs and Conflicts in a Real World Multiagent System

    Get PDF
    In a real-world multiagent system, where the agents are faced with partial, incomplete and intrinsically dynamic knowledge, conflicts are inevitable. Frequently, different agents have goals or beliefs that cannot hold simultaneously. Conflict resolution methodologies have to be adopted to overcome such undesirable occurrences. In this paper we investigate the application of distributed belief revision techniques to support conflict resolution in the analysis of the validity of the candidate beams to be produced in the CERN particle accelerators. This CERN multiagent system contains a higher-hierarchy agent, the Specialist agent, which makes use of meta-knowledge (on how the conflicting beliefs have been produced by the other agents) in order to detect which beliefs should be abandoned. Upon solving a conflict, the Specialist instructs the involved agents to revise their beliefs accordingly. Conflicts in the problem domain are mapped into conflicting beliefs of the distributed belief revision system, where they can be handled by proven formal methods. This technique builds on well-established concepts and combines them in a new way to solve important problems. We find this approach generally applicable in several domains
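    As an illustration of the meta-knowledge-driven resolution described above, here is a minimal, hypothetical sketch: a Specialist-style arbiter keeps the belief whose derivation it judges most credible and tells the other agents to revise. The agent names, credibility scores and beam proposition are invented for illustration; they are not the actual CERN system.

```python
from dataclasses import dataclass


@dataclass
class Belief:
    agent: str         # agent that produced the belief
    proposition: str   # e.g. "beam_42_valid"
    value: bool
    source: str        # meta-knowledge: how the belief was derived
    credibility: float  # assumed credibility attached to that derivation


def resolve_conflict(beliefs):
    """Specialist-style resolution: keep the most credibly derived belief,
    return the agents that must revise theirs."""
    keep = max(beliefs, key=lambda b: b.credibility)
    revise = [b.agent for b in beliefs if b is not keep]
    return keep, revise


if __name__ == "__main__":
    conflicting = [
        Belief("MonitorAgent", "beam_42_valid", True, "direct_measurement", 0.9),
        Belief("PlannerAgent", "beam_42_valid", False, "stale_schedule", 0.4),
    ]
    kept, to_revise = resolve_conflict(conflicting)
    print("kept:", kept.agent, kept.value, "| agents told to revise:", to_revise)
```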

    A society of mind approach to cognition and metacognition in a cognitive architecture

    Get PDF
    This thesis investigates the concept of mind as a control system using the "Society of Agents" metaphor. "Society of Agents" describes the collective behaviours of simple and intelligent agents. A "Society of Mind" is more than a collection of task-oriented and deliberative agents; it is a powerful concept for mind research and can benefit from the use of metacognition. The aim is to develop a self-configurable computational model using the concept of metacognition. A six-tiered SMCA (Society of Mind Cognitive Architecture) control model is designed that relies on a society of agents operating using metrics associated with the principles of artificial economics in animal cognition. This research investigates the concept of metacognition as a powerful catalyst for control, unification and self-reflection. Metacognition is used on BDI models with respect to planning, reasoning, decision making, self-reflection, problem solving, learning and the general process of cognition in order to improve performance. One perspective on how to develop metacognition in an SMCA model is based on the differentiation between metacognitive strategies and metacomponents, or metacognitive aids. Metacognitive strategies denote activities such as metacomprehension (remedial action), metamanagement (self-management) and schema training (meaningful learning over cognitive structures). Metacomponents are aids for the representation of thoughts. To develop an efficient, intelligent and optimal agent through the use of metacognition requires the design of a multi-layered control model which includes simple to complex levels of agent action and behaviour. This SMCA model has been designed and implemented with six layers: reflexive, reactive, deliberative (BDI), learning (Q-learner), metacontrol and metacognition
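    A layered control model of this kind can be sketched very compactly. The following is a hypothetical illustration in the spirit of the six-tier design named above (reflexive, reactive, deliberative, learning, metacontrol, metacognition); the layer contents are placeholders, not the thesis design, and the learning and metacognition layers are omitted for brevity.

```python
# Each layer inspects the percept and either proposes an action or stays silent.

def reflexive(percept):
    return "withdraw" if percept.get("pain") else None


def reactive(percept):
    return "approach_food" if percept.get("food_visible") else None


def deliberative(percept):
    # Stand-in for a BDI-style plan-selection step.
    return "plan_route_to_goal" if percept.get("goal") else None


def default_layer(percept):
    return "explore"


LAYERS = [reflexive, reactive, deliberative, default_layer]


def metacontrol(percept):
    """Metacontrol: use the first (cheapest) layer that yields an action, so
    expensive deliberation only runs when the lower layers stay silent."""
    for layer in LAYERS:
        action = layer(percept)
        if action is not None:
            return layer.__name__, action


if __name__ == "__main__":
    print(metacontrol({"food_visible": True}))
    print(metacontrol({"goal": "charge_battery"}))
```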

    Augmenting Agent Platforms to Facilitate Conversation Reasoning

    Full text link
    Within multi-agent systems, communication by means of Agent Communication Languages (ACLs) has a key role to play in the co-operation, co-ordination and knowledge-sharing between agents. Despite this, complex reasoning about agent messaging, and specifically about conversations between agents, tends not to have widespread support amongst general-purpose agent programming languages. ACRE (Agent Communication Reasoning Engine) aims to complement the existing logical reasoning capabilities of agent programming languages with the capability of reasoning about complex interaction protocols in order to facilitate conversations between agents. This paper outlines the aims of the ACRE project and gives details of the functioning of a prototype implementation within the Agent Factory multi-agent framework
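    Conversation reasoning over an interaction protocol can be pictured as tracking a finite state machine over ACL performatives. The sketch below is a generic, FIPA-request-like illustration of that idea; the protocol table and class are invented for this listing and are not the actual ACRE interface.

```python
# A request-style interaction protocol as a finite state machine:
# state -> {performative: next_state}
PROTOCOL = {
    "start":     {"request": "requested"},
    "requested": {"agree": "agreed", "refuse": "done"},
    "agreed":    {"inform": "done", "failure": "done"},
}


class Conversation:
    """Tracks one conversation's state against a protocol definition."""

    def __init__(self, protocol):
        self.protocol = protocol
        self.state = "start"
        self.history = []

    def advance(self, performative):
        allowed = self.protocol.get(self.state, {})
        if performative not in allowed:
            raise ValueError(f"'{performative}' not allowed in state '{self.state}'")
        self.history.append(performative)
        self.state = allowed[performative]
        return self.state


if __name__ == "__main__":
    c = Conversation(PROTOCOL)
    for msg in ["request", "agree", "inform"]:
        print(msg, "->", c.advance(msg))
```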

    Policy-based autonomic control service

    Get PDF
    Recently, there has been considerable interest in policy-based, goal-oriented service management and autonomic computing. Much work is still required to investigate designs, policy models and associated meta-reasoning systems for policy-based autonomic systems. In this paper we outline a proposed autonomic middleware control service used to orchestrate self-healing of distributed applications. Policies are used to adjust the system's autonomy and to define self-healing strategies that stabilize/correct a given system in the event of failures
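    Policy-driven self-healing of the kind outlined above can be sketched as a table of condition/action rules evaluated against failure events. The event types, thresholds and actions below are hypothetical placeholders, not the paper's actual policy model.

```python
POLICIES = [
    # (condition on the failure event, healing action)
    (lambda e: e["type"] == "service_crash" and e.get("restarts", 0) < 3, "restart_service"),
    (lambda e: e["type"] == "service_crash", "migrate_to_backup_host"),
    (lambda e: e["type"] == "high_latency", "scale_out_replica"),
]


def select_action(event):
    """Return the first healing action whose policy condition matches the event."""
    for condition, action in POLICIES:
        if condition(event):
            return action
    return "escalate_to_operator"


if __name__ == "__main__":
    print(select_action({"type": "service_crash", "restarts": 1}))
    print(select_action({"type": "service_crash", "restarts": 5}))
    print(select_action({"type": "disk_full"}))
```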

    Automated Negotiation for Provisioning Virtual Private Networks Using FIPA-Compliant Agents

    No full text
    This paper describes the design and implementation of negotiating agents for the task of provisioning virtual private networks. The agents and their interactions comply with the FIPA specification and they are implemented using the FIPA-OS agent framework. Particular attention is focused on the design and implementation of the negotiation algorithms
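    As a rough picture of what a single-issue negotiation algorithm for such provisioning might look like, here is a hypothetical sketch of a time-dependent concession tactic over a price per unit of VPN bandwidth. The tactic, reservation values and deadline are assumptions for illustration, not the specific algorithm from the paper.

```python
def concession_offer(t, deadline, initial, reservation, beta=1.0):
    """Offer at round t: move from the initial value toward the reservation
    value as the deadline approaches (polynomial time-dependent tactic)."""
    frac = min(t / deadline, 1.0) ** beta
    return initial + frac * (reservation - initial)


def negotiate(deadline=10):
    buyer_res, seller_res = 120.0, 80.0  # reservation prices (assumed)
    for t in range(deadline + 1):
        buyer_bid = concession_offer(t, deadline, initial=60.0, reservation=buyer_res)
        seller_ask = concession_offer(t, deadline, initial=150.0, reservation=seller_res)
        if buyer_bid >= seller_ask:  # offers cross: agreement reached
            return t, round((buyer_bid + seller_ask) / 2, 2)
    return None  # deadline reached without agreement


if __name__ == "__main__":
    print("agreement (round, price):", negotiate())
```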

    Grit

    Get PDF
    Many of our most important goals require months or even years of effort to achieve, and some never get achieved at all. As social psychologists have lately emphasized, success in pursuing such goals requires the capacity for perseverance, or "grit." Philosophers have had little to say about grit, however, insofar as it differs from more familiar notions of willpower or continence. This leaves us ill-equipped to assess the social and moral implications of promoting grit. We propose that grit has an important epistemic component, in that failures of perseverance are often caused by a significant loss of confidence that one will succeed if one continues to try. Correspondingly, successful exercises of grit often involve a kind of epistemic resilience in the face of failure, injury, rejection, and other setbacks that constitute genuine evidence that success is not forthcoming. Given this, we discuss whether and to what extent displays of grit can be epistemically as well as practically rational. We conclude that they can be (although many are not), and that the rationality of grit will depend partly on features of the context the agent normally finds herself in. In particular, grit-friendly norms of deliberation might be irrational to use in contexts of severe material scarcity or oppression
    • …