
    Social influence, negotiation and cognition

    To understand how personal agreements can be generated within complexly differentiated social systems, we develop an agent-based computational model of negotiation in which social influence plays a key role in the attainment of social and cognitive integration. The model reflects a view of social influence that is predicated on interactions among factors such as the agents' cognition, their abilities to initiate and maintain social behaviour, and the structural patterns of social relations in which influence unfolds. Findings from a set of computer simulations of the model show that the degree to which agents are influenced depends on the network of relations in which they are located, on the order in which interactions occur, and on the type of information that these interactions convey. We also find that a fundamental role in explaining influence is played by how inclined the agents are to be conciliatory with each other, how accurate their beliefs are, and how self-confident they are in dealing with their social interactions. Moreover, the model provides insights into the trade-offs typically involved in the exercise of social influence.
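
    As a rough illustration of the kind of dynamic this abstract describes, the following is a minimal sketch of a pairwise-influence simulation; the Agent class, its parameters (conciliation, confidence), and the update rule are illustrative assumptions, not the authors' actual model.

```python
import random

class Agent:
    def __init__(self, belief, conciliation, confidence):
        self.belief = belief              # current opinion in [0, 1]
        self.conciliation = conciliation  # willingness to move toward others
        self.confidence = confidence      # resistance to being influenced

    def interact(self, other):
        # Shift toward the partner's belief; more conciliatory and less
        # self-confident agents move further (an assumed update rule).
        weight = self.conciliation * (1.0 - self.confidence)
        self.belief += weight * (other.belief - self.belief)

def simulate(edges, agents, steps, rng):
    # Interaction order matters, as the abstract emphasizes, so shuffle
    # the network edges each round and apply influence sequentially.
    for _ in range(steps):
        rng.shuffle(edges)
        for i, j in edges:
            agents[i].interact(agents[j])
            agents[j].interact(agents[i])
    return [round(a.belief, 3) for a in agents]

rng = random.Random(42)
agents = [Agent(rng.random(), 0.5, rng.random()) for _ in range(5)]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # a ring network
print(simulate(edges, agents, steps=20, rng=rng))
```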

    Rational Agents: Prioritized Goals, Goal Dynamics, and Agent Programming Languages with Declarative Goals

    I introduce a specification language for modeling an agent's prioritized goals and their dynamics. I use the situation calculus along with Reiter's solution to the frame problem and predicates for describing agents' knowledge as my base formalism. I further enhance this language by introducing a new sort of infinite paths. Within this language, I discuss how to systematically specify prioritized goals and how to precisely describe the effects of actions on these goals. These actions include the adoption and dropping of goals and subgoals. In this framework, an agent's intentions are formally specified as the prioritized intersection of her goals. The "prioritized" qualifier means that the specification must respect the priority ordering of goals when choosing between two incompatible goals. I ensure that the agent's intentions are always consistent with each other and with her knowledge. I investigate two variants with different commitment strategies. Agents specified using the "optimizing" agent framework always try to optimize their intentions, while those specified in the "committed" agent framework will stick to their intentions even if opportunities to commit to higher-priority goals arise when these goals are incompatible with their current intentions. For both variants, I study properties of prioritized goals and goal change. I also give a definition of subgoals, and prove properties about the goal-subgoal relationship. As an application, I develop a model for a Simple Rational Agent Programming Language (SR-APL) with declarative goals. SR-APL is based on the "committed agent" variant of this rich theory, and combines elements from Belief-Desire-Intention (BDI) APLs and the situation-calculus-based ConGolog APL. Thus SR-APL supports prioritized goals and is grounded in a formal theory of goal change. It ensures that the agent's declarative goals and adopted plans are consistent with each other and with her knowledge. In doing this, I try to bridge the gap between agent theories and practical agent programming languages by providing a model and specification of an idealized BDI agent whose behavior is closer to what a rational agent does. I show that agents programmed in SR-APL satisfy some key rationality requirements.
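
    To make the notion of a "prioritized intersection" concrete, here is a toy sketch of the idea under my own assumptions; the incompatible predicate stands in for the logical consistency check that the thesis formalizes in the situation calculus.

```python
def prioritized_intersection(goals, incompatible):
    """goals: list of (priority, name), lower number = higher priority.
    Walk from highest to lowest priority, keeping each goal only if it
    is consistent with every goal already adopted."""
    adopted = []
    for _, goal in sorted(goals):
        if all(not incompatible(goal, g) for g in adopted):
            adopted.append(goal)
    return adopted

# Example: "attend_conference" and "save_budget" conflict; priority decides.
conflicts = {frozenset({"attend_conference", "save_budget"})}
incompatible = lambda a, b: frozenset({a, b}) in conflicts
goals = [(1, "finish_thesis"), (2, "attend_conference"), (3, "save_budget")]
print(prioritized_intersection(goals, incompatible))
# -> ['finish_thesis', 'attend_conference']  (save_budget is dropped)
```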

    GROVE: A computationally grounded model for rational intention revision in BDI agents

    A fundamental aspect of Belief-Desire-Intention (BDI) agents is intention revision. Agents revise their intentions in order to maintain consistency between their intentions and beliefs, and consistency among their intentions. A rational agent must also account for the optimality of its intentions in the case of revision. To that end I present GROVE, a model of rational intention revision for BDI agents. The semantics of a GROVE agent is defined in terms of constraints and preferences on possible future executions of an agent's plans. I show that GROVE is weakly rational in the sense of Grant et al. and imposes more constraints on executions than the operational semantics for goal lifecycles proposed by Harland et al. As it may not be computationally feasible to consider all possible future executions, I propose a bounded version of GROVE that samples the set of future executions, and state conditions under which bounded GROVE commits to a rational execution.
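
    The bounded variant suggests a simple sampling scheme. The sketch below is a minimal reading of that idea, not the GROVE semantics itself; generate_execution, satisfies, and preference are hypothetical stand-ins for the model's plan executions, hard constraints, and preferences.

```python
import random

def bounded_select(generate_execution, satisfies, preference, samples, rng):
    # Sample a fixed number of future executions rather than enumerating
    # them all, discard those violating hard constraints, and commit to
    # the most preferred surviving execution.
    candidates = [generate_execution(rng) for _ in range(samples)]
    feasible = [e for e in candidates if satisfies(e)]
    if not feasible:
        return None  # no rational execution found within the sample budget
    return max(feasible, key=preference)

# Toy domain: an "execution" is a sequence of plan steps.
rng = random.Random(0)
gen = lambda r: [r.choice(["a", "b", "c"]) for _ in range(4)]
satisfies = lambda e: e.count("c") <= 1   # hard constraint on executions
preference = lambda e: e.count("a")       # prefer executions with more "a"
print(bounded_select(gen, satisfies, preference, samples=50, rng=rng))
```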

    Procedural-Reasoning Architecture for Applied Behavior Analysis-based Instructions

    Autism Spectrum Disorder (ASD) is a complex developmental disability affecting as many as 1 in every 88 children. While there is no known cure for ASD, there are behavioral and developmental interventions, with demonstrated efficacy, that have become the predominant treatments for improving social, adaptive, and behavioral functions in children. Applied Behavior Analysis (ABA)-based early childhood interventions are evidence-based, efficacious therapies for autism that are widely recognized as effective approaches to remediation of the symptoms of ASD. They are, however, labor intensive and consequently often inaccessible at the recommended levels. Recent advancements in socially assistive robotics and applications of virtual intelligent agents have shown that children with ASD accept intelligent agents as effective and often preferred substitutes for human therapists. This research is nascent and highly experimental, with no unifying, interdisciplinary, and integral approach to the development of intelligent-agent-based therapies, especially not in the area of behavioral interventions. Motivated by the absence of such a unifying framework, we developed a conceptual procedural-reasoning agent architecture (PRA-ABA) that, we propose, could serve as a foundation for ABA-based assistive technologies involving virtual, mixed, or embodied agents, including robots. This architecture and the related research presented in this dissertation encompass two main areas: (a) knowledge representation and a computational model of the behavioral aspects of ABA as applicable to autism intervention practices, and (b) an abstract architecture for multi-modal, agent-mediated implementation of these practices.
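
    Purely as an illustration of the kind of ABA-style procedure such an architecture might mediate, the sketch below runs a single discrete trial with escalating prompt levels; it is not the PRA-ABA architecture, and all names here are assumptions.

```python
# Hypothetical prompt hierarchy, from least to most intrusive.
PROMPT_LEVELS = ["independent", "gestural", "verbal", "physical"]

def discrete_trial(present, observe, reinforce):
    # Present the instruction at each prompt level in turn; reinforce the
    # first correct response, otherwise escalate the prompt.
    for prompt in PROMPT_LEVELS:
        present(prompt)
        if observe():            # correct response at this prompt level?
            reinforce(prompt)    # deliver reinforcement, record the trial
            return prompt
    return None                  # trial failed at all prompt levels

# Toy run: the learner succeeds once the verbal prompt is reached.
responses = iter([False, False, True])
result = discrete_trial(
    present=lambda p: print(f"prompting: {p}"),
    observe=lambda: next(responses),
    reinforce=lambda p: print(f"reinforcing success at: {p}"),
)
print("succeeded with prompt:", result)
```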

    Reflective Artificial Intelligence

    As Artificial Intelligence (AI) technology advances, we increasingly delegate mental tasks to machines. However, today's AI systems usually perform these tasks with an unusual imbalance of insight and understanding: new, deeper insights are present, yet many important qualities that a human mind would previously have brought to the activity are utterly absent. It is therefore crucial to ask which features of minds we have replicated, which are missing, and whether that matters. One core feature that humans bring to tasks, when dealing with the ambiguity, emergent knowledge, and social context presented by the world, is reflection. Yet this capability is completely missing from current mainstream AI. In this paper we ask what reflective AI might look like. Then, drawing on notions of reflection in complex systems, cognitive science, and agents, we sketch an architecture for reflective AI agents and highlight ways forward.
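
    The paper only sketches an architecture, but the core idea of a reflective loop can be illustrated as follows; the hooks act, evaluate, and revise are my assumptions, not the authors' design.

```python
def reflective_loop(act, evaluate, revise, tasks, reflect_every=3):
    # Alongside the ordinary act cycle, the agent periodically steps back,
    # examines its recent performance, and revises its own policy.
    history = []
    for step, task in enumerate(tasks, start=1):
        history.append(act(task))           # ordinary task execution
        if step % reflect_every == 0:
            assessment = evaluate(history)  # step back and self-assess
            revise(assessment)              # adjust policy before continuing
    return history

# Toy usage: "acting" is scoring tasks; reflection reports a running mean.
history = reflective_loop(
    act=lambda t: t * 2,
    evaluate=lambda h: sum(h) / len(h),
    revise=lambda avg: print(f"reflection: mean outcome so far = {avg:.1f}"),
    tasks=range(1, 7),
)
print(history)
```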

    A Multi-Agent System framework to support the decision-making in complex real-world domains

    The aim of this work was to develop a framework capable of supporting the decision-making process in complex real-world domains, such as the environmental, industrial, or medical domains, using a Multi-Agent approach with Rule-based Reasoning. The framework was validated in the environmental domain, particularly in the area of river basins.
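
    As a hedged illustration of rule-based reasoning in such a framework, the sketch below forward-chains condition/conclusion rules over a fact base; the river-basin rules are invented for the example and are not taken from the thesis.

```python
def forward_chain(facts, rules):
    """rules: list of (premises, conclusion); fire rules until no rule
    adds a new fact (a fixpoint is reached)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Toy river-basin example inspired by the validation domain mentioned above.
rules = [
    (["heavy_rain", "saturated_soil"], "flood_risk"),
    (["flood_risk"], "issue_warning"),
]
print(forward_chain(["heavy_rain", "saturated_soil"], rules))
```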

    Simplifying the development of intelligent agents

    Intelligent agents are a powerful Artificial Intelligence technology that shows considerable promise as a new paradigm for mainstream software development. Despite this promise, however, intelligent agents are still scarce in the marketplace. A key reason is that developing intelligent agent software requires significant training and skill: a typical developer or undergraduate struggles to develop good agent systems using the Belief Desire Intention (BDI) model (or similar models). This paper identifies the concept set which we have found to be important in developing intelligent agent systems, and the relationships between these concepts. This concept set was developed with the intention of being clearer, simpler, and easier to use than current approaches. We also briefly describe a (very simplified) example from one of the projects we have worked on (RoboRescue), illustrating the way in which these concepts are important in designing and developing intelligent software agents.
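
    To illustrate the flavor of the concepts involved (beliefs, goals, plans with context conditions), here is a minimal sketch under my own assumptions; it is not the paper's concept set, and the rescue example only loosely echoes the RoboRescue domain mentioned above.

```python
def select_plan(goal, beliefs, plan_library):
    # Pick the first plan for the goal whose context condition holds in
    # the current beliefs, in the style of a BDI plan-selection step.
    for context, steps in plan_library.get(goal, []):
        if context <= beliefs:        # plan is applicable in this context
            return steps
    return None

beliefs = {"victim_located", "path_clear"}
plan_library = {
    "rescue_victim": [
        ({"victim_located", "path_clear"}, ["navigate", "carry_out"]),
        ({"victim_located"}, ["request_clearing", "navigate", "carry_out"]),
    ],
}
print(select_plan("rescue_victim", beliefs, plan_library))
# -> ['navigate', 'carry_out']
```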

    Reflections on the EPSRC Principles of Robotics from the New Far-Side of the Law

    The thought-provoking EPSRC Principles of Robotics stem largely from reflection on the extent to which robots can affect our lives. These comments highlight the fact that, while the Principles may address the present technological challenges to a good extent, they appear less immediately suited to future technological and conceptual challenges. The first part of the paper is dedicated to the search for a definition of what a robot is. Such a definition should offer the basic conceptual platform on which a normative endeavour, aiming to regulate robots in society, should be based. Concluding that the Principles offer no clear yet flexible insight into such a (meta-)definition, which would allow one to take into account the parameters of informed technological imagination and of envisaged social transformation, the second half of the paper highlights a number of regulatory points of tension. Such tensions, it is argued, stem largely from the absence of an appropriate conceptual platform, negatively influencing the extent to which the Principles can be effective in guiding social, ethical, legal, and scientific conduct.