
    Social influence, negotiation and cognition

    To understand how personal agreements can be generated within complexly differentiated social systems, we develop an agent-based computational model of negotiation in which social influence plays a key role in the attainment of social and cognitive integration. The model reflects a view of social influence that is predicated on the interactions among such factors as the agents' cognition, their abilities to initiate and maintain social behaviour, and the structural patterns of social relations in which influence unfolds. Findings from a set of computer simulations of the model show that the degree to which agents are influenced depends on the network of relations in which they are located, on the order in which interactions occur, and on the type of information that these interactions convey. We also find that a fundamental role in explaining influence is played by how inclined the agents are to be conciliatory with each other, how accurate their beliefs are, and how self-confident they are in dealing with their social interactions. Moreover, the model provides insights into the trade-offs typically involved in the exercise of social influence.
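    The kind of dynamics this abstract describes can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the paper's actual model: beliefs are collapsed to a single scalar, and the traits the abstract names (conciliatoriness, self-confidence) become simple damping weights on pairwise updates along network edges, with interaction order shuffled each round.

    ```python
    import random

    class Agent:
        """Hypothetical agent with a scalar belief and the traits the abstract names."""
        def __init__(self, belief, self_confidence, conciliatoriness):
            self.belief = belief                      # current opinion in [0, 1]
            self.self_confidence = self_confidence    # resistance to influence
            self.conciliatoriness = conciliatoriness  # willingness to move toward others

        def influenced_by(self, other):
            # Shift toward the other's belief, damped by self-confidence.
            weight = self.conciliatoriness * (1.0 - self.self_confidence)
            self.belief += weight * (other.belief - self.belief)

    def simulate(agents, edges, rounds, seed=0):
        """Play out pairwise influence along network edges in a shuffled order."""
        rng = random.Random(seed)
        for _ in range(rounds):
            order = edges[:]
            rng.shuffle(order)  # interaction order matters, per the findings
            for i, j in order:
                agents[i].influenced_by(agents[j])
                agents[j].influenced_by(agents[i])
        return [a.belief for a in agents]

    agents = [Agent(0.1, 0.2, 0.6), Agent(0.9, 0.8, 0.3), Agent(0.5, 0.5, 0.5)]
    edges = [(0, 1), (1, 2)]  # a line network: agents 0 and 2 never meet directly
    final = simulate(agents, edges, rounds=20)
    ```

    Even in this toy version, the levers the abstract identifies are visible: the self-confident, unconciliatory second agent barely moves while the others drift toward it, and changing the edge list or the random seed (i.e. the interaction order) changes where the group ends up.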

    Social Influence and the Generation of Joint Mental Attitudes in Multi-agent Systems

    This work examines the social structural and cognitive foundations of joint mental attitudes in complexly differentiated multi-agent systems, and incorporates insights from a variety of disciplines, including mainstream Distributed Artificial Intelligence, sociology, administrative science, social psychology, and organisational perspectives. At the heart of this work lies the understanding of the on-going processes by which socially and cognitively differentiated agents come to be socially and cognitively integrated. Here we claim that such understanding rests on the consideration of the nature of the influence processes that affect socialisation intensity. To this end, we provide a logic-based computational model of social influence and we undertake a set of virtual experiments to investigate whether and to what extent this process, when it is played out in a system of negotiating agents, results in a modification of the agents' mental attitudes and impacts on negotiation performance.

    Social Mental Shaping: Modelling the Impact of Sociality on Autonomous Agents' Mental States

    This paper presents a framework that captures how the social nature of agents that are situated in a multi-agent environment impacts upon their individual mental states. Roles and relationships provide an abstraction upon which we develop the notion of social mental shaping. This allows us to extend the standard Belief-Desire-Intention (BDI) model to account for how common social phenomena (e.g. cooperation, collaborative problem-solving and negotiation) can be integrated into a unified theoretical perspective that reflects a fully explicated model of the autonomous agent's mental state.
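    The core idea of roles shaping a BDI state can be sketched in a few lines. This is a loose illustration under my own assumptions, not the paper's formalism: mental attitudes are plain string sets, and "social mental shaping" is rendered as adopting a role that folds its induced attitudes into the agent's beliefs and intentions.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class MentalState:
        """Standard BDI triple; representation is illustrative only."""
        beliefs: set = field(default_factory=set)
        desires: set = field(default_factory=set)
        intentions: set = field(default_factory=set)

    @dataclass
    class Role:
        """A role carries mental attitudes its adopter is expected to take on."""
        name: str
        induced_beliefs: set = field(default_factory=set)
        induced_intentions: set = field(default_factory=set)

    def socially_shape(state: MentalState, role: Role) -> MentalState:
        """Adopting a role folds its attitudes into the agent's BDI state."""
        return MentalState(
            beliefs=state.beliefs | role.induced_beliefs,
            desires=set(state.desires),
            intentions=state.intentions | role.induced_intentions,
        )

    agent = MentalState(beliefs={"task_open"}, intentions={"finish_report"})
    collaborator = Role("collaborator", {"partner_needs_data"}, {"share_results"})
    shaped = socially_shape(agent, collaborator)
    ```

    The point of the abstraction is visible in the function signature: sociality enters the mental state through the role, rather than being bolted on to each individual belief or intention.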

    Reflective Artificial Intelligence

    As Artificial Intelligence (AI) technology advances, we increasingly delegate mental tasks to machines. However, today's AI systems usually do these tasks with an unusual imbalance of insight and understanding: new, deeper insights are present, yet many important qualities that a human mind would have previously brought to the activity are utterly absent. Therefore, it is crucial to ask which features of minds we have replicated, which are missing, and whether that matters. One core feature that humans bring to tasks, when dealing with the ambiguity, emergent knowledge, and social context presented by the world, is reflection. Yet this capability is completely missing from current mainstream AI. In this paper we ask what reflective AI might look like. Then, drawing on notions of reflection in complex systems, cognitive science, and agents, we sketch an architecture for reflective AI agents, and highlight ways forward.

    Artificial Cognition for Social Human-Robot Interaction: An Implementation

    © 2017 The Authors. Human–Robot Interaction challenges Artificial Intelligence in many regards: dynamic, partially unknown environments that were not originally designed for robots; a broad variety of situations with rich semantics to understand and interpret; physical interactions with humans that require fine, low-latency yet socially acceptable control strategies; natural and multi-modal communication, which mandates common-sense knowledge and the representation of possibly divergent mental models. This article is an attempt to characterise these challenges and to exhibit a set of key decisional issues that need to be addressed for a cognitive robot to successfully share space and tasks with a human. We first identify the needed individual and collaborative cognitive skills: geometric reasoning and situation assessment based on perspective-taking and affordance analysis; acquisition and representation of knowledge models for multiple agents (humans and robots, with their specificities); situated, natural and multi-modal dialogue; human-aware task planning; human–robot joint task achievement. The article discusses each of these abilities, presents working implementations, and shows how they combine in a coherent and original deliberative architecture for human–robot interaction. Supported by experimental results, we eventually show how explicit knowledge management, both symbolic and geometric, proves to be instrumental to richer and more natural human–robot interactions by pushing for pervasive, human-level semantics within the robot's deliberative system.
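    One ingredient this abstract mentions, the representation of possibly divergent mental models, can be sketched very simply. The sketch below is my own illustration, not the authors' implementation: symbolic knowledge is reduced to (subject, predicate, object) triples, and the robot keeps one belief set per agent so it can compute what a partner does not yet know.

    ```python
    class MultiAgentKB:
        """Hypothetical per-agent symbolic store: one belief model per agent."""
        def __init__(self):
            self.models = {}  # agent name -> set of (subject, predicate, object)

        def assert_fact(self, agent, triple):
            self.models.setdefault(agent, set()).add(triple)

        def believes(self, agent, triple):
            return triple in self.models.get(agent, set())

        def divergent_beliefs(self, a, b):
            """Facts agent a holds that agent b lacks: candidates to communicate."""
            return self.models.get(a, set()) - self.models.get(b, set())

    kb = MultiAgentKB()
    kb.assert_fact("robot", ("mug", "isOn", "table"))
    kb.assert_fact("robot", ("box", "isIn", "cupboard"))
    kb.assert_fact("human", ("mug", "isOn", "table"))  # the human saw the mug, not the box
    to_tell = kb.divergent_beliefs("robot", "human")
    ```

    Keeping the models separate, rather than assuming a shared world state, is what lets a deliberative layer decide to mention the box's location before asking the human to fetch it.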