
    The social brain: allowing humans to boldly go where no other species has been

    The biological basis of complex human social interaction and communication has been illuminated through a coming together of various methods and disciplines. Among these are comparative studies of other species, studies of disorders of social cognition and developmental psychology. The use of neuroimaging and computational models has given weight to speculations about the evolution of social behaviour and culture in human societies. We highlight some networks of the social brain relevant to two-person interactions and consider the social signals between interacting partners that activate these networks. We make a case for distinguishing between signals that automatically trigger interaction and cooperation and ostensive signals that are used deliberately. We suggest that this ostensive signalling is needed for ‘closing the loop’ in two-person interactions, where the partners each know that they have the intention to communicate. The use of deliberate social signals can serve to increase reputation and trust and facilitates teaching. This is likely to be a critical factor in the steep cultural ascent of mankind.

    Fuzzy argumentation for trust

    In an open Multi-Agent System, the goals of agents acting on behalf of their owners often conflict with each other. Therefore, a personal agent protecting the interest of a single user cannot always rely on other agents. Consequently, such a personal agent needs to be able to reason about trusting (information or services provided by) other agents. Existing algorithms that perform such reasoning mainly focus on the immediate utility of a trusting decision, but do not provide an explanation of their actions to the user. This may hinder the acceptance of agent-based technologies in sensitive applications where users need to rely on their personal agents. Against this background, we propose a new approach to trust based on argumentation that aims to expose the rationale behind such trusting decisions. Our solution features a separation of opponent modeling and decision making. It uses possibilistic logic to model the behavior of opponents, and we propose an extension of the argumentation framework by Amgoud and Prade to use the fuzzy rules within these models for well-supported decisions.
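
    A minimal sketch, in Python, of the kind of reasoning described here: fuzzy rules over observed behaviour produce weighted arguments for and against trusting an opponent, and the stronger side wins, leaving a rationale that can be shown to the user. The observations, rules and weights are invented for illustration; this is not the paper's actual framework.

        # Illustrative fuzzy-rule trust scoring; rules and weights are hypothetical.
        # Observed degrees of the opponent's behaviour, in [0, 1].
        observations = {"kept_promises": 0.9, "gave_stale_info": 0.4}

        # Fuzzy rules: (observation, argues_for_trust, rule_weight).
        rules = [
            ("kept_promises", True, 0.8),    # promise-keeping supports trust
            ("gave_stale_info", False, 0.6), # stale information attacks trust
        ]

        def trust_arguments(observations, rules):
            """Fire each rule at min(observation, weight); keep the strongest
            argument on each side, in the spirit of possibilistic aggregation."""
            pro = con = 0.0
            for key, supports, weight in rules:
                strength = min(observations.get(key, 0.0), weight)
                if supports:
                    pro = max(pro, strength)
                else:
                    con = max(con, strength)
            return pro, con

        pro, con = trust_arguments(observations, rules)
        # The decision plus the winning argument form the exposed rationale.
        print("trust" if pro > con else "distrust", pro, con)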

    Self-Governing Hybrid Societies and Deception

    Self-governing hybrid societies are multi-agent systems where humans and machines interact by adapting to each other’s behaviour. Advancements in Artificial Intelligence (AI) have brought an increasing hybridisation of our societies, in which one particular type of behaviour has become more and more prevalent, namely deception. Deceptive behaviour, such as the propagation of disinformation, can have negative effects on a society's ability to govern itself. However, self-governing societies have the ability to respond to various phenomena. In this paper we explore how they respond to the phenomenon of deception from an evolutionary perspective, considering that agents have limited adaptation skills. Will hybrid societies fail to govern deceptive behaviour and reach a Tragedy of the Digital Commons? Or will they manage to avoid it through cooperation? How resilient are they against large-scale deceptive attacks? We provide a tentative answer to some of these questions through the lens of evolutionary agent-based modelling, based on the scientific literature on deceptive AI and public goods games.
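
    A minimal sketch of the evolutionary agent-based setup just described: a public goods game in which deceivers erode the value of the common pool and agents with limited adaptation skills imitate better-off peers. The payoffs, the way disinformation degrades the pool, and all parameters are assumptions made for illustration, not values from the paper.

        import random

        # Hypothetical parameters: population, rounds, pool multiplier,
        # contribution cost, and blind-imitation noise (limited adaptation).
        N, ROUNDS, R, COST, NOISE = 100, 500, 1.8, 1.0, 0.05
        pop = [random.choice(["cooperate", "defect", "deceive"]) for _ in range(N)]

        for _ in range(ROUNDS):
            coop = sum(s == "cooperate" for s in pop)
            liars = sum(s == "deceive" for s in pop)
            # Assumed effect of deception: disinformation devalues the common pool.
            share = R * COST * coop * (1 - liars / N) / N
            payoff = [share - COST if s == "cooperate" else share for s in pop]
            # Limited adaptation: imitate a random agent if it earned more,
            # with a small chance of copying blindly.
            i, j = random.randrange(N), random.randrange(N)
            if payoff[j] > payoff[i] or random.random() < NOISE:
                pop[i] = pop[j]

        # A population dominated by defectors and deceivers signals a Tragedy
        # of the Digital Commons in this toy model.
        print({s: pop.count(s) for s in set(pop)})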

    Deception


    TRAVOS: Trust and Reputation in the Context of Inaccurate Information Sources

    In many dynamic open systems, agents have to interact with one another to achieve their goals. Here, agents may be self-interested, and when trusted to perform an action for another, may betray that trust by not performing the action as required. In addition, due to the size of such systems, agents will often interact with other agents with which they have little or no past experience. There is therefore a need to develop a model of trust and reputation that will ensure good interactions among software agents in large-scale open systems. Against this background, we have developed TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS), which models an agent's trust in an interaction partner. Specifically, trust is calculated using probability theory, taking account of past interactions between agents; when there is a lack of personal experience between agents, the model draws upon reputation information gathered from third parties. In this latter case, we pay particular attention to handling the possibility that reputation information may be inaccurate.
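
    TRAVOS computes trust probabilistically from binary interaction outcomes (the partner either honoured the trust or did not); a standard presentation uses the mean of a beta distribution over those outcomes, as in the sketch below. The confidence threshold and the simple pooling of third-party reports are simplifications of the model's actual treatment of reputation and inaccuracy.

        # Trust as the mean of a beta distribution over binary outcomes.
        def trust(successes, failures, reports=()):
            """reports: (successes, failures) pairs from third parties."""
            a, b = successes + 1, failures + 1      # Beta(1, 1) uniform prior
            if a + b < 10:                          # assumed threshold: only fall
                for rs, rf in reports:              # back on reputation when
                    a, b = a + rs, b + rf           # direct evidence is thin
            return a / (a + b)                      # beta mean = expected success

        print(trust(8, 1))                    # plenty of direct experience
        print(trust(0, 0, [(5, 2), (3, 0)])) # none: rely on third-party reports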

    Trust and deception in multi-agent trading systems: a logical viewpoint

    Trust and deception have been of concern to researchers since the earliest research into multi-agent trading systems (MATS). In an open trading environment, trust can be established by external mechanisms, e.g. secret keys or digital signatures, or by internal mechanisms, e.g. learning and reasoning from experience. However, in a MATS where distrust exists among the agents and deception might be used between them, recognizing and removing fraud and deception becomes a significant issue for maintaining a trustworthy MATS environment. This paper proposes an architecture for a MATS and explores how fraud and deception change the trust required in a multi-agent trading environment. It also illustrates several forms of logical reasoning that involve trust and deception in a MATS. The research is of significance for deception recognition and trust sustainability in e-business and e-commerce.
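
    A small illustration of the style of logical reasoning at issue: naive forward chaining over propositional rules that flag a trading partner as deceptive when its claims contradict what is observed. The predicates and rules are invented for this sketch.

        # Hypothetical facts about a vendor agent in a trading environment.
        facts = {"signed(vendor)", "claim(vendor, in_stock)", "observed(out_of_stock)"}
        rules = [
            ({"signed(vendor)"}, "authenticated(vendor)"),
            ({"claim(vendor, in_stock)", "observed(out_of_stock)"}, "deceptive(vendor)"),
            ({"authenticated(vendor)", "deceptive(vendor)"}, "distrust(vendor)"),
        ]

        changed = True
        while changed:                        # forward chaining to a fixed point
            changed = False
            for body, head in rules:
                if body <= facts and head not in facts:
                    facts.add(head)
                    changed = True

        # True: external mechanisms (signatures) alone do not guarantee honesty.
        print("distrust(vendor)" in facts)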

    Secrecy in Educational Practices: Enacting Nested Black Boxes in Cheating and Deception Detection Systems

    This paper covers secrecy from the vantage point of recent technological initiatives designed to detect cheating and deception in educational contexts as well as to monitor off-campus social media speech code violations. Many of these systems are developed and implemented by third-party corporate entities who claim their practices to be proprietary and secret. The outsourcers involved in these efforts have provided one level of secrecy and the educational administrators involved yet another, thus constructing nested black boxes. Also discussed in this paper is the “paranoid style” of administration, often supported by surveillance and the construction of rosters of potential non-conformists, such as alleged cheaters and speech code violators. The educational technologies described in this article are increasingly applied to workplace practices, with young people being trained in what is deemed acceptable conduct. Secrecy can serve to alter the character of relationships within the educational institutions involved as well as inside the workplaces in which these approaches are increasingly being integrated.

    Dynamic Monopolies in Colored Tori

    Information diffusion has been modeled as the spread of information within a group through a process of social influence, where the diffusion is driven by a so-called influential network. Such a process, which has been intensively studied under the name of viral marketing, aims to select a good initial set of individuals who will promote a new idea (or message) by spreading the "rumor" through the entire social network by word-of-mouth. Several studies have used the linear threshold model, in which the group is represented by a graph, nodes have two possible states (active, non-active), and the threshold triggering the adoption (activation) of a new idea at a node is given by the number of its active neighbors. The problem of finding in a graph a minimum-size set of nodes able to activate the entire network is called target set selection (TSS). In this paper we extend TSS by allowing nodes to have more than two colors. The multicolored version of TSS can be described as follows: let G be a torus where every node is assigned a color from a finite set of colors. At each local time step, each node can recolor itself, depending on the local configuration, with the color held by the majority of its neighbors. We study the initial distributions of colors leading the system to a monochromatic configuration of color k, focusing on the minimum number of initially k-colored nodes. We conclude the paper by giving the time complexity of reaching the monochromatic configuration.
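
    A simulation sketch of these dynamics: nodes of a two-dimensional torus synchronously recolor themselves with the strict-majority color of their four neighbors. Grid size, number of colors, and the tie-breaking rule (a node keeps its color when no strict majority exists) are assumptions of the sketch, not the paper's exact model.

        import random
        from collections import Counter

        W, H, COLORS, STEPS = 20, 20, 3, 200   # assumed torus size and palette
        grid = [[random.randrange(COLORS) for _ in range(W)] for _ in range(H)]

        for _ in range(STEPS):
            nxt = [row[:] for row in grid]
            for y in range(H):
                for x in range(W):
                    # Four neighbors with wrap-around (torus topology).
                    neigh = [grid[(y - 1) % H][x], grid[(y + 1) % H][x],
                             grid[y][(x - 1) % W], grid[y][(x + 1) % W]]
                    color, count = Counter(neigh).most_common(1)[0]
                    if count > len(neigh) // 2:    # strict majority recolors
                        nxt[y][x] = color
            if nxt == grid:                        # fixed point reached
                break
            grid = nxt

        # A single surviving color means the initial distribution drove the
        # torus to a monochromatic configuration.
        print(Counter(c for row in grid for c in row))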

    The evolution of deception.

    Funders: MIT Media Lab; King's College London; Ethics and Governance of AI Fund.
    Deception plays a critical role in the dissemination of information, and has important consequences for the functioning of cultural, market-based and democratic institutions. Deception has been widely studied within the fields of philosophy, psychology, economics and political science. Yet we still lack an understanding of how deception emerges in a society under competitive (evolutionary) pressures. This paper begins to fill this gap by bridging evolutionary models of social goods, namely public goods games (PGGs), with ideas from interpersonal deception theory (Buller and Burgoon 1996 Commun. Theory 6, 203-242. (doi:10.1111/j.1468-2885.1996.tb00127.x)) and truth-default theory (Levine 2014 J. Lang. Soc. Psychol. 33, 378-392. (doi:10.1177/0261927X14535916); Levine 2019 Duped: truth-default theory and the social science of lying and deception. University of Alabama Press). This provides a well-founded analysis of the growth of deception in societies and of the effectiveness of several approaches to reducing it. Assuming that knowledge is a public good, we use extensive simulation studies to explore (i) how deception impacts the sharing and dissemination of knowledge in societies over time, (ii) how different types of knowledge-sharing societies are affected by deception and (iii) what type of policing and regulation is needed to reduce the negative effects of deception in knowledge sharing. Our results indicate that cooperation in knowledge sharing can be re-established by introducing institutions that investigate and regulate both defection and deception using a decentralized, case-by-case strategy. This provides evidence for adopting methods to reduce the use of deception in the world around us in order to avoid a Tragedy of the Digital Commons (Greco and Floridi 2004 Ethics Inf. Technol. 6, 73-81. (doi:10.1007/s10676-004-2895-2)).
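
    A sketch of the regulatory mechanism the results point to: a public goods game extended with a decentralized institution that audits free-riders case-by-case and fines both defection and deception. The audit and conviction rates, the fine, and the treatment of deceivers as harder to convict than plain defectors are all assumptions for illustration, not values from the paper.

        import random

        # Hypothetical parameters: fine size, audit probability, and the chance
        # of proving deception once a deceiver is audited.
        N, ROUNDS, R, COST, FINE, AUDIT, DETECT = 100, 500, 1.8, 1.0, 2.0, 0.3, 0.6
        pop = [random.choice(["cooperate", "defect", "deceive"]) for _ in range(N)]

        for _ in range(ROUNDS):
            share = R * COST * sum(s == "cooperate" for s in pop) / N
            payoff = []
            for s in pop:
                p = share - COST if s == "cooperate" else share
                if s != "cooperate" and random.random() < AUDIT:   # case-by-case audit
                    if s == "defect" or random.random() < DETECT:  # deceit is harder to prove
                        p -= FINE
                payoff.append(p)
            i, j = random.randrange(N), random.randrange(N)
            if payoff[j] > payoff[i]:
                pop[i] = pop[j]               # imitate a better-off agent

        # With sufficient auditing, cooperation persists in this toy model.
        print({s: pop.count(s) for s in set(pop)})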