259,267 research outputs found

    CO-DESIGN OF DYNAMIC REAL-TIME SCHEDULING AND COOPERATIVE CONTROL FOR HUMAN-AGENT COLLABORATION SYSTEMS BASED ON MUTUAL TRUST

    Get PDF
    Mutual trust is a key factor in human-human collaboration. Inspired by this social interaction, we analyze human-agent mutual trust in the collaboration between one human and a (semi)autonomous multi-agent system. In this thesis, we derive time-series human-agent mutual trust models based on results from human factors engineering. To avoid both over-trust and under-trust, we set up dynamic timing models for the multi-agent scheduling problem and develop necessary and sufficient conditions to test the schedulability of the human multi-agent collaborative task. Furthermore, we extend the collaboration between one human and multiple agents to the collaboration between a multi-human network and a swarm-based agent network. To measure the collaboration between these two networks, we propose a novel measure, called fitness. Using fitness, we can simplify multi-human and swarm collaboration into one-human and swarm collaboration. Cooperative control is incorporated into the swarm systems to enable several large-scale agent teams to simultaneously reach navigational goals and avoid collisions. Our simulation results show that the proposed algorithm can be applied to human-agent collaboration systems and guarantees effective real-time scheduling while ensuring a proper level of human-agent mutual trust.
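
    The thesis abstract does not give its actual equations, but a time-series trust model with over- and under-trust thresholds can be sketched as below. The first-order update rule, the parameter names (`alpha`, `beta`), and the threshold values are illustrative assumptions, not the thesis's model.

```python
# Hypothetical sketch of a discrete-time human-agent mutual trust model.
# The update rule and all parameter values are illustrative assumptions.

def update_trust(trust, performance, alpha=0.8, beta=0.2):
    """First-order trust dynamics: trust moves toward observed performance."""
    return alpha * trust + beta * performance

def trust_in_bounds(trust, under=0.3, over=0.9):
    """Scheduling goal: keep trust between under- and over-trust thresholds."""
    return under <= trust <= over

trust = 0.5
for performance in [0.9, 0.8, 0.85, 0.7]:  # observed agent performance per step
    trust = update_trust(trust, performance)
print(round(trust, 3), trust_in_bounds(trust))
```

    A scheduler in this spirit would choose which agent to service so that every agent's trust state stays inside the admissible band.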

    How Physicality Enables Trust: A New Era of Trust-Centered Cyberphysical Systems

    Full text link
    Multi-agent cyberphysical systems enable new capabilities in efficiency, resilience, and security. The unique characteristics of these systems prompt a reevaluation of their security concepts, including their vulnerabilities, and mechanisms to mitigate these vulnerabilities. This survey paper examines how advances in wireless networking, coupled with the sensing and computing in cyberphysical systems, can foster novel security capabilities. This study delves into three main themes related to securing multi-agent cyberphysical systems. First, we discuss the threats that are particularly relevant to multi-agent cyberphysical systems given the potential lack of trust between agents. Second, we present prospects for sensing, contextual awareness, and authentication, enabling the inference and measurement of "inter-agent trust" for these systems. Third, we elaborate on the application of quantifiable trust notions to enable "resilient coordination," where "resilient" signifies sustained functionality amid attacks on multi-agent cyberphysical systems. We refer to the capability of cyberphysical systems to self-organize and coordinate to achieve a task as autonomy. This survey unveils the cyberphysical character of future interconnected systems as a pivotal catalyst for realizing robust, trust-centered autonomy in tomorrow's world.

    A computation trust model with trust network in multi-agent systems

    Full text link
    Trust is a fundamental issue in multi-agent systems, especially when they are applied in e-commerce. Computational models of trust play an important role in determining who to interact with, and how, in open and dynamic environments. To this end, a computation trust model is proposed in which confidence information based on direct prior interactions with the target agent and reputation information from a trust network are used. In this way, agents can autonomously deal with deception and identify trustworthy parties in multi-agent systems. The ontological property of trust is also considered in the model. A case study is provided to show the effectiveness of the proposed model.
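
    The two information sources the abstract names (direct confidence and network reputation) can be combined roughly as follows. The weighting scheme and function names are illustrative assumptions; the paper's exact formulas are not given here.

```python
# Illustrative combination of direct confidence and trust-network reputation.
# The weights and functional forms are assumptions, not the paper's model.

def direct_confidence(outcomes):
    """Confidence from prior direct interactions (1 = success, 0 = failure)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.5

def network_reputation(ratings_by_witness, witness_trust):
    """Reputation: witness ratings weighted by how much we trust each witness."""
    total = sum(witness_trust.get(w, 0.0) for w in ratings_by_witness)
    if total == 0:
        return 0.5  # no usable witnesses: fall back to a neutral prior
    return sum(r * witness_trust.get(w, 0.0)
               for w, r in ratings_by_witness.items()) / total

def overall_trust(outcomes, ratings_by_witness, witness_trust, w_direct=0.6):
    """Blend direct experience with network reputation."""
    return (w_direct * direct_confidence(outcomes)
            + (1 - w_direct) * network_reputation(ratings_by_witness,
                                                  witness_trust))

print(round(overall_trust([1, 1, 0],
                          {"a": 0.9, "b": 0.4},
                          {"a": 1.0, "b": 0.5}), 3))
```

    Weighting witness reports by the witness's own trustworthiness is one simple way to "deal with deception": an untrusted witness's inflated rating contributes little.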

    Asymptotically idempotent aggregation operators for trust management in multi-agent systems

    Get PDF
    The study of trust management in multi-agent systems, especially distributed ones, has grown over the last years. Trust is a complex subject on which there is no general consensus in the literature, but the importance of reasoning about it computationally has emerged. Reputation systems take into consideration the history of an entity's actions/behavior in order to compute trust, collecting and aggregating ratings from members of a community. In this scenario the aggregation problem becomes fundamental, in particular depending on the environment. In this paper we describe a technique based on a class of asymptotically idempotent aggregation operators, suitable particularly for distributed anonymous environments.
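
    An asymptotically idempotent aggregation operator is one where A(x, ..., x) tends to x as the number of inputs grows, without being exactly idempotent for small n. The particular operator below is an illustrative choice to show the property, not the operator proposed in the paper.

```python
# Sketch of an asymptotically idempotent aggregation operator: not idempotent
# for small n, but A(x, ..., x) -> x as the number of ratings grows. This
# damping of sparse evidence is useful against Sybil-style attacks in
# anonymous environments, where a few fabricated ratings should count little.

def aggregate(ratings):
    """A(r1..rn) = sum(ri) / (n + 1): shrinks toward 0 when evidence is scarce."""
    return sum(ratings) / (len(ratings) + 1)

print(aggregate([1.0]))        # 0.5  -> one perfect rating counts for little
print(aggregate([1.0] * 99))   # 0.99 -> approaches the common value 1.0
```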

    Trust and Normative Control in Multi-Agent Systems

    Get PDF
    Despite relevant insights from socio-economics, little research in multi-agent systems has addressed the interconnections between trust and normative notions such as contracts and sanctions. Focusing our attention on scenarios of betrayal, in this paper we combine the use of trust and sanctions in a negotiation process. We describe a scenario of dyadic relationships between truster agents, which make use of trust and/or sanctions, and trustee agents, characterized by their ability and integrity, which may influence their attitude toward betrayal. Both agent behavior models are inspired by the socio-economics literature. Through simulation, we show the virtues and shortcomings of using trust, sanctions, and a combination of both in partner selection processes.
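
    The dyadic setup described above can be sketched as a simple stochastic trustee model in which the tendency to betray falls with integrity and with the presence of a sanction. The functional form and the deterrence parameter are assumptions for illustration, not the paper's agent model.

```python
# Illustrative trustee model: betrayal probability = (1 - integrity),
# reduced by a fixed factor when a sanction applies. All parameters here
# are assumptions, not taken from the paper.

import random

def betrays(integrity, sanctioned, sanction_deterrence=0.3, rng=random.random):
    """Return True if the trustee betrays on this interaction."""
    p = (1.0 - integrity) * (1.0 - (sanction_deterrence if sanctioned else 0.0))
    return rng() < p

random.seed(0)  # reproducible simulation
outcomes = [betrays(integrity=0.7, sanctioned=True) for _ in range(1000)]
print(sum(outcomes) / 1000)  # empirical betrayal rate, near p = 0.21
```

    Simulating many such dyads under different truster strategies (trust only, sanctions only, both) is the kind of comparison the paper reports.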

    Trust and reputation in open multi-agent systems

    No full text
    Trust and reputation are central to effective interactions in open multi-agent systems (MAS), in which agents, owned by a variety of stakeholders, continuously enter and leave the system. This openness means existing trust and reputation models cannot readily be used, since their performance suffers when there are various (unforeseen) changes in the environment. To this end, this thesis develops and evaluates FIRE, a trust and reputation model that enables autonomous agents in open MAS to evaluate the trustworthiness of their peers and to select good partners for interactions. FIRE integrates four sources of trust information under the same framework in order to provide a comprehensive assessment of an agent's likely performance in open systems. Specifically, FIRE incorporates interaction trust, role-based trust, witness reputation, and certified reputation, which model trust resulting from direct experiences, role-based relationships, witness reports, and third-party references, respectively, to provide trust metrics in most circumstances. A novel model of reporter credibility has also been integrated to enable FIRE to deal effectively with inaccurate reports (from witnesses and referees). Finally, adaptive techniques have been introduced, which make use of the information gained from monitoring the environment, to dynamically adjust a number of FIRE's parameters according to the actual situation an agent finds itself in. In all cases, a systematic empirical analysis is undertaken to evaluate the effectiveness of FIRE in terms of the agent's performance.
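
    FIRE's four components can be combined as a weighted average over whichever sources are actually available, which is how it "provides trust metrics in most circumstances". The component weights and the fallback behavior below are illustrative assumptions; the thesis defines its own weighting and reliability measures.

```python
# Sketch of FIRE-style composite trust: a weighted combination over the
# four named sources, skipping any source with no information available.
# The weights are illustrative assumptions, not the thesis's values.

FIRE_COMPONENTS = ("interaction", "role", "witness", "certified")

def fire_trust(values, weights):
    """Weighted combination over the components that are available (not None)."""
    available = [c for c in FIRE_COMPONENTS if values.get(c) is not None]
    if not available:
        return None  # no trust information at all
    total = sum(weights[c] for c in available)
    return sum(weights[c] * values[c] for c in available) / total

values = {"interaction": 0.8, "role": None, "witness": 0.6, "certified": 0.7}
weights = {"interaction": 2.0, "role": 1.5, "witness": 1.0, "certified": 0.5}
print(round(fire_trust(values, weights), 3))
```

    Because the denominator only sums the weights of available sources, a newcomer with no direct-interaction history can still be scored from its certified references alone.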

    Special issue: Development of service-based and agent-based computing systems

    Get PDF
    This special issue presents the best papers from the workshops on Service-Oriented Computing: Agents, Semantics and Engineering (SOCASE 2010), held in May 2010 in Toronto, Canada, and the IEEE 2010 First International Workshop on Service-Oriented Computing and Multi-Agent Systems (SOCMAS 2010), held in July 2010 in Miami, Florida, USA. The goal of the workshops was to present recent significant developments at the intersections of multi-agent systems, semantic technology, and service-oriented computing, and to promote cross-fertilization of techniques. In particular, the workshops attempted to identify techniques from research on multi-agent systems and semantic technology that will have the greatest impact on automating service-oriented application construction and management, focusing on critical challenges such as service quality assurance, reliability, and adaptability. The areas of service-oriented computing and Semantic Web services offer much of real interest to the multi-agent systems community, including similarities in system architectures and provision processes, powerful tools, and a focus on many related issues including quality of service, security, and reliability. In addition, service-oriented computing and Semantic Web services offer diverse application fields for both the concepts and methodologies of intelligent agent and multi-agent systems. Similarly, techniques developed in the multi-agent systems research community promise to have a strong impact on this fast-growing technology. In particular, they enable services to be discovered and enacted across enterprise boundaries. If an organisation bases its success on services provided by others, then it must be able to trust that the services will perform as promised, whenever needed. Researchers in multi-agent systems have investigated such trust mechanisms.

    On the Simulation of Global Reputation Systems

    Get PDF
    Reputation systems have evolved as a mechanism to build trust in virtual communities. In this paper we evaluate different metrics for computing reputation in multi-agent systems. We present a formal model for describing metrics in reputation systems and show how different well-known global reputation metrics are expressed in it. Based on the model, a generic simulation framework for reputation metrics was implemented. We used our simulation framework to compare different global reputation systems to find their strengths and weaknesses. The strength of a metric is measured by its resistance against different threat models, i.e. different types of hostile agents. Based on our results we propose a new metric for reputation systems.
    Keywords: Reputation System, Trust, Formalization, Simulation
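
    To make the metric-versus-threat-model idea concrete, here is a minimal sketch of one well-known global metric and one threat model it fails against. Both the metric (an eBay-style accumulative score) and the attack (ballot stuffing by colluding raters) are standard examples chosen for illustration; the paper's formal model and proposed metric are not reproduced here.

```python
# One global reputation metric and one threat model, as a minimal sketch.
# The accumulative metric simply sums +1/-1 ratings, so it cannot
# distinguish genuine popularity from colluding fake raters.

def accumulative_reputation(ratings):
    """eBay-style metric: reputation = (#positive) - (#negative)."""
    return sum(1 if r else -1 for r in ratings)

honest = [True, True, False, True]       # mostly good behaviour
attack = honest + [True] * 10            # ballot stuffing by colluders
print(accumulative_reputation(honest))   # 2
print(accumulative_reputation(attack))   # 12: the metric has no defence
```

    A simulation framework in the paper's spirit would run many such hostile-agent populations against each metric and compare how far the computed reputations drift from the agents' true behaviour.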

    Trust and deception in multi-agent trading systems: a logical viewpoint

    Get PDF
    Trust and deception have been of concern to researchers since the earliest research into multi-agent trading systems (MATS). In an open trading environment, trust can be established by external mechanisms, e.g. using secret keys or digital signatures, or by internal mechanisms, e.g. learning and reasoning from experience. However, in a MATS where distrust exists among the agents and deception might be used between them, how to recognize and remove fraud and deception becomes a significant issue in maintaining a trustworthy MATS environment. This paper proposes an architecture for a MATS and explores how fraud and deception change the trust required in a multi-agent trading environment. It also illustrates several forms of logical reasoning that involve trust and deception in a MATS. The research is of significance for deception recognition and trust sustainability in e-business and e-commerce.
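
    One common trust/deception reasoning pattern of the kind the abstract mentions can be stated as two rules: trust transfers assertions into beliefs, and deception is asserting what one privately knows to be false. The encoding below is a toy propositional sketch with made-up predicate names, not the paper's formalism.

```python
# Toy propositional sketch of two trust/deception reasoning patterns.
# Predicate names are illustrative, not taken from the paper.

def believes(truster_trusts_seller, seller_asserts_p):
    """Trust transfer: trust(A, B) and asserts(B, p) => believes(A, p)."""
    return truster_trusts_seller and seller_asserts_p

def is_deception(seller_asserts_p, seller_knows_not_p):
    """Deception: the seller asserts p while knowing that p is false."""
    return seller_asserts_p and seller_knows_not_p

print(believes(True, True))      # the buyer adopts the trusted seller's claim
print(is_deception(True, True))  # asserted despite known falsity: deception
```

    The danger the paper addresses is the composition of the two rules: a trusted but deceptive seller makes the buyer believe a falsehood, which is why trust must be revised when deception is detected.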