    Psychology-Inspired Trust Restoration Framework in Distributed Multiagent Systems

    No full text
    Trust violations during cooperation among autonomous agents in multiagent systems are usually unavoidable and can arise for a wide range of reasons. From a psychological point of view, a violation of an agent's trust occurs when one agent (the transgressor) places very little weight on the welfare of another agent (the victim), inflicting a high cost on the victim for a very small benefit to itself. To help the victim decide effectively whether to cooperate or to punish in the next interaction, a psychological variable called the welfare tradeoff ratio (WTR) can be used to upregulate the transgressor's disposition, reducing the number of exploitive behaviors likely to occur in the future. In this paper, we propose computational models of WTR-based metrics, together with a method for integrating multiple metrics into a final result. Additionally, experiments based on social network analysis are conducted to evaluate the performance of the proposed framework; the results show that by implementing WTR the simulated network can handle different levels of trust violation effectively.
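
    The abstract does not specify the paper's actual metrics, but the underlying WTR idea can be illustrated with a minimal sketch. Under a common operationalization (an assumption here, not the authors' model), an act that benefits the actor by b while costing the target c implies the actor's WTR toward the target is below b/c; a very low implied WTR signals exploitation and motivates punishment in the next interaction. The class and function names below are hypothetical.

```python
# Minimal sketch (assumed operationalization, not the paper's exact metrics):
# infer an upper bound on a transgressor's welfare tradeoff ratio (WTR)
# from one observed act, then decide whether to cooperate or punish.

from dataclasses import dataclass


@dataclass
class ObservedAct:
    benefit_to_actor: float   # gain the transgressor obtained
    cost_to_victim: float     # cost inflicted on the victim


def inferred_wtr(act: ObservedAct) -> float:
    """An actor who weights the victim's welfare by w takes the act only if
    benefit > w * cost, so the act implies w < benefit / cost."""
    if act.cost_to_victim <= 0:
        return float("inf")  # no cost imposed: the act says nothing about WTR
    return act.benefit_to_actor / act.cost_to_victim


def decide(act: ObservedAct, wtr_threshold: float = 0.5) -> str:
    """Cooperate if the implied regard for the victim is acceptable; otherwise
    punish, which aims to upregulate the transgressor's future WTR."""
    return "cooperate" if inferred_wtr(act) >= wtr_threshold else "punish"


# Example: a high cost (10) inflicted for a small benefit (1) implies WTR < 0.1,
# so the victim chooses to punish in the next interaction.
print(decide(ObservedAct(benefit_to_actor=1.0, cost_to_victim=10.0)))  # -> punish
```

    In the paper's framework, several such metrics are presumably integrated into a single trust-restoration decision; the single-act estimate above is only the simplest case.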
