
    Trust and Reputation in Multi-Agent Systems

    Multi-agent systems (MAS) are artificial societies populated with distributed autonomous agents that are intelligent and rational. These autonomous agents are capable of independent decision making toward their predefined goals, which may be shared among agents or unique to an individual agent. Agents may cooperate with one another to make progress toward these goals. A fundamental challenge in such settings is that agents lack full knowledge of the environment, so in the course of their decision making they may need to request information or services from other agents. The crucial issues are then whether to rely on the information other agents provide, how to weigh the collected data, and how to select appropriate agents to ask for the required information. Several proposals address how an agent can rely on other agents and how it can compute an overall opinion about a particular agent. In this context, a trust value reflects the extent to which one agent can rely on another, and a reputation value represents the public opinion about a particular agent. Existing approaches to reliable information propagation fail to capture the dynamic relationships between agents and the influence of those relationships on subsequent decision making; consequently, these models cannot adapt agents to frequent environmental changes. In general, multi-agent systems require a well-founded trust and reputation system that prevents malicious acts by selfish agents. We propose a trust mechanism that measures and analyzes the reliability of agents cooperating with one another, concentrating on the key attributes of the agents involved and their relationships. We also measure and analyze the public reputation of agents in large-scale environments using a sound reputation mechanism that maintains a public reputation assessment in which the public actions of agents are accurately analyzed. On top of the theoretical analysis, we experimentally validate our trust and reputation approaches through different simulations. Our preliminary results show that our approach outperforms current frameworks in providing accurate credibility measurements and maintaining accurate trust and reputation mechanisms.
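    The abstract defines trust as one agent's direct reliance on another and reputation as an aggregated public opinion, but gives no formulas. As a minimal illustrative sketch only, not the authors' mechanism, the example below assumes a hypothetical exponential-moving-average trust update over interaction outcomes and a mean-based reputation aggregate; the names TrustModel, reputation, and all parameters are assumptions introduced for illustration.

```python
import statistics


class TrustModel:
    """Hypothetical direct-trust model: an exponential moving average
    of observed interaction outcomes in [0, 1]. The abstract does not
    specify an update rule; this is an illustrative sketch."""

    def __init__(self, learning_rate: float = 0.3, prior: float = 0.5):
        self.learning_rate = learning_rate  # weight given to the newest outcome
        self.prior = prior                  # trust assumed before any interaction
        self.trust: dict[str, float] = {}   # peer agent id -> current trust value

    def update(self, agent_id: str, outcome: float) -> float:
        """Blend a new interaction outcome into the running trust estimate."""
        old = self.trust.get(agent_id, self.prior)
        new = (1 - self.learning_rate) * old + self.learning_rate * outcome
        self.trust[agent_id] = new
        return new


def reputation(reports: list[float]) -> float:
    """Hypothetical reputation: aggregate trust values reported by
    witnesses into a single public score (here, a plain mean)."""
    return statistics.mean(reports) if reports else 0.5


# Usage: agent A records two interactions with agent B, then B's public
# reputation is aggregated from several witnesses' reported trust values.
model = TrustModel()
model.update("B", 1.0)              # successful cooperation
model.update("B", 0.0)              # failed cooperation
print(model.trust["B"])             # A's direct trust in B
print(reputation([0.8, 0.6, 0.9]))  # B's reputation across witnesses
```

    A plain mean treats every witness equally; the abstract's emphasis on agent relationships suggests that reports would in practice be weighted, for example by the requester's trust in each witness, though the exact scheme is not specified.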