3 research outputs found

    Voting Systems with Trust Mechanisms in Cyberspace: Vulnerabilities and Defenses

    With the popularity of voting systems in cyberspace, there is growing evidence that current voting systems can be manipulated by fake votes. This problem has attracted many researchers, who work on guarding voting systems in two areas: mitigating the effect of dishonest votes by evaluating the trust of voters, and limiting the resources available to attackers, such as the number of voters and the number of votes. In this paper, we argue that powering voting systems with trust and limiting attack resources are not enough. We present a novel attack named Reputation Trap (RepTrap). Our case study and experiments show that this new attack requires far fewer resources to manipulate voting systems and has a much higher success rate than existing attacks. We further identify the reasons behind this attack and propose two defense schemes accordingly. In the first scheme, we hide correlation knowledge from attackers to reduce their chances of affecting honest voters. In the second scheme, we introduce robustness-of-evidence, a new metric, into the trust calculation to reduce attackers' effect on honest voters. We conduct extensive experiments to validate our approach. The results show that our defense schemes can not only reduce the success rate of attacks but also significantly increase the amount of resources an adversary needs to launch a successful attack.
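    For illustration only, the sketch below shows one simple way a voting system can weight votes by voter trust and update that trust from agreement with the aggregated outcome. It is not the paper's RepTrap analysis or its robustness-of-evidence defense; the function names, the update rule, and the `rate` parameter are assumptions.

```python
# Minimal sketch (not the paper's algorithm): a trust-weighted voting
# aggregator in which each vote is discounted by its voter's trust score,
# and voter trust is then nudged toward agreement with the aggregated outcome.

from collections import defaultdict

def aggregate(votes, trust):
    """Trust-weighted average of votes {voter: value in [0, 1]}."""
    total = sum(trust[v] for v in votes)
    if total == 0:
        return 0.5  # no trusted evidence: fall back to a neutral score
    return sum(trust[v] * val for v, val in votes.items()) / total

def update_trust(votes, outcome, trust, rate=0.1):
    """Move each voter's trust toward its agreement with the aggregated outcome."""
    for v, val in votes.items():
        agreement = 1.0 - abs(val - outcome)
        trust[v] = (1 - rate) * trust[v] + rate * agreement

# Example: two honest voters and one attacker voting on a single object.
trust = defaultdict(lambda: 0.5)
votes = {"honest_1": 0.9, "honest_2": 0.8, "attacker": 0.1}
score = aggregate(votes, trust)
update_trust(votes, score, trust)
print(round(score, 3), dict(trust))
```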

    Trust and Reputation in Multi-Agent Systems

    Multi-agent systems (MAS) are artificial societies populated with distributed autonomous agents that are intelligent and rational. These autonomous agents are capable of independent decision making toward their predefined goals. These goals may be shared among agents or unique to an agent. Agents may cooperate with one another to facilitate their progress. One of the fundamental challenges in such settings is that agents do not have full knowledge of the environment and, in their decision-making processes, may need to request information or services from other agents. The crucial issues are then how to rely on the information provided by other agents, how to consider the collected data, and how to select appropriate agents to ask for the required information. There are proposals addressing how an agent can rely on other agents and how an agent can compute an overall opinion about a particular agent. In this context, the trust value reflects the extent to which an agent can rely on another agent, and the reputation value represents public opinion about a particular agent. Existing approaches for reliable information propagation fail to capture the dynamic relationships between agents and their influence on further decision making; therefore, these models fail to adapt agents to frequent environment changes. In general, multi-agent systems require a well-founded trust and reputation system that prevents malicious acts by selfish agents. We propose a trust mechanism that measures and analyzes the reliability of agents cooperating with one another. This mechanism concentrates on the key attributes of the related agents and their relationships. We also measure and analyze the public reputation of agents in large-scale environments using a sound reputation mechanism, in which we aim to maintain a public reputation assessment that accurately analyzes the public actions of agents. On top of the theoretical analysis, we experimentally validate our trust and reputation approaches through different simulations. Our preliminary results show that our approach outperforms current frameworks in providing accurate credibility measurements and maintaining accurate trust and reputation mechanisms.
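    As a rough illustration of the distinction drawn above, the sketch below separates trust (an agent's own interaction history with a target) from reputation (witness reports weighted by trust in each witness). It is not the thesis's mechanism; the class, attribute, and default values are assumptions.

```python
# Minimal sketch: trust from direct experience vs. reputation from
# trust-weighted witness reports in a multi-agent setting.

from statistics import mean

class Agent:
    def __init__(self, name):
        self.name = name
        self.direct = {}         # target -> list of interaction outcomes in [0, 1]
        self.witness_trust = {}  # witness -> trust placed in its reports

    def record_interaction(self, target, outcome):
        self.direct.setdefault(target, []).append(outcome)

    def trust(self, target):
        """Trust derived from the agent's own interaction history (neutral if none)."""
        outcomes = self.direct.get(target)
        return mean(outcomes) if outcomes else 0.5

    def reputation(self, target, reports):
        """Public opinion: witness reports weighted by trust in each witness."""
        weighted = [(self.witness_trust.get(w, 0.5), r) for w, r in reports.items()]
        total = sum(w for w, _ in weighted)
        return sum(w * r for w, r in weighted) / total if total else 0.5

# Example: agent A rates B from its own interactions plus two witnesses' reports.
a = Agent("A")
a.record_interaction("B", 0.9)
a.witness_trust.update({"C": 0.8, "D": 0.3})
print(a.trust("B"), round(a.reputation("B", {"C": 0.85, "D": 0.2}), 3))
```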

    Promoting Honesty in Electronic Marketplaces: Combining Trust Modeling and Incentive Mechanism Design

    This thesis work is in the area of modeling trust in multi-agent systems: systems of software agents designed to act on behalf of users (buyers and sellers) in applications such as e-commerce. The focus is on developing an approach for buyers to model the trustworthiness of sellers in order to make effective decisions about which sellers to select for business. One challenge is the problem of unfair ratings, which arises when modeling the trust of sellers relies on ratings provided by other buyers (called advisors). Existing approaches for coping with this problem fail in scenarios where the majority of advisors are dishonest, buyers do not have much personal experience with sellers, advisors try to flood the trust modeling system with unfair ratings, or sellers vary their behavior widely. We propose a novel personalized approach for effectively modeling the trustworthiness of advisors, allowing a buyer to 1) model the private reputation of an advisor based on their ratings for commonly rated sellers; 2) model the public reputation of the advisor based on all ratings for the sellers ever rated by that agent; and 3) flexibly weight the private and public reputation into one combined measure of the trustworthiness of the advisor. Our approach tracks ratings according to their time windows and limits the ratings accepted, in order to cope with advisors flooding the system and to deal with changes in agents' behavior. Experimental evidence demonstrates that our model outperforms other models in detecting dishonest advisors and helps buyers gain the largest profit when doing business with sellers. Equipped with this richer method for modeling the trustworthiness of advisors, we then embed this reasoning into a novel trust-based incentive mechanism to encourage agents to be honest. In this mechanism, buyers select the most trustworthy advisors as their neighbors, from which they can ask advice about sellers, forming a social network. In contrast with other researchers, we also have sellers model the reputation of buyers. Sellers offer better rewards to satisfy buyers that are well respected in the social network, in order to build their own reputation. We provide precise formulae used by sellers when reasoning about immediate and future profit to determine their bidding behavior and the rewards offered to buyers, and we emphasize the importance for buyers of adopting a strategy that limits the number of sellers considered for each good to be purchased. We theoretically prove that our mechanism promotes honesty from buyers in reporting seller ratings and honesty from sellers in delivering products as promised. We also provide a series of experimental results in a simulated dynamic environment where agents may be arriving and departing, which offers a stronger defense of the mechanism as one that is robust to important conditions in the marketplace. Our experiments clearly show the gains in profit enjoyed by both honest sellers and honest buyers when our mechanism is introduced and our proposed strategies are followed. In general, our research will serve to promote honesty among buyers and sellers in e-marketplaces. Our particular proposal of allowing sellers to model buyers opens a new direction in trust modeling research. The novel direction of designing an incentive mechanism based on trust modeling, and using this mechanism to further help trust modeling by diminishing the problem of unfair ratings, will, we hope, help bridge researchers in the areas of trust modeling and mechanism design.
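    The sketch below illustrates, under assumed function names, weight, and window parameters, how a buyer might combine an advisor's private and public reputation with a flexible weight while capping the ratings accepted per time window. It is not the thesis's formulae, only a simplified reading of the approach described above.

```python
# Minimal sketch: flexible weighting of private and public advisor reputation,
# with ratings limited per time window to resist flooding.

def limit_by_window(ratings, window, max_per_window):
    """Keep at most `max_per_window` ratings from each time window."""
    buckets = {}
    for t, value in sorted(ratings):              # ratings: list of (time, value)
        buckets.setdefault(t // window, []).append(value)
    return [v for bucket in buckets.values() for v in bucket[:max_per_window]]

def private_reputation(buyer_ratings, advisor_ratings):
    """Agreement on commonly rated sellers (1 = identical ratings)."""
    common = set(buyer_ratings) & set(advisor_ratings)
    if not common:
        return 0.5
    return 1 - sum(abs(buyer_ratings[s] - advisor_ratings[s]) for s in common) / len(common)

def public_reputation(accepted_ratings):
    """Fraction of the advisor's accepted ratings judged fair (placeholder)."""
    return sum(accepted_ratings) / len(accepted_ratings) if accepted_ratings else 0.5

def advisor_trust(priv, pub, w=0.7):
    """Flexible weighting of private and public reputation into one measure."""
    return w * priv + (1 - w) * pub

# Example: cap accepted ratings per window, then combine the two reputations.
accepted = limit_by_window([(1, 1), (2, 1), (3, 0), (12, 1)], window=10, max_per_window=2)
priv = private_reputation({"s1": 0.9, "s2": 0.2}, {"s1": 0.8, "s2": 0.3})
print(round(advisor_trust(priv, public_reputation(accepted)), 3))
```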