
    Advertisement-financed credit ratings

    This paper investigates the incentives of a credit rating agency (CRA) to generate accurate ratings under an advertisement-based business model. To this end, we study a two-period endogenous reputation model in which a CRA can increase the precision of its ratings by exerting effort. The CRA earns revenue not from rating fees, as is standard in the literature, but through online advertising. We show that the advertisement-based business model provides sufficient incentives for the CRA to improve the precision of its signals at intermediate levels of reputation. Furthermore, we identify conditions under which truthful reporting is incentive compatible. © 2021, The Author(s)
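    The effort-to-precision channel described above can be illustrated with a generic Bayesian reputation update. The functional form of `precision`, its parameters, and the diligent/lax typing are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch: reputation as the market's belief that the CRA is
# diligent, updated after observing whether a rating proved accurate.

def precision(effort: float) -> float:
    """Probability a rating is accurate; rises with effort in [0, 1].
    The base and gain values are assumed parameters."""
    base, gain = 0.6, 0.35
    return base + gain * effort

def updated_reputation(prior: float, effort: float, rating_correct: bool) -> float:
    """Bayes update of the belief that the CRA is diligent, assuming a
    diligent CRA exerts the given effort while a lax CRA exerts none."""
    p_diligent = precision(effort)
    p_lax = precision(0.0)
    if rating_correct:
        num = prior * p_diligent
        denom = num + (1 - prior) * p_lax
    else:
        num = prior * (1 - p_diligent)
        denom = num + (1 - prior) * (1 - p_lax)
    return num / denom
```

Under these assumptions, an accurate rating raises reputation and an inaccurate one lowers it, which is the channel through which effort pays off.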

    Using Identity Premium for Honesty Enforcement and Whitewashing Prevention

    One fundamental issue with existing reputation systems, particularly those implemented in open and decentralized environments, is whitewashing attacks by opportunistic participants. If identities are cheap, it is beneficial for a rational provider to simply defect when selling services to its clients, leave the system to avoid punishment, and then rejoin with a new identity. Current work usually assumes the existence of an effective identity management scheme to avoid the problem, without proposing concrete solutions to directly prevent this unwanted behavior. This article presents and analyzes an incentive mechanism to effectively motivate honesty of rationally opportunistic providers in the aforementioned scenario, by eliminating the incentive for providers to change their identities. The main idea is to give each provider an identity premium, with which the provider may sell services at higher prices depending on the duration of its presence in the system. Our price-based incentive mechanism, implemented with the use of a reputation-based provider selection protocol and a reverse auction scheme, is shown to significantly reduce the impact of malicious and strategic ratings, while still allowing explicit competition among the providers. It is proven that if the temporary cheating gain of a provider is bounded and small, and given a trust model with a reasonably low error bound in identifying malicious ratings, our approach can effectively eliminate irrationally malicious providers and enforce honest behavior of rationally opportunistic ones, even when cheap identities are available. We suggest an identity premium function that helps such honesty to be sustained given a certain cost of identities, and analyze the incentives of participants in accepting the proposed premium. Related implementation issues in different application scenarios are also discussed.
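    The core economic argument, that a tenure-dependent premium makes whitewashing unprofitable, can be sketched as follows. The linear, capped premium schedule and all parameter values are illustrative assumptions, not the article's proposed function.

```python
# Hedged sketch of an identity-premium schedule and the whitewashing
# trade-off it is meant to close. Names and numbers are assumptions.

def identity_premium(tenure: int, rate: float = 0.05, cap: float = 0.5) -> float:
    """Fraction by which a provider with the given tenure (rounds in the
    system) may mark up its price; capped so newcomers can still compete."""
    return min(cap, rate * tenure)

def whitewashing_profitable(cheat_gain: float, base_price: float,
                            tenure: int, horizon: int,
                            identity_cost: float) -> bool:
    """Compare one-shot cheating (then rejoining at tenure 0) against
    staying honest and keeping the accumulated premium over `horizon`
    future rounds."""
    stay_honest = sum(base_price * identity_premium(tenure + t)
                      for t in range(horizon))
    cheat_and_rejoin = cheat_gain - identity_cost + sum(
        base_price * identity_premium(t) for t in range(horizon))
    return cheat_and_rejoin > stay_honest
```

A provider with long tenure forfeits its accumulated premium by rejoining, so as long as the cheating gain is bounded and small, defection does not pay.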

    Promoting Honesty in Electronic Marketplaces: Combining Trust Modeling and Incentive Mechanism Design

    This thesis work is in the area of modeling trust in multi-agent systems: systems of software agents designed to act on behalf of users (buyers and sellers) in applications such as e-commerce. The focus is on developing an approach for buyers to model the trustworthiness of sellers in order to make effective decisions about which sellers to select for business. One challenge is the problem of unfair ratings, which arises when modeling the trust of sellers relies on ratings provided by other buyers (called advisors). Existing approaches for coping with this problem fail in scenarios where the majority of advisors are dishonest, buyers do not have much personal experience with sellers, advisors try to flood the trust modeling system with unfair ratings, or sellers vary their behavior widely. We propose a novel personalized approach for effectively modeling the trustworthiness of advisors, allowing a buyer to 1) model the private reputation of an advisor based on their ratings for commonly rated sellers, 2) model the public reputation of the advisor based on all ratings for the sellers ever rated by that agent, and 3) flexibly weight the private and public reputations into one combined measure of the trustworthiness of the advisor. Our approach tracks ratings according to their time windows and limits the ratings accepted, in order to cope with advisors flooding the system and with changes in agents' behavior. Experimental evidence demonstrates that our model outperforms other models in detecting dishonest advisors and helps buyers gain the largest profit when doing business with sellers. Equipped with this richer method for modeling the trustworthiness of advisors, we then embed this reasoning into a novel trust-based incentive mechanism to encourage agents to be honest. In this mechanism, buyers select the most trustworthy advisors as their neighbors, from whom they can ask advice about sellers, forming a social network.
In contrast with other researchers, we also have sellers model the reputation of buyers. Sellers offer better rewards to satisfy buyers that are well respected in the social network, in order to build their own reputation. We provide the precise formulae used by sellers when reasoning about immediate and future profit to determine their bidding behavior and the rewards offered to buyers, and emphasize the importance for buyers of adopting a strategy that limits the number of sellers considered for each good to be purchased. We theoretically prove that our mechanism promotes honesty from buyers in reporting seller ratings, and honesty from sellers in delivering products as promised. We also provide a series of experimental results in a simulated dynamic environment where agents may be arriving and departing; this provides a stronger defense of the mechanism as one that is robust to important conditions in the marketplace. Our experiments clearly show the gains in profit enjoyed by both honest sellers and honest buyers when our mechanism is introduced and our proposed strategies are followed. In general, our research serves to promote honesty amongst buyers and sellers in e-marketplaces. Our proposal of allowing sellers to model buyers opens a new direction in trust modeling research, and the novel direction of designing an incentive mechanism based on trust modeling, then using that mechanism to further help trust modeling by diminishing the problem of unfair ratings, will, we hope, bridge researchers in the areas of trust modeling and mechanism design.
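    The private/public weighting at the heart of the advisor model can be sketched as follows. The linear weighting rule, the `min_comparisons` threshold, and the neutral 0.5 prior are illustrative assumptions rather than the thesis's exact formulae.

```python
# Hedged sketch: combine a buyer's private experience with an advisor
# and the advisor's public reputation, leaning on the private signal
# only once enough commonly rated sellers exist.

def private_reputation(agreements: int, comparisons: int) -> float:
    """Fraction of commonly rated sellers on which the buyer and the
    advisor agree; neutral 0.5 when there is no shared experience."""
    return agreements / comparisons if comparisons else 0.5

def advisor_trust(agreements: int, comparisons: int,
                  public_rep: float, min_comparisons: int = 10) -> float:
    """Weight private experience more heavily as shared ratings
    accumulate; fall back on public reputation when evidence is thin."""
    w = min(1.0, comparisons / min_comparisons)
    return w * private_reputation(agreements, comparisons) + (1 - w) * public_rep
```

With no shared ratings the public reputation dominates; with ample shared ratings the buyer's own experience decides, which is the flexibility the abstract describes.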

    Mechanisms for making crowds truthful

    We consider schemes for obtaining truthful reports on a common but hidden signal from large groups of rational, self-interested agents. One example is online feedback mechanisms, where users provide observations about the quality of a product or service so that other users can have an accurate idea of what quality they can expect. However, (i) providing such feedback is costly, and (ii) there are many motivations for providing incorrect feedback. Both problems can be addressed by reward schemes which (i) cover the cost of obtaining and reporting feedback, and (ii) maximize the expected reward of a rational agent who reports truthfully. We address the design of such incentive-compatible rewards for feedback generated in environments with pure adverse selection. Here, the correlation between the true knowledge of an agent and her beliefs regarding the likely reports of other agents can be exploited to make honest reporting a Nash equilibrium. In this paper we extend existing methods for designing incentive-compatible rewards by also considering collusion. We analyze different scenarios where, for example, some or all of the agents collude. For each scenario we investigate whether a collusion-resistant, incentive-compatible reward scheme exists, and use automated mechanism design to specify an algorithm for deriving an efficient reward mechanism.
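    The correlation argument above (an agent's own signal shifts her beliefs about what a reference peer will report, so a proper scoring rule rewards honesty) can be sketched with a binary-signal example. The signal model and its probabilities are assumptions for illustration; the collusion analysis and automated mechanism design are not shown.

```python
import math

# Hedged peer-prediction sketch: score each report by the log
# probability its implied beliefs assign to a reference peer's report.

# Assumed common prior: P(peer observes "high" | my signal).
POSTERIOR_HIGH = {"high": 0.8, "low": 0.3}

def reward(report: str, peer_report: str) -> float:
    """Log scoring rule on the reporter's implied prediction of a
    randomly chosen reference peer's report."""
    p_high = POSTERIOR_HIGH[report]
    p = p_high if peer_report == "high" else 1 - p_high
    return math.log(p)

def expected_reward(true_signal: str, report: str) -> float:
    """Expected payment, taking the expectation over the peer's report
    given the reporter's true signal."""
    p_high = POSTERIOR_HIGH[true_signal]
    return p_high * reward(report, "high") + (1 - p_high) * reward(report, "low")
```

Because the log score is a proper scoring rule and the two signals induce distinct posteriors, reporting the true signal maximizes the expected reward, making honesty a Nash equilibrium in this toy setting.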

    Toward Secure Trust and Reputation Systems for Electronic Marketplaces

    In electronic marketplaces, buying and selling agents may be used to represent buyers and sellers respectively. When these marketplaces are large, repeated transactions between traders may be rare. This makes it difficult for buying agents to judge the reliability of selling agents, discouraging participation in the market. A variety of trust and reputation systems have been proposed to help traders to find trustworthy partners. Unfortunately, as our investigations reveal, there are a number of common vulnerabilities present in such models---security problems that may be exploited by `attackers' to cheat without detection/repercussions. Inspired by these findings, we set out to develop a model of trust with more robust security properties than existing proposals. Our Trunits model represents a fundamental re-conception of the notion of trust. Instead of viewing trust as a measure of predictability, Trunits considers trust to be a quality that one possesses. Trust is represented using abstract trust units, or `trunits', in much the same way that money represents quantities of value. Trunits flow in the course of transactions (again, similar to money); a trader's trunit balance determines if he is trustworthy for a given transaction. Faithful execution of a transaction results in a larger trunit balance, permitting the trader to engage in more transactions in the future---a built-in economic incentive for honesty. We present two mechanisms (sets of rules that govern the operation of the marketplace) based on this model: Basic Trunits, and an extension known as Commodity Trunits, in which trunits may be bought and sold. Seeking to precisely characterize the protection provided to market participants by our models, we develop a framework for security analysis of trust and reputation systems. 
Inspired by work in cryptography, our framework allows security guarantees to be developed for trust and reputation models: provable claims about the degree of protection provided, and the conditions under which such protection holds. We focus in particular on characterizing buyer security: the properties that must hold for buyers to feel secure from cheating sellers. Beyond developing security guarantees, this framework is an important research tool, helping to highlight limitations and deficiencies in models so that they may be targeted for future investigation. Application of this framework to Basic Trunits and Commodity Trunits reveals that both are able to deliver provable security to buyers.
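    The trunit flow described in the abstract, trust as a spendable balance that is escrowed per transaction and grows with faithful execution, can be sketched as a toy ledger. The class, the stake/escrow mechanics, and the bonus multiplier are illustrative assumptions, not the Basic Trunits rules themselves.

```python
# Hedged sketch of a trunit account: balance gates participation,
# honesty grows the balance, cheating forfeits the escrowed stake.

class TrunitAccount:
    def __init__(self, balance: float = 10.0):
        self.balance = balance   # spendable trunits
        self.escrow = 0.0        # trunits locked in open transactions

    def can_transact(self, stake: float) -> bool:
        """A trader is trustworthy for a transaction only if it can
        stake enough trunits to cover it."""
        return self.balance >= stake

    def begin(self, stake: float) -> None:
        """Escrow trunits proportional to the transaction's value."""
        assert self.can_transact(stake)
        self.balance -= stake
        self.escrow += stake

    def settle(self, stake: float, faithful: bool, bonus: float = 0.2) -> None:
        """Faithful execution returns the stake plus a bonus, enabling
        larger future transactions; cheating forfeits the stake."""
        self.escrow -= stake
        if faithful:
            self.balance += stake * (1 + bonus)
```

The built-in incentive is visible directly: each honest settlement raises the balance, while a single defection both costs the stake and shrinks the set of future transactions the trader qualifies for.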

    Evolutionary Mechanism Design

    The advent of large-scale distributed systems poses unique engineering challenges. In open systems such as the internet it is not possible to prescribe the behaviour of all of the components of the system in advance. Rather, we attempt to design infrastructure, such as network protocols, in such a way that the overall system is robust despite the fact that numerous arbitrary, non-certified, third-party components can connect to our system. Economists have long understood this issue, since it is analogous to the design of the rules governing auctions and other marketplaces, in which we attempt to achieve socially desirable outcomes despite the impossibility of prescribing the exact behaviour of the market participants, who may attempt to subvert the market for their own personal gain. This field is known as 'mechanism design': the science of designing the rules of a game to achieve a specific outcome, even though each participant may be self-interested. Although it originated in economics, mechanism design has become an important foundation of multi-agent systems (MAS) research. In many scenarios mechanism design and auction theory yield clear-cut results; however, there are many situations in which the underlying assumptions of the theory are violated due to the messiness of the real world. In this thesis I introduce an evolutionary methodology for mechanism design, which is able to incorporate arbitrary design objectives and domain assumptions, and I validate the methodology using empirical techniques.

    THE EFFECTIVENESS OF SELLER CREDIBILITY SYSTEMS IN THE ONLINE AUCTION MARKET: MODELING THE SELLER'S POINT OF VIEW

    The Internet has turned out to be an appealing place for doing business, with its unprecedented ability to bring together large numbers of buyers and sellers, cover markets at wide scale, and automate transaction processes. However, this powerful technology brings a greater trust problem than corresponding transactions in brick-and-mortar markets, because of the lack of information on product quality and seller honesty. Product information may be selectively disclosed, which increases the chance of fraud and dishonest behavior. This research focuses on online feedback systems. Analytical models are developed to assess the impact of such feedback systems. Feedback systems, by themselves, are shown to work under certain conditions even in an ideal environment. The influence of incentives for providing feedback, shilling, and ID changing is discussed in detail. If consumers do value trust, one should expect the more trustworthy sellers to generate higher prices for their products than the less trustworthy sellers. A higher price can offer incentives for sellers to be trustworthy. Following the analytical model, empirical tests of online feedback systems are conducted.
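    The price-premium argument above, that buyers who value trust bid more for trustworthy sellers, can be sketched with a risk-neutral valuation. The linear expected-value form and all parameters are illustrative assumptions, not the dissertation's analytical model.

```python
# Hedged sketch: a buyer discounts willingness to pay by the
# probability of honest delivery, so trustworthy sellers earn a premium.

def willingness_to_pay(item_value: float, p_honest: float,
                       salvage: float = 0.0) -> float:
    """Risk-neutral buyer's maximum bid given the seller's estimated
    probability of honest delivery; `salvage` is the value recovered
    when the seller cheats (assumed zero by default)."""
    return p_honest * item_value + (1 - p_honest) * salvage

def trust_premium(item_value: float, p_high: float, p_low: float) -> float:
    """Extra price a more trustworthy seller can command over a less
    trustworthy one for the same item."""
    return willingness_to_pay(item_value, p_high) - willingness_to_pay(item_value, p_low)
```

The premium scales with both the item's value and the reputation gap, which is the channel through which a feedback system can make honesty pay.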