
    Credibility-Based Binary Feedback Model for Grid Resource Planning

    In commercial grids, Grid Service Providers (GSPs) can improve their profitability by maintaining the smallest amount of resources needed to meet client demand. Their goal is to maximize profits by optimizing resource planning. To achieve this goal, they require an estimate of the demand for their service, but collecting demand data is costly and difficult. In this paper we develop an approach to building a proxy for demand, which we call a value profile. To construct a value profile, we use binary feedback from a collection of heterogeneous clients. We show that this can serve as a proxy for a demand function that represents a client’s willingness-to-pay for grid resources. As with all binary feedback systems, clients may require incentives to provide feedback and deterrents to selfish behavior, such as misrepresenting their true preferences to obtain superior services at lower costs. We use credibility mechanisms to detect untruthful feedback and penalize insincere or biased clients. Finally, we use game theory to study how cooperation can emerge in this community of clients and GSPs.
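
    As a rough illustration of how binary feedback could be turned into a value profile, the sketch below aggregates credibility-weighted accept/reject responses at quoted prices into an estimated willingness-to-pay curve. All identifiers, weights, and data are hypothetical and only stand in for the paper's credibility mechanism and game-theoretic analysis.

```python
from collections import defaultdict

def build_value_profile(feedback, credibility):
    """Aggregate binary accept/reject feedback into a value profile.

    feedback    -- list of (client_id, price, accepted) tuples, where
                   `accepted` is True if the client would buy at `price`
    credibility -- dict mapping client_id to a weight in [0, 1]
    Returns a dict mapping price -> credibility-weighted fraction of
    clients willing to pay that price (a proxy for demand).
    """
    accept_w = defaultdict(float)   # credibility-weighted "yes" votes per price
    total_w = defaultdict(float)    # total credibility weight per price
    for client, price, accepted in feedback:
        w = credibility.get(client, 0.5)   # unknown clients get a neutral weight
        total_w[price] += w
        if accepted:
            accept_w[price] += w
    return {p: accept_w[p] / total_w[p] for p in total_w if total_w[p] > 0}

# Hypothetical example: three clients report whether they would pay each quoted price.
feedback = [("c1", 10, True), ("c2", 10, False), ("c3", 10, True),
            ("c1", 20, False), ("c2", 20, False), ("c3", 20, True)]
credibility = {"c1": 0.9, "c2": 0.4, "c3": 0.8}
print(build_value_profile(feedback, credibility))
# -> roughly {10: 0.81, 20: 0.38}: the GSP's estimated demand at each price
```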

    Reputation in multi agent systems and the incentives to provide feedback

    The emergence of the Internet has led to a vast increase in the number of interactions between parties that are completely alien to each other. In general, such transactions are likely to be subject to fraud and cheating. If such systems use computerized rational agents to negotiate and execute transactions, mechanisms that lead to favorable outcomes for all parties instead of giving rise to defective behavior are necessary to make the system work: trust and reputation mechanisms. This paper examines different incentive mechanisms that help these trust and reputation mechanisms elicit honest reports of users' own experiences.
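
    One widely studied class of such incentive mechanisms pays a rater only when their report is consistent with a peer's report on the same service, which makes honest reporting the best strategy whenever most peers report honestly. The sketch below is a minimal, hypothetical illustration of that idea, not a mechanism taken from the paper.

```python
import random

def peer_consistency_payment(report: bool, peer_reports: list[bool],
                             reward: float = 1.0) -> float:
    """Pay `reward` if the binary report matches a randomly drawn peer report."""
    if not peer_reports:
        return 0.0
    peer = random.choice(peer_reports)
    return reward if report == peer else 0.0

# If most honest raters found the service good, truthfully reporting True
# maximizes the expected payment, so honesty is incentivized.
print(peer_consistency_payment(True, [True, True, False]))
```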

    Reputation Agent: Prompting Fair Reviews in Gig Markets

    Our study presents a new tool, Reputation Agent, to promote fairer reviews from requesters (employers or customers) on gig markets. Unfair reviews, created when requesters consider factors outside of a worker's control, are known to plague gig workers and can result in lost job opportunities and even termination from the marketplace. Our tool leverages machine learning to implement an intelligent interface that: (1) uses deep learning to automatically detect when an individual has included unfair factors in her review (factors outside the worker's control per the policies of the market); and (2) prompts the individual to reconsider her review if she has incorporated unfair factors. To study the effectiveness of Reputation Agent, we conducted a controlled experiment over different gig markets. Our experiment illustrates that, across markets, Reputation Agent, in contrast with traditional approaches, motivates requesters to review gig workers' performance more fairly. We discuss how tools that bring more transparency to employers about the policies of a gig market can help build empathy, resulting in reasoned discussions around potential injustices towards workers generated by these interfaces. Our vision is that with tools that promote truth and transparency we can bring fairer treatment to gig workers.
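
    To make the two-step interface flow concrete, the sketch below mimics detect-then-prompt with a trivial keyword matcher standing in for the paper's deep-learning classifier; the patterns, messages, and function names are all hypothetical.

```python
# Minimal sketch of the prompt-on-unfair-review flow, assuming a simple
# keyword matcher in place of the actual deep-learning detector.

UNFAIR_PATTERNS = {
    "restaurant was slow",   # food prep time: outside a courier's control
    "traffic",               # road conditions: outside a driver's control
    "app crashed",           # platform failure: not the worker's fault
}

def flags_unfair_factors(review_text: str) -> list[str]:
    """Return the unfair-factor patterns mentioned in the review, if any."""
    text = review_text.lower()
    return [p for p in UNFAIR_PATTERNS if p in text]

def submit_review(review_text: str) -> str:
    """Prompt the requester to reconsider before accepting an unfair review."""
    hits = flags_unfair_factors(review_text)
    if hits:
        return ("Your review mentions factors outside the worker's control "
                f"({', '.join(hits)}). Per market policy, consider revising "
                "before submitting.")
    return "Review submitted."

print(submit_review("2 stars, the restaurant was slow and there was traffic"))
```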

    Trust-Based Mechanisms for Robust and Efficient Task Allocation in the Presence of Execution Uncertainty

    Vickrey-Clarke-Groves (VCG) mechanisms are often used to allocate tasks to selfish and rational agents. VCG mechanisms are incentive-compatible, direct mechanisms that are efficient (i.e. maximise social utility) and individually rational (i.e. agents prefer to join rather than opt out). However, an important assumption of these mechanisms is that the agents will always successfully complete their allocated tasks. Clearly, this assumption is unrealistic in many real-world applications where agents can, and often do, fail in their endeavours. Moreover, whether an agent is deemed to have failed may be perceived differently by different agents. Such subjective perceptions about an agent’s probability of succeeding at a given task are often captured and reasoned about using the notion of trust. Given this background, in this paper, we investigate the design of novel mechanisms that take into account the trust between agents when allocating tasks. Specifically, we develop a new class of mechanisms, called trust-based mechanisms, that can take into account multiple subjective measures of the probability of an agent succeeding at a given task and produce allocations that maximise social utility, whilst ensuring that no agent obtains a negative utility. We then show that such mechanisms pose a challenging new combinatorial optimisation problem (that is NP-complete), devise a novel representation for solving the problem, and develop an effective integer programming solution (that can solve instances with about 2×10^5 possible allocations in 40 seconds).
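
    The objective such a mechanism optimises can be illustrated with a small brute-force sketch: each candidate allocation is scored by its expected social utility, i.e. the trust-based probability of success times the task value minus the agent's cost, and the best allocation is kept. This only illustrates the objective on hypothetical numbers; the paper's actual mechanism also handles payments and uses an integer programming formulation to scale far beyond brute force.

```python
from itertools import product

def expected_utility(allocation, success_prob, task_value, cost):
    """Expected social utility of assigning each task to one agent.

    success_prob[agent][task] -- trust-based probability that `agent`
                                 completes `task`
    task_value[task]          -- value realised if the task succeeds
    cost[agent][task]         -- agent's cost of attempting the task
    """
    return sum(success_prob[a][t] * task_value[t] - cost[a][t]
               for t, a in allocation.items())

def best_allocation(agents, tasks, success_prob, task_value, cost):
    """Brute-force search over all assignments (illustration only; the
    underlying problem is NP-complete, hence the integer programming
    approach for realistic sizes)."""
    best, best_u = None, float("-inf")
    for choice in product(agents, repeat=len(tasks)):
        alloc = dict(zip(tasks, choice))
        u = expected_utility(alloc, success_prob, task_value, cost)
        if u > best_u:
            best, best_u = alloc, u
    return best, best_u

# Hypothetical example: two agents, two tasks.
agents, tasks = ["a1", "a2"], ["t1", "t2"]
success_prob = {"a1": {"t1": 0.9, "t2": 0.5}, "a2": {"t1": 0.6, "t2": 0.8}}
task_value = {"t1": 100, "t2": 80}
cost = {"a1": {"t1": 20, "t2": 20}, "a2": {"t1": 20, "t2": 20}}
print(best_allocation(agents, tasks, success_prob, task_value, cost))
# -> ({'t1': 'a1', 't2': 'a2'}, 114.0)
```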

    Competition in online comparison shopping services


    The mechanics of trust: a framework for research and design

    With an increasing number of technologies supporting transactions over distance and replacing traditional forms of interaction, designing for trust in mediated interactions has become a key concern for researchers in human-computer interaction (HCI). While much of this research focuses on increasing users’ trust, we present a framework that shifts the perspective towards factors that support trustworthy behavior. In a second step, we analyze how the presence of these factors can be signalled. We argue that it is essential to take a systemic perspective for enabling well-placed trust and trustworthy behavior in the long term. For our analysis we draw on relevant research from sociology, economics, and psychology, as well as HCI. We identify contextual properties (motivation based on temporal, social, and institutional embeddedness) and the actor's intrinsic properties (ability and motivation based on internalized norms and benevolence) that form the basis of trustworthy behavior. Our analysis provides a frame of reference for the design of studies on trust in technology-mediated interactions, as well as a guide for identifying trust requirements in design processes. We demonstrate the application of the framework in three scenarios: call centre interactions, B2C e-commerce, and voice-enabled on-line gaming.