
    Analyzing communities vs. single agent-based Web services: Trust perspectives

    Gathering functionally similar agent-based Web services into communities has been proposed and promoted on many occasions. In this paper, we compare the performance of these communities with self-managed, single agent-based Web services from a trust perspective. To this end, we deploy a reputation model that ranks communities and Web services with respect to different reputation parameters. By relating these parameters, we extend our discussion to analyze the cases and incentives in which a single Web service benefits from joining a community, even if joining could negatively impact other parameters. Besides the theoretical discussion of this analysis, we describe the system implementation along with simulations that depict the diverse parameters and the system's performance. © 2010 IEEE
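    A minimal sketch of the kind of reputation-based ranking the abstract describes, not the authors' actual model: communities and single Web services are scored on a weighted combination of reputation parameters and ranked. The parameter names and weights below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    responsiveness: float   # fraction of requests answered in time, in [0, 1]
    satisfaction: float     # average user feedback, in [0, 1]
    availability: float     # uptime ratio, in [0, 1]

def reputation(p: Provider, weights=(0.4, 0.4, 0.2)) -> float:
    """Aggregate per-parameter scores into a single reputation value."""
    w_resp, w_sat, w_avail = weights
    return w_resp * p.responsiveness + w_sat * p.satisfaction + w_avail * p.availability

providers = [
    Provider("community-A", 0.92, 0.85, 0.99),
    Provider("single-ws-B", 0.80, 0.90, 0.95),
]
for p in sorted(providers, key=reputation, reverse=True):
    print(f"{p.name}: {reputation(p):.3f}")
```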

    Incentive-based reputation of WEB services communities

    There have always been motivations to cluster entities with similar functionality into groups of redundant services or agents. Communities of Web services are composed by aggregating a number of functionally identical Web services. Many communities offering the same type of service can be formed, and all aim to increase their reputation level in order to obtain more requests. The problem, however, is that there is no incentive for these communities to act truthfully and to refrain from providing fake feedback in support of themselves or against others. In this thesis we propose an incentive- and game-theoretic mechanism for reputation assessment of communities of Web services. The proposed reputation mechanism is based on after-service feedback provided by users to a logging system. Given that the communities are free to decide on their actions, the proposed method defines the evaluation metrics involved in the reputation assessment of the communities and supervises the logging system by means of a controller agent that verifies the validity and soundness of the feedback. We also define incentives so that the best game-theoretic strategy for communities is to act truthfully. A theoretical analysis of the framework, along with empirical results, is provided
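    A minimal sketch of the incentive idea, not the thesis's mechanism: an expected-payoff comparison showing when truthful reporting dominates faking feedback. The detection probability (the controller agent auditing the log) and the penalty value are illustrative assumptions.

```python
def expected_payoff(fake: bool, gain_from_fake=2.0, detection_prob=0.7, penalty=5.0) -> float:
    """Expected payoff of a community for one feedback-reporting decision."""
    if not fake:
        return 0.0                       # baseline: no extra gain, no risk
    # Faking: keep the gain if undetected, pay the penalty if caught.
    return (1 - detection_prob) * gain_from_fake - detection_prob * penalty

print("truthful:", expected_payoff(False))
print("fake    :", expected_payoff(True))   # negative => acting truthfully is the better strategy
```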

    Reputation Agent: Prompting Fair Reviews in Gig Markets

    Our study presents a new tool, Reputation Agent, to promote fairer reviews from requesters (employers or customers) on gig markets. Unfair reviews, created when requesters consider factors outside of a worker's control, are known to plague gig workers and can result in lost job opportunities and even termination from the marketplace. Our tool leverages machine learning to implement an intelligent interface that: (1) uses deep learning to automatically detect when an individual has included unfair factors in her review (factors outside the worker's control per the policies of the market); and (2) prompts the individual to reconsider her review if she has incorporated such factors. To study the effectiveness of Reputation Agent, we conducted a controlled experiment over different gig markets. Our experiment illustrates that, across markets, Reputation Agent motivates requesters to review gig workers' performance more fairly than traditional approaches do. We discuss how tools that give employers more transparency into the policies of a gig market can help build empathy, resulting in reasoned discussions around potential injustices towards workers generated by these interfaces. Our vision is that tools that promote truth and transparency can bring fairer treatment to gig workers.
    Comment: 12 pages, 5 figures, The Web Conference 2020, ACM WWW 202
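    A toy sketch of the interface behaviour described above, not Reputation Agent's deep-learning detector: it flags review text that mentions factors outside the worker's control and prompts the reviewer to reconsider. The example factor list and wording are assumptions for illustration only.

```python
# Illustrative keyword lookup standing in for the paper's deep-learning classifier.
UNFAIR_FACTORS = {
    "traffic": "road/traffic conditions",
    "weather": "weather",
    "restaurant was slow": "third-party delays",
    "app crashed": "platform/app failures",
}

def check_review(text: str) -> list[str]:
    """Return the unfair factors mentioned in the review, if any."""
    lowered = text.lower()
    return [label for key, label in UNFAIR_FACTORS.items() if key in lowered]

review = "One star. The driver got stuck in traffic and the app crashed twice."
flags = check_review(review)
if flags:
    print("Before submitting, note that these factors are outside the worker's control:")
    for f in flags:
        print(" -", f)
```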

    Simulating the conflict between reputation and profitability for online rating portals

    We simulate the process of possible interactions between a set of competitive services and a set of portals that provide online ratings for these services. We argue that, to have a profitable business, these portals are forced to have subscribed services that they rate. To model how a portal satisfies its subscribing services, we assume that the portal improves the rating of a given service by one unit per transaction that involves payment. In this study we follow the 'what-if' methodology, analysing the strategies a service may choose from to select the best portal to subscribe to, and the strategies a portal may follow to accept the subscription such that its reputation loss, in terms of the integrity of its ratings, is minimised. We observe that the behaviour of the simulated agents under our model is quite natural from a real-world perspective. One conclusion from the simulations is that, under reasonable conditions, if most of the services and rating portals in a given industry do not adopt a subscription policy similar to the one indicated above, they will lose, respectively, their ratings and reputations, and, moreover, the rating portals will have problems making a profit. Our prediction is that the modern portal-rating-based economy sector will eventually evolve towards a subscription process similar to the one we suggest in this study, as an alternative to a business model based purely on advertising
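    A minimal 'what-if' sketch of the dynamic described above, not the paper's simulator: a portal bumps each subscribed service's rating by one unit per paid transaction, and its own integrity (the degree to which its ratings reflect true quality) degrades with every such adjustment. The decay rate and quality scale are assumptions.

```python
import random

random.seed(0)
true_quality = {f"service-{i}": random.uniform(0, 5) for i in range(5)}
rating = dict(true_quality)          # the portal's published ratings start out honest
portal_integrity = 1.0               # 1.0 = ratings fully reflect true quality

subscribers = {"service-0", "service-3"}          # services that pay the portal
for step in range(10):
    for s in subscribers:
        rating[s] += 1.0                          # +1 rating unit per paid transaction
        portal_integrity *= 0.97                  # integrity loss per manipulation (assumed rate)

drift = sum(abs(rating[s] - true_quality[s]) for s in rating) / len(rating)
print(f"avg rating drift: {drift:.2f}, portal integrity: {portal_integrity:.2f}")
```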

    Trust Strategies for the Semantic Web

    Everyone agrees on the importance of enabling trust on the Semantic Web to ensure more efficient agent interaction. Current research on trust seems to focus on developing computational models, semantic representations, inference techniques, etc. However, little attention has been given to the plausible trust strategies or tactics that an agent can follow when interacting with other agents on the Semantic Web. In this paper we identify the five most common trust strategies and discuss their envisaged costs and benefits. The aim is to provide guidelines that help system developers appreciate the risks and gains involved with each trust strategy

    A Formal Framework for Modeling Trust and Reputation in Collective Adaptive Systems

    Trust and reputation models for distributed, collaborative systems have been studied and applied in several domains in order to stimulate cooperation while preventing selfish and malicious behaviors. Nonetheless, such models have received less attention in the process of formally specifying and analyzing the functionalities of these systems. The objective of this paper is to define a process-algebraic framework for modeling systems that use (i) trust and reputation to govern the interactions among nodes, and (ii) communication models characterized by a high level of adaptiveness and flexibility. Hence, we propose a formalism for verifying, through model checking techniques, the robustness of these systems with respect to the typical attacks conducted against webs of trust.
    Comment: In Proceedings FORECAST 2016, arXiv:1607.0200

    Trust and Risk Relationship Analysis on a Workflow Basis: A Use Case

    Trust and risk are often seen in proportion to each other; as such, high trust may imply low risk and vice versa. However, recent research argues that the trust and risk relationship is implicit rather than proportional. Considering that trust and risk are implicit, this paper proposes, for the first time, a novel approach to viewing trust and risk on the basis of the W3C PROV provenance data model applied in a healthcare domain. We argue that, in the healthcare domain, high trust can be placed in data despite its high risk, and low-trust data can have low risk, depending on data quality attributes and its provenance. This is demonstrated by our trust and risk models applied to the BII case study data. The proposed theoretical approach first calculates risk values at each workflow step considering PROV concepts and, second, aggregates the final risk score for the whole provenance chain. Unlike the risk model, the trust of a workflow is derived by applying the DS/AHP method. The results support our assumption that the trust and risk relationship is implicit
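    A minimal sketch of the aggregation step described above, not the paper's model: per-step risk values over a provenance chain are combined into one workflow risk score. The step names, risk values, and the aggregation rule (probability that at least one step is unreliable) are illustrative assumptions; the DS/AHP trust derivation is not sketched here.

```python
workflow = [
    ("collect-sample",   0.05),   # (PROV activity, risk that its output is unreliable)
    ("run-assay",        0.10),
    ("normalize-data",   0.02),
    ("statistical-test", 0.08),
]

def chain_risk(steps) -> float:
    """Risk that at least one step in the provenance chain is unreliable."""
    ok = 1.0
    for _, r in steps:
        ok *= (1.0 - r)
    return 1.0 - ok

print(f"aggregated workflow risk: {chain_risk(workflow):.3f}")
```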