
    From Manifesta to Krypta: The Relevance of Categories for Trusting Others

    In this paper we consider the special abilities that agents need in order to assess trust based on inference and reasoning. We analyze the case in which it is possible to infer trust towards unknown counterparts by reasoning on abstract classes or categories of agents shaped in a concrete application domain. We present a scenario of interacting agents and provide a computational model implementing different strategies to assess trust. Assuming a medical domain, categories, covering both the competencies and the dispositions of possible trustees, are exploited to infer trust towards possibly unknown counterparts. The proposed approach to the cognitive assessment of trust relies on agents' abilities to analyze heterogeneous information sources along different dimensions. Trust is inferred from specific observable properties (Manifesta), namely explicitly readable signals that indicate the internal features (Krypta) regulating agents' behavior and effectiveness on specific tasks. Simulation experiments evaluate the performance of trusting agents adopting different strategies to delegate tasks to possibly unknown trustees, and the results show the relevance of this kind of cognitive ability in open Multi-Agent Systems.
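    The idea that a readable signal (a category) serves as a proxy for hidden task competence can be illustrated with a small sketch. The code below is a purely illustrative reconstruction, not the authors' model: the category names, competence values, and delegation threshold are assumptions.

```python
# Minimal sketch (assumed values, not the paper's model) of category-based trust:
# an agent exposes its category (manifesta); its true skill (krypta) stays hidden.
import random

CATEGORY_COMPETENCE = {        # manifesta -> expected competence (assumed values)
    "cardiologist": 0.9,
    "general_practitioner": 0.6,
    "nurse": 0.4,
}

def make_agent(category):
    """Create an agent whose hidden skill is drawn around its category mean."""
    true_skill = min(1.0, max(0.0, random.gauss(CATEGORY_COMPETENCE[category], 0.1)))
    return {"category": category, "_true_skill": true_skill}

def category_trust(trustee):
    """Infer trust from the readable category alone, without direct experience."""
    return CATEGORY_COMPETENCE[trustee["category"]]

def delegate(truster_threshold, trustee):
    """Delegate only if inferred trust exceeds the threshold; outcome depends on krypta."""
    if category_trust(trustee) < truster_threshold:
        return None                                       # task not delegated
    return random.random() < trustee["_true_skill"]       # success driven by hidden skill

random.seed(0)
unknown = make_agent("cardiologist")
print("inferred trust:", category_trust(unknown), "outcome:", delegate(0.7, unknown))
```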

    SCFM: Social and crowdsourcing factorization machines for recommendation

    With the rapid development of social networks, the exponential growth of social information has attracted much attention. Social information has great value in recommender systems for alleviating the sparsity and cold-start problems. On the other hand, crowd computing empowers recommender systems by utilizing human wisdom. Internal user reviews can be exploited as the wisdom of the crowd to contribute information. In this paper, we propose social and crowdsourcing factorization machines, called SCFM. Our approach fuses social and crowd computing into the factorization machine model. For social computing, we calculate the influence value between users by taking users' social information and user similarity into account. For crowd computing, we apply LDA (Latent Dirichlet Allocation) to user reviews to obtain sets of underlying topic probabilities. Furthermore, we impose two important constraints called social regularization and domain inner regularization. The experimental results show that our approach outperforms other state-of-the-art methods. This project is supported by the National Natural Science Foundation of China (Nos. 61672340, 61472240, 61572268).
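    A rough sketch of the kind of objective such a model combines is given below: a latent-factor predictor fit to ratings plus a social regularization term that pulls a user's latent vector towards those of connected users, weighted by influence values. This is an assumption-laden simplification: the variable names, the influence weights, and the plain matrix-factorization predictor are illustrative only, whereas the paper's actual model uses factorization machines with LDA topic features and an additional domain inner regularization term.

```python
# Sketch of socially regularized latent-factor updates (illustrative, not SCFM itself).
import numpy as np

def sgd_step(U, V, r, u, i, friends, influence, lr=0.01, lam=0.05, beta=0.1):
    """One SGD update on rating r(u, i) with a social regularization term on user u."""
    err = np.dot(U[u], V[i]) - r                              # prediction error
    social_pull = sum(influence[u, f] * (U[u] - U[f]) for f in friends[u]) \
        if friends[u] else np.zeros_like(U[u])                # pull towards trusted users
    U[u] -= lr * (err * V[i] + lam * U[u] + beta * social_pull)
    V[i] -= lr * (err * U[u] + lam * V[i])

rng = np.random.default_rng(0)
U, V = rng.normal(0, 0.1, (4, 8)), rng.normal(0, 0.1, (5, 8))  # 4 users, 5 items
friends = {0: [1, 2], 1: [0], 2: [0], 3: []}                   # toy social links
influence = np.full((4, 4), 0.5)                               # assumed influence values
for _ in range(100):
    sgd_step(U, V, r=4.0, u=0, i=3, friends=friends, influence=influence)
print("predicted rating:", float(np.dot(U[0], V[3])))
```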

    Simultaneous Inference of User Representations and Trust

    Inferring trust relations between social media users is critical for a number of applications wherein users seek credible information. The fact that available trust relations are scarce and skewed makes trust prediction a challenging task. To the best of our knowledge, this is the first work to explore representation learning for trust prediction. We propose an approach that uses only a small amount of binary user-user trust relations to simultaneously learn user embeddings and a model to predict trust between user pairs. We empirically demonstrate that for trust prediction, our approach outperforms classifier-based approaches which use state-of-the-art representation learning methods like DeepWalk and LINE as features. We also conduct experiments which use embeddings pre-trained with DeepWalk and LINE each as an input to our model, resulting in further performance improvement. Experiments with a dataset of ~356K user pairs show that the proposed method can obtain a high F-score of 92.65%. Comment: To appear in the proceedings of ASONAM'17. Please cite that version.
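    A minimal sketch of the core idea, learning user embeddings directly from binary trust pairs with a logistic pairwise objective, is shown below. The dot-product scoring function, the toy pairs, and all hyperparameters are assumptions for illustration; the paper's model and data differ.

```python
# Sketch: jointly learn embeddings E so that sigmoid(E[u] . E[v]) predicts trust(u, v).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(pairs, labels, n_users, dim=16, lr=0.05, epochs=200, seed=0):
    """Fit embeddings to observed binary trust labels with a log-loss objective."""
    rng = np.random.default_rng(seed)
    E = rng.normal(0, 0.1, (n_users, dim))
    for _ in range(epochs):
        for (u, v), y in zip(pairs, labels):
            p = sigmoid(E[u] @ E[v])
            g = p - y                     # gradient of log-loss w.r.t. the score
            E[u] -= lr * g * E[v]
            E[v] -= lr * g * E[u]
    return E

pairs = [(0, 1), (1, 2), (0, 3), (2, 3)]  # observed user-user pairs (toy data)
labels = [1, 1, 0, 0]                     # 1 = trust edge, 0 = no trust
E = train(pairs, labels, n_users=4)
print("P(0 trusts 1):", round(float(sigmoid(E[0] @ E[1])), 3))
```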

    Civility and trust in social media

    Social media have been credited with the potential to reinvigorate trust by offering new opportunities for social and political participation. This view has recently been challenged by the rising phenomenon of online incivility, which has made the environment of social networking sites hostile to many users. We conduct a novel experiment in a Facebook setting to study how the effect of social media on trust varies depending on the civility or incivility of online interaction. We find that participants exposed to civil Facebook interaction are significantly more trusting. In contrast, when the use of Facebook is accompanied by the experience of online incivility, no significant changes occur in users' behavior. These results are robust to alternative configurations of the treatments.

    Trust and Financial Trades: Lessons from an Investment Game Where Reciprocators Can Hide Behind Probabilities

    In this paper we show that if a very small, exogenously given probability of terminating the exchange is introduced into an elementary investment game, reciprocators play the defection strategy more often. Everything happens as if they "hide behind probabilities" in order to break the trust relationship. Investors do not seem able to internalize the reciprocators' change in behavior. This could explain why trades involving an exogenous risk of value destruction, such as financial transactions, provide an unfavorable environment for trust-building. Keywords: Experimental Economics; Financial Transactions; Investment Game; Objective Risk; Trust.
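    A toy simulation of such a game, in which the exogenous termination makes a defection indistinguishable from bad luck from the investor's point of view, might look as follows. The endowment, multiplier, split rule, and termination probability are assumed numbers, not the parameters used in the experiment.

```python
# Illustrative investment game with an exogenous termination probability (assumed values).
import random

def play_round(investor_sends, reciprocator_defects, p_terminate=0.05, multiplier=3):
    """Return (investor payoff, reciprocator payoff) for one exchange from a 10-unit endowment."""
    if random.random() < p_terminate:          # exogenous breakdown: sent value is destroyed
        return 10 - investor_sends, 0
    pot = investor_sends * multiplier
    if reciprocator_defects:                   # keep everything; to the investor it looks like bad luck
        return 10 - investor_sends, pot
    return 10 - investor_sends + pot / 2, pot / 2

random.seed(1)
print(play_round(investor_sends=5, reciprocator_defects=True))
```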

    A strategy for trust propagation along the more trusted paths

    The main goal of social networks is sharing and exchanging information among users. With the rapid growth of social networks on the Web, most interactions are conducted among unknown individuals. On the other hand, with the increase in biased behaviors in online communities, the ability to assess the trustworthiness of a person before interacting with him or her has an important influence on users' decisions. Trust inference is a method used for this purpose. This paper studies propagating trust values along trust relationships in order to estimate the reliability of an anonymous person from the point of view of the user who intends to trust him/her. It describes a new approach for predicting trust values in social networks. The proposed method selects the most reliable trust paths from a source node to a destination node. In order to select the optimal paths, a new relation for calculating a trustable coefficient based on users' previous performance in the social network is proposed. In the Ciao dataset there is a column called helpfulness, whose values represent users' previous performance in the social network. The advantages of this algorithm are its simplicity in trust calculation, its use of a new entity in the dataset, and its improved accuracy. The results of experiments on the Ciao dataset indicate that the accuracy of the proposed method in evaluating trust values is higher than that of well-known methods in this area, including the TidalTrust and MoleTrust methods.
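    The path-selection idea can be sketched as a max-product search over a small trust graph in which each edge is additionally weighted by the next user's helpfulness score, standing in for the trustable coefficient. The toy graph, the helpfulness values, and the exact weighting below are illustrative assumptions rather than the paper's relation.

```python
# Sketch: infer trust along the most reliable path, weighting edges by helpfulness.
import heapq

trust = {           # directed trust edges: truster -> {trustee: trust value in [0, 1]}
    "A": {"B": 0.9, "C": 0.6},
    "B": {"D": 0.8},
    "C": {"D": 0.9},
    "D": {},
}
helpfulness = {"A": 0.7, "B": 0.9, "C": 0.4, "D": 0.8}   # past performance (assumed)

def best_path_trust(source, target):
    """Max-product path search; valid because every multiplier lies in [0, 1]."""
    best = {source: 1.0}
    heap = [(-1.0, source)]
    while heap:
        neg_score, user = heapq.heappop(heap)
        if user == target:
            return -neg_score
        for nxt, t in trust[user].items():
            score = -neg_score * t * helpfulness[nxt]    # edge trust times helpfulness coefficient
            if score > best.get(nxt, 0.0):
                best[nxt] = score
                heapq.heappush(heap, (-score, nxt))
    return 0.0

print("inferred trust A->D:", round(best_path_trust("A", "D"), 3))
```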

    Mining Heterogeneous Influence and Indirect Trust for Recommendation

    Relationships between users in social networks have been widely used to improve recommender systems. However, actual social relationships are always sparse, which sometimes greatly harms the performance of recommender systems. In fact, a user may interact with others to whom he/she is not directly connected, and thus has an impact on these users. To mine abundant information for social recommendation and alleviate the problem of data sparsity, we study the process of trust propagation and propose a novel recommendation algorithm that incorporates multiple information sources into matrix factorization. We first explore heterogeneous influence strength for each pair of linked users and mine indirect trust between users by using a trust propagation and aggregation strategy in social networks. Then, explicit and implicit information from user trust and ratings is incorporated into matrix factorization, and the influence of indirect trust is considered in the recommendation process. Experimental results show that the proposed model achieves better performance than some state-of-the-art recommendation models in terms of accuracy and relieves the cold-start problem.
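    The two ingredients described above can be sketched as follows: indirect trust obtained by propagating direct trust one extra hop and aggregating it with the direct relations, followed by a matrix-factorization update regularized towards trusted users. The toy ratings, trust values, max-based aggregation, and the specific regularizer are assumptions; the paper's propagation strategy and objective are more elaborate.

```python
# Sketch of trust propagation plus trust-regularized matrix factorization (illustrative).
import numpy as np

def propagate_trust(T):
    """Indirect trust: aggregate two-hop propagation (u->k->v) with direct trust."""
    indirect = T @ T                       # strength of two-hop paths
    combined = np.maximum(T, indirect)     # simple aggregation choice (assumed)
    np.fill_diagonal(combined, 0.0)
    return np.clip(combined, 0.0, 1.0)

def mf_step(U, V, R, mask, S, lr=0.01, lam=0.05, beta=0.1):
    """One full-gradient step, pulling each user towards the trust-weighted average of neighbors."""
    E = mask * (U @ V.T - R)               # errors on observed ratings only
    trust_pull = U - (S @ U) / np.maximum(S.sum(1, keepdims=True), 1e-9)
    U -= lr * (E @ V + lam * U + beta * trust_pull)
    V -= lr * (E.T @ U + lam * V)

rng = np.random.default_rng(0)
R = np.array([[5, 0, 3], [4, 0, 0], [0, 2, 5]], dtype=float)   # toy user-item ratings
mask = (R > 0).astype(float)
T = np.array([[0, .8, 0], [0, 0, .6], [.5, 0, 0]])             # direct trust (toy values)
S = propagate_trust(T)
U, V = rng.normal(0, .1, (3, 4)), rng.normal(0, .1, (3, 4))
for _ in range(200):
    mf_step(U, V, R, mask, S)
print("predicted R[0,1]:", round(float(U[0] @ V[1]), 2))
```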

    Can a Bonus Overcome Moral Hazard? An Experiment on Voluntary Payments, Competition, and Reputation in Markets for Expert Services

    Interactions between players with private information and opposed interests are often prone to bad advice and inefficient outcomes, e.g., markets for financial or health care services. In a deception game we investigate experimentally which factors could improve advice quality. Besides advisor competition and identifiability, we add the possibility for clients to make a voluntary payment, a bonus, after observing advice quality. We observe a positive effect on the rate of truthful advice when the bonus creates multiple opportunities to reciprocate, that is, when the bonus is combined with identifiability (leading to several client-advisor interactions over the course of the game) or competition (allowing one advisor to have several clients who may reciprocate within one period).