123 research outputs found

    Extracting Implicit Social Relation for Social Recommendation Techniques in User Rating Prediction

    Recommendation plays an increasingly important role in our daily lives. Recommender systems automatically suggest items to users that might be interesting for them. Recent studies illustrate that incorporating social trust into Matrix Factorization methods demonstrably improves the accuracy of rating prediction. Such approaches mainly use the trust scores explicitly expressed by users. However, it is often challenging to have users provide explicit trust scores for each other. Quite a few works propose Trust Metrics to compute and predict trust scores between users based on their interactions. In this paper, we first show how a social relation can be extracted from users' ratings of items by computing the Hellinger distance between users in recommender systems. Then, we propose to incorporate the predicted trust scores into social matrix factorization models. By analyzing social relation extraction on three well-known real-world datasets in which both trust and recommendation data are available, we conclude that using the implicit social relation in social recommendation techniques yields almost the same performance as the actual trust scores explicitly expressed by users. Hence, we build our method, called Hell-TrustSVD, on top of the state-of-the-art social recommendation technique to incorporate both the extracted implicit social relations and the ratings given by users into the prediction of items for an active user. To the best of our knowledge, this is the first work to extend TrustSVD with extracted social trust information. The experimental results support the idea that employing implicit trust in matrix factorization whenever explicit trust is not available can perform much better than state-of-the-art approaches in user rating prediction.
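    A minimal sketch (not taken from the paper) of how the Hellinger distance between two users' rating distributions could be turned into an implicit trust weight; the five-point rating scale, the per-user rating histogram, and the `1 - distance` conversion are assumptions made purely for illustration:

```python
import numpy as np

def rating_distribution(ratings, scale=(1, 2, 3, 4, 5)):
    """Empirical distribution of a user's ratings over the rating scale."""
    counts = np.array([np.sum(np.asarray(ratings) == r) for r in scale], dtype=float)
    total = counts.sum()
    return counts / total if total > 0 else np.full(len(scale), 1.0 / len(scale))

def hellinger(p, q):
    """Hellinger distance between two discrete distributions (0 = identical, 1 = disjoint)."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def implicit_trust(ratings_u, ratings_v):
    """Convert the distance into a similarity-style score usable as an implicit trust weight."""
    d = hellinger(rating_distribution(ratings_u), rating_distribution(ratings_v))
    return 1.0 - d

# Two users with similar taste yield a weight near 1; dissimilar users yield a weight near 0.
print(implicit_trust([5, 4, 4, 5], [4, 4, 5, 3]))
print(implicit_trust([5, 5, 5, 5], [1, 1, 2, 1]))
```

    Because the distance is 0 for identical rating behaviour, `1 - distance` gives a weight that can stand in for an explicit trust score when feeding a TrustSVD-style model.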

    Top-N Recommendation Based on Mutual Trust and Influence

    To improve recommendation quality, existing trust-based recommendation methods often directly use the binary trust relationships of social networks, and rarely consider the differences in trust strength among users and their potential influence. To bridge this gap, this paper puts forward a hybrid top-N recommendation algorithm that combines mutual trust and influence. Firstly, a new trust measurement method was developed based on dynamic weights, taking into account the difference in trust strength between users. Secondly, a new mutual influence measurement model was designed based on the trust relationship, in light of the social network topology. Finally, two hybrid recommendation algorithms, denoted FSTA (Factored Similarity model with Trust Approach) and FSTI (Factored Similarity model with Trust and Influence), were presented to address data sparsity and the binary nature of trust data. The two algorithms integrate user similarity, item similarity, mutual trust, and mutual influence. Our approach was compared with several other recommendation algorithms on three standard datasets: FilmTrust, Epinions, and Ciao. The experimental results demonstrate the high efficiency of our approach.
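    A rough sketch (not the FSTA/FSTI formulation, which the abstract does not spell out) of how similarity-based and trust-based evidence can be blended into a single top-N ranking; the matrices `S`, `T` and the mixing weight `alpha` are illustrative assumptions:

```python
import numpy as np

def blended_scores(R, S, T, alpha=0.5):
    """Blend similarity- and trust-based evidence into per-user item scores.
    R: (users x items) implicit-feedback matrix, S: user-user similarity weights,
    T: user-user trust/influence weights (not necessarily symmetric)."""
    return alpha * (S @ R) + (1.0 - alpha) * (T @ R)

def top_n(scores, seen, n=10):
    """Rank unseen items per user; items already consumed are masked out."""
    masked = np.where(seen > 0, -np.inf, scores)
    return np.argsort(-masked, axis=1)[:, :n]
```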

    A Trust Management Framework for Decision Support Systems

    In the era of information explosion, it is critical to develop a framework which can extract useful information and help people make “educated” decisions. In our lives, whether or not we are aware of it, trust has turned out to be very helpful in making decisions. At the same time, cognitive trust, especially in large systems such as Facebook, Twitter, and so on, needs support from computer systems. Therefore, we need a framework that can effectively, but also intuitively, let people express their trust, and enable the system to automatically and securely summarize the massive amounts of trust information, so that a user of the system can make “educated” decisions, or at least not blind decisions. Inspired by the similarities between human trust and physical measurements, this dissertation proposes a measurement theory based trust management framework. It consists of three phases: trust modeling, trust inference, and decision making. Instead of proposing specific trust inference formulas, this dissertation proposes a fundamental framework which is flexible and can be adapted to many different inference formulas. Validation experiments are done on two data sets: the Epinions.com data set and the Twitter data set. This dissertation also adapts the measurement theory based trust management framework to two decision support applications. In the first application, real stock market data are used as ground truth for the measurement theory based trust management framework; essentially, the correlation between the sentiment expressed on Twitter and stock market data is measured. Compared with existing works, which do not differentiate tweets’ authors, this dissertation analyzes trust among stock investors on Twitter and uses the trust network to differentiate tweets’ authors. The results show that by using the measurement theory based trust framework, Twitter sentiment valence is able to reflect abnormal stock returns better than treating all authors as equally important or weighting them by their number of followers. In the second application, the measurement theory based trust management framework is used to help detect and prevent attacks in cloud computing scenarios. In this application, each single flow is treated as a measurement. The simulation results show that the measurement theory based trust management framework is able to provide guidance for cloud administrators and customers to make decisions, e.g. migrating tasks from suspect nodes to trustworthy nodes, dynamically allocating resources according to trust information, and managing the trade-off between the degree of redundancy and the cost of resources.
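    The Twitter application boils down to weighting each tweet's sentiment by how trustworthy its author is, rather than treating authors equally or weighting them by follower count. A minimal sketch of that aggregation step, with made-up valence, trust, and follower numbers used purely for illustration:

```python
import numpy as np

def aggregate_valence(valences, weights=None):
    """Weighted average of per-tweet sentiment valence scores."""
    v = np.asarray(valences, dtype=float)
    if weights is None:                      # baseline: every author counts equally
        return v.mean()
    w = np.asarray(weights, dtype=float)
    return np.sum(w * v) / np.sum(w)

# Hypothetical per-tweet data: valence in [-1, 1], author trust score, author follower count
valence   = [0.8, -0.4, 0.6, -0.9]
trust     = [0.9,  0.1, 0.7,  0.2]    # scores from a trust network among investors
followers = [120, 50000, 300, 80000]

print(aggregate_valence(valence))              # unweighted baseline
print(aggregate_valence(valence, followers))   # follower-weighted baseline
print(aggregate_valence(valence, trust))       # trust-weighted, as the dissertation advocates
```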

    Individual Opinions Versus Collective Opinions in Trust Modelling

    The social web permits users to acquire information from anonymous people around the world. This raises a serious question about the trustworthiness of that information and its sources. During the last decade, numerous models were proposed to adapt social trust to the social web. These models aim to assist users in forming an opinion about the acquired information and its sources based on their trustworthiness. Usually, opinions can be based on two mechanisms for acquiring knowledge: evaluating previous interactions with the source (individual knowledge), and a word-of-mouth mechanism in which the user relies on the knowledge of their friends and friends of friends (collective knowledge). In this paper, we are interested in the impact of using each of these mechanisms on the performance of trust models. Subjective logic (SL) is an extension of probabilistic logic that deals with cases of lack of evidence. It supplies a framework for modelling trust on the web. We use SL in this paper to build and compare two trust models. The first one gives priority to individual opinions, and uses collective opinions only in the absence of individual opinions. The second always considers collective opinions, so it always provides the most complete knowledge, which improves the performance of the model.
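    A compact sketch of the subjective-logic building blocks such a comparison relies on: forming a binomial opinion from positive/negative evidence (individual knowledge) and fusing several opinions into a collective one. The evidence counts and the use of the standard cumulative-fusion operator are illustrative assumptions; the paper's exact operators are not given in the abstract:

```python
from dataclasses import dataclass

W = 2.0  # non-informative prior weight used in subjective logic

@dataclass
class Opinion:
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5

    def expectation(self):
        """Projected probability E = b + a*u, used when making decisions."""
        return self.belief + self.base_rate * self.uncertainty

def from_evidence(positive, negative, base_rate=0.5):
    """Build a binomial opinion from counts of positive/negative past interactions."""
    total = positive + negative + W
    return Opinion(positive / total, negative / total, W / total, base_rate)

def cumulative_fusion(a, b):
    """Consensus of two independent opinions (standard SL cumulative fusion)."""
    k = a.uncertainty + b.uncertainty - a.uncertainty * b.uncertainty
    return Opinion(
        (a.belief * b.uncertainty + b.belief * a.uncertainty) / k,
        (a.disbelief * b.uncertainty + b.disbelief * a.uncertainty) / k,
        (a.uncertainty * b.uncertainty) / k,
        a.base_rate,
    )

# Individual knowledge: my own interactions with a source.
individual = from_evidence(positive=3, negative=1)
# Collective knowledge: fusing two friends' opinions about the same source.
collective = cumulative_fusion(from_evidence(8, 2), from_evidence(1, 4))
print(individual.expectation(), collective.expectation())
```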

    Novel Directions for Multiagent Trust Modeling in Online Social Networks

    This thesis presents two works with the shared goal of improving the capacity of multiagent trust modeling to be applied to social networks. The first demonstrates how analyzing the responses to content on a discussion forum can be used to detect certain types of undesirable behaviour. This technique can be used to extract quantified representations of the impact agents are having on the community, a critical component for trust modeling. The second work expands on the technique of multi-faceted trust modeling, determining whether a clustering step designed to group agents by similarity can improve the performance of trust link predictors. Specifically, we hypothesize that learning a distinct model for each cluster of similar users will result in more personalized, and therefore more accurate, predictions. Online social networks have exploded in popularity over the course of the last decade, becoming a central source of information and entertainment for millions of users. This radical democratization of the flow of information, while bringing many benefits, also raises a raft of new issues. These networks have proven to be a potent medium for the spread of misinformation and rumors, may contribute to the radicalization of communities, and are vulnerable to deliberate manipulation by bad actors. In this thesis, our primary aim is to examine content recommendation on social media through the lens of trust modeling. The central supposition along this path is that the behaviors of content creators and the consumers of their content can be fit into the trust modeling framework, supporting recommendations of content from creators who not only are popular, but have the support of trustworthy users and are trustworthy themselves. This research direction shows promise for tackling many of the issues we have mentioned. Our works show that a machine learning model can predict certain types of anti-social behaviour in a discussion-starting comment solely on the basis of analyzing replies to that comment, with accuracy in the range of 70% to 80%. Further, we show that a clustering-based approach to personalization for multi-faceted trust models can increase accuracy on a downstream trust-aware item recommendation task, evaluated on a large data set of Yelp users.
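    A minimal sketch of the clustering-then-personalize idea described above: group users by similarity, then fit one trust-link predictor per cluster and route each prediction to the truster's cluster model. The choice of k-means, logistic regression, and the feature layout are assumptions made for illustration, not the thesis's actual pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def train_per_cluster_predictors(user_features, pair_features, truster_ids, labels, k=5):
    """Cluster users by profile similarity, then fit one trust-link predictor per cluster
    so that predictions are personalised to each group of similar users."""
    pair_features = np.asarray(pair_features)
    truster_ids, labels = np.asarray(truster_ids), np.asarray(labels)
    clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(user_features)
    models = {}
    for c in range(k):
        mask = clusters[truster_ids] == c              # pairs whose truster falls in cluster c
        if mask.any() and len(np.unique(labels[mask])) > 1:
            models[c] = LogisticRegression(max_iter=1000).fit(pair_features[mask], labels[mask])
    return clusters, models

def predict_trust(clusters, models, truster_id, pair_feature):
    """Use the truster's cluster-specific model; fall back to 0.5 when no model was trained."""
    model = models.get(clusters[truster_id])
    if model is None:
        return 0.5
    return model.predict_proba(np.asarray(pair_feature).reshape(1, -1))[0, 1]
```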

    On the Effects of Forced Trust on Implementations of Small Smart Cities

    As an increasing number of cities pursue the idea of becoming smart cities, the variety of approaches to reach this goal also grows. These cover a spectrum of implementations for, inter alia, information systems, smart networks, and public services. In order to operate, these smart cities have to process multiple types of data, including personal information. Ultimately, the systems and services that process these data are decided by the city, with limited opportunities for citizens to influence the details of their implementation. In these situations, the citizens have no choice but to trust their city with the operation of these systems and the processing of their personal information. This type of relationship, forced trust, affects the smart city implementation both directly and indirectly. These effects include additional considerations by the city to guarantee the protection of the citizens’ privacy and the security of their personal data, as well as the impact of forced trust on the willingness of the citizens to adopt the offered services. In this thesis, privacy protection, data protection and security, system reliability and safety, and user avoidance were identified as the four major domains of concern for citizens with regard to forced trust. These domains cover most of the main impacts smart city projects have on their citizens, such as ubiquitous data collection, scarcity of control over the utilisation of one’s personal data, and uncertainty about the dependability of critical information systems. Additionally, technological and methodological approaches were proposed to address each of the discussed concerns. These include the implementation of privacy by design in the development of the smart city, the use of trusted platforms in data processing, the detection and alleviation of potential fault chains, and providing citizens with the means to monitor their personal data. Finally, these recommendations were considered in the context of a small smart city. The Salo smart city project was used as an example and the recommendations were applied to the planned aspects of the upcoming smart city, such as knowledge-based management, a smart city application for information sharing, and increased transparency and justifiability in governance.