
    Trust beyond reputation: A computational trust model based on stereotypes

    Models of computational trust support users in making decisions. They are commonly used to guide users' judgements on online auction sites or to determine the quality of contributions on Web 2.0 sites. However, most existing systems require historical information about the past behavior of the specific agent being judged. In contrast, in real life, to anticipate and predict a stranger's actions in the absence of such behavioral history, we often use our "instinct": essentially, stereotypes developed from our past interactions with other "similar" persons. In this paper, we propose StereoTrust, a computational trust model inspired by the stereotypes we use in real life. A stereotype contains certain features of agents and an expected outcome of the transaction. When facing a stranger, an agent derives its trust by aggregating the stereotypes matching the stranger's profile. Since stereotypes are formed locally, recommendations stem from the trustor's own personal experiences and perspective. Historical behavioral information, when available, can be used to refine the analysis. In our experiments on the Epinions.com dataset, StereoTrust compares favorably with existing trust models that use different kinds of information and more complete historical information.
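    To make the aggregation step concrete, here is a minimal sketch of stereotype-based trust, assuming a simple feature-set representation of agent profiles and a coverage-based weighting; the class, function, and feature names are illustrative assumptions, not the paper's actual formulation.

```python
# A minimal sketch of stereotype-based trust aggregation.
# Representation and weighting are assumptions, not StereoTrust's exact model.
from dataclasses import dataclass

@dataclass
class Stereotype:
    features: frozenset   # agent features this stereotype groups on
    successes: int        # good outcomes observed for matching agents
    failures: int         # bad outcomes observed for matching agents

def stereotype_trust(stranger_features: set, stereotypes: list) -> float:
    """Aggregate expected outcomes of all stereotypes matching the stranger.

    Each matching stereotype is weighted by how much of the stranger's
    profile it covers (one plausible weighting among several).
    """
    weighted, total_weight = 0.0, 0.0
    for s in stereotypes:
        overlap = len(s.features & stranger_features)
        if overlap == 0:
            continue  # stereotype does not match this stranger at all
        weight = overlap / len(stranger_features)
        # Laplace-smoothed expected outcome for this stereotype group.
        expected = (s.successes + 1) / (s.successes + s.failures + 2)
        weighted += weight * expected
        total_weight += weight
    return weighted / total_weight if total_weight else 0.5  # neutral prior

# Example (hypothetical data): trust in a stranger tagged "new, verified seller".
groups = [Stereotype(frozenset({"seller", "verified"}), successes=8, failures=2),
          Stereotype(frozenset({"seller", "new"}), successes=3, failures=5)]
print(stereotype_trust({"seller", "verified", "new"}, groups))
```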

    Trust and reputation management in decentralized systems

    In large, open, and distributed systems, agents are often used to represent users and act on their behalf. Agents can provide good or bad services and act honestly or dishonestly. Trust and reputation mechanisms are used to distinguish good services from bad ones and honest agents from dishonest ones. My research focuses on trust and reputation management in decentralized systems. Compared with centralized systems, decentralized systems make it more difficult and less efficient for agents to find and collect the information needed to build trust and reputation. In this thesis, I propose a Bayesian network-based trust model. It provides a flexible way to represent differentiated trust and to combine different aspects of trust, so that it can meet agents' different needs. As a complementary element, I propose a super-agent-based approach that facilitates reputation management in decentralized networks. Allowing super-agents to form interest-based communities further enables flexible reputation management among groups of agents, and a reward mechanism creates incentives for super-agents to contribute their resources and to be honest. As a single package, my work promotes effective, efficient, and flexible trust and reputation management in decentralized systems.
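    As an illustration of what "differentiated trust" can look like in code, below is a minimal naive-Bayes-style sketch: a root "satisfied?" node with one leaf per service aspect, so agents with different needs can query only the aspects they care about. The aspect names and counts are assumptions for the example, not the thesis's actual model.

```python
# A minimal sketch of differentiated, Bayesian-network-style trust.
# Structure: root node "satisfied?" with one leaf per service aspect
# (a naive-Bayes factorization; an assumed simplification).
from collections import defaultdict

class NaiveBayesTrust:
    def __init__(self):
        self.outcomes = {True: 0, False: 0}                       # satisfaction counts
        self.aspect = defaultdict(lambda: {True: 0, False: 0})    # (aspect, value) -> counts

    def record(self, satisfied: bool, aspects: dict):
        """Log one interaction and the aspect values observed in it."""
        self.outcomes[satisfied] += 1
        for item in aspects.items():
            self.aspect[item][satisfied] += 1

    def p_satisfied(self, aspects: dict) -> float:
        """P(satisfied | observed aspect values), with Laplace smoothing."""
        total = sum(self.outcomes.values())
        score = {}
        for s in (True, False):
            p = (self.outcomes[s] + 1) / (total + 2)              # smoothed prior
            for item in aspects.items():
                p *= (self.aspect[item][s] + 1) / (self.outcomes[s] + 2)
            score[s] = p
        return score[True] / (score[True] + score[False])

# Differentiated trust: the same history answers different agents' needs.
t = NaiveBayesTrust()
t.record(True, {"speed": "fast", "quality": "high"})
t.record(False, {"speed": "slow", "quality": "high"})
print(t.p_satisfied({"speed": "fast"}))    # an agent that cares about speed
print(t.p_satisfied({"quality": "high"}))  # an agent that cares about quality
```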

    TRIDEnT: Building Decentralized Incentives for Collaborative Security

    Sophisticated mass attacks, especially those exploiting zero-day vulnerabilities, have the potential to cause destructive damage to organizations and critical infrastructure. To detect and contain such attacks in time, collaboration among defenders is critical. By correlating real-time detection information (alerts) from multiple sources (collaborative intrusion detection), defenders can detect attacks and take appropriate defensive measures in time. However, although the technical tools to facilitate collaboration exist, real-world adoption of such collaborative security mechanisms is still underwhelming, largely due to a lack of trust and of participation incentives for companies and organizations. This paper proposes TRIDEnT, a novel collaborative platform that aims to enable and incentivize parties to exchange network alert data, thus increasing their overall detection capabilities. TRIDEnT allows parties that may be in a competitive relationship to selectively advertise, sell, and acquire security alerts in the form of (near) real-time peer-to-peer streams. To validate the basic principles behind TRIDEnT, we present an intuitive game-theoretic model of alert sharing, which is of independent interest, and show that collaboration is bound to take place infinitely often. Furthermore, to demonstrate the feasibility of our approach, we instantiate our design in a decentralized manner using Ethereum smart contracts and provide a fully functional prototype.
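    The "infinitely often" claim is in the spirit of classic repeated-game arguments. Below is a minimal sketch, assuming a standard prisoner's-dilemma payoff structure for alert sharing (the paper's actual model differs in detail): under a grim-trigger strategy, a defender keeps sharing whenever the one-shot gain from free-riding is outweighed by the discounted loss of all future cooperation.

```python
# A minimal sketch of a repeated alert-sharing game with assumed
# prisoner's-dilemma payoffs; not TRIDEnT's actual model.
# Payoffs per round: both share = R, both withhold = P,
# free-ride on a sharer = T_, be free-ridden = S.
T_, R, P, S = 5.0, 3.0, 1.0, 0.0

def grim_trigger_sustains_sharing(delta: float) -> bool:
    """Sharing is an equilibrium under grim trigger when deviating once
    (payoff T_ now, then P forever) is no better than sharing forever:
    T_ + delta * P / (1 - delta) <= R / (1 - delta)."""
    return T_ + delta * P / (1 - delta) <= R / (1 - delta)

# With these payoffs, sharing is sustained once the discount factor
# reaches (T_ - R) / (T_ - P) = 0.5, i.e. defenders value future rounds enough.
for delta in (0.3, 0.5, 0.7):
    print(delta, grim_trigger_sustains_sharing(delta))
```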

    Trustworthy Federated Learning: A Survey

    Federated Learning (FL) has emerged as a significant advancement in the field of Artificial Intelligence (AI), enabling collaborative model training across distributed devices while maintaining data privacy. As the importance of FL increases, addressing trustworthiness issues in its various aspects becomes crucial. In this survey, we provide an extensive overview of the current state of Trustworthy FL, exploring existing solutions and well-defined pillars relevant to Trustworthy FL. Despite the growth in literature on trustworthy centralized Machine Learning (ML) and Deep Learning (DL), further efforts are necessary to identify trustworthiness pillars and evaluation metrics specific to FL models, and to develop solutions for computing trustworthiness levels. We propose a taxonomy that encompasses three main pillars: Interpretability, Fairness, and Security & Privacy. Each pillar represents a dimension of trust, further broken down into different notions. Our survey covers trustworthiness challenges at every level of FL settings. We present a comprehensive architecture of Trustworthy FL, address the fundamental principles underlying the concept, and offer an in-depth analysis of trust assessment mechanisms. In conclusion, we identify key research challenges related to every aspect of Trustworthy FL and suggest future research directions. This comprehensive survey serves as a valuable resource for researchers and practitioners working on the development and implementation of Trustworthy FL systems, contributing to a more secure and reliable AI landscape.

    Trustworthy Edge Machine Learning: A Survey

    The convergence of Edge Computing (EC) and Machine Learning (ML), known as Edge Machine Learning (EML), has become a highly regarded research area; it utilizes distributed network resources to perform joint training and inference in a cooperative manner. However, EML faces various challenges due to resource constraints, heterogeneous network environments, and the diverse service requirements of different applications, all of which affect the trustworthiness of EML in the eyes of its stakeholders. This survey provides a comprehensive summary of definitions, attributes, frameworks, techniques, and solutions for trustworthy EML. Specifically, we first emphasize the importance of trustworthy EML within the context of Sixth-Generation (6G) networks. We then discuss the necessity of trustworthiness from the perspective of challenges encountered during deployment and in real-world application scenarios. Subsequently, we provide a preliminary definition of trustworthy EML and explore its key attributes. Following this, we introduce fundamental frameworks and enabling technologies for trustworthy EML systems and provide an in-depth literature review of the latest solutions for enhancing the trustworthiness of EML. Finally, we discuss the corresponding research challenges and open issues.