
    A Personalized Framework for Trust Assessment

    The number of computational trust models has been increasing quickly in recent years, yet their application to automating trust evaluation remains limited. The main obstacle is the difficulty of selecting a suitable trust model and adapting it to particular trust modelling requirements, which vary greatly due to the subjectivity of human trust. The Personalized Trust Framework (PTF) presented in this paper aims to address this problem by providing a mechanism for human users to capture their trust evaluation process so that it can be replicated by computers. In more detail, a user can specify how he selects a trust model, based on information about the subject whose trustworthiness he needs to evaluate, and how that trust model is configured. This trust evaluation process is then automated by the PTF, making use of the trust models flexibly plugged into it by the user. In so doing, the PTF enables users to reuse and personalize existing trust models to suit their requirements without having to reprogram those models.
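    The plug-in mechanism described above can be sketched as a small rule-based registry: the user attaches predicates over subject attributes to configured trust-model instances, and the framework dispatches evaluation to the first matching model. All class and method names below are illustrative assumptions, not the PTF's actual API.

    ```python
    # Minimal sketch of the PTF idea: user-supplied rules map subject
    # attributes to a configured, pluggable trust model. Names are hypothetical.

    class AverageFeedbackModel:
        """History-based model: average of binary feedback scores."""
        def __init__(self, min_feedback=10):
            self.min_feedback = min_feedback

        def evaluate(self, subject):
            fb = subject.get("feedback", [])
            if len(fb) < self.min_feedback:
                return 0.5  # not enough history: stay neutral
            return sum(fb) / len(fb)

    class CategoryPriorModel:
        """History-free stand-in: trust from a prior over subject categories."""
        def evaluate(self, subject):
            priors = {"retailer": 0.7, "unknown": 0.4}
            return priors.get(subject.get("category", "unknown"), 0.4)

    class PersonalizedTrustFramework:
        def __init__(self):
            self.rules = []  # (predicate, model) pairs, checked in order

        def add_rule(self, predicate, model):
            self.rules.append((predicate, model))

        def evaluate(self, subject):
            for predicate, model in self.rules:
                if predicate(subject):
                    return model.evaluate(subject)
            raise LookupError("no trust model matches this subject")

    ptf = PersonalizedTrustFramework()
    # The user's personal policy: prefer history when it exists, else a prior.
    ptf.add_rule(lambda s: "feedback" in s, AverageFeedbackModel(min_feedback=3))
    ptf.add_rule(lambda s: True, CategoryPriorModel())

    print(ptf.evaluate({"feedback": [1, 1, 0, 1]}))  # history-based: 0.75
    print(ptf.evaluate({"category": "retailer"}))    # falls back to the prior: 0.7
    ```

    Swapping in a different model or reordering the rules changes the evaluation policy without reprogramming either model, which is the reuse the paper targets.
    
    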

    Trust Strategies for the Semantic Web

    Everyone agrees on the importance of enabling trust on the Semantic Web to ensure more efficient agent interaction. Current research on trust seems to focus on developing computational models, semantic representations, inference techniques, etc. However, little attention has been given to the plausible trust strategies or tactics that an agent can follow when interacting with other agents on the Semantic Web. In this paper we identify the five most common trust strategies and discuss their envisaged costs and benefits. The aim is to provide guidelines that help system developers appreciate the risks and gains involved with each trust strategy.

    Trust beyond reputation: A computational trust model based on stereotypes

    Models of computational trust support users in taking decisions. They are commonly used to guide users' judgements on online auction sites, or to determine the quality of contributions on Web 2.0 sites. However, most existing systems require historical information about the past behavior of the specific agent being judged. In contrast, in real life, to anticipate and predict a stranger's actions in the absence of such behavioral history, we often use our "instinct": essentially, stereotypes developed from our past interactions with other "similar" persons. In this paper, we propose StereoTrust, a computational trust model inspired by real-life stereotypes. A stereotype contains certain features of agents and an expected outcome of the transaction. When facing a stranger, an agent derives its trust by aggregating the stereotypes matching the stranger's profile. Since stereotypes are formed locally, recommendations stem from the trustor's own personal experiences and perspective. Historical behavioral information, when available, can be used to refine the analysis. In our experiments on the Epinions.com dataset, StereoTrust compares favorably with existing trust models that use different kinds of information and more complete historical information.
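    The aggregation step described above can be illustrated in a few lines: each locally formed stereotype pairs a feature predicate with the trustor's own outcome counts for agents matching that feature, and trust in a stranger pools the counts of all matching stereotypes. The pooling rule and all names here are simplifying assumptions, not the paper's exact formulation.

    ```python
    # Sketch of stereotype-based trust: pool the trustor's local outcome
    # counts over all stereotypes that match the stranger's profile.

    def stereotrust(stereotypes, profile):
        """stereotypes: list of (predicate, successes, failures) built from
        the trustor's own past interactions with 'similar' agents."""
        matched = [(s, f) for pred, s, f in stereotypes if pred(profile)]
        if not matched:
            return 0.5  # no matching stereotype: neutral default
        total_s = sum(s for s, _ in matched)
        total_f = sum(f for _, f in matched)
        return total_s / (total_s + total_f)

    # Two stereotypes formed from local experience (counts are made up):
    stereotypes = [
        (lambda p: p["country"] == "US", 8, 2),          # 8 good, 2 bad outcomes
        (lambda p: p["seller_type"] == "power", 18, 2),  # 18 good, 2 bad outcomes
    ]
    stranger = {"country": "US", "seller_type": "power"}
    print(stereotrust(stereotypes, stranger))  # pooled: 26/30, about 0.867
    ```

    Because the counts come only from the trustor's own history, no behavioral record of the specific stranger is needed, which is the point of the model.
    
    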

    Towards Trust-Aware Human-Automation Interaction: An Overview of the Potential of Computational Trust Models

    Several computational models have been proposed to quantify trust and its relationship to other system variables. However, these models are still under-utilised in human-machine interaction settings due to the gap between modellers' intent to capture a phenomenon and the requirements for employing the models in a practical context. Our work amalgamates insights from the system modelling, trust, and human-autonomy teaming literature to address this gap. We explore the potential of computational trust models in the development of trust-aware systems by investigating three research questions: (1) At which stages of development can trust models be used by designers? (2) How can trust models contribute to trust-aware systems? (3) Which factors should be incorporated within trust models to enhance the models' effectiveness and usability? We conclude with future research directions.

    Matrix powers algorithms for trust evaluation in PKI architectures

    This paper deals with the evaluation of trust in public-key infrastructures. Different trust models have been proposed to interconnect the various PKI components in order to propagate trust between them. In this paper we provide a new polynomial-time algorithm using linear algebra to assess trust relationships in a network under different trust evaluation schemes. The advantages are twofold: first, the use of matrix computations instead of graph algorithms provides an optimized computational solution; second, our algorithm can be used on generic graphs, even in the presence of cycles. Our algorithm is designed to evaluate trust using all existing (finite) trust paths between entities as a preliminary to any exchanges between PKIs. This can give a precise evaluation of trust and accelerate, for instance, cross-certificate validation.
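    The core idea of trading graph traversal for matrix computation can be sketched as follows: with a matrix M where M[i][j] is the direct trust entity i places in entity j, the sum M + M² + … + Mᵏ accumulates trust contributed by every path of length up to k, cycles included. The plain product-then-sum aggregation used here is one simple scheme among those the paper parameterises over, and the example graph is invented.

    ```python
    # Sketch of trust propagation via matrix powers: the (i, j) entry of M^k
    # aggregates trust over all length-k paths from i to j.

    def matmul(a, b):
        n = len(a)
        return [[sum(a[i][t] * b[t][j] for t in range(n)) for j in range(n)]
                for i in range(n)]

    def path_trust(m, k):
        """Sum of M + M^2 + ... + M^k: trust over all paths of length <= k."""
        n = len(m)
        acc = [row[:] for row in m]    # paths of length 1
        power = [row[:] for row in m]
        for _ in range(k - 1):
            power = matmul(power, m)   # extend paths by one hop
            acc = [[acc[i][j] + power[i][j] for j in range(n)] for i in range(n)]
        return acc

    # Entity 0 trusts 1 (0.9), entity 1 trusts 2 (0.8); no direct 0 -> 2 edge.
    M = [[0.0, 0.9, 0.0],
         [0.0, 0.0, 0.8],
         [0.0, 0.0, 0.0]]
    T = path_trust(M, 3)
    print(T[0][2])  # about 0.72: trust propagated along the path 0 -> 1 -> 2
    ```

    On a graph with cycles the same summation still terminates after k hops, which is why the matrix formulation handles cyclic certification graphs that would trip up a naive path enumeration.
    
    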

    Purging of untrustworthy recommendations from a grid

    In grid computing, trust has massive significance. Much research has proposed models for providing trusted resource-sharing mechanisms. Trust is a belief or perception that various researchers have tried to correlate with some computational model. Trust in an entity can be direct or indirect. Direct trust results either from a first impression of the entity or is acquired during direct interaction. Indirect trust arises either from reputation gained or from recommendations received from recommenders, whether in a particular domain of the grid, in another domain, or outside the grid itself. Unfortunately, malicious indirect trust leads to the misuse of valuable grid resources. This paper proposes a mechanism for identifying and purging untrustworthy recommendations in the grid environment. Through the obtained results, we show how untrustworthy entities are purged.
    Comment: 8 pages, 4 figures, 1 table; published in the International Journal of Next-Generation Networks (IJNGN), Vol. 3, No. 4, December 201
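    A common purging scheme of the kind described above, shown here as a hedged sketch rather than the paper's exact mechanism, compares each recommender's reported values against the evaluator's own direct experience of the same entities and discards recommenders whose average deviation exceeds a threshold.

    ```python
    # Illustrative purging of untrustworthy recommendations: recommenders whose
    # reports deviate too far from direct experience are dropped before
    # aggregation. The threshold and data are assumptions for the sketch.

    def purge_recommendations(direct_trust, recommendations, threshold=0.3):
        """direct_trust: {entity: value in [0, 1]};
        recommendations: {recommender: {entity: reported value}}."""
        kept = {}
        for recommender, reports in recommendations.items():
            overlap = [e for e in reports if e in direct_trust]
            if overlap:
                deviation = sum(abs(reports[e] - direct_trust[e])
                                for e in overlap) / len(overlap)
                if deviation > threshold:
                    continue  # contradicts direct experience: purge
            kept[recommender] = reports
        return kept

    direct = {"nodeA": 0.9, "nodeB": 0.2}
    recs = {
        "honest":    {"nodeA": 0.85, "nodeC": 0.6},
        "malicious": {"nodeA": 0.1,  "nodeB": 0.95},  # inverted reports
    }
    print(sorted(purge_recommendations(direct, recs)))  # ['honest']
    ```

    Recommenders with no overlap against direct experience are retained here by default; a stricter policy could quarantine them instead, which is a design choice such mechanisms must make explicit.
    
    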

    Trust-Based Techniques for Collective Intelligence in Social Search Systems.

    A key issue for the effectiveness of collaborative decision support systems is the trustworthiness of the entities involved in the process. Trust has always been used by humans as a form of collective intelligence to support effective decision making. Computational trust models are now becoming a popular technique across many applications, such as cloud computing, P2P networks, wikis, e-commerce sites, and social networks. The chapter provides an overview of the current landscape of computational models of trust and reputation, and presents an experimental case study in the domain of social search, where we show how trust techniques can be applied to enhance the quality of social search engine predictions.

    A Bayesian model for event-based trust

    The application scenarios envisioned for ‘global ubiquitous computing’ have unique requirements that are often incompatible with traditional security paradigms. One alternative currently being investigated is to support security decision-making by explicit representation of principals’ trusting relationships, i.e., via systems for computational trust. We focus here on systems where trust in a computational entity is interpreted as the expectation of certain future behaviour based on behavioural patterns of the past, and concern ourselves with the foundations of such probabilistic systems. In particular, we aim at establishing formal probabilistic models for computational trust and their fundamental properties. In the paper we define a mathematical measure for quantitatively comparing the effectiveness of probabilistic computational trust systems in various environments. Using it, we compare some of the systems from the computational trust literature; the comparison is derived formally, rather than obtained via experimental simulation as traditionally done. With this foundation in place, we formalise a general notion of information about past behaviour, based on event structures. This yields a flexible trust model where the probability of complex protocol outcomes can be assessed.
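    The simplest instance of the probabilistic view described above is the standard beta/binomial scheme: under a uniform Beta(1, 1) prior, after s successful and f failed interactions the expected probability of good future behaviour is (s + 1) / (s + f + 2), Laplace's rule of succession. The paper's event-structure framework generalises beyond such binary outcomes; this sketch covers only the binary case.

    ```python
    # Minimal beta-based event trust: expected probability of good behaviour
    # given counts of past successes and failures under a uniform prior.

    def beta_trust(successes, failures):
        # Posterior mean of Beta(successes + 1, failures + 1)
        return (successes + 1) / (successes + failures + 2)

    print(beta_trust(0, 0))  # 0.5  -> no evidence, uniform prior
    print(beta_trust(8, 2))  # 0.75 -> mostly positive history
    print(beta_trust(1, 9))  # about 0.17 -> mostly negative history
    ```

    The measure the paper defines allows such schemes to be compared formally across environments, rather than via simulation; the formula above is merely the baseline they all refine.
    
    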