388 research outputs found

    Reputation in multi agent systems and the incentives to provide feedback

    The emergence of the Internet has led to a vast increase in the number of interactions between parties that are complete strangers to each other. In general, such transactions are likely to be subject to fraud and cheating. If such systems use computerized rational agents to negotiate and execute transactions, mechanisms that lead to favorable outcomes for all parties, instead of giving rise to defective behavior, are necessary to make the system work: trust and reputation mechanisms. This paper examines different incentive mechanisms that help trust and reputation mechanisms elicit honest reports of users' own experiences. Keywords: Trust, Reputation
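    One well-known family of incentive mechanisms for honest feedback is "output agreement": a reporter is paid only when their report matches that of a randomly chosen peer who rated the same transaction. The sketch below is an illustrative toy, not the specific mechanism analyzed in the paper; the function name and reward value are assumptions.

```python
import random

def feedback_payment(report, peer_reports, reward=1.0):
    """Output-agreement side payment (illustrative sketch).

    The reporter is rewarded only if their feedback matches the report of a
    randomly chosen reference peer. When peers mostly report honestly, honest
    reporting maximizes the reporter's expected payment.
    """
    reference = random.choice(peer_reports)  # pick one peer report at random
    return reward if report == reference else 0.0

# If all peers reported "positive", only a matching report earns the reward.
print(feedback_payment("positive", ["positive", "positive"]))  # → 1.0
print(feedback_payment("negative", ["positive", "positive"]))  # → 0.0
```

    With multiple, mostly honest peers, misreporting lowers the chance of agreement and hence the expected payoff, which is the intuition behind this class of mechanisms.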

    Filtering Dishonest Trust Recommendations in Trust Management Systems in Mobile Ad Hoc Networks

    Trust recommendations play a pivotal role in the computation of trust, and hence confidence, in peer-to-peer (P2P) environments; if tampered with, they can enable severe attacks by dishonest recommenders, such as bad-mouthing, ballot stuffing, and random opinions. Mitigating dishonest trust recommendations is therefore a challenging research issue in P2P systems, especially in Mobile Ad Hoc Networks. To address these challenges, a technique named "intelligent Selection of Trust Recommendations based on Dissimilarity factor (iSTRD)" has been devised for Mobile Ad Hoc Networks. iSTRD exploits the personal experience of an "evaluating node" in conjunction with a majority vote of the recommenders. It successfully removes the recommendations of low-trustworthy recommenders as well as the dishonest recommendations of highly trustworthy recommenders. The efficacy of the proposed approach is evident from its improved recognition rate, false rejection, and false acceptance. Moreover, experimental results show that iSTRD outperforms contemporary techniques in the presence of the attacks considered.
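    The core idea described above, filtering out recommendations that deviate too far from both the evaluator's own experience and the recommender majority, can be sketched as follows. This is a loose illustration of the general dissimilarity-based approach, not the paper's exact iSTRD formulation; the threshold, the dissimilarity measure, and the use of the median as a majority-vote proxy are assumptions.

```python
from statistics import median

def filter_recommendations(own_trust, recommendations, threshold=0.3):
    """Keep recommendations whose dissimilarity from both the evaluator's own
    experience and the recommender majority is within `threshold`.

    own_trust       -- evaluator's direct-experience trust in the target (0..1)
    recommendations -- mapping of recommender id -> recommended trust (0..1)
    """
    majority = median(recommendations.values())  # majority-vote proxy
    accepted = {}
    for rec_id, value in recommendations.items():
        # Dissimilarity: worst-case deviation from own experience or majority.
        dissimilarity = max(abs(value - own_trust), abs(value - majority))
        if dissimilarity <= threshold:
            accepted[rec_id] = value
    return accepted

# Recommender "c" bad-mouths a node the evaluator knows to be trustworthy.
recs = {"a": 0.8, "b": 0.75, "c": 0.1, "d": 0.85}
print(filter_recommendations(0.8, recs))  # → {'a': 0.8, 'b': 0.75, 'd': 0.85}
```

    Combining the evaluator's own experience with the majority view means a single dishonest recommender cannot drag the filter's reference point far, which is what makes bad-mouthing and ballot-stuffing detectable here.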

    Developing a Reference Framework for Cybercraft Trust Evaluation

    It should be no surprise that Department of Defense (DoD) and U.S. Air Force (USAF) networks are the target of constant attack. As a result, network defense remains a high priority for cyber warriors. On the technical side, trust issues for a comprehensive end-to-end network defense solution are abundant and involve multiple layers of complexity. The Air Force Research Labs (AFRL) is currently investigating the feasibility of a holistic approach to network defense, called Cybercraft. We envision Cybercraft as trusted computer entities that cooperate with other Cybercraft to provide autonomous and responsive network defense services. A top research goal for Cybercraft centers on how we may examine and ultimately prove features related to this root of trust. In this work, we investigate use-case scenarios for Cybercraft operation with a view towards analyzing and expressing trust requirements inherent in the environment. Based on a limited subset of functional requirements for Cybercraft in terms of their role, we consider how current trust models may be used to answer various questions of trust between components. We characterize generic model components that assist in answering questions regarding Cybercraft trust, and pose relevant comparison criteria as evaluation points for various existing trust models. The contribution of this research is a framework for comparing trust models that are applicable to similar network-based architectures. Ultimately, we provide a reference evaluation framework for how current and future trust models may be developed or integrated into the Cybercraft architecture.

    Architecture and Implementation of a Trust Model for Pervasive Applications

    Collaborative effort to share resources is a significant feature of pervasive computing environments. To achieve secure service discovery and sharing, and to distinguish between malevolent and benevolent entities, trust models must be defined. It is critical to estimate a device's initial trust value because of the transient nature of pervasive smart spaces; however, most prior research on trust models for pervasive applications used the notion of constant initial trust assignment. In this paper, we design and implement a trust model called DIRT. We categorize services into different security levels and, depending on the service requester's context information, we calculate the initial trust value. A trust value is assigned for each device and for each service. Our overall trust estimation for a service depends on the recommendations of the neighbouring devices, inference from other service-trust values for that device, and direct trust experience. We provide an extensive survey of related work, and we demonstrate the distinguishing features of our proposed model with respect to existing models. We implement a healthcare-monitoring application and a location-based service prototype over DIRT. We also provide a performance analysis of the model with respect to some of its important characteristics, tested in various scenarios.
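    A common way to combine the three trust sources the abstract mentions (direct experience, neighbour recommendations, and inference from other service-trust values) is a weighted average. The sketch below illustrates that general pattern only; the weights and function name are assumptions, not DIRT's actual aggregation rule.

```python
def overall_trust(direct, recommendations, inferred,
                  w_direct=0.5, w_rec=0.3, w_inf=0.2):
    """Weighted combination of three trust sources (illustrative sketch).

    direct          -- trust from the evaluator's own experience (0..1)
    recommendations -- list of neighbour-recommended trust values (0..1)
    inferred        -- trust inferred from other service-trust values (0..1)
    Weights are illustrative; they should sum to 1.
    """
    rec_avg = sum(recommendations) / len(recommendations) if recommendations else 0.0
    return w_direct * direct + w_rec * rec_avg + w_inf * inferred

# Strong direct experience, moderately positive neighbours, weaker inference.
print(round(overall_trust(0.8, [0.6, 0.7], 0.5), 3))  # → 0.695
```

    Weighting direct experience most heavily reflects the usual design choice that first-hand evidence is harder to forge than second-hand recommendations.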

    Evaluating online trust using machine learning methods

    Trust plays an important role in e-commerce, P2P networks, and information filtering. Current challenges in trust evaluation include: (1) finding trustworthy recommenders; (2) aggregating heterogeneous trust recommendations of different trust standards based on correlated observations and different evaluation processes; and (3) efficiently managing large trust systems where users may be sparsely connected and have multiple local reputations. The purpose of this dissertation is to provide solutions to these three challenges by applying ordered depth-first search, neural network, and hidden Markov model techniques. It designs an opinion-filtered recommendation trust model to derive personal trust from heterogeneous recommendations; develops a reputation model to evaluate recommenders' trustworthiness and expertise; and constructs a distributed trust system and a global reputation model to achieve efficient trust computing and management. The experimental results show that the proposed three trust models are reliable. The contributions lie in: (1) novel application of neural networks in recommendation trust evaluation and distributed trust management; (2) adaptivity of the proposed neural-network-based trust models to accommodate the dynamic and multifaceted properties of trust; (3) robustness of the neural-network-based trust models to noise in training data, such as deceptive recommendations; (4) efficiency and parallelism of computation and load balance in distributed trust evaluations; and (5) novel application of hidden Markov models in evaluating recommenders' reputations.
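    To make the neural-network idea concrete, the toy below trains a single logistic neuron to map recommendation features to a trust score. It is a minimal stand-in for the dissertation's (unspecified) network architectures; the feature choice, learning rate, and training data are all assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_trust_neuron(samples, epochs=2000, lr=0.5):
    """Train one logistic neuron by gradient descent (illustrative sketch).

    samples -- list of (features, label) pairs, where features is a tuple of
               recommendation-derived values in [0, 1] and label is 1 for a
               trustworthy target, 0 otherwise.
    Returns a predictor mapping features to a trust score in (0, 1).
    """
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log-loss w.r.t. the pre-activation
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return lambda x: sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Features: (average recommendation, consistency of recommendations).
predict = train_trust_neuron([
    ((0.9, 0.8), 1), ((0.8, 0.9), 1),   # trustworthy targets
    ((0.2, 0.1), 0), ((0.1, 0.3), 0),   # untrustworthy targets
])
print(predict((0.85, 0.9)) > 0.5)  # high score for a trustworthy profile
```

    The appeal of learned models here, as the contributions note, is robustness: a deceptive recommendation shifts one input feature, but the learned weights average over many training examples.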