
    A Trust-based Message Evaluation and Propagation Framework in Vehicular Ad-Hoc Networks

    In this paper, we propose a trust-based message propagation and evaluation framework to support the effective evaluation of information sent by peers and the immediate control of false information in a VANET. More specifically, our trust-based message propagation collects peers’ trust opinions about a message sent by a peer (the message sender) during the propagation of the message. We improve on an existing cluster-based data routing mechanism by employing a secure and efficient identity-based aggregation scheme for the aggregation and propagation of the sender’s message and the trust opinions. These trust opinions, weighted by the trustworthiness of the peers (modeled using a combination of role-based and experience-based trust metrics), are used by cluster leaders to compute a majority opinion about the sender’s message in order to proactively detect false information. Malicious messages are dropped and contained to a local minimum without further affecting other peers. Our trust-based message evaluation allows each peer to evaluate the trustworthiness of a message by also taking into account other peers’ trust opinions about the message and the peer-to-peer trust of these peers. The result of the evaluation yields an effective action decision for the peer. We evaluate our framework in simulations of real-life traffic scenarios, employing real maps with vehicle entities following traffic rules and road limits. Some entities involved in the simulations may be malicious and may send false information to mislead others or spread spam messages to jam the network. Experimental results demonstrate that our framework significantly improves network scalability by reducing the wireless bandwidth consumed by a large number of malicious messages. Our system is also shown to be effective in mitigating malicious messages and protecting peers from being affected. Thus, our framework is particularly valuable in the deployment of VANETs, achieving a high level of scalability and effectiveness.
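    A minimal sketch of the kind of trust-weighted majority decision the abstract describes a cluster leader computing: each peer's opinion about the sender's message is weighted by a trust score combining role-based and experience-based components. The record layout, the convex mix of the two trust metrics, and the decision threshold are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch: trust-weighted majority opinion at a cluster leader.
# The weighting of role-based vs. experience-based trust is an assumption,
# not the exact metric defined in the paper.

from dataclasses import dataclass

@dataclass
class PeerOpinion:
    peer_id: str
    agrees: bool          # peer's opinion about the sender's message
    role_trust: float     # role-based trust in [0, 1] (e.g., authority vehicle vs. private car)
    exp_trust: float      # experience-based trust in [0, 1]

def peer_weight(op: PeerOpinion, alpha: float = 0.5) -> float:
    """Combine role- and experience-based trust into one weight (assumed convex mix)."""
    return alpha * op.role_trust + (1 - alpha) * op.exp_trust

def majority_opinion(opinions: list[PeerOpinion], threshold: float = 0.0) -> bool:
    """Return True if the trust-weighted vote supports forwarding the message."""
    score = sum((1 if op.agrees else -1) * peer_weight(op) for op in opinions)
    return score > threshold  # otherwise the message is treated as false and dropped

opinions = [
    PeerOpinion("v1", True, 0.9, 0.8),
    PeerOpinion("v2", False, 0.2, 0.3),
    PeerOpinion("v3", True, 0.6, 0.7),
]
print(majority_opinion(opinions))  # True: trusted peers outweigh the dissenting one
```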

    Security in online learning assessment towards an effective trustworthiness approach to support e-learning teams

    This paper proposes a trustworthiness model for the design of secure learning assessment in online collaborative learning groups. Although computer-supported collaborative learning has been widely adopted in many educational institutions over the last decade, there are still drawbacks that limit its potential in collaborative learning activities. Among these limitations, we investigate information security requirements in online assessment (e-assessment), which can be developed in collaborative learning contexts. Although information security enhancements have been developed in recent years, to the best of our knowledge, integrated and holistic security models have not yet been fully realised. Even when advanced security methodologies and technologies are deployed in Learning Management Systems, many types of vulnerabilities remain open and unsolved. Therefore, new models such as trustworthiness approaches can overcome these shortcomings and support e-assessment requirements for e-Learning. To this end, a trustworthiness model is designed to guide a holistic security model for online collaborative learning through effective trustworthiness approaches. In addition, since users' trustworthiness analysis involves large amounts of ill-structured data, a parallel processing paradigm is proposed to build relevant information modeling trustworthiness levels for e-Learning.
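    The abstract only states that a parallel processing paradigm is used to derive trustworthiness levels from large volumes of ill-structured data; the following is a hedged sketch of one plausible realisation, mapping a scoring function over per-student event logs with a process pool. The record format and the toy scoring rule are assumptions for illustration, not the paper's design.

```python
# Hedged sketch: deriving per-student trustworthiness levels in parallel.
# The event records and scoring rule are hypothetical; the paper only
# states that a parallel processing paradigm is used.

from concurrent.futures import ProcessPoolExecutor

def trust_level(events: list[dict]) -> float:
    """Toy score: share of assessment events that passed an integrity check."""
    if not events:
        return 0.0
    ok = sum(1 for e in events if e.get("integrity_check") == "pass")
    return ok / len(events)

def compute_levels(event_logs: dict[str, list[dict]]) -> dict[str, float]:
    students = list(event_logs)
    with ProcessPoolExecutor() as pool:
        scores = pool.map(trust_level, (event_logs[s] for s in students))
    return dict(zip(students, scores))

if __name__ == "__main__":
    logs = {
        "alice": [{"integrity_check": "pass"}, {"integrity_check": "pass"}],
        "bob":   [{"integrity_check": "pass"}, {"integrity_check": "fail"}],
    }
    print(compute_levels(logs))  # {'alice': 1.0, 'bob': 0.5}
```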

    The SECURE collaboration model

    The SECURE project has shown how trust can be made computationally tractable while retaining a reasonable connection with human and social notions of trust. SECURE has produced a well-founded theory of trust that has been tested and refined through use in real software such as collaborative spam filtering and an electronic purse. The software comprises the SECURE kernel with extensions for policy specification by application developers. It has yet to be applied to large-scale, multi-domain distributed systems taking different application contexts into account. The project has not considered privacy in evidence distribution, a crucial issue for many application domains, including public services such as healthcare and policing. The SECURE collaboration model has similarities with the trust domain concept, embodying the interaction set of a principal, but SECURE is primarily concerned with pseudonymous entities rather than domain-structured systems.

    Local and Global Trust Based on the Concept of Promises

    We use the notion of a promise to define local trust between agents possessing autonomous decision-making. An agent is trustworthy if it is expected to keep a promise. This definition satisfies most commonplace meanings of trust. Reputation is then an estimate of this expectation value that is passed on from agent to agent. Our definition distinguishes types of trust for different behaviours, and decouples the concept of agent reliability from the behaviour on which the judgement is based. We show, however, that trust is fundamentally heuristic, as it provides insufficient information for agents to make a rational judgement. A global trustworthiness, or community trust, can be defined by a proportional, self-consistent voting process, as a weighted eigenvector-centrality function of the promise-theoretic graph.
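    The abstract defines community trust as a weighted eigenvector-centrality function of the promise graph. Below is a minimal power-iteration sketch of that idea; the example graph, normalisation, and convergence settings are chosen for illustration rather than taken from the paper.

```python
# Sketch: global (community) trust as eigenvector centrality of a weighted
# promise graph, computed by power iteration. Graph and tolerances are
# illustrative assumptions.

def community_trust(weights: dict[str, dict[str, float]],
                    iterations: int = 100, tol: float = 1e-9) -> dict[str, float]:
    agents = list(weights)
    rank = {a: 1.0 / len(agents) for a in agents}
    for _ in range(iterations):
        # each agent's new score is the trust flowing in from agents that trust it
        new = {a: sum(weights[b].get(a, 0.0) * rank[b] for b in agents) for a in agents}
        norm = sum(new.values()) or 1.0
        new = {a: v / norm for a, v in new.items()}
        if max(abs(new[a] - rank[a]) for a in agents) < tol:
            return new
        rank = new
    return rank

# weights[i][j]: how strongly agent i trusts j to keep its promises
promises = {
    "a": {"b": 0.9, "c": 0.1},
    "b": {"a": 0.5, "c": 0.5},
    "c": {"a": 0.8},
}
print(community_trust(promises))
```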

    Trustworthiness assessment of cow behaviour data collected in a wireless sensor network

    Wireless sensor networks can be used for automated cow monitoring, e.g. for behaviour and locomotion monitoring. Sensor data should only be used when they can be trusted. The trustworthiness of sensor data can be assessed in a framework covering the path from acquisition at the node to delivery to business applications, including any intermediary routing and processing. The trustworthiness assessment method has been evaluated with sensor data collected during one of the experiments within the WASP project. Sensor data are not trusted when their trustworthiness falls below a threshold; an alert is then generated, and the cause can be found by tracing back the trust of the composing elements. The trustworthiness assessment method results in the detection of problems with nodes (e.g. a detached node or an exhausted battery). Most of these problems can be classified as true detections, and most of them had not been noticed on the farm. Therefore, trustworthiness assessment is worthwhile for improving automated cow status monitoring.
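    A hedged sketch of the kind of threshold check the abstract describes: each reading carries trust values for the elements along its path (node condition, radio link, processing), an overall trustworthiness is computed, and when it drops below a threshold an alert traces back to the weakest element. The element names and the use of the minimum as the aggregation rule are assumptions, not the framework's actual definition.

```python
# Sketch: trustworthiness assessment of a sensor reading from node to
# application, with trace-back to the least trusted element. The choice
# of min() as the aggregation rule is an assumption for illustration.

def assess(element_trust: dict[str, float], threshold: float = 0.6):
    """Return (trusted?, overall score, weakest element)."""
    weakest = min(element_trust, key=element_trust.get)
    overall = element_trust[weakest]
    trusted = overall >= threshold
    if not trusted:
        print(f"ALERT: reading untrusted ({overall:.2f}); weakest element: {weakest}")
    return trusted, overall, weakest

reading_trust = {
    "node_battery": 0.95,   # e.g., battery voltage plausibility
    "attachment": 0.30,     # e.g., movement pattern suggests a detached node
    "radio_link": 0.90,
    "processing": 0.99,
}
assess(reading_trust)  # fires an alert and points at "attachment"
```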

    Context-dependent Trust Decisions with Subjective Logic

    A decision procedure implemented over a computational trust mechanism aims to allow decisions to be made regarding whether some entity or information should be trusted. As recognised in the literature, trust is contextual, and we describe how such a context often translates into a confidence level which should be used to modify an underlying trust value. Jøsang's Subjective Logic has long been used in the trust domain, and we show that its operators are insufficient to address this problem. We therefore provide a decision-making approach about trust which also considers the notion of confidence (based on context) through the introduction of a new operator. In particular, we introduce general requirements that must be respected when combining trustworthiness and confidence degree, and demonstrate the soundness of our new operator with respect to these properties. Comment: 19 pages, 4 figures, technical report of the University of Aberdeen (preprint version).
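    To make the setting concrete, the sketch below shows a standard binomial subjective-logic opinion (belief, disbelief, uncertainty, base rate) and one plausible way a context-derived confidence level could modulate it by shifting mass towards uncertainty. This discounting rule is only a placeholder for illustration; it is not the operator introduced in the paper.

```python
# Binomial subjective-logic opinion (b + d + u = 1) and a placeholder
# confidence adjustment. The adjustment simply moves belief/disbelief
# mass into uncertainty as confidence drops; it is NOT the paper's operator.

from dataclasses import dataclass

@dataclass
class Opinion:
    b: float  # belief
    d: float  # disbelief
    u: float  # uncertainty
    a: float  # base rate

    def expectation(self) -> float:
        """Standard probability expectation of a binomial opinion."""
        return self.b + self.a * self.u

def apply_confidence(op: Opinion, c: float) -> Opinion:
    """Scale belief and disbelief by confidence c in [0, 1]; remainder becomes uncertainty."""
    b, d = c * op.b, c * op.d
    return Opinion(b, d, 1.0 - b - d, op.a)

trust = Opinion(b=0.7, d=0.1, u=0.2, a=0.5)
low_confidence_context = apply_confidence(trust, 0.4)
print(trust.expectation(), low_confidence_context.expectation())  # 0.8 vs. 0.62
```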

    A Trust-based Recruitment Framework for Multi-hop Social Participatory Sensing

    The idea of social participatory sensing provides a substrate for benefiting from friendship relations when recruiting a critical mass of participants willing to take part in a sensing campaign. However, the selection of suitable participants who are trustworthy and provide high-quality contributions is challenging. In this paper, we propose a recruitment framework for social participatory sensing. Our framework leverages multi-hop friendship relations to identify and select suitable and trustworthy participants among friends or friends of friends, and finds the most trustable paths to them. The framework also includes a suggestion component which provides a cluster of suggested friends along with the paths to them, which can be further used for recruitment or friendship establishment. Simulation results demonstrate the efficacy of our proposed recruitment framework in terms of selecting a large number of well-suited participants and providing contributions with high overall trust, in comparison with a one-hop recruitment architecture. Comment: accepted in DCOSS 201
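    A hedged sketch of the path-finding step the abstract mentions: among multi-hop friendship links, find the most trustable path from the requester to a candidate participant, here modeled as maximising the product of edge trust values via a Dijkstra-style search on negative log-trust. The graph, hop limit, and multiplicative aggregation are illustrative assumptions, not the paper's exact method.

```python
# Sketch: most trustable path over friend / friend-of-friend links,
# maximizing the product of edge trust values (Dijkstra on -log(trust)).
# Graph, hop limit, and multiplicative aggregation are assumptions.

import heapq
import math

def best_trust_path(graph: dict[str, dict[str, float]], src: str, dst: str,
                    max_hops: int = 2):
    """Return (path trust, path) or (0.0, []) if dst is unreachable within max_hops."""
    heap = [(0.0, 0, src, [src])]  # (cost = -log(trust so far), hops, node, path)
    best = {}
    while heap:
        cost, hops, node, path = heapq.heappop(heap)
        if node == dst:
            return math.exp(-cost), path
        if hops == max_hops or best.get((node, hops), math.inf) < cost:
            continue
        best[(node, hops)] = cost
        for friend, trust in graph.get(node, {}).items():
            if trust > 0 and friend not in path:
                heapq.heappush(heap, (cost - math.log(trust), hops + 1, friend, path + [friend]))
    return 0.0, []

friends = {
    "requester": {"alice": 0.9, "bob": 0.6},
    "alice": {"carol": 0.8},
    "bob": {"carol": 0.95},
}
print(best_trust_path(friends, "requester", "carol"))  # ~0.72 via ['requester', 'alice', 'carol']
```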