647 research outputs found

    Can We Trust Trust Management Systems?

    The Internet of Things is enriching our lives with an ecosystem of interconnected devices. Object cooperation allows us to develop complex applications in which each node contributes one or more services, so information moves from a provider node to a requester node in a peer-to-peer network. In this scenario, trust management systems (TMSs) have been developed to prevent the manipulation of data by unauthorized entities and to guarantee the detection of malicious behaviour. The community has concentrated its effort on designing complex trust techniques to increase their effectiveness; however, two strong assumptions have been overlooked. First, nodes can provide the wrong service because of malicious behaviour, malfunctions or insufficient accuracy. Second, requester nodes usually cannot evaluate the received service perfectly. For this reason, a trust system should distinguish attackers from objects with poor performance and take service evaluation errors into account. Simulation results show that advanced trust algorithms are unnecessary in scenarios with these deficiencies.
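    As an illustration of the second point, the sketch below (not from the paper; the beta-reputation update, error rate and success rates are assumptions chosen for illustration) shows how noisy service evaluations pull the trust scores of an honest but inaccurate provider and a mildly malicious one towards each other.

```python
import random

def noisy_evaluation(service_ok: bool, eval_error: float) -> bool:
    """The requester's judgement of a service, flipped with probability eval_error."""
    return service_ok if random.random() > eval_error else not service_ok

def trust_score(success_rate: float, eval_error: float, rounds: int = 1000) -> float:
    """Beta-reputation style trust score built from noisy evaluations."""
    alpha, beta = 1.0, 1.0                    # prior pseudo-counts
    for _ in range(rounds):
        ok = random.random() < success_rate   # actual outcome of the provided service
        if noisy_evaluation(ok, eval_error):
            alpha += 1                        # judged satisfactory
        else:
            beta += 1                         # judged unsatisfactory
    return alpha / (alpha + beta)

# A 20% evaluation error compresses both scores towards 0.5, so the honest
# but inaccurate provider (70% success) and the moderately malicious one
# (55% success) become harder to tell apart than their true rates suggest.
print(trust_score(0.70, 0.20), trust_score(0.55, 0.20))
```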

    Evaluation Theory for Characteristics of Cloud Identity Trust Framework

    Trust management is a prominent area of security in cloud computing because insufficient trust management hinders cloud growth. Trust management systems can help cloud users make the best decisions regarding security, privacy, Quality of Protection (QoP) and Quality of Service (QoS). A trust model acts as a security-strength evaluator and ranking service for cloud and cloud identity applications and services. It can be used as a benchmark to set up cloud identity service security and to identify inadequacies and possible enhancements in cloud infrastructure. This chapter addresses the concerns of evaluating cloud trust management systems, data gathering, and the synthesis of theory and data. The conclusion is that the relationship between cloud identity providers and cloud identity users can greatly benefit from the evaluation and critical review of current trust models.

    Shinren : Non-monotonic trust management for distributed systems

    The open and dynamic nature of modern distributed systems and pervasive environments presents significant challenges to security management. One solution may be trust management, which utilises the notion of trust to specify and interpret security policies and to make decisions on security-related actions. Most trust management systems assume monotonicity, where additional information can only result in an increase of trust. The monotonic assumption oversimplifies the real world by not considering negative information, and thus cannot handle many real-world scenarios. In this paper we present Shinren, a novel non-monotonic trust management system based on bilattice theory and the any-world assumption. Shinren takes negative information into account and supports reasoning with incomplete information, uncertainty and inconsistency. Information from multiple sources, such as credentials, recommendations, reputation and local knowledge, can be used and combined in order to establish trust. Shinren also supports prioritisation, which is important in decision making and in resolving the modality conflicts caused by non-monotonicity.
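    A minimal sketch of the non-monotonic flavour of such reasoning, using Belnap's four-valued bilattice as the simplest example (Shinren's actual logic and combination rules are richer); evidence from several sources is combined along the knowledge ordering, so conflicting information is made explicit rather than silently increasing trust.

```python
from enum import Enum

class V(Enum):
    NONE = "no information"
    TRUE = "trusted"
    FALSE = "distrusted"
    BOTH = "conflicting evidence"

def k_join(a: V, b: V) -> V:
    """Knowledge-ordering join: accumulate evidence from independent sources."""
    if a == b or b == V.NONE:
        return a
    if a == V.NONE:
        return b
    return V.BOTH        # positive and negative evidence coexist

# A credential says "trusted", a reputation report says "distrusted": the
# combined verdict is explicitly inconsistent, something a monotonic trust
# system cannot express.
verdict = V.NONE
for source in (V.TRUE, V.FALSE, V.NONE):
    verdict = k_join(verdict, source)
print(verdict.name)      # BOTH
```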

    An operational framework to reason about policy behavior in trust management systems

    In this paper we show that the logical framework proposed by Becker et al. to reason about security policy behavior in a trust management context can be captured by an operational framework based on the language proposed by Miller in 1989 to deal with scoping and/or modules in logic programming. The framework of Becker et al. uses propositional Horn clauses to represent both policies and credentials; implications in clauses are interpreted in counterfactual logic, a Hilbert-style proof system is defined, and a SAT-based system is used to prove whether properties about credentials, permissions and policies are valid in trust management systems, i.e. formulas that are true for all possible policies. Our contribution is to show that, instead of using a SAT system, this kind of validation can rely on the operational semantics (derivability relation) of Miller's language, which is very close to derivability in logic programs, opening up the possibility of extending Becker et al.'s framework to the more practical first-order case, since Miller's language is first order.
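    A minimal sketch of the propositional Horn-clause setting, with credentials treated as extra facts and derivability computed by forward chaining; the policy, atom names and query are invented for illustration and do not reproduce Becker et al.'s counterfactual semantics or Miller's language.

```python
# (head, body) pairs: the head is derivable when every atom in the body is.
Clause = tuple[str, frozenset[str]]

def derives(policy: list[Clause], credentials: set[str], goal: str) -> bool:
    """Forward-chaining closure of the policy over the supplied credentials."""
    known = set(credentials)
    changed = True
    while changed:
        changed = False
        for head, body in policy:
            if head not in known and body <= known:
                known.add(head)
                changed = True
    return goal in known

policy = [
    ("may_access", frozenset({"employee", "signed_nda"})),
    ("employee",   frozenset({"hr_certified"})),
]

# Is the permission derivable under a particular set of credentials?
print(derives(policy, {"hr_certified", "signed_nda"}, "may_access"))   # True
print(derives(policy, {"signed_nda"}, "may_access"))                   # False
```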

    A Formal Framework for Concrete Reputation Systems

    In a reputation-based trust-management system, agents maintain information about the past behaviour of other agents. This information is used to guide future trust-based decisions about interaction. However, while trust management is a component in security decision-making, many existing reputation-based trust-management systems provide no formal security guarantees. In this extended abstract, we describe a mathematical framework for a class of simple reputation-based systems. In these systems, decisions about interaction are taken based on policies that are exact requirements on agents' past histories. We present a basic declarative language, based on pure-past linear temporal logic, intended for writing simple policies. While the basic language is reasonably expressive (encoding, e.g., Chinese Wall policies), we show how one can extend it with quantification and parameterized events. This allows us to encode other policies known from the literature, e.g. 'one-out-of-k'. The problem of checking a history with respect to a policy is efficient for the basic language, and tractable for the quantified language when policies do not have too many variables.
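    A minimal sketch of checking a finite interaction history against a pure-past temporal policy; the operator set, event names and example policy are illustrative only and merely approximate the paper's basic language.

```python
def holds(formula, history, i=None):
    """Evaluate a pure-past formula at position i of a finite history of event sets."""
    if i is None:
        i = len(history) - 1              # policies are checked at the present moment
    op = formula[0]
    if op == "true":
        return True
    if op == "atom":
        return formula[1] in history[i]
    if op == "not":
        return not holds(formula[1], history, i)
    if op == "and":
        return holds(formula[1], history, i) and holds(formula[2], history, i)
    if op == "yesterday":
        return i > 0 and holds(formula[1], history, i - 1)
    if op == "since":                     # formula[2] held at some past point
        for j in range(i, -1, -1):        # and formula[1] has held ever since
            if holds(formula[2], history, j):
                return True
            if not holds(formula[1], history, j):
                return False
        return False
    raise ValueError(f"unknown operator {op!r}")

# Policy "the agent has never defaulted": not (true Since default).
never_defaulted = ("not", ("since", ("true",), ("atom", "default")))
history = [{"pay"}, {"pay"}, {"pay", "late"}]
print(holds(never_defaulted, history))    # True
```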

    Filtering Dishonest Trust Recommendations in Trust Management Systems in Mobile Ad Hoc Networks

    Trust recommendations play a pivotal role in the computation of trust, and hence confidence, in peer-to-peer (P2P) environments; if manipulated, they enable serious attacks by dishonest recommenders, such as bad-mouthing, ballot stuffing and random-opinion attacks. Mitigating dishonest trust recommendations is therefore a challenging research issue in P2P systems, especially in Mobile Ad Hoc Networks. To address these challenges, a technique named "intelligent Selection of Trust Recommendations based on Dissimilarity factor (iSTRD)" has been devised for Mobile Ad Hoc Networks. iSTRD exploits the personal experience of an evaluating node in conjunction with a majority vote of the recommenders. It successfully removes the recommendations of low-trustworthy recommenders as well as dishonest recommendations of highly trustworthy recommenders. The efficacy of the proposed approach is evident from its improved recognition rate, false rejection rate and false acceptance rate. Moreover, experimental results show that iSTRD outperforms contemporary techniques in the presence of the attacks considered.
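    A minimal sketch of the underlying idea: recommendations are dropped when their dissimilarity from both the evaluator's personal experience and the majority (median) opinion exceeds a threshold. The threshold and the dissimilarity measure here are assumptions for illustration, not iSTRD's actual definition.

```python
from statistics import median

def filter_recommendations(personal: float, recs: dict[str, float],
                           threshold: float = 0.25) -> dict[str, float]:
    """Keep recommendations close to the evaluator's experience or the majority view."""
    majority = median(recs.values())
    kept = {}
    for recommender, value in recs.items():
        # Dissimilarity: distance to the nearer of the two reference points.
        dissimilarity = min(abs(value - personal), abs(value - majority))
        if dissimilarity <= threshold:
            kept[recommender] = value
    return kept

# Recommender C's report looks like bad-mouthing and is filtered out.
recs = {"A": 0.82, "B": 0.78, "C": 0.15, "D": 0.80}
print(filter_recommendations(personal=0.75, recs=recs))   # A, B and D survive
```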

    Feedback credibility issues in trust management systems

    The following topics are dealt with: soft computing in intelligent multimedia; grid and pervasive computing security; interactive multimedia & intelligent services in mobile and ubiquitous computing; data management in ubiquitous computing; smart living space; software effectiveness and efficiency.

    Composing Trust Models towards Interoperable Trust Management

    Computational trust is a central paradigm in today's Internet, as our modern society increasingly relies upon online transactions and social networks. This is leading to the introduction of various trust management systems and associated trust models, which are customized according to their target applications. However, the heterogeneity of trust models prevents exploiting the trust knowledge acquired in one context in another context, although this would be beneficial for the digital, ever-connected environment. This paper addresses this issue by introducing an approach to achieve interoperability between heterogeneous trust management systems. Specifically, we define a trust meta-model that allows the rigorous specification of trust models as well as their composition. The resulting composite trust models enable heterogeneous trust management systems to interoperate transparently through mediators.
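    A minimal sketch of the mediator idea: two heterogeneous trust models are mapped by mediators into a shared meta-model representation so that trust computed in one system can be reused and combined in another. The meta-model fields and the two concrete models here are invented for illustration; the paper defines a richer meta-model.

```python
from dataclasses import dataclass

@dataclass
class MetaTrust:
    value: float        # normalised trust in [0, 1]
    confidence: float   # how much evidence backs it, in [0, 1]

class StarRatingMediator:
    """Mediates a 1-5 star reputation model into the shared meta-model."""
    def to_meta(self, stars: float, num_ratings: int) -> MetaTrust:
        return MetaTrust(value=(stars - 1) / 4,
                         confidence=min(1.0, num_ratings / 50))

class BetaModelMediator:
    """Mediates a beta-reputation model (alpha/beta evidence counts)."""
    def to_meta(self, alpha: float, beta: float) -> MetaTrust:
        return MetaTrust(value=alpha / (alpha + beta),
                         confidence=min(1.0, (alpha + beta) / 50))

# A composite decision can now weigh trust originating in both systems.
a = StarRatingMediator().to_meta(stars=4.2, num_ratings=30)
b = BetaModelMediator().to_meta(alpha=18, beta=4)
combined = (a.value * a.confidence + b.value * b.confidence) / (a.confidence + b.confidence)
print(round(combined, 3))
```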