6 research outputs found

    Semantic Security for E-Health: A Case Study in Enhanced Access Control

    Data collection, access and usage are essential for many forms of collaborative research. E-Health represents one area with much to gain from sharing data across organisational boundaries. In such contexts, security and access control are essential to protect the often complex privacy and information governance concerns of the associated stakeholders. In this paper we argue that semantic technologies have unique benefits for the specification and enforcement of security policies that cross organisational boundaries. We illustrate this through a case study based around the International Niemann-Pick Disease (NPD) Registry (www.inpdr.org), which typifies many current e-Health security processes and policies. We show how approaches based upon ontology-based policy specification overcome many of the current security challenges facing the development of such systems and enhance access control by leveraging existing security information associated with clinical collaborators.
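    As a rough illustration of the kind of ontology-based policy specification the paper argues for, the sketch below infers access rights from a declared role hierarchy; all role, resource and policy names are hypothetical and not taken from the paper or the NPD Registry.

```python
# Minimal sketch of ontology-style access control: a role hierarchy is
# declared as data, and permissions granted to a general role are inferred
# for all of its sub-roles. Names (roles, resources) are illustrative only.

# sub-role -> parent role ("is-a" relations of a toy role ontology)
ROLE_HIERARCHY = {
    "npd_registry_clinician": "clinician",
    "clinician": "healthcare_professional",
}

# policy: role -> set of resources that role may read
POLICY = {
    "healthcare_professional": {"anonymised_registry_summary"},
    "npd_registry_clinician": {"patient_record"},
}

def ancestors(role: str) -> list[str]:
    """Return the role itself plus all roles it specialises."""
    chain = [role]
    while role in ROLE_HIERARCHY:
        role = ROLE_HIERARCHY[role]
        chain.append(role)
    return chain

def may_read(role: str, resource: str) -> bool:
    """A role may read a resource if it, or any ancestor role, is granted it."""
    return any(resource in POLICY.get(r, set()) for r in ancestors(role))

if __name__ == "__main__":
    # A registry clinician inherits the rights of the general clinician roles.
    print(may_read("npd_registry_clinician", "anonymised_registry_summary"))  # True
    print(may_read("clinician", "patient_record"))                            # False
```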

    A Survey on Trust Computation in the Internet of Things

    The Internet of Things comprises a large number of diverse entities and services which interconnect with each other, operate individually or cooperatively depending on context, conditions and environment, and produce a huge amount of personal and sensitive data. In this scenario, satisfying privacy, security and trust requirements plays a critical role in the success of the Internet of Things. Trust here can be considered a key property for establishing trustworthy and seamless connectivity among entities and for guaranteeing secure services and applications. The aim of this study is to provide a survey of various trust computation strategies and to identify future trends in the field. We discuss trust computation methods under several aspects and compare the approaches based on trust features, performance, advantages, weaknesses and limitations of each strategy. Finally, we discuss gaps in the trust literature and raise some research directions for trust computation in the Internet of Things.
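    Many of the strategies such a survey covers share a common skeleton: blend a node's own interaction history (direct trust) with weighted recommendations from other nodes (indirect trust). The sketch below shows only that generic pattern; the function names, weights and sample values are illustrative assumptions, not any specific scheme from the survey.

```python
# Generic weighted trust computation: combine a node's own observations of a
# target ("direct trust") with recommendations from neighbours ("indirect
# trust"). Weights and sample values are illustrative, not from the survey.

def direct_trust(successes: int, failures: int) -> float:
    """Fraction of successful past interactions (0.5 if no history)."""
    total = successes + failures
    return successes / total if total else 0.5

def indirect_trust(recommendations: dict[str, float],
                   recommender_trust: dict[str, float]) -> float:
    """Recommendations weighted by how much we trust each recommender."""
    weights = {r: recommender_trust.get(r, 0.0) for r in recommendations}
    norm = sum(weights.values())
    if norm == 0:
        return 0.5
    return sum(recommendations[r] * w for r, w in weights.items()) / norm

def trust_score(successes: int, failures: int,
                recommendations: dict[str, float],
                recommender_trust: dict[str, float],
                alpha: float = 0.7) -> float:
    """Blend direct and indirect evidence; alpha favours own experience."""
    return (alpha * direct_trust(successes, failures)
            + (1 - alpha) * indirect_trust(recommendations, recommender_trust))

if __name__ == "__main__":
    score = trust_score(successes=8, failures=2,
                        recommendations={"node_b": 0.9, "node_c": 0.4},
                        recommender_trust={"node_b": 0.8, "node_c": 0.2})
    print(round(score, 3))
```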

    Modeling and evaluation of trusts in Multi-agent systems

    Master's thesis (Master of Engineering)

    Preference Uncertainty and Trust in Decision Making

    A fuzzy approach for handling uncertain preferences is developed within the paradigm of the Graph Model for Conflict Resolution, and new advances in trust modeling and assessment are put forward for permitting decision makers (DMs) to decide with whom to cooperate and whom to trust in order to move from a potential resolution to a more preferred one that is not attainable on an individual basis. The applicability and usefulness of the fuzzy preference and trust research for giving an enhanced strategic understanding of a dispute and its possible resolution are demonstrated by employing a real-world environmental conflict as well as two generic games that represent a wide range of real-life encounters dealing with trust and cooperation dilemmas. The introduction of the uncertain preference representation extends the applicability of the Graph Model for Conflict Resolution to conflicts with missing or incomplete preference information. Assessing the presence of trust helps compensate for the missing information and bridge the gap between a desired outcome and a feared betrayal. These advances in the areas of uncertain preferences and trust have potential applications in engineering decision making, electronic commerce, multi-agent systems, international trade and many other areas where conflict is present.

    In order to model a conflict, it is assumed that the decision makers, their options, and their preferences over possible states are known. However, it is often the case that the preferences are not known for certain. This could be due to lack of information, imprecision, or misinformation intentionally supplied by a competitor. Fuzzy logic is applied to handle this type of information. In particular, it allows a decision maker to express preferences using linguistic terms rather than exact values. It also makes use of data intervals rather than crisp values, which can accommodate minor shifts in values without drastically changing the overall results. The four solution concepts of Nash stability, general metarationality, symmetric metarationality, and sequential stability, used for determining stability and potential resolutions to a conflict, are extended to accommodate the new fuzzy preference representation. The newly proposed solution concepts are designed to work for conflicts with two or more decision makers. Hypothetical and real-life conflicts are used to demonstrate the applicability of the proposed procedure.

    Upon reaching a conflict resolution, it might be in the best interests of some of the decision makers to cooperate and form a coalition to move from the current resolution to a better one that is not achievable on an individual basis. This may require moving through an intermediate state or states which may be less preferred by some of the coalition members while being more preferred by others, compared with the original or the final state. When the move is irreversible, which is the case in most real-life situations, a minimum level of trust is required to remove any fears of betrayal. The development of trust modeling and assessment techniques allows decision makers to decide with whom to cooperate and whom to trust. Illustrative examples show how this modeling works in practice. The new theoretical developments presented in this research enhance the applicability of the Graph Model for Conflict Resolution. The proposed trust modeling provides a reasonable way of analyzing and predicting the formation of coalitions in conflict analysis and cooperative game theory. It also opens doors for further research and development in trust modeling in areas such as electronic commerce and multi-agent systems.
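    As a rough, simplified illustration of the fuzzy preference idea described above, the sketch below maps linguistic preference terms to degrees in [0, 1] and treats a move as worthwhile only when the degree exceeds a satisficing threshold; the terms, threshold and states are invented for illustration and do not reproduce the thesis's formal definitions or solution concepts.

```python
# Simplified sketch of fuzzy preference in a conflict model: pairwise
# preferences are expressed with linguistic terms, mapped to degrees in
# [0, 1], and a move counts as a "fuzzy improvement" only when the degree
# of preferring the new state exceeds a satisficing threshold. Terms,
# threshold and states are illustrative, not the thesis's exact formulation.

LINGUISTIC_DEGREE = {
    "much less preferred": 0.1,
    "less preferred": 0.3,
    "about equal": 0.5,
    "more preferred": 0.7,
    "much more preferred": 0.9,
}

# fuzzy_preference[(from_state, to_state)] = a decision maker's linguistic judgement
fuzzy_preference = {
    ("status_quo", "joint_cleanup"): "much more preferred",
    ("status_quo", "unilateral_cleanup"): "less preferred",
}

def is_fuzzy_improvement(frm: str, to: str, threshold: float = 0.6) -> bool:
    """True if the DM prefers 'to' over 'frm' strongly enough to move."""
    term = fuzzy_preference.get((frm, to), "about equal")
    return LINGUISTIC_DEGREE[term] >= threshold

if __name__ == "__main__":
    print(is_fuzzy_improvement("status_quo", "joint_cleanup"))       # True
    print(is_fuzzy_improvement("status_quo", "unilateral_cleanup"))  # False
```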

    Trust Evaluation in the IoT Environment

    Along with the many benefits of the IoT, its heterogeneity brings a new challenge: establishing a trustworthy environment among objects in the absence of proper enforcement mechanisms. Further, it can be observed that these concerns are often addressed only with respect to security and privacy. However, such common network security measures are not adequate to preserve the integrity of the information and services exchanged over the internet. Hence, they remain vulnerable to threats ranging from the risks of data management at the cyber-physical layers to potential discrimination at the social layer. Therefore, trust can be considered a key property for establishing trustworthy relationships among IoT objects and guaranteeing trustworthy services. Typically, trust revolves around assurance and confidence that people, data, entities, information, or processes will function or behave in expected ways. However, trust enforcement in an artificial society like the IoT is far more difficult, as things do not have an inherent judgmental ability to assess risks and other influencing factors and evaluate trust as humans do. Hence, it is important to quantify the perception of trust such that it can be understood by artificial agents.

    In computer science, trust is considered a computational value depicted by a relationship between a trustor and a trustee, described in a specific context, measured by trust metrics, and evaluated by a mechanism. Several trust evaluation mechanisms can be found in the literature. Among them, most of the work has deviated towards security and privacy issues instead of considering the universal meaning of trust and its dynamic nature. Furthermore, they lack a proper trust evaluation model and management platform that addresses all aspects of trust establishment. Hence, it is almost impossible to bring these solutions together and develop a common platform that resolves end-to-end trust issues in a digital environment.

    Therefore, this thesis attempts to fill these gaps through the following research work. First, it proposes concrete definitions to formally identify trust as a computational concept and describe its characteristics. Next, a well-defined trust evaluation model is proposed to identify, evaluate and create trust relationships among objects for calculating trust. Then a trust management platform is presented, identifying the major tasks of the trust enforcement process, including trust data collection, trust data management, trust information analysis, dissemination of trust information and trust information lifecycle management. Next, the thesis proposes several approaches to assess trust attributes and thereby the trust metrics of the above model for trust evaluation. Further, to minimize dependence on human interaction in evaluating trust, an adaptive trust evaluation model is presented based on machine learning techniques. From a standardization point of view, the scope of the current standards on network security and cybersecurity needs to be expanded to take trust issues into consideration. Hence, this thesis provides several inputs towards standardization on trust, including a computational definition of trust, a trust evaluation model targeting both object and data trust, and a platform to manage the trust evaluation process.
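    The computational view of trust described above (a trustor-trustee relationship in a context, measured by metrics and reduced by an evaluation mechanism) can be sketched roughly as follows; the metric names and weights are illustrative assumptions, not the model proposed in the thesis.

```python
# Minimal sketch of "trust as a computational value": a trust relationship
# binds a trustor and a trustee in a context, carries trust-metric readings,
# and an evaluation mechanism reduces them to a single score. Metric names
# and weights are illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class TrustRelationship:
    trustor: str
    trustee: str
    context: str
    metrics: dict[str, float] = field(default_factory=dict)  # each in [0, 1]

# weights an evaluation mechanism might assign to each metric in this context
METRIC_WEIGHTS = {"reliability": 0.4, "data_integrity": 0.4, "cooperativeness": 0.2}

def evaluate(rel: TrustRelationship, weights: dict[str, float] = METRIC_WEIGHTS) -> float:
    """Weighted aggregation of the available metrics (missing metrics are ignored)."""
    used = {m: w for m, w in weights.items() if m in rel.metrics}
    norm = sum(used.values())
    if norm == 0:
        return 0.5  # no evidence: neutral trust
    return sum(rel.metrics[m] * w for m, w in used.items()) / norm

if __name__ == "__main__":
    rel = TrustRelationship("gateway_1", "sensor_42", "temperature_reporting",
                            {"reliability": 0.9, "data_integrity": 0.7})
    print(round(evaluate(rel), 3))  # 0.8
```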

    Evaluation of Trust in the Internet Of Things: Models, Mechanisms And Applications

    In the blooming era of the Internet of Things (IoT), trust has become a vital factor for provisioning reliable smart services without human intervention, by reducing risk in autonomous decision making. However, the merging of physical objects, cyber components and humans in the IoT infrastructure has introduced new concerns for the evaluation of trust. Consequently, a large number of trust-related challenges remain unsolved due to the ambiguity of the concept of trust and the variety of divergent trust models and management mechanisms in different IoT scenarios. In this PhD thesis, my ultimate goal is to propose efficient and practical trust evaluation mechanisms for any two entities in the IoT. To achieve this goal, the first objective is to augment the generic trust concept and provide a conceptual model of trust in order to arrive at a comprehensive understanding of trust, its influencing factors and possible Trust Indicators (TIs) in the context of the IoT. Building on this, as the second objective, a trust model called REK, comprising the triad of Reputation, Experience and Knowledge TIs, is proposed; it covers multi-dimensional aspects of trust by incorporating heterogeneous information ranging from direct observation and personal experiences to global opinions. Mathematical models and evaluation mechanisms for the three TIs in the REK trust model are proposed. The Knowledge TI acts as "direct trust", rendering a trustor's understanding of a trustee in the respective scenario, which can be obtained from the limited available information about the characteristics of the trustee, the environment and the trustor's perspective using a variety of techniques. The Experience and Reputation TIs originate from social features and are extracted from previous interactions among entities in the IoT. Mathematical models and calculation mechanisms for the Experience and Reputation TIs are also proposed, leveraging sociological behaviours of humans in the real world and drawing inspiration from Google PageRank in the web-ranking area, respectively. The REK trust model is also applied in a variety of IoT scenarios such as Mobile Crowd-Sensing (MCS), Car Sharing services, Data Sharing and Exchange platforms in Smart Cities and in Vehicular Networks, and for empowering Blockchain-based systems. The feasibility and effectiveness of the REK model and the associated evaluation mechanisms are demonstrated not only by theoretical analysis but also by real-world applications deployed in our ongoing TII and Wise-IoT projects.
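    As a rough sketch of how the three REK indicators might be combined, the code below keeps an Experience value that grows slowly with good interactions and drops sharply on bad ones, then aggregates it with Knowledge and Reputation; the update rule and weights are illustrative simplifications, not the thesis's actual REK formulas.

```python
# Illustrative sketch of a REK-style aggregation: a trust value is formed from
# Knowledge (direct assessment), Experience (interaction history) and
# Reputation (global opinion). Update rule and weights are simplifications
# for illustration, not the exact formulas proposed in the thesis.

def update_experience(current: float, cooperative: bool,
                      reward: float = 0.05, penalty: float = 0.3) -> float:
    """Experience grows slowly with good interactions, drops sharply on bad ones."""
    if cooperative:
        return min(1.0, current + reward * (1.0 - current))
    return max(0.0, current - penalty)

def rek_trust(knowledge: float, experience: float, reputation: float,
              weights: tuple[float, float, float] = (0.4, 0.35, 0.25)) -> float:
    """Weighted combination of the three trust indicators, each in [0, 1]."""
    w_k, w_e, w_r = weights
    return w_k * knowledge + w_e * experience + w_r * reputation

if __name__ == "__main__":
    experience = 0.5
    for outcome in [True, True, True, False]:   # three good interactions, one bad
        experience = update_experience(experience, outcome)
    print(round(rek_trust(knowledge=0.7, experience=experience, reputation=0.6), 3))
```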