
    Data centric trust evaluation and prediction framework for IOT

    © 2017 ITU. Applying trust principles in the Internet of Things (IoT) makes it possible to provide more trustworthy services to the corresponding stakeholders. The most common method of assessing trust in IoT applications is to estimate the trust level of the end entities (entity-centric) relative to the trustor. In these systems, the trust level of the data is assumed to be the same as the trust level of the data source. However, most IoT-based systems are data centric and operate in dynamic environments that require immediate action, without waiting for a trust report from end entities. We address this challenge by extending our previous proposals on trust establishment for entities, based on their reputation, experience and knowledge, to trust estimation for data items [1-3]. First, we present a hybrid trust framework for evaluating both data trust and entity trust, which can serve as a basis for standardization in a future data-driven society. The modules of the proposed framework, including data trust metric extraction, data trust aggregation, evaluation and prediction, are elaborated. Finally, a possible design model is described to implement the proposed ideas.
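    The aggregation and prediction modules named in this abstract might be sketched as follows. This is a minimal illustration, not the paper's method: the metric names, weights, and the exponential-moving-average predictor are all assumptions.

```python
# Hypothetical sketch of a data-trust pipeline: aggregate per-item trust
# metrics into one data-trust score, then predict the next score from
# history. Metric names and weights are illustrative assumptions.

def aggregate_data_trust(metrics, weights):
    """Weighted average of normalized trust metrics, each in [0, 1]."""
    total_w = sum(weights.values())
    return sum(weights[m] * metrics[m] for m in weights) / total_w

def predict_trust(history, alpha=0.5):
    """Exponential moving average over past aggregated trust scores."""
    est = history[0]
    for score in history[1:]:
        est = alpha * score + (1 - alpha) * est
    return est

metrics = {"provenance": 0.9, "consistency": 0.8, "timeliness": 0.6}
weights = {"provenance": 0.5, "consistency": 0.3, "timeliness": 0.2}
score = aggregate_data_trust(metrics, weights)
print(round(score, 3))                      # 0.81
print(round(predict_trust([0.7, 0.75, score]), 4))  # 0.7675
```

    A data-centric system could compute such a score per data item on arrival, instead of inheriting the trust level of the data source.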

    SECMACE: Scalable and Robust Identity and Credential Management Infrastructure in Vehicular Communication Systems

    Several years of academic and industrial research efforts have converged to a common understanding of fundamental security building blocks for the upcoming Vehicular Communication (VC) systems. There is a growing consensus towards deploying a special-purpose identity and credential management infrastructure, i.e., a Vehicular Public-Key Infrastructure (VPKI), enabling pseudonymous authentication, with standardization efforts moving in that direction. In spite of the progress made by standardization bodies (IEEE 1609.2 and ETSI) and harmonization efforts (Car2Car Communication Consortium (C2C-CC)), significant questions remain unanswered on the way to deploying a VPKI. A deep understanding of the VPKI, a central building block of secure and privacy-preserving VC systems, is still lacking. This paper contributes to closing this gap. We present SECMACE, a VPKI system compatible with the IEEE 1609.2 and ETSI standards specifications. We provide a detailed description of our state-of-the-art VPKI, which improves upon existing proposals in terms of security, privacy protection, and efficiency. SECMACE facilitates multi-domain operations in VC systems and enhances user privacy, notably preventing the linking of pseudonyms based on timing information and offering increased protection even against honest-but-curious VPKI entities. We propose multiple policies for the vehicle-VPKI interactions and, based on these policies and two large-scale mobility trace datasets, we evaluate the full-blown implementation of SECMACE. With very little attention paid to VPKI performance thus far, our results reveal that modest computing resources can support a large area of vehicles with very low delays, and that the most promising policy in terms of privacy protection can be supported with moderate overhead.
    Comment: 14 pages, 9 figures, 10 tables, IEEE Transactions on Intelligent Transportation Systems

    An identity- and trust-based computational model for privacy

    The seemingly contradictory need and want of online users for information sharing and privacy has inspired this thesis work. The crux of the problem lies in the fact that a user has inadequate control over the flow (with whom information is to be shared), boundary (acceptable usage), and persistence (duration of use) of their personal information. This thesis has built a privacy-preserving information sharing model using context, identity, and trust to manage the flow, boundary, and persistence of disclosed information. In this vein, privacy is viewed as context-dependent selective disclosure of information. This thesis presents the design, implementation, and analysis of a five-layer Identity and Trust based Model for Privacy (ITMP). Context, trust, and identity are the main building blocks of this model. The application layer identifies the counterparts, the purpose of communication, and the information being sought. The context layer determines the context of a communication episode by identifying the role of a partner and assessing the relationship with the partner. The trust layer combines partner and purpose information with the respective context information to determine the trustworthiness of a purpose and a partner. Given that the purpose and the partner have a known level of trustworthiness, the identity layer constructs a contextual partial identity from the user's complete identity. The presentation layer facilitates disclosing a set of information that is a subset of the respective partial identity. It also attaches expiration (time-to-live) and usage (purpose-to-live) tags to each piece of information before disclosure. In this model, roles and relationships are used to adequately capture the notion of context to address privacy. A role is a set of activities assigned to an actor or expected of an actor to perform.
For example, an actor in a learner role is expected to be involved in various learning activities, such as attending lectures, participating in a course discussion, appearing in exams, etc. A relationship involves related entities performing activities involving one another. Interactions between actors can be heavily influenced by roles. For example, in a learning-teaching relationship, both the learner and the teacher are expected to perform their respective roles. The nuances of activities warranted by each role are dictated by individual relationships. For example, two learners seeking help from an instructor are going to present themselves differently. In this model, trust is realized in two forms: trust in partners and trust of purposes. The first form of trust assesses the trustworthiness of a partner in a given context. For example, a stranger may be considered untrustworthy to be given a home phone number. The second form of trust determines the relevance or justification of a purpose for seeking data in a given context. For example, seeking/providing a social insurance number for the purpose of a membership in a student organization is inappropriate. A known and tested trustee can understandably be re-trusted or re-evaluated based on the personal experience of a trustor. In online settings, however, a software manifestation of a trusted persistent public actor, namely a guarantor, is required to help find a trustee, because we interact with a myriad of actors in a large number of contexts, often with no prior relationships. The ITMP model is instantiated as a suite of Role- and Relationship-based Identity and Reputation Management (RRIRM) features in iHelp, an e-learning environment in use at the University of Saskatchewan. 
    This thesis presents the results of a two-phase (pilot and larger-scale) user study that illustrates the effectiveness of the RRIRM features, and thus the ITMP model, in enhancing privacy through identity and trust management in the iHelp Discussion Forum. This research contributes to the understanding of privacy problems alongside other competing interests in the online world, as well as to the development of privacy-enhanced communications through understanding context, negotiating identity, and using trust.
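    The expiration (time-to-live) and usage (purpose-to-live) tags attached by the presentation layer could be sketched as below. The class and field names are hypothetical illustrations, not taken from the ITMP implementation.

```python
# Minimal sketch of tagged disclosure: each disclosed value carries a
# purpose-to-live (acceptable usage) and a time-to-live (expiration).
# Names and API shape are assumptions for illustration only.
import time
from dataclasses import dataclass

@dataclass
class TaggedDisclosure:
    value: str          # the disclosed piece of information
    purpose: str        # purpose-to-live: the acceptable usage
    expires_at: float   # time-to-live: Unix timestamp; invalid afterwards

    def usable(self, requested_purpose, now=None):
        """Allow use only for the tagged purpose and before expiry."""
        now = time.time() if now is None else now
        return requested_purpose == self.purpose and now < self.expires_at

d = TaggedDisclosure("555-0100", purpose="course-registration",
                     expires_at=time.time() + 3600)
print(d.usable("course-registration"))  # True
print(d.usable("marketing"))            # False
```

    A receiving party's compliant software would check `usable()` before every access, so information stops being usable once its purpose or lifetime is exhausted.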

    Health Information Systems in the Digital Health Ecosystem—Problems and Solutions for Ethics, Trust and Privacy

    Digital health information systems (DHIS) are increasingly members of ecosystems, collecting, using and sharing a huge amount of personal health information (PHI), frequently without the data subject's control or authorization. From the data subject's perspective, there is frequently no guarantee, and therefore no trust, that PHI is processed ethically in Digital Health Ecosystems. This results in new ethical, privacy and trust challenges to be solved. The authors' objective is to find a combination of ethical principles and privacy and trust models that together enable the design and implementation of DHIS that act ethically, are trustworthy, and support the user's privacy needs. Research published in journals, conference proceedings, and standards documents is analyzed from the viewpoint of ethics, privacy and trust. In that context, systems theory and systems engineering approaches, together with heuristic analysis, are deployed. The ethical model proposed is a combination of consequentialism, professional medical ethics and utilitarianism. Privacy enforcement can be facilitated by defining privacy as a health-information-specific contextual intellectual property right, under which a service user can express their own privacy needs using computer-understandable policies. Privacy, as a dynamic and indeterminate concept, and computational trust are then modeled using linguistic values and fuzzy mathematics. The proposed solution, combining ethical principles, privacy as intellectual property, and computational trust models, shows a new way to achieve ethically acceptable, trustworthy and privacy-enabling DHIS and Digital Health Ecosystems.
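    The idea of expressing trust with linguistic values and fuzzy mathematics might look like the sketch below. The five-value scale and the triangular fuzzy numbers are assumptions for illustration; the paper does not define these particular values.

```python
# Hypothetical sketch: map linguistic trust values to triangular fuzzy
# numbers (l, m, u), then defuzzify to a crisp score. The scale below
# is an illustrative assumption.

LINGUISTIC = {
    "very low":  (0.0, 0.0, 0.25),
    "low":       (0.0, 0.25, 0.5),
    "medium":    (0.25, 0.5, 0.75),
    "high":      (0.5, 0.75, 1.0),
    "very high": (0.75, 1.0, 1.0),
}

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number."""
    l, m, u = tfn
    return (l + m + u) / 3

def combined_trust(ratings):
    """Average the defuzzified values of several linguistic ratings."""
    return sum(defuzzify(LINGUISTIC[r]) for r in ratings) / len(ratings)

print(round(combined_trust(["high", "medium", "very high"]), 4))  # 0.7222
```

    A user could thus state "I trust this service highly for appointment data but only moderately for diagnoses" in words, while the system computes with the underlying fuzzy numbers.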

    Privacy, security, and trust issues in smart environments

    Recent advances in networking, handheld computing and sensor technologies have driven forward research towards the realisation of Mark Weiser's dream of calm and ubiquitous computing (variously called pervasive computing, ambient computing, active spaces, the disappearing computer or context-aware computing). In turn, this has led to the emergence of smart environments as one significant facet of research in this domain. A smart environment, or space, is a region of the real world that is extensively equipped with sensors, actuators and computing components [1]. In effect, the smart space becomes a part of a larger information system: all actions within the space potentially affect the underlying computer applications, which may themselves affect the space through the actuators. Such smart environments have tremendous potential within many application areas to improve the utility of a space. Consider the potential offered by a smart environment that prolongs the time an elderly or infirm person can live an independent life, or by one that supports vicarious learning.

    Trust aware system for social networks: A comprehensive survey

    Social networks are platforms that let users connect with other users based on their interests and lifestyles. Existing social networks have millions of users, the data they generate is huge, and it is difficult to differentiate real users from fake users. Hence, a trustworthy system is needed for differentiating real and fake users. Social networking enables users to send friend requests, upload photos, tag their friends, and suggest web links based on users' interests. A recommended friend, a tagged photo, or a suggested web link may carry malware or represent an untrusted activity. Users on social networks are authorized by providing personal data. This personal raw data is available to all other users online, and there are no protections or methods to secure it from unknown users. Hence, to move towards a trustworthy system that supports genuine user activity, this paper reviews different methods for achieving trustworthy social networking systems.

    Trustworthy artificial intelligence

    Artificial intelligence (AI) brings forth many opportunities to contribute to the wellbeing of individuals and the advancement of economies and societies, but also a variety of novel ethical, legal, social, and technological challenges. Trustworthy AI (TAI) is based on the idea that trust builds the foundation of societies, economies, and sustainable development, and that individuals, organizations, and societies will therefore only ever be able to realize the full potential of AI if trust can be established in its development, deployment, and use. With this article we aim to introduce the concept of TAI and its five foundational principles: (1) beneficence, (2) non-maleficence, (3) autonomy, (4) justice, and (5) explicability. We further draw on these five principles to develop a data-driven research framework for TAI and demonstrate its utility by delineating fruitful avenues for future research, particularly with regard to the distributed-ledger-technology-based realization of TAI.

    A NOVEL FRAMEWORK FOR SOCIAL INTERNET OF THINGS: LEVERAGING THE FRIENDSHIPS AND THE SERVICES EXCHANGED BETWEEN SMART DEVICES

    As humans, we tackle many problems in complex societies and manage the complexities of networked social systems. Cognition and sociability are two vital human capabilities that improve social life and complex social interactions. Adding these features to smart devices makes them capable of managing complex, networked Internet of Things (IoT) settings. Cognitive and social devices can improve their relationships and connections with other devices and people to better serve human needs. Researchers are now investigating two future generations of IoT: social IoT (SIoT) and cognitive IoT (CIoT). This study develops a new framework for IoT, called CSIoT, by using complexity science concepts and by integrating social and cognitive IoT concepts. The framework uses a new mechanism that leverages the friendships between devices to address service management, privacy, and security, and it addresses network navigability, resilience, and heterogeneity between devices in IoT settings. This study evaluates the privacy-preserving ability of CSIoT using a new simulation tool. To address further CSIoT security and privacy issues, the study also proposes a blockchain-based CSIoT. The evaluation results show that CSIoT can effectively preserve privacy and that the blockchain-based CSIoT performs effectively in addressing different privacy and security issues.

    Trustworthiness Requirements in Information Systems Design: Lessons Learned from the Blockchain Community

    In modern society, where digital security is a major preoccupation, the perception of trust is undergoing fundamental transformations. The blockchain community has created a substantial body of knowledge on the design and development of trustworthy information systems and digital trust. Yet little research has focused on a broader scope and other forms of trust. In this study, we review the research literature reporting on the design and development of blockchain solutions and focus on the trustworthiness requirements that drive these solutions. Our findings show that digital trust is not the only form of trust that organizations seek to reinforce: trust in technology and social trust remain powerful drivers in decision making. We analyze 56 primary studies and extract and formulate a set of 21 trustworthiness requirements. While they originate from the blockchain literature, the formulated requirements are technology-neutral: they aim to support business and technology experts in translating their trust issues into specific design decisions and in rationalizing their technological choices. To bridge the gap between the social and technological domains, we associate the trustworthiness requirements with three trustworthiness factors defined in social science: ability, benevolence and integrity.

    Process Versus Outcome Accountability

    Private- and public-sector managers face a recurring organizational-design dilemma: the relative emphasis to place on process versus outcome accountability in evaluating employee performance. This chapter reviews experimental-psychological research that emphasizes the benefits of process accountability and then notes blind spots in that literature, especially its insensitivity to the relational signals that accountability manipulations convey about how evaluators view the evaluated. The chapter also examines real-world ideologically driven debates over accountability design, in which partisans tend to favor no-excuses forms of outcome accountability for those deemed untrustworthy and uncertainty-buffering forms of process accountability for the trustworthy. Finally, an integrative framework is proposed that examines how managers can balance the inevitable tradeoffs between the decision-making control enhanced under process accountability and the innovation fostered under outcome accountability, and sheds light on how agent empowerment can compensate for the deficiencies of both systems.