3 research outputs found

    Tree-formed Verification Data for Trusted Platforms

    The establishment of trust relationships to a computing platform relies on validation processes. Validation allows an external entity to build trust in the expected behaviour of the platform based on provided evidence of the platform's configuration. In a process like remote attestation, the 'trusted' platform submits verification data created during a start-up process. These data consist of hardware-protected values of platform configuration registers, containing nested measurement values, e.g., hash values, of loaded or started components. Commonly, the register values are created in linear order by a hardware-secured operation. Fine-grained diagnosis of components, based on the linear order of verification data and associated measurement logs, is not optimal. We propose a method to use tree-formed verification data to validate a platform. Component measurement values represent leaves, and protected registers represent roots of a hash tree. We describe the basic mechanism of validating a platform using tree-formed measurement logs and root registers and show a logarithmic speed-up for the search of faults. Secure creation of a tree is possible using a limited number of hardware-protected registers and a single protected operation. In this way, the security of tree-formed verification data is maintained.
    Comment: 15 pages, 11 figures, v3: reference added, v4: revised, accepted for publication in Computers and Security
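    The validation mechanism lends itself to a short illustration. Below is a minimal Python sketch, not the paper's exact register-update protocol and with function names of our own choosing: component measurements form the leaves of a binary hash tree, the root stands in for a protected register, and a validator descends only into mismatching subtrees, so a single tampered component is located in O(log n) comparisons rather than by replaying a linear measurement log.

```python
# Minimal sketch of tree-formed verification data (illustrative, not the
# paper's algorithm): leaves are component measurement hashes, the root
# plays the role of a protected register, and fault search descends only
# into subtrees whose roots differ from a known-good reference tree.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return the tree as a list of levels, leaves first, root last."""
    levels = [list(leaves)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        # Duplicate the last node when a level has odd length.
        if len(prev) % 2:
            prev = prev + [prev[-1]]
        levels.append([h(prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels

def find_faults(ref, cur, level=None, index=0, faults=None):
    """Recursively descend only into subtrees whose roots mismatch."""
    if faults is None:
        faults = []
    if level is None:
        level = len(ref) - 1           # start at the root register
    if ref[level][index] == cur[level][index]:
        return faults                  # whole subtree matches: skip it
    if level == 0:
        faults.append(index)           # mismatching leaf = faulty component
        return faults
    for child in (2 * index, 2 * index + 1):
        if child < len(ref[level - 1]):
            find_faults(ref, cur, level - 1, child, faults)
    return faults

# Hypothetical component measurements (e.g., hashes of loaded binaries).
good = [h(b"boot"), h(b"kernel"), h(b"driver"), h(b"app")]
bad = list(good)
bad[2] = h(b"tampered-driver")         # simulate one modified component

ref, cur = build_tree(good), build_tree(bad)
assert ref[-1][0] != cur[-1][0]        # root registers differ
print(find_faults(ref, cur))           # -> [2]
```

    Running the sketch prints [2], the index of the tampered component, after comparing only the subtree roots along one root-to-leaf path plus their siblings.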

    Data privacy threat modelling for autonomous systems: a survey from the GDPR’s perspective

    Artificial Intelligence-based applications have been increasingly deployed in every field of life, including smart homes, smart cities, healthcare services, and autonomous systems, where personal data is collected across heterogeneous sources and processed using "black-box" algorithms in opaque centralised servers. As a consequence, preserving the data privacy and security of these applications is of utmost importance. In this respect, a modelling technique for identifying potential data privacy threats and specifying countermeasures to mitigate the related vulnerabilities in such AI-based systems plays a significant role in preserving and securing personal data. Various threat modelling techniques have been proposed, such as STRIDE, LINDDUN, and PASTA, but none of them is sufficient to model the data privacy threats in autonomous systems. Furthermore, they are not designed to model compliance with data protection legislation like the EU/UK General Data Protection Regulation (GDPR), which is fundamental to protecting data owners' privacy as well as to preventing personal data from potential privacy-related attacks. In this article, we survey the existing threat modelling techniques for data privacy threats in autonomous systems and then analyse such techniques from the viewpoint of GDPR compliance. Following the analysis, we employ STRIDE and LINDDUN in autonomous cars, a specific use case of autonomous systems, to scrutinise the challenges and gaps of the existing techniques when modelling data privacy threats. Prospective research directions for refining data privacy threat and GDPR-compliance modelling techniques for autonomous systems are also presented.
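    To make the enumeration style concrete, here is a toy Python sketch of STRIDE-per-element threat listing over a hypothetical autonomous-car data-flow diagram; the element names and applicability table are illustrative, not taken from the survey. LINDDUN proceeds analogously, with privacy-specific categories such as Linkability and Identifiability in place of the STRIDE ones.

```python
# Toy STRIDE-per-element enumeration (illustrative names, not from the
# survey): each data-flow-diagram element is checked against the threat
# categories conventionally applicable to its element type.
STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege"]

# Applicability loosely following the STRIDE-per-element convention:
# processes face all six; stores, flows, and external entities a subset.
APPLICABLE = {
    "process":    set(STRIDE),
    "data_store": {"Tampering", "Repudiation",
                   "Information disclosure", "Denial of service"},
    "data_flow":  {"Tampering", "Information disclosure",
                   "Denial of service"},
    "external":   {"Spoofing", "Repudiation"},
}

dfd = [  # hypothetical elements of an autonomous-car data-flow diagram
    ("GPS sensor feed", "data_flow"),
    ("Trip history DB", "data_store"),
    ("Route planner",   "process"),
    ("Vehicle owner",   "external"),
]

for name, kind in dfd:
    for threat in sorted(APPLICABLE[kind]):
        print(f"{name}: {threat}")
```

    A GDPR-oriented refinement of the kind the survey calls for would additionally map each identified threat to the data protection obligations it endangers, which neither STRIDE nor LINDDUN does out of the box.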

    Modelling And Reasoning About Trust Relationships In The Development Of Trustworthy Information Systems

    Trustworthy information systems are information systems that fulfill all their functional and non-functional requirements. To this end, all the components of an information system, whether human or technical, need to collaborate in order to meet its requirements and achieve its goals. This entails that system components will show the desired or expected behaviour once the system is put in operation. However, modern information systems include a great number of components that can behave in very unpredictable ways. This unpredictability of the behaviour of system components is a major challenge to the development of trustworthy information systems, particularly during the modelling stage. When a system component is modelled as part of a requirements engineering model, uncertainty is created about its future behaviour, undermining the accuracy of the system model and eventually the system's trustworthiness. The addition of system components is therefore inevitably based on assumptions about their future behaviour. Such assumptions underlie the development of a system and are usually trust assumptions made by the system developer about her trust relationships with the system components, which are formed the moment a component is inserted into a requirements engineering model of a system. Despite the importance of these issues, a requirements engineering methodology that explicitly captures such trust relationships, along with the entailed trust assumptions and trustworthiness requirements, is still missing. To tackle these problems, the thesis proposes a requirements engineering methodology, namely JTrust (Justifying Trust), for developing trustworthy information systems. The methodology is founded upon the notions of trust and control as the means of achieving confidence. To develop an information system, the developer needs to consider her trust relationships with the system components, which are formed when the components are added to a system model, reason about them, and proceed to a justified decision about the design of the system. If a system component cannot be trusted to behave in a desired or expected way, the question arises of what the alternatives are for building confidence in its future behaviour. To answer this question, we define a new class of requirements, namely trustworthiness requirements. Trustworthiness requirements prescribe the functionality of the software included in the information system that compels the rest of the information system components to behave in a desired or expected way. The proposed methodology consists of: (i) a modelling language containing trust and control abstractions; and (ii) a methodological process for capturing and reasoning about trust relationships, modelling and analysing trustworthiness requirements, and assessing the system's trustworthiness at the requirements stage. The methodology is accompanied by a CASE tool to support it. To evaluate our proposal, we applied the methodology to a case study and carried out a survey to get feedback from experts. The topic of the case study was the e-health care system of the National Health Service in England, which was used to reason about trust relationships with system components and identify trustworthiness requirements. Researchers from three academic institutions across Europe and from one industrial company, British Telecom, participated in the survey in order to provide valuable feedback about the effectiveness and efficiency of the methodology. The results conclude that JTrust is useful and easy to use in modelling and reasoning about trust relationships, modelling and analysing trustworthiness requirements, and assessing system trustworthiness at the requirements level.
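    The abstract does not spell out JTrust's metamodel, but its central move can be sketched hypothetically: adding a component to a system model either records a justified trust assumption or, failing that, raises a trustworthiness requirement that compels the desired behaviour through software-enforced control. All names in the sketch below are illustrative, not taken from the thesis.

```python
# Hypothetical sketch of the trust-or-control decision JTrust describes
# (all names illustrative): each component added to a system model either
# yields a recorded trust assumption or a trustworthiness requirement.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    trusted: bool  # can the developer justify a trust assumption?

@dataclass
class SystemModel:
    components: list = field(default_factory=list)
    trust_assumptions: list = field(default_factory=list)
    trustworthiness_requirements: list = field(default_factory=list)

    def add(self, c: Component):
        self.components.append(c)
        if c.trusted:
            # Trust relationship: record the assumption for later review.
            self.trust_assumptions.append(
                f"{c.name} is assumed to behave as expected")
        else:
            # No justified trust: compel the behaviour through software.
            self.trustworthiness_requirements.append(
                f"The system shall monitor and constrain {c.name}")

model = SystemModel()
model.add(Component("Care record service", trusted=True))
model.add(Component("Third-party app", trusted=False))
print(model.trust_assumptions)
print(model.trustworthiness_requirements)
```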