
    CamFlow: Managed Data-sharing for Cloud Services

    A model of cloud services is emerging whereby a few trusted providers manage the underlying hardware and communications, whereas many companies build on this infrastructure to offer higher-level, cloud-hosted PaaS services and/or SaaS applications. From the start, strong isolation between cloud tenants was seen to be of paramount importance, provided first by virtual machines (VMs) and later by containers, which share the operating system (OS) kernel. Increasingly, applications also require facilities to effect isolation and protection of the data they manage. They also require flexible data sharing with other applications, often across the traditional cloud-isolation boundaries; for example, when government provides many related services for its citizens on a common platform. Similar considerations apply to the end-users of applications. In particular, the incorporation of cloud services within `Internet of Things' architectures is driving the requirements for both protection and cross-application data sharing. These concerns relate to the management of data. Traditional access control is application- and principal/role-specific, applied at policy enforcement points, after which there is no subsequent control over where data flows; a crucial issue once data has left its owner's control, passing through cloud-hosted applications and within cloud services. Information Flow Control (IFC), in addition, offers system-wide, end-to-end flow control based on the properties of the data. We discuss the potential of cloud-deployed IFC for enforcing owners' dataflow policy with regard to protection and sharing, as well as safeguarding against malicious or buggy software. In addition, the audit log associated with IFC provides transparency, giving configurable system-wide visibility over data flows. [...] Comment: 14 pages, 8 figures
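The flow-control idea in the abstract can be illustrated with a minimal tag-based IFC check. The label model below (frozensets of secrecy and integrity tags, subset rules on flows) is an illustrative assumption in the general IFC style, not CamFlow's actual implementation.

```python
# Minimal sketch of tag-based Information Flow Control (IFC).
# A flow src -> dst is safe iff the destination carries at least the
# source's secrecy tags and claims no more integrity than the source.
from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    secrecy: frozenset = frozenset()    # tags restricting where data may flow
    integrity: frozenset = frozenset()  # tags vouching for data quality

def flow_allowed(src: Label, dst: Label) -> bool:
    return (src.secrecy <= dst.secrecy) and (dst.integrity <= src.integrity)

# Hypothetical example: medical data may flow to a tagged hospital app,
# but not to an untagged analytics service.
medical = Label(secrecy=frozenset({"medical"}), integrity=frozenset({"nhs"}))
hospital_app = Label(secrecy=frozenset({"medical", "audit"}))
analytics = Label()

print(flow_allowed(medical, hospital_app))  # True
print(flow_allowed(medical, analytics))     # False
```

Because every attempted flow passes through one check function, logging each call also yields exactly the kind of system-wide audit trail the abstract describes.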

    Trustworthy repositories: Audit and certification (TRAC) Cline Library internal audit, spring 2014

    Audit and Certification of Trustworthy Digital Repositories (TRAC) is a recommended practice developed by the Consultative Committee for Space Data Systems. The TRAC international standard (ISO 16363:2012) provides institutions with guidelines for performing internal audits to evaluate the trustworthiness of digital repositories, and creates a structure to support external certification of repositories. TRAC establishes criteria, evidence, best practices and controls that digital repositories can use to assess their activities in the areas of organizational infrastructure, digital object management, and technical infrastructure and risk management. The Cline Library at Northern Arizona University has undertaken an internal audit based on TRAC in order to evaluate the policies, procedures and workflows of the existing digital archives and to prepare for the development and implementation of the proposed institutional repository. The following document provides an overview of the results and recommendations produced by this internal audit.

    A formal approach for network security policy validation

    Network security is a crucial concern for administrators due to increasing network size and the growing number of functions and controls (e.g. firewall, DPI, parental control). Errors in configuring security controls may result in serious security breaches and vulnerabilities (e.g. blocking legitimate traffic or permitting unwanted traffic) that must be detected and addressed. This work proposes a novel approach for validating network policy enforcement by checking the network status and configuration, and for detecting the possible causes in case of misconfiguration or software attacks. Our contribution exploits formal methods to model and validate the packet processing and forwarding behaviour of security controls, and to validate the trustworthiness of the controls by using remote attestation. A prototype implementation of this approach is proposed to validate different scenarios.
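The validation idea can be sketched in miniature: model a firewall as an ordered first-match rule list and compare its actual decisions against the intended high-level policy. The rule format here is hypothetical; the paper's approach uses full formal models plus remote attestation rather than this simple oracle check.

```python
# Toy policy-validation check: does the configured rule chain actually
# enforce the intended policy?

def decide(rules, packet):
    """Return the action of the first matching rule, default 'allow'."""
    for match, action in rules:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "allow"

def validate(rules, policy):
    """policy: list of (packet, expected_action); return mismatches."""
    return [(p, exp, decide(rules, p)) for p, exp in policy
            if decide(rules, p) != exp]

rules = [({"dst_port": 22}, "deny"),         # block inbound SSH
         ({"src_zone": "internal"}, "allow")]  # trust internal hosts

policy = [({"dst_port": 22}, "deny"),
          ({"dst_port": 80}, "allow")]

print(validate(rules, policy))  # [] -> enforcement matches the policy
```

A non-empty result pinpoints which intended decision the configuration violates, i.e. the "possible cause" the abstract mentions.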

    Combining behavioural types with security analysis

    Today's software systems are highly distributed and interconnected, and they increasingly rely on communication to achieve their goals; due to their societal importance, security and trustworthiness are crucial aspects of the correctness of these systems. Behavioural types, which extend data types by also describing the structured behaviour of programs, are a widely studied approach to the enforcement of correctness properties in communicating systems. This paper offers a unified overview of proposals based on behavioural types which are aimed at the analysis of security properties.
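To give a flavour of the idea: a behavioural (session) type specifies a protocol as a typed sequence of communication actions, and conformance means a program's trace follows it. Real behavioural type systems check this statically; this dynamic trace check is only an illustrative assumption, with a made-up authentication protocol.

```python
# A protocol as a sequence of (role, action, message-type) steps.
PROTOCOL = [("client", "send", "Login"),
            ("server", "send", "Challenge"),
            ("client", "send", "Response"),
            ("server", "send", "Token")]

def conforms(trace):
    """A trace is well-typed iff it follows the protocol step by step."""
    return list(trace) == PROTOCOL

good = list(PROTOCOL)
bad = PROTOCOL[:2] + [("server", "send", "Token")]  # skips the Response step

print(conforms(good))  # True
print(conforms(bad))   # False
```

Security analyses based on behavioural types rule out exactly such ill-typed traces, e.g. a token issued before authentication completes.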

    Data trust framework using blockchain and smart contracts

    Lack of trust is the main barrier preventing more widespread data sharing. The absence of transparent and reliable infrastructure for data sharing prevents many data owners from sharing their data. Data trust is a paradigm that facilitates data sharing by requiring data controllers to be transparent about the process of sharing and reusing data. Blockchain technology has the potential to provide the essential properties for creating a practical and secure data trust framework, by transforming current auditing practices and automatically enforcing smart contract logic without relying on intermediaries to establish trust. Blockchain holds enormous potential to remove the barriers of traditional centralized applications and offer distributed, transparent administration, by enabling the involved parties to maintain consensus on the ledger. Furthermore, smart contracts are programmable components that give blockchain more flexible and powerful capabilities. Recent advances in blockchain platforms towards smart contract development have revealed the possibility of implementing blockchain-based applications in various domains, such as health care, supply chain and digital identity. This dissertation investigates blockchain's potential to provide a framework for data trust. It starts with a comprehensive study of smart contracts as the main component of blockchain for developing decentralized data trust. Three interrelated decentralized applications that address data sharing and access control problems in various fields, including healthcare data sharing, business processes, and physical access control, have been developed and examined. In addition, a general-purpose application based on an attribute-based access control model is proposed that can provide the trusted auditability required for data sharing and access control systems and, ultimately, a data trust framework.
Besides auditing, the system offers a level of transparency from which both access requesters (data users) and resource owners (data controllers) can benefit. The proposed solutions have been validated through a use case of independent digital libraries. A detailed performance analysis of the system implementation is also provided. The performance results have been compared across different consensus mechanisms and databases, indicating the system's high throughput and low latency. Finally, this dissertation presents an end-to-end data trust framework based on blockchain technology. The proposed framework promotes data trustworthiness by assessing input datasets, effectively managing access control, and presenting data provenance and activity monitoring. A trust assessment model that examines the trustworthiness of input datasets and calculates a trust value is presented, and the number of transaction validators is adapted to that trust value. This research provides solutions for both data owners and data users by ensuring the trustworthiness and quality of the data at its origin and the transparent and secure usage of the data downstream. A comprehensive experimental study indicates that the presented system effectively handles a large number of transactions with low latency.
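The attribute-based access control (ABAC) decision at the core of such a framework can be sketched as follows. Attribute names and the policy shape are illustrative assumptions; in the dissertation's setting this logic would live in a smart contract, with every decision recorded on the ledger for audit.

```python
# Hedged ABAC sketch: grant if any policy's attribute conditions all hold.

def abac_decide(policies, subject, resource, action):
    for p in policies:
        if (action in p["actions"]
                and all(subject.get(k) == v for k, v in p["subject"].items())
                and all(resource.get(k) == v for k, v in p["resource"].items())):
            return "permit"
    return "deny"

# Hypothetical policy for the digital-library use case.
policies = [{"subject": {"role": "researcher"},
             "resource": {"library": "digital-archive"},
             "actions": {"read"}}]

subject = {"role": "researcher", "org": "lib-a"}
resource = {"library": "digital-archive"}

print(abac_decide(policies, subject, resource, "read"))   # permit
print(abac_decide(policies, subject, resource, "write"))  # deny
```

Evaluating attributes rather than fixed identities is what lets one general-purpose contract serve many applications, as the abstract claims.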

    Analysis of Mobile Networks’ Protocols Based on Abstract State Machines

    We define MOTION (MOdeling and simulaTIng mObile adhoc Networks), a Java application based on the ASMETA (ASM mETAmodeling) framework, that uses the ASM (Abstract State Machine) formalism to model and simulate mobile networks. In particular, the AODV (Ad-hoc On-demand Distance Vector) protocol is used to show the behaviour of the application.
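A toy ASM run conveys the execution model: the state is a set of locations, and each step fires all enabled update rules simultaneously. The AODV-flavoured example below (hop counts propagating along a line topology) is a simplified illustration, not the MOTION tool's actual model.

```python
# One ASM step: every node adopts the best (minimum) hop count
# advertised by a neighbour, plus one; all updates are applied at once.

def step(state, neighbours):
    updates = {}
    for node, hops in state.items():
        candidates = [state[n] + 1 for n in neighbours[node]
                      if state[n] is not None]
        best = min(candidates) if candidates else None
        if best is not None and (hops is None or best < hops):
            updates[node] = best
    return {**state, **updates}

# Line topology A - B - C; A originates the route (0 hops to itself).
neighbours = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
state = {"A": 0, "B": None, "C": None}
state = step(state, neighbours)   # B learns a 1-hop route
state = step(state, neighbours)   # C learns a 2-hop route
print(state)  # {'A': 0, 'B': 1, 'C': 2}
```

Iterating `step` until the state stops changing is exactly the run-to-fixpoint simulation an ASM tool performs, here over a route-discovery rule.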

    Trust and integrity in distributed systems

    In the last decades, we have witnessed the explosive growth of the Internet. The massive adoption of distributed systems on the Internet allows users to offload their compute-intensive work to remote servers, e.g. the cloud. In this context, distributed systems are pervasively used in a number of different scenarios, such as web-based services that receive and process data, cloud nodes where company data and processes are executed, and softwarised networks that process packets. In these systems, all the computing entities need to trust each other and co-operate in order to work properly. While the communication channels can be well protected by protocols like TLS or IPsec, the problem lies in the expected behaviour of the remote computing platforms, because they are not under the direct control of end users and do not offer any guarantee that they will behave as agreed. For example, the remote party may use non-legitimate services for its own convenience (e.g. illegally storing received data and routed packets), or the remote system may misbehave due to an attack (e.g. changing deployed services). This is especially important because most of these computing entities need to expose interfaces towards the Internet, which makes them easier to attack. Hence, software-based security solutions alone are insufficient to deal with the current scenario of distributed systems. They must be coupled with stronger means such as hardware-assisted protection. In order to allow the nodes in a distributed system to trust each other, their integrity must be presented and assessed to predict their behaviour. The remote attestation technique of trusted computing was proposed specifically to deal with the integrity of remote entities, e.g. whether a platform is compromised by bootkit attacks or a cracked kernel and services.
This technique relies on a hardware chip called the Trusted Platform Module (TPM), which is available in most business-class laptops, desktops and servers. The TPM acts as the hardware root of trust, providing a special set of capabilities that allows a physical platform to present its integrity state. With a TPM on the motherboard, remote attestation is the procedure by which a physical node provides hardware-based proof of the software components loaded on the platform, which other entities can evaluate to conclude its integrity state. Thanks to the hardware TPM, the remote attestation procedure is resistant to software attacks. However, even though the availability of this chip is high, its actual usage is low. The major reason is that trusted computing has very little flexibility, since its goal is to provide strong integrity guarantees. For instance, a remote attestation result is positive if and only if the software components loaded on the platform are expected and loaded in a specific order, which limits its applicability in real-world scenarios. For such reasons, this technique is especially hard to apply to software services running in the application layer, which are loaded in arbitrary order and constantly updated. Because of this, current remote attestation techniques provide an incomplete solution: they only focus on the boot phase of physical platforms and not on the services, let alone services running in virtual instances. This work first proposes a new remote attestation framework capable of presenting and evaluating the integrity state not only of the boot phase of physical platforms but also of software services at load time, e.g. whether the software is legitimate or not. The framework allows users to know and understand the integrity state of the whole life cycle of the services they are interacting with, so that they can make an informed decision about whether to send their data or trust the received results.
Second, based on the remote attestation framework, this thesis proposes a method to bind the identity of a secure channel endpoint to a specific physical platform and its integrity state. Secure channels are extensively adopted in distributed systems to protect data transmitted from one platform to another. However, they do not convey any information about the integrity state of the platform or the service that generates and receives this data, which leaves ample space for various attacks. By binding the secure channel endpoint to the hardware TPM, users are protected from relay attacks (through hardware-based identity) and from malicious or cracked platforms and software (through remote attestation). Third, with the help of the remote attestation framework, this thesis introduces a new method to include the integrity state of software services running in virtual containers in the evidence generated by the hardware TPM. This solution is especially important for softwarised network environments. Softwarised networking was proposed to make network deployment, an increasingly complex task nowadays, dynamic and flexible. Its main idea is to replace hardware appliances with softwarised network functions running inside virtual instances, which are full-fledged computational systems accessible from the Internet, so their integrity is at stake. Unfortunately, current remote attestation work is not able to provide hardware-based integrity evidence for software services running inside virtual instances, because the direct link between the internals of virtual instances and the hardware root of trust is missing. With the solution proposed in this thesis, the integrity state of the softwarised network functions running in virtual containers can be presented and evaluated with hardware-based evidence, implying the integrity of the whole softwarised network.
The proposed remote attestation framework, trusted channel and trusted softwarised network are implemented in separate working prototypes. Their performance was evaluated and shown to be excellent, allowing them to be applied in real-world scenarios. Moreover, the implementation exposes various APIs to simplify future integration with different management platforms, such as OpenStack and OpenMANO.
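The hash-chain measurement underlying remote attestation can be shown in a few lines: each loaded component is "extended" into a Platform Configuration Register (PCR) as PCR = H(PCR || H(component)), so a verifier can recompute the expected value from a whitelist. A real TPM performs this in hardware (TPM2_PCR_Extend); this Python mock only demonstrates the arithmetic, and the component names are made up.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def extend(pcr: bytes, component: bytes) -> bytes:
    """PCR_new = H(PCR_old || H(component)); order-sensitive by design."""
    return sha256(pcr + sha256(component))

boot_chain = [b"bootloader", b"kernel", b"init"]

pcr = bytes(32)  # PCRs start zeroed at boot
for component in boot_chain:
    pcr = extend(pcr, component)

# The verifier recomputes the value from its whitelist and compares.
expected = bytes(32)
for component in [b"bootloader", b"kernel", b"init"]:
    expected = extend(expected, component)
print(pcr == expected)       # True: platform matches the whitelist

# A single changed or reordered component yields a different PCR.
tampered = extend(extend(extend(bytes(32), b"bootloader"),
                         b"rootkit"), b"init")
print(tampered == expected)  # False
```

The order-sensitivity of `extend` is exactly the rigidity the abstract criticises: any reordering of legitimate application-layer services changes the PCR and fails attestation.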

    Survey On Ensuring Distributed Accountability for Data Sharing in the Cloud

    Cloud computing is the use of computing resources that are delivered as a service over a network, for example the Internet. It enables highly scalable services to be easily consumed over the Internet on an as-needed basis. An important characteristic of cloud services is that users' data are usually processed remotely on unknown machines that users do not operate, which can become a substantial barrier to the wide adoption of cloud services. To address this problem, this work surveys a highly decentralized accountability framework that keeps track of the actual usage of users' data in the cloud, with an automated logging and distributed auditing mechanism. The Cloud Information Accountability framework discussed in this work conducts distributed auditing of relevant access performed by any entity, carried out at any point of time at any cloud service provider. It contains two major components: the logger and the log harmonizer. The methodology also protects the JAR file by converting the JAR into obfuscated code, which adds an additional layer of security to the infrastructure. In addition, this work increases the security of users' data through provable data control for integrity verification.
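A tamper-evident access log of the kind a logger / log-harmonizer pair could maintain can be sketched as a hash chain: each entry includes the digest of the previous one, so the harmonizer's periodic audit detects deletion or edits. The field names and flow are illustrative assumptions, not the surveyed framework's actual format.

```python
import hashlib, json

def append(log, entry):
    """Chain each record to its predecessor's digest."""
    prev = log[-1]["digest"] if log else "0" * 64
    body = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"entry": entry, "digest": digest})

def audit(log):
    """Recompute the chain; return True iff no record was altered."""
    prev = "0" * 64
    for rec in log:
        body = json.dumps(rec["entry"], sort_keys=True)
        if hashlib.sha256((prev + body).encode()).hexdigest() != rec["digest"]:
            return False
        prev = rec["digest"]
    return True

log = []
append(log, {"who": "csp-1", "action": "read", "item": "report.pdf"})
append(log, {"who": "csp-2", "action": "copy", "item": "report.pdf"})
print(audit(log))                   # True

log[0]["entry"]["action"] = "none"  # tampering breaks the chain
print(audit(log))                   # False
```

Distributing copies of the chained log to multiple auditors is what makes the auditing decentralized: no single provider can rewrite history unnoticed.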

    Framework for Security Transparency in Cloud Computing

    The migration of sensitive data and applications from the on-premise data centre to a cloud environment increases cyber risks to users, mainly because the cloud environment is managed and maintained by a third party. In particular, the partial surrender of sensitive data and applications to a cloud environment creates numerous concerns related to a lack of security transparency. Security transparency involves the disclosure of information by cloud service providers about the security measures put in place to protect assets and meet the expectations of customers. It establishes trust in the service relationship between cloud service providers and customers; without evidence of continuous transparency, trust and confidence are affected and extensive usage of cloud services is likely to be hindered. Insufficient security transparency is also considered an added level of risk, increasing the difficulty of demonstrating conformance to customer requirements and of ensuring that cloud service providers adequately implement their security obligations. The research community has acknowledged the pressing need to address security transparency concerns, and although technical aspects of ensuring security and privacy have been researched widely, the focus on security transparency is still scarce. The relatively few existing works mostly approach the issue of security transparency from the cloud providers' perspective, while other works have contributed feasible techniques for the comparison and selection of cloud service providers using metrics such as transparency and trustworthiness. However, there is still a shortage of research that focuses on improving security transparency from the cloud users' point of view.
In particular, there is still a gap in the literature that (i) dissects security transparency from the lens of conceptual knowledge up to implementation, from organizational and technical perspectives, and (ii) supports continuous transparency by enabling the vetting and probing of cloud service providers' conformity to specific customer requirements. The significant growth in moving business to the cloud, due to its scalability and perceived effectiveness, underlines the dire need for research in this area. This thesis presents a framework that comprises the core conceptual elements that constitute security transparency in cloud computing. It contributes to the knowledge domain of security transparency in cloud computing by proposing the following. Firstly, the research analyses the basics of cloud security transparency by exploring the notion and foundational concepts that constitute security transparency. Secondly, it proposes a framework which integrates various concepts from the requirements engineering domain, together with an accompanying process that can be followed to implement the framework. The framework and its process provide an essential set of conceptual ideas, activities and steps that can be followed at an organizational level to attain security transparency, based on the principles of industry standards and best practices. Thirdly, to ensure continuous transparency, the thesis proposes an essential tool that supports the collection and assessment of evidence from cloud providers, including the establishment of remedial actions for redressing deficiencies in cloud provider practices. The tool serves as a supplementary component of the proposed framework that enables continuous inspection of how predefined customer requirements are being satisfied. The thesis also validates the proposed security transparency framework and tool in terms of validity, applicability, adaptability, and acceptability using two different case studies.
Feedback is collected from stakeholders and analysed using essential criteria such as ease of use, relevance and usability. The results of the analysis illustrate the validity and acceptability of both the framework and the tool in enhancing security transparency in a real-world environment.