    Towards a cloud-based integrity measurement service


    Fine-Grained Provenance And Applications To Data Analytics Computation

    Data provenance tools seek to facilitate reproducible data science and auditable data analyses by capturing the analytics steps used in generating data analysis results. However, analysts must choose among workflow provenance systems, which allow arbitrary code but track provenance only at the granularity of files; provenance APIs, which provide tuple-level provenance but incur overhead in all computations; and database provenance tools, which track tuple-level provenance through relational operators and support optimization, but cover only a limited subset of data science tasks. None of these solutions is well suited for tracing errors introduced during common ETL, record alignment, and matching tasks over data types such as strings and images. Additionally, we need a provenance archival layer to store and manage the tracked fine-grained provenance and enable later, more sophisticated reasoning about why individual output results appear or fail to appear. For reproducibility and auditing, the provenance archival system should be tamper-resistant. At the same time, provenance collected over time, or within a single query computation, tends to be partially repeated (i.e., the same operation applied to the same input records in an intermediate computation step), so the storage layer should compress repeated results. We address these challenges with novel formalisms and algorithms, implemented in the PROVision system, for reconstructing fine-grained provenance for a broad class of ETL-style workflows. We extend database-style provenance techniques to capture equivalences, support optimizations, and enable lazy evaluation. We develop solutions for storing fine-grained provenance in relational storage systems while both compressing it and protecting it via cryptographic hashes. We experimentally validate our proposed solutions using both scientific and OLAP workloads.
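
    As a rough illustration of the archival-layer idea (a hypothetical sketch, not the PROVision implementation; all names and record formats are invented), the Python snippet below stores provenance records content-addressed by a cryptographic hash: identical records from repeated sub-computations are stored only once, and any tampering with a stored record changes its digest.

        import hashlib
        import json

        class ProvenanceStore:
            def __init__(self):
                self._records = {}   # digest -> canonical provenance record
                self._outputs = {}   # output id -> list of record digests

            @staticmethod
            def _digest(record):
                canonical = json.dumps(record, sort_keys=True).encode("utf-8")
                return hashlib.sha256(canonical).hexdigest()

            def record_step(self, output_id, operator, inputs):
                # Store one fine-grained provenance record for an output tuple.
                record = {"operator": operator, "inputs": sorted(inputs)}
                digest = self._digest(record)
                self._records.setdefault(digest, record)   # repeats stored once
                self._outputs.setdefault(output_id, []).append(digest)
                return digest

            def verify(self, digest):
                # Recompute the hash to detect tampering with a stored record.
                record = self._records.get(digest)
                return record is not None and self._digest(record) == digest

        store = ProvenanceStore()
        d = store.record_step("out_42", "join", ["orders:17", "customers:3"])
        print(store.verify(d))   # True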

    Speculative Virtual Verification: Policy-Constrained Speculative Execution

    A key problem facing current computing systems is the inability to autonomously manage security vulnerabilities as well as more mundane errors. Since the design of computer architectures is usually performance-driven, hardware often lacks primitives for tasks in which raw speed is not the primary goal. There is little architectural support for monitoring execution at the instruction level, and no mechanisms for assisting an automated response. This paper advocates modifying general-purpose processors to provide both program supervision and automatic response via a policy-driven monitoring mechanism and instruction stream rewriting, respectively. These capabilities form the basis of speculative virtual verification (SVV). SVV is a model for the speculative execution of code based on high-level security and safety constraints. We introduce architectural enhancements to support this framework, including the ability to supply an automated response by rewriting the instruction stream. Finally, given the novelty of the SVV approach to executing software, we briefly consider some important challenges for SVV-based systems.
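
    The paper targets processor-level support, so the Python sketch below is only a software analogy under assumed policy contents and an invented instruction format: each instruction of a mock stream is checked against a policy before it commits, and violating instructions are rewritten with an automated response.

        # Toy software analogy only; the paper proposes hardware-level mechanisms.
        POLICY = {
            "allowed_ops": {"load", "store", "add", "nop"},
            "writable_range": range(0x1000, 0x2000),   # hypothetical writable region
        }

        def violates(instr):
            # Check one instruction against the high-level policy.
            if instr["op"] not in POLICY["allowed_ops"]:
                return True
            return instr["op"] == "store" and instr["addr"] not in POLICY["writable_range"]

        def rewrite(instr):
            # Automated response: replace the violating instruction with a no-op;
            # a real design could instead redirect control to a handler.
            return {"op": "nop"}

        def execute(stream):
            # Check every instruction before it is allowed to commit.
            return [rewrite(i) if violates(i) else i for i in stream]

        print(execute([{"op": "add"}, {"op": "store", "addr": 0x3000}]))
        # [{'op': 'add'}, {'op': 'nop'}]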

    Risk and threat mitigation techniques in internet of things (IoT) environments: a survey

    Security in the Internet of Things (IoT) remains a predominant area of concern. Although several other surveys have been published on this topic in recent years, the broad spectrum that this area covers, the rapid pace of development, and the variety of concerns make it difficult for any single survey to cover the topic adequately. This survey updates the state of the art covered in previous surveys and focuses on defences and mitigations against threats rather than on the threats alone, an area that is less extensively covered by other surveys. It collates current research on the dynamicity of the IoT environment, a topic missed in other surveys that warrants particular attention. To account for IoT mobility, a life-cycle approach is adopted to the study of dynamic and mobile IoT environments and of means of deploying defences against malicious actors aiming to compromise an IoT network, evolve their attack laterally within it, and attack from it. The survey analyses a broad variety of methods for accomplishing each mitigation step and presents them through a “defence-in-depth” approach that can significantly slow the progress of an attack in a dynamic IoT environment. It also sheds light on leveraging redundancy, inherent in multi-sensor IoT applications, to improve integrity and recovery. Finally, it highlights the challenges of each mitigation step, emphasises novel perspectives, and reconnects the discussed mitigation steps to the ground principles they seek to implement.
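
    As a minimal illustration of the redundancy idea (sensor names, values, and the tolerance below are invented, not taken from the survey), redundant readings can be fused by taking their median and flagging readings that deviate from it, so a single faulty or compromised sensor neither corrupts the fused value nor goes unnoticed.

        from statistics import median

        def fuse_readings(readings, tolerance=2.0):
            # readings: sensor id -> reported value (e.g. temperature in degrees C).
            # The median resists a single bad value; large deviations are flagged.
            consensus = median(readings.values())
            suspects = {sid: val for sid, val in readings.items()
                        if abs(val - consensus) > tolerance}
            return consensus, suspects

        value, outliers = fuse_readings({"s1": 21.2, "s2": 21.4, "s3": 35.0})
        print(value)     # 21.4 -> fused value recovered despite the outlier
        print(outliers)  # {'s3': 35.0} -> candidate faulty or compromised sensor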

    Hardening High-Assurance Security Systems with Trusted Computing

    We are living in the time of the digital revolution, in which the world we know changes beyond recognition every decade. The positive aspect is that these changes also drive progress in the quality and availability of digital assets crucial to our societies. To name a few examples, these are broadly available communication channels allowing quick exchange of knowledge over long distances, systems controlling the automatic sharing and distribution of renewable energy in international power grid networks, easily accessible applications for early disease detection enabling self-examination without burdening the health service, and governmental systems assisting citizens in settling official matters without leaving their homes. Unfortunately, digitalization also opens opportunities for malicious actors to threaten our societies if they gain control over these assets after successfully exploiting vulnerabilities in the complex computing systems underpinning them. Protecting these systems, which are called high-assurance security systems, is therefore of utmost importance.

    For decades, humanity has struggled to find methods to protect high-assurance security systems. Advancements in the computing systems security domain have led to the popularization of hardware-assisted security techniques, nowadays available in commodity computers, which open up the possibility of building more sophisticated defense mechanisms at lower cost. However, none of these techniques is a silver bullet. Each one targets particular use cases, suffers from limitations, and is vulnerable to specific attacks. I argue that some of these techniques are synergistic and, when used together, help overcome limitations and mitigate specific attacks. My reasoning is supported by regulations that legally bind the owners of high-assurance security systems to provide strong security guarantees, requirements that can be fulfilled with the help of diverse technologies standardized in recent years.

    In this thesis, I introduce new techniques for hardening high-assurance security systems that execute in remote execution environments, such as public and hybrid clouds. I implemented these techniques as part of a framework that provides technical assurance that high-assurance security systems execute in a specific data center, on top of a trustworthy operating system, in a virtual machine controlled by a trustworthy hypervisor, or in strong isolation from other software. I demonstrated the practicality of my approach by leveraging the framework to harden real-world applications, such as machine learning applications in the eHealth domain. The evaluation shows that the framework is practical: it induces low performance overhead (<6%), supports software updates, requires no changes to the legacy application's source code, and can be tailored to individual trust boundaries with the help of security policies.

    The framework consists of a decentralized monitoring system that offers better scalability than traditional centralized monitoring systems. Each monitored machine runs a piece of code that verifies that the machine's integrity and geolocation conform to the given security policy. This piece of code, which serves as a trusted anchor on that machine, executes inside a trusted execution environment (Intel SGX) to protect itself from the untrusted host, and uses trusted computing techniques, such as the trusted platform module (TPM), secure boot, and the integrity measurement architecture, to attest to the load-time and runtime integrity of the surrounding operating system running on a bare-metal machine or inside a virtual machine. The trusted anchor implements my novel, formally proven protocol enabling detection of the TPM cuckoo attack.

    The framework also implements a key distribution protocol that, depending on the individual security requirements, shares cryptographic keys only with high-assurance security systems executing in predefined security settings, i.e., inside trusted execution environments or inside an integrity-enforced operating system. Such an approach is particularly appealing for machine learning systems, where some algorithms, like machine learning model training, require temporary access to large amounts of computing power. These algorithms can execute inside a dedicated, trusted data center at higher performance because they are not limited by the security features required in a shared execution environment. The evaluation of the framework showed that training a machine learning model on real-world datasets achieved 0.96x native execution performance on the GPU and a speedup of up to 1560x compared to the state-of-the-art SGX-based system.

    Finally, I tackled the problem of software updates, which make operating system integrity monitoring unreliable due to false positives: software updates move the updated system to an unknown (untrusted) state that is reported as an integrity violation. I solved this problem by introducing a proxy to a software repository that sanitizes software packages so that they can be safely installed. The sanitization consists of predicting and certifying the future state of the operating system after the specific updates are installed. The evaluation of this approach showed that it supports 99.76% of the packages available in the Alpine Linux main and community repositories.

    The framework proposed in this thesis is a step forward in verifying and enforcing that high-assurance security systems execute in an environment compliant with regulations. I anticipate that the framework might be further integrated with industry-standard security information and event management tools as well as other security monitoring mechanisms to provide a comprehensive solution for hardening high-assurance security systems.
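
    As a simplified, hypothetical illustration of the kind of check such a trusted anchor performs (this is not the thesis framework; the paths and digests are placeholders), the sketch below compares reported file measurements, such as those produced by an integrity measurement architecture, against an allowlist derived from a security policy; a software-update proxy could keep that allowlist current by adding the expected digests of sanitized packages.

        import hashlib

        # path -> expected SHA-256 digest; these entries are placeholders.
        ALLOWLIST = {
            "/usr/bin/inference-server": "0" * 64,
        }

        def measure(path):
            # Hash a file the way a measurement agent would before reporting it.
            with open(path, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest()

        def integrity_violations(measured):
            # measured: path -> digest reported by the attested machine.
            # Unknown paths and digest mismatches are both treated as violations.
            return [path for path, digest in measured.items()
                    if ALLOWLIST.get(path) != digest]

        report = {"/usr/bin/inference-server": "0" * 64, "/tmp/rootkit": "ab" * 32}
        print(integrity_violations(report))   # ['/tmp/rootkit']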

    Trusted Provenance with Blockchain - A Blockchain-based Provenance Tracking System for Virtual Aircraft Component Manufacturing

    The importance of provenance in the digital age has led to significant interest in utilizing blockchain technology for tamper-proof storage of provenance data. This thesis proposes a blockchain-based provenance tracking system for the certification of aircraft components. The aim is to design and implement a system that can ensure the trustworthy, tamper-resistant storage of provenance documents originating from an aircraft manufacturing process. To achieve this, the thesis presents a systematic literature review, which provides a comprehensive overview of existing work in the field of provenance and blockchain technology. After identifying strategies for storing provenance data on a blockchain, a system was designed to meet the requirements of stakeholders in the aviation industry, which were gathered systematically through stakeholder interviews. The system was implemented using a combination of smart contracts and a graphical user interface to provide tamper-resistant, traceable storage of the relevant data on a transparent blockchain. An evaluation against the requirements identified during the requirements engineering process found that the proposed system meets all of them. Overall, this thesis offers insight into a potential application of blockchain technology in the aviation industry and provides a valuable resource for researchers and industry professionals seeking to leverage blockchain technology for provenance tracking and certification purposes.
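
    A common pattern for such systems, sketched below with invented names and an in-memory stand-in for the smart contract (not the thesis implementation), is to keep the provenance document itself off-chain and anchor only its cryptographic hash plus minimal metadata on the blockchain, so that verifiers can later detect any tampering with the stored document.

        import hashlib
        import time

        class ProvenanceAnchor:
            # In-memory stand-in for a smart contract that records document digests.
            def __init__(self):
                self.entries = []

            def anchor(self, component_id, document):
                # Store only the hash and minimal metadata; the document stays off-chain.
                digest = hashlib.sha256(document).hexdigest()
                self.entries.append({"component": component_id,
                                     "digest": digest,
                                     "timestamp": int(time.time())})
                return digest

            def verify(self, component_id, document):
                # Recompute the hash and check that it was anchored for this component.
                digest = hashlib.sha256(document).hexdigest()
                return any(e["component"] == component_id and e["digest"] == digest
                           for e in self.entries)

        ledger = ProvenanceAnchor()
        ledger.anchor("wing-spar-0042", b"certification document bytes")
        print(ledger.verify("wing-spar-0042", b"certification document bytes"))   # True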