
    Virtual TPM for a secure cloud: fallacy or reality?

    Cloud technology has dramatically increased the use of virtualisation in recent years. Nevertheless, virtualisation has also introduced challenges for the security of the cloud. A remarkable case is the use of cryptographic hardware such as the Trusted Platform Module (TPM). A TPM is a device, physically attached to a server, that provides several cryptographic functions to offer a foundation of trust for the running software. Unfortunately, virtualising the TPM to bring its security properties to virtual environments is not straightforward due to its design and security constraints. In recent years several proposals have been presented to virtualise the TPM; nevertheless, virtualisation systems have only very recently started to adopt them. This paper reviews three existing implementations of a virtual TPM in the Xen and QEMU virtualisation solutions. The main contribution of the paper is an analysis of these solutions from a security perspective. This work has been co-funded by the project Trusted Cloud IPT-2011-1166-430000 of the Ministry of Economy and Competitiveness (MINECO) and the European Fund for Regional Development (FEDER).
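
    The foundation of trust that a TPM (physical or virtual) provides rests largely on its platform configuration registers (PCRs), which can only be modified through the one-way extend operation. The short sketch below illustrates that operation; the SHA-256 bank and the component names are illustrative assumptions, not details taken from the reviewed implementations.

```python
import hashlib

def pcr_extend(pcr_value: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new = H(old || measurement).

    This is the core operation a (virtual) TPM uses to accumulate software
    measurements; a PCR cannot be set directly or rolled back, which is what
    makes it a trustworthy record of what was loaded.
    """
    return hashlib.sha256(pcr_value + measurement).digest()

# Illustrative boot chain: each stage measures the next before handing over.
pcr0 = bytes(32)  # PCRs start as all zeroes at reset
for component in (b"firmware", b"bootloader", b"kernel"):
    pcr0 = pcr_extend(pcr0, hashlib.sha256(component).digest())

print(pcr0.hex())
```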

    Trusted Computing and Secure Virtualization in Cloud Computing

    Large-scale deployment and use of cloud computing in industry is accompanied, and at the same time hampered, by concerns regarding the protection of data handled by cloud computing providers. One of the consequences of moving data processing and storage off company premises is that organizations have less control over their infrastructure. As a result, cloud service (CS) clients must trust that the CS provider is able to protect their data and infrastructure from both external and internal attacks. Currently, however, such trust can only rely on organizational processes declared by the CS provider and cannot be remotely verified and validated by an external party. Enabling the CS client to verify the integrity of the host where the virtual machine instance will run, as well as to ensure that the virtual machine image has not been tampered with, are some steps towards building trust in the CS provider. Having the tools to perform such verifications prior to the launch of the VM instance allows CS clients to decide at runtime whether certain data should be stored, or certain calculations performed, on the VM instance offered by the CS provider. This thesis combines three components -- trusted computing, virtualization technology and cloud computing platforms -- to address issues of trust and security in public cloud computing environments. Of the three components, virtualization technology has had the longest evolution and is a cornerstone for the realization of cloud computing. Trusted computing is a recent industry initiative that aims to implement the root of trust in a hardware component, the trusted platform module. The initiative has been formalized in a set of specifications and is currently at version 1.2. Cloud computing platforms pool virtualized computing, storage and network resources in order to serve a large number of customers, using a multi-tenant multiplexing model to offer on-demand self-service over a broad network. Open source cloud computing platforms are, similar to trusted computing, a fairly recent technology in active development. The issue of trust in public cloud environments is addressed by examining the state of the art within cloud computing security and subsequently addressing the issues of establishing trust in the launch of a generic virtual machine in a public cloud environment. As a result, the thesis proposes a trusted launch protocol that allows CS clients to verify and ensure the integrity of the VM instance at launch time, as well as the integrity of the host where the VM instance is launched. The protocol relies on the use of a Trusted Platform Module (TPM) for key generation and data protection. The TPM also plays an essential part in the integrity attestation of the VM instance host. Along with a theoretical, platform-agnostic protocol, the thesis also describes a detailed implementation design of the protocol using the OpenStack cloud computing platform. In order to verify the implementability of the proposed protocol, a prototype implementation has been built using a distributed deployment of OpenStack. While the protocol covers only the trusted launch procedure using generic virtual machine images, it presents a step aimed at contributing towards the creation of a secure and trusted public cloud computing environment.
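
    To make the general shape of such a trusted launch concrete, the minimal sketch below shows a client checking host integrity before releasing a VM image: send a fresh nonce, receive a quote over the host's PCRs, verify it, and compare the reported state against known-good values. The class and helper names, and the toy HMAC "quote" (a real TPM quote is an asymmetric signature by an attestation key), are hypothetical illustrations, not the protocol or OpenStack design actually proposed in the thesis.

```python
import hashlib
import hmac
import os
from dataclasses import dataclass

@dataclass
class Quote:
    nonce: bytes        # freshness value chosen by the client
    pcr_digest: bytes   # digest over the host's selected PCRs
    signature: bytes    # produced here with a toy shared key, not a real TPM

class ToyHost:
    """Stands in for the compute host; a real host would ask its TPM."""
    def __init__(self, attestation_key: bytes, pcr_digest: bytes, image: bytes):
        self._key, self._pcrs, self._image = attestation_key, pcr_digest, image

    def request_quote(self, nonce: bytes) -> Quote:
        mac = hmac.new(self._key, nonce + self._pcrs, hashlib.sha256).digest()
        return Quote(nonce, self._pcrs, mac)

    def fetch_image(self) -> bytes:
        return self._image

def trusted_launch(host, attestation_key: bytes,
                   known_good_pcrs: bytes, vm_image_hash: bytes) -> bool:
    nonce = os.urandom(20)
    quote = host.request_quote(nonce)
    expected = hmac.new(attestation_key, nonce + quote.pcr_digest,
                        hashlib.sha256).digest()
    return (quote.nonce == nonce                                    # fresh
            and hmac.compare_digest(quote.signature, expected)      # authentic
            and quote.pcr_digest == known_good_pcrs                 # host state OK
            and hashlib.sha256(host.fetch_image()).digest() == vm_image_hash)

key = b"shared-attestation-key"
pcrs = hashlib.sha256(b"known good host configuration").digest()
image = b"generic vm image"
host = ToyHost(key, pcrs, image)
print(trusted_launch(host, key, pcrs, hashlib.sha256(image).digest()))  # True
```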

    Trusted execution: applications and verification

    Useful security properties arise from sealing data to specific units of code. Modern processors featuring Intel’s TXT and AMD’s SVM achieve this by a process of measured and trusted execution. Only code which has the correct measurement can access the data, and this code runs in an environment protected from observation and interference. We discuss the history of attempts to provide security for hardware platforms, and review the literature in the field. We propose some applications which would benefit from the use of trusted execution, and discuss the functionality enabled by trusted execution. We present in more detail a novel variation on Diffie-Hellman key exchange which removes some reliance on random number generation. We present a modelling language with primitives for trusted execution, along with its semantics. We characterise an attacker who has access to all the capabilities of the hardware. In order to achieve automatic analysis of systems using trusted execution without attempting to search a potentially infinite state space, we define transformations that reduce the number of times the attacker needs to use trusted execution to a pre-determined bound. Under reasonable assumptions we prove the soundness of these transformations: no secrecy attacks are lost by applying them. We then describe using the StatVerif extensions to ProVerif to model the bounded invocations of trusted execution. We show the analysis of realistic systems, for which we provide case studies.
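
    For readers unfamiliar with where randomness enters a Diffie-Hellman exchange, the toy sketch below shows the textbook finite-field protocol; the ephemeral secrets a and b are exactly the random values whose role the thesis's variation reduces. The parameters are deliberately tiny and insecure, and the code illustrates the standard baseline rather than the proposed variant.

```python
import secrets

# Textbook finite-field Diffie-Hellman with toy parameters (p = 23, g = 5).
# Real deployments use standardized groups of at least 2048 bits; the point
# here is only to show where random number generation is needed.
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1  # Alice's ephemeral secret (needs a good RNG)
b = secrets.randbelow(p - 2) + 1  # Bob's ephemeral secret (needs a good RNG)

A = pow(g, a, p)  # Alice -> Bob
B = pow(g, b, p)  # Bob -> Alice

shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob  # both sides derive the same shared secret
print(shared_alice)
```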

    Design and implementation of a trust calculation method for network components

    Today’s organizations rely on internal or cloud infrastructures to manage their data and their products. Due to the increasing importance and complexity of these infrastructures, there is a need for a reliable way to monitor the trustworthiness of the devices that are part of them. It is important to establish trust within the nodes of a single security domain, or across multiple security domains, to enhance the security of an enterprise’s infrastructure. This thesis aims to develop and evaluate a method to measure and calculate a trust score for each node and security domain of a network infrastructure. The method is based on a centralized verifier that collects and verifies all the security- and performance-based evidence from the nodes that compose the infrastructure. The evidence verification process is based on remote attestation through the use of a hardware root of trust. Moreover, the method allows the exchange of trust scores with other security domains, which enhances the trustworthiness of inter-domain communication. The main advantages of this method compared to similar ones found in the literature are the possibility of inter-domain trust exchange, the use of remote attestation, and its adaptability to different kinds of infrastructure. Furthermore, the tests confirmed that the method responds quickly in the case of a vulnerable node.
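
    The abstract does not give the scoring formula, so the sketch below is a purely hypothetical illustration of the centralized-verifier idea: per-node attestation and performance evidence are mapped to a node score, and a domain score is aggregated from its nodes (here as the minimum over nodes). The field names, weights, and aggregation rule are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    attestation_valid: bool  # remote attestation via hardware root of trust succeeded
    patched: bool            # no known vulnerable packages reported
    uptime_ratio: float      # performance/availability evidence in [0, 1]

def node_score(e: Evidence) -> float:
    if not e.attestation_valid:
        return 0.0  # a failed attestation dominates everything else
    return min(0.6 + 0.2 * e.patched + 0.2 * e.uptime_ratio, 1.0)

def domain_score(nodes: dict[str, Evidence]) -> float:
    """A domain is only as trustworthy as its weakest attested node."""
    return min(node_score(e) for e in nodes.values()) if nodes else 0.0

domain = {
    "node-a": Evidence(True, True, 0.99),
    "node-b": Evidence(True, False, 0.95),
}
print(domain_score(domain))  # ~0.79 in this toy example
```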

    Software-based and regionally-oriented traffic management in Networks-on-Chip

    Since the introduction of chip-multiprocessor systems, the number of integrated cores has been growing steadily, and workload applications have been adapted to exploit the increasing parallelism. This has significantly increased the importance of efficient on-chip communication, and the communication infrastructure has to keep pace with these new requirements. The work at hand makes significant contributions to the state of the art of the latest generation of such solutions, called Networks-on-Chip, to improve the performance, reliability, and flexible management of these on-chip infrastructures.

    A Flexible Ultralight Hardware Security Module for EPC RFID Tags

    Due to the rapid growth in the use of Internet of Things (IoT) devices in daily life, the need to achieve an acceptable level of security and privacy for these devices is rising. Security risks may include privacy threats such as gaining sensitive information from a device, and authentication problems arising from counterfeit or cloned devices. It is more challenging to add security features to extremely constrained devices, such as passive Electronic Product Code (EPC) Radio Frequency Identification (RFID) tags, than to devices that have more computational and storage capabilities. EPC RFID tags are simple and low-cost electronic circuits that are commonly used in supply chains, retail stores, and other applications to identify physical objects. Most tags today are simple "license plates" that just identify the object they are attached to and have minimal security. Due to the security risks of new applications, there is an important need to implement secure RFID tags. Examples of the security risks for these applications include unauthorized physical tracking and inventorying of tags. Current commercial RFID tag designs use a specialised hardware-circuit approach. This approach can achieve the lowest area and power consumption; however, it lacks flexibility. This thesis presents an optimized application-specific instruction set architecture (ISA) for an ultralight Hardware Security Module (HSM). HSMs are computing devices that protect cryptographic keys and operations for a device. The HSM combines all security-related functions for a passive RFID tag. The goal of this research is to demonstrate that using an application-specific instruction set processor (ASIP) architecture for ultralight HSMs provides benefits in terms of trade-offs between flexibility, extensibility, and efficiency. Our novel application-specific instruction-set architecture allows flexibility on many design levels and achieves an acceptable security level for passive EPC RFID tags. Our solution moves a major design effort from hardware to software, which largely reduces the final unit cost. Our ASIP processor can be implemented with 4,662 gate equivalents (GEs) in 65 nm CMOS technology, excluding cryptographic units and memories. We integrated and analysed four cryptographic modules: the AES and Simeck block ciphers, the WG-5 stream cipher, and the ACE authenticated encryption module. Our HSM achieves very good efficiency for both block and stream ciphers. Specifically, for the AES cipher, we improve on a previous programmable AES implementation result by 32x. We increase performance dramatically, with an area increase/decrease of 17.97%/17.14% respectively. These results fulfill the requirements of extremely constrained devices and allow the inclusion of cryptographic units in the datapath of our ASIP processor.
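
    To give a flavour of the kind of ultralight primitive such a datapath has to execute efficiently, here is a short functional model of the Simeck-style Feistel round (a Simon-like round with rotation amounts 5 and 1, shown here on 16-bit words as in Simeck32/64). It is an illustrative software model only, not the thesis's hardware, ISA, or key schedule, and the input values are arbitrary.

```python
WORD = 16              # 16-bit words, as in the smallest Simeck variant
MASK = (1 << WORD) - 1

def rotl(x: int, r: int) -> int:
    """Rotate a WORD-bit value left by r positions."""
    return ((x << r) | (x >> (WORD - r))) & MASK

def simeck_round(x: int, y: int, k: int) -> tuple[int, int]:
    """One Simeck-style Feistel round: f(x) = (x AND (x <<< 5)) XOR (x <<< 1)."""
    f = (x & rotl(x, 5)) ^ rotl(x, 1)
    return (y ^ f ^ k) & MASK, x

# One toy round on arbitrary example values (not an official test vector).
x, y = simeck_round(0x6565, 0x6877, 0x0001)
print(hex(x), hex(y))
```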

    Hardening High-Assurance Security Systems with Trusted Computing

    We are living in the time of the digital revolution, in which the world we know changes beyond recognition every decade. The positive aspect is that these changes also drive progress in the quality and availability of digital assets crucial to our societies. To name a few examples, these are broadly available communication channels allowing quick exchange of knowledge over long distances, systems controlling the automatic sharing and distribution of renewable energy in international power grid networks, easily accessible applications for early disease detection enabling self-examination without burdening the health service, or governmental systems assisting citizens to settle official matters without leaving their homes. Unfortunately, however, digitalization also opens opportunities for malicious actors to threaten our societies if they gain control over these assets after successfully exploiting vulnerabilities in the complex computing systems building them. Protecting these systems, which are called high-assurance security systems, is therefore of utmost importance. For decades, humanity has struggled to find methods to protect high-assurance security systems. The advancements in the computing systems security domain led to the popularization of hardware-assisted security techniques, nowadays available in commodity computers, that opened perspectives for building more sophisticated defense mechanisms at lower costs. However, none of these techniques is a silver bullet. Each one targets particular use cases, suffers from limitations, and is vulnerable to specific attacks. I argue that some of these techniques are synergistic and help overcome limitations and mitigate specific attacks when used together. My reasoning is supported by regulations that legally bind high-assurance security systems' owners to provide strong security guarantees. These requirements can be fulfilled with the help of diverse technologies that have been standardized in recent years. In this thesis, I introduce new techniques for hardening high-assurance security systems that execute in remote execution environments, such as public and hybrid clouds. I implemented these techniques as part of a framework that provides technical assurance that high-assurance security systems execute in a specific data center, on top of a trustworthy operating system, in a virtual machine controlled by a trustworthy hypervisor, or in strong isolation from other software. I demonstrated the practicality of my approach by leveraging the framework to harden real-world applications, such as machine learning applications in the eHealth domain. The evaluation shows that the framework is practical. It induces low performance overhead (<6%), supports software updates, requires no changes to the legacy application's source code, and can be tailored to individual trust boundaries with the help of security policies. The framework consists of a decentralized monitoring system that offers better scalability than traditional centralized monitoring systems. Each monitored machine runs a piece of code that verifies that the machine's integrity and geolocation conform to the given security policy. This piece of code, which serves as a trusted anchor on that machine, executes inside a trusted execution environment, i.e., Intel SGX, to protect itself from the untrusted host, and uses trusted computing techniques, such as the trusted platform module, secure boot, and the integrity measurement architecture, to attest to the load-time and runtime integrity of the surrounding operating system running on a bare-metal machine or inside a virtual machine. The trusted anchor implements my novel, formally proven protocol, enabling detection of the TPM cuckoo attack. The framework also implements a key distribution protocol that, depending on the individual security requirements, shares cryptographic keys only with high-assurance security systems executing in predefined security settings, i.e., inside trusted execution environments or inside an integrity-enforced operating system. Such an approach is particularly appealing in the context of machine learning systems, where some algorithms, like machine learning model training, require temporary access to large computing power. These algorithms can execute inside a dedicated, trusted data center at higher performance because they are not limited by the security features required in a shared execution environment. The evaluation of the framework showed that training a machine learning model using real-world datasets achieved 0.96x native execution performance on the GPU and a speedup of up to 1560x compared to the state-of-the-art SGX-based system. Finally, I tackled the problem of software updates, which makes integrity monitoring of the operating system unreliable due to false positives, i.e., software updates move the updated system to an unknown (untrusted) state that is then reported as an integrity violation. I solved this problem by introducing a proxy to a software repository that sanitizes software packages so that they can be safely installed. The sanitization consists of predicting and certifying the future (after the specific updates are installed) state of the operating system. The evaluation of this approach showed that it supports 99.76% of the packages available in the Alpine Linux main and community repositories. The framework proposed in this thesis is a step forward in verifying and enforcing that high-assurance security systems execute in an environment compliant with regulations. I anticipate that the framework might be further integrated with industry-standard security information and event management tools, as well as other security monitoring mechanisms, to provide a comprehensive solution for hardening high-assurance security systems.
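
    As a purely conceptual sketch of the per-machine check described above (the machine's integrity and geolocation must conform to a security policy), the snippet below compares an attested report against a policy. All names, fields, and values are hypothetical and are not taken from the framework's implementation.

```python
from dataclasses import dataclass

@dataclass
class SecurityPolicy:
    allowed_pcr_digests: set[str]  # known-good platform measurement states
    allowed_regions: set[str]      # data centers where execution is permitted

@dataclass
class AttestationReport:
    pcr_digest: str                # digest attested by the trusted anchor / TPM
    region: str                    # geolocation bound to the platform identity

def conforms(report: AttestationReport, policy: SecurityPolicy) -> bool:
    """A machine is trusted only if its attested state and location match the policy."""
    return (report.pcr_digest in policy.allowed_pcr_digests
            and report.region in policy.allowed_regions)

policy = SecurityPolicy({"digest-of-good-state-1", "digest-of-good-state-2"},
                        {"eu-central"})
report = AttestationReport("digest-of-good-state-1", "eu-central")
print(conforms(report, policy))  # True for this toy example
```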