
    Trusted Computing and Secure Virtualization in Cloud Computing

    Large-scale deployment and use of cloud computing in industry is accompanied, and at the same time hampered, by concerns regarding the protection of data handled by cloud computing providers. One of the consequences of moving data processing and storage off company premises is that organizations have less control over their infrastructure. As a result, cloud service (CS) clients must trust that the CS provider is able to protect their data and infrastructure from both external and internal attacks. Currently, however, such trust can only rely on organizational processes declared by the CS provider and cannot be remotely verified and validated by an external party. Enabling the CS client to verify the integrity of the host where the virtual machine instance will run, as well as to ensure that the virtual machine image has not been tampered with, are steps towards building trust in the CS provider. Having the tools to perform such verifications prior to the launch of the VM instance allows CS clients to decide at runtime whether certain data should be stored on, or calculations performed on, the VM instance offered by the CS provider. This thesis combines three components -- trusted computing, virtualization technology and cloud computing platforms -- to address issues of trust and security in public cloud computing environments. Of the three components, virtualization technology has had the longest evolution and is a cornerstone for the realization of cloud computing. Trusted computing is a recent industry initiative that aims to implement the root of trust in a hardware component, the trusted platform module. The initiative has been formalized in a set of specifications and is currently at version 1.2. Cloud computing platforms pool virtualized computing, storage and network resources in order to serve a large number of customers, using a multi-tenant model to offer on-demand self-service over broad network access.
Open source cloud computing platforms are, similar to trusted computing, a fairly recent technology in active development. The issue of trust in public cloud environments is addressed by examining the state of the art within cloud computing security and subsequently addressing the issues of establishing trust in the launch of a generic virtual machine in a public cloud environment. As a result, the thesis proposes a trusted launch protocol that allows CS clients to verify and ensure the integrity of the VM instance at launch time, as well as the integrity of the host where the VM instance is launched. The protocol relies on the use of a Trusted Platform Module (TPM) for key generation and data protection. The TPM also plays an essential part in the integrity attestation of the VM instance host. Along with a theoretical, platform-agnostic protocol, the thesis also describes a detailed implementation design of the protocol using the OpenStack cloud computing platform. In order to verify the implementability of the proposed protocol, a prototype implementation has been built using a distributed deployment of OpenStack. While the protocol covers only the trusted launch procedure using generic virtual machine images, it is a step towards the creation of a secure and trusted public cloud computing environment.
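The core idea of such a trusted launch can be sketched in a few lines. The following is an illustrative toy, not the thesis's actual protocol or the real TPM command set: all names and message formats are invented, and an HMAC key stands in for a TPM-resident attestation key.

```python
import hashlib
import hmac

# Illustrative only: this HMAC key stands in for a key that, on real
# hardware, never leaves the TPM.
ATTESTATION_KEY = b"tpm-resident-attestation-key"

def measure(components):
    """Emulate PCR extension: fold the component hashes into one digest."""
    pcr = b"\x00" * 32
    for component in components:
        pcr = hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()
    return pcr

def tpm_quote(pcr, nonce):
    """Emulate a TPM quote: a keyed digest over the PCR and a fresh nonce."""
    return hmac.new(ATTESTATION_KEY, pcr + nonce, hashlib.sha256).digest()

def client_authorizes_launch(host_quote, nonce, trusted_stack, image, image_hash):
    """Launch only on a host attesting the trusted stack, with an untampered image."""
    expected = tpm_quote(measure(trusted_stack), nonce)
    host_ok = hmac.compare_digest(host_quote, expected)
    image_ok = hashlib.sha256(image).hexdigest() == image_hash
    return host_ok and image_ok

good_stack = [b"bios", b"bootloader", b"hypervisor"]
vm_image = b"generic-vm-image-bytes"
nonce = b"fresh-client-nonce"  # a fresh nonce prevents replay of old quotes

host_quote = tpm_quote(measure(good_stack), nonce)
print(client_authorizes_launch(host_quote, nonce, good_stack,
                               vm_image, hashlib.sha256(vm_image).hexdigest()))
```

A host whose measured stack deviates from the trusted one produces a quote the client rejects, so the decision can be made before any data reaches the VM instance.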

    A novel architecture to virtualise a hardware-bound trusted platform module

    Security and trust are particularly relevant in modern softwarised infrastructures, such as cloud environments, as applications are deployed on platforms owned by third parties, are publicly accessible on the Internet and can share the hardware with other tenants. Traditionally, operating systems and applications have leveraged hardware tamper-proof chips, such as the Trusted Platform Module (TPM), to implement security workflows, such as remote attestation, and to protect sensitive data against software attacks. This approach does not easily translate to the cloud environment, wherein the isolation provided by the hypervisor makes it impractical to leverage the hardware root of trust in the virtual domains. Moreover, the scalability needs of the cloud often collide with the scarce hardware resources and inherent limitations of TPMs. For this reason, existing implementations of virtual TPMs (vTPMs) are based on TPM emulators. Although more flexible and scalable, this approach is less secure: each vTPM is vulnerable to software attacks at both the virtualised and hypervisor levels. In this work, we propose a novel design for vTPMs that provides a binding to an underlying physical TPM; the new design, akin to a virtualisation extension for TPMs, extends the latest TPM 2.0 specification. We minimise the number of required additions to the TPM data structures and commands so that they do not require a new, non-backwards-compatible version of the specification. Moreover, we support migration of vTPMs among TPM-equipped hosts, as this is considered a key feature in a highly virtualised environment. Finally, we propose a flexible approach to vTPM object creation that protects vTPM secrets either in hardware or in software, depending on the required level of assurance.
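The hardware-binding idea can be illustrated with a toy derivation chain. This is a sketch under invented names, not the TPM 2.0 key hierarchy or the paper's design: the point is only that each vTPM's secrets derive from a seed that never leaves the physical chip, so copied vTPM state is useless on another host.

```python
import hashlib
import hmac

# Toy stand-in for a seed locked inside the physical TPM; in reality this
# value would never be visible to software.
PHYSICAL_TPM_SEED = b"seed-locked-inside-this-hardware-tpm"

def derive_vtpm_seed(vtpm_id):
    # Derivation the physical TPM would perform internally, once per vTPM.
    return hmac.new(PHYSICAL_TPM_SEED, b"vtpm-seed:" + vtpm_id, hashlib.sha256).digest()

def vtpm_create_secret(vtpm_id, object_name):
    # Objects created inside a vTPM chain back to the hardware seed.
    return hmac.new(derive_vtpm_seed(vtpm_id), object_name, hashlib.sha256).digest()

key_a = vtpm_create_secret(b"tenant-a", b"sealing-key")
key_b = vtpm_create_secret(b"tenant-b", b"sealing-key")
print(key_a != key_b)  # distinct tenants get distinct, hardware-bound secrets
```

Migration between TPM-equipped hosts would then amount to securely re-provisioning the per-vTPM seed on the destination chip, which is exactly where the real design needs dedicated protocol support.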

    Trusted computing or trust in computing? Legislating for trust networks

    This thesis aims to address several issues emerging in the new digital world, using Trusted Computing as the paradigmatic example of regulation through code that tries to address the cyber security problem: the freedom of the user to reconfigure her machine is restricted in exchange for greater, yet not perfect, security. Trusted Computing is a technology that, while it aims to protect the user, the integrity of her machine and her privacy against third parties, discloses more of her information to trusted third parties, exposing her to security risks should that third party be compromised. It also intends to create a decentralized, bottom-up solution to security, where security follows along the arcs of an emergent “network of trust”, and, if that were viable, to achieve a form of code-based regulation. Through the analysis attempted in this thesis, we lay the groundwork for a refined assessment, considering the problems that the Trusted Computing Initiative (TCI) faces, problems rooted in the intentional, systematic, but sometimes misunderstood and miscommunicated difference (which, as we reveal, results directly in certain design choices for TC) between the conception of trust in informatics (“techno-trust”) and the common sociological concept of it. To reap the benefits of the TCI and create the dynamic “network of trust”, we need the sociological concept of trust, which has the fundamental characteristics of transitivity and holism that are absent from techno-trust. This gives rise to the next problem we visit: if TC shifts power from the customer to the TC provider, who takes on roles previously reserved for the nation state, then how in a democratic state can users trust those that make the rules?
The answer lies partly in constitutional and human rights law: we drill into those functions of TC that make the TC provider comparable to a state and ask what minimal legal guarantees need to be in place for this shift of power to be accepted trustingly. Secondly, traditional liberal contract law reduces complex social relations to binary exchange relations, which are not transitive and disrupt rather than create networks. Contract law, as we argue, plays a central role in the way the TC provider interacts with his customers, and this thesis contributes by speculating about a contract law that does not result in atomism but rather “brings in” potentially affected third parties and results in holistic networks. In the same vein, this thesis looks mainly at specific ways in which law can correct or redefine the implicit and democratically unvalidated shift of power from customers to TC providers, while enhancing the social environment, and the social trust within it, in which TC must operate.

    Trust and integrity in distributed systems

    In the last decades, we have witnessed an exploding growth of the Internet. The massive adoption of distributed systems on the Internet allows users to offload their computing-intensive work to remote servers, e.g. the cloud. In this context, distributed systems are pervasively used in a number of different scenarios, such as web-based services that receive and process data, cloud nodes where company data and processes are executed, and softwarised networks that process packets. In these systems, all the computing entities need to trust each other and co-operate in order to work properly. While the communication channels can be well protected by protocols like TLS or IPsec, the problem lies in the expected behaviour of the remote computing platforms, because they are not under the direct control of end users and do not offer any guarantee that they will behave as agreed. For example, the remote party may use non-legitimate services for its own convenience (e.g. illegally storing received data and routed packets), or the remote system may misbehave due to an attack (e.g. changing deployed services). This is especially important because most of these computing entities need to expose interfaces towards the Internet, which makes them easier to attack. Hence, software-based security solutions alone are insufficient to deal with the current scenario of distributed systems. They must be coupled with stronger means such as hardware-assisted protection. In order to allow the nodes in a distributed system to trust each other, their integrity must be presented and assessed to predict their behaviour. The remote attestation technique of trusted computing was proposed specifically to deal with the integrity of remote entities, e.g. whether the platform is compromised by bootkit attacks or cracked kernels and services.
This technique relies on a hardware chip called the Trusted Platform Module (TPM), which is available in most business-class laptops, desktops and servers. The TPM acts as the hardware root of trust and provides a special set of capabilities that allows a physical platform to present its integrity state. With a TPM on the motherboard, remote attestation is the procedure by which a physical node provides hardware-based proof of the software components loaded on the platform, which can be evaluated by other entities to conclude its integrity state. Thanks to the hardware TPM, the remote attestation procedure is resistant to software attacks. However, even though the availability of this chip is high, its actual usage is low. The major reason is that trusted computing has very little flexibility, since its goal is to provide strong integrity guarantees. For instance, the remote attestation result is positive if and only if the software components loaded on the platform are expected and loaded in a specific order, which limits its applicability in real-world scenarios. For such reasons, this technique is especially hard to apply to software services running in the application layer, which are loaded in random order and constantly updated. Because of this, current remote attestation techniques provide an incomplete solution: they focus only on the boot phase of physical platforms and not on the services, let alone the services running in virtual instances. This work first proposes a new remote attestation framework capable of presenting and evaluating the integrity state not only of the boot phase of physical platforms but also of software services at load time, e.g. whether the software is legitimate or not. The framework allows users to know and understand the integrity state of the whole life cycle of the services they are interacting with, so that they can make an informed decision whether to send their data or trust the received results.
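The flexibility problem described above can be made concrete with a small verifier sketch. This is purely illustrative (the whitelist entries are invented): instead of comparing one order-dependent PCR value, the verifier checks each entry of the measurement log against known-good digests, so services loaded in any order can still be attested.

```python
import hashlib

# Invented whitelist of known-good software digests; a real verifier would
# load these from a reference-measurement database.
WHITELIST = {hashlib.sha256(sw).hexdigest()
             for sw in (b"kernel-5.10", b"sshd-8.4", b"nginx-1.20")}

def unknown_measurements(measurement_log):
    """Return the log entries that are not whitelisted (empty list == trusted)."""
    return [digest for digest in measurement_log if digest not in WHITELIST]

log = [hashlib.sha256(b"nginx-1.20").hexdigest(),
       hashlib.sha256(b"kernel-5.10").hexdigest()]  # loaded in a different order
print(unknown_measurements(log))  # [] -> platform considered trusted

log.append(hashlib.sha256(b"backdoored-sshd").hexdigest())
print(len(unknown_measurements(log)))  # 1 unknown entry -> not trusted
```

Order independence is what makes load-time attestation of constantly updated application services feasible, at the cost of maintaining the reference database of good measurements.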
Second, based on the remote attestation framework, this thesis proposes a method to bind the identity of a secure channel endpoint to a specific physical platform and its integrity state. Secure channels are extensively adopted in distributed systems to protect data transmitted from one platform to another. However, they do not convey any information about the integrity state of the platform or the service that generates and receives this data, which leaves ample space for various attacks. By binding the secure channel endpoint to the hardware TPM, users are protected from relay attacks (through the hardware-based identity) and from malicious or cracked platforms and software (through remote attestation). Third, with the help of the remote attestation framework, this thesis introduces a new method to include the integrity state of software services running in virtual containers in the evidence generated by the hardware TPM. This solution is especially important for softwarised network environments. Softwarised networking was proposed to provide dynamic and flexible network deployment, which is an increasingly complex task nowadays. Its main idea is to replace hardware appliances with softwarised network functions running inside virtual instances, which are full-fledged computational systems accessible from the Internet; thus their integrity is at stake. Unfortunately, current remote attestation work is not able to provide hardware-based integrity evidence for software services running inside virtual instances, because the direct link between the internals of virtual instances and the hardware root of trust is missing. With the solution proposed in this thesis, the integrity state of the softwarised network functions running in virtual containers can be presented and evaluated with hardware-based evidence, implying the integrity of the whole softwarised network.
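The channel-binding step can be sketched with toy crypto (all names invented, HMAC standing in for a TPM attestation key): the hash of the channel endpoint's public key is mixed into the attested data, so a quote relayed from an honest platform fails verification for any other endpoint.

```python
import hashlib
import hmac

AIK = b"tpm-attestation-key"  # toy stand-in for the TPM's attestation key

def bound_quote(pcr, channel_pubkey, nonce):
    # The endpoint's public-key hash is part of the signed/attested data.
    binding = hashlib.sha256(channel_pubkey).digest()
    return hmac.new(AIK, pcr + binding + nonce, hashlib.sha256).digest()

def verify(quote, expected_pcr, channel_pubkey, nonce):
    return hmac.compare_digest(quote, bound_quote(expected_pcr, channel_pubkey, nonce))

pcr, nonce = b"\x01" * 32, b"verifier-nonce"
honest_key, relay_key = b"server-tls-public-key", b"attacker-tls-public-key"

quote = bound_quote(pcr, honest_key, nonce)
print(verify(quote, pcr, honest_key, nonce))  # True: endpoint matches the quote
print(verify(quote, pcr, relay_key, nonce))   # False: relayed quote detected
```

The attacker cannot reuse the honest platform's quote for its own channel, because its own public key produces a different binding value.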
The proposed remote attestation framework, trusted channel and trusted softwarised network are implemented in separate working prototypes. Their performance was evaluated and proved to be excellent, allowing them to be applied in real-world scenarios. Moreover, the implementation also exposes various APIs to simplify future integration with different management platforms, such as OpenStack and OpenMANO.

    The Advanced Framework for Evaluating Remote Agents (AFERA): A Framework for Digital Forensic Practitioners

    Digital forensics experts need a dependable method for evaluating evidence-gathering tools. Limited research and resources challenge this process, and the lack of multi-endpoint data validation hinders reliability in distributed digital forensics. A framework was designed to evaluate distributed agent-based forensic tools while enabling practitioners to self-evaluate and demonstrate evidence reliability as required by the courts. Grounded in Design Science, the framework features guidelines, data, criteria, and checklists. Expert review enhances its quality and practicality.

    Securing software development using developer access control

    This research is aimed at software development companies and highlights the unique information security concerns in the context of a non-malicious software developer’s work environment; furthermore, it explores an application-driven solution which focuses specifically on providing developer environments with access control for source code repositories. In order to achieve that, five goals were defined, as discussed in section 1.3. The application designed to provide the developer environment with access control to source code repositories was modelled on lessons taken from the principles of Network Access Control (NAC), Data Loss Prevention (DLP), and Google’s BeyondCorp (GBC) for zero-trust end-user computing. The intention of this research is to provide software developers with maximum access to source code without compromising Confidentiality, as per the Confidentiality, Integrity and Availability (CIA) triad. Employing data gleaned from examining the characteristics of DLP, NAC, and BeyondCorp, proof-of-concept code was developed to regulate access to the developer’s environment and source code. The system required sufficient flexibility to support the diversity of software development environments; in order to achieve this, a modular design was selected. The system comprised a client-side agent and a plug-in-ready server component. The client-side agent mounts and dismounts encrypted volumes containing source code. Furthermore, it provides the server with information about the client that is demanded by plug-ins. The server-side service provides encryption keys to facilitate the mounting of the volumes and, through plug-ins, asks questions of the client agent to determine whether access should be granted. The solution was then tested with integration and system testing. There were plans to have it used by development teams, who were then to be surveyed on their view of the proof of concept, but this proved impossible.
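The plug-in-driven access decision can be sketched as follows. The plug-ins, client-info fields and key material here are hypothetical, not the thesis's actual code: the server questions the client agent through each plug-in and releases the volume's decryption key only if every plug-in approves.

```python
# Hypothetical plug-ins: each inspects information reported by the client
# agent and votes on whether access should be granted.
def antivirus_plugin(client_info):
    return client_info.get("av_running", False)

def network_plugin(client_info):
    return client_info.get("on_corporate_network", False)

PLUGINS = [antivirus_plugin, network_plugin]
VOLUME_KEYS = {"source-volume": "decryption-key-material"}  # invented key store

def request_volume_key(volume, client_info):
    """Server side: release the key only when all plug-ins approve the client."""
    if all(plugin(client_info) for plugin in PLUGINS):
        return VOLUME_KEYS.get(volume)
    return None  # agent cannot mount the encrypted source volume

print(request_volume_key("source-volume",
                         {"av_running": True, "on_corporate_network": True}))
print(request_volume_key("source-volume", {"av_running": True}))
```

Keeping the policy in server-side plug-ins is what gives the modular design its flexibility: a new check means a new plug-in, not a new client agent.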
The conclusion provides a basis by which organisations that develop software can better balance the two corners of the CIA triad most often in conflict: the Confidentiality of their source code against the Availability of that code to developers.

    Forensische Datenextraktion aus modernen Dateisystemen (Forensic Data Extraction from Modern File Systems)

    With the ongoing development of mobile and desktop operating systems, existing file systems are updated with new features, or vendors even introduce new file systems. Since Android 7.0, the predominant full-disk encryption has been replaced step by step by the new file-based encryption (FBE), implemented as an ext4 feature. Since Android 10, this disk encryption scheme is mandatory for all new Android devices. On the side of desktop and server OSs, Microsoft has introduced an all-new file system called the Resilient File System (ReFS), which is intended to replace NTFS in the long run. Starting as a file system for servers, ReFS introduces new features that make data storage more robust and efficient. This work investigates the new technologies, ext4 FBE and ReFS, in several aspects of forensic data extraction. We investigate the amount of information leaked through unencrypted metadata in Android’s FBE. We propose a generic method, and provide the appropriate tooling, to reconstruct forensic events on Android smartphones encrypted with FBE, requiring no knowledge of the encryption key. Based on a dataset of 3903 applications, we show that file metadata can be used to reconstruct the name, version, and installation date of all installed apps. Furthermore, using WhatsApp as an example, we show that information leaked through metadata can even be used to reconstruct a user’s behaviour within a specific app. To further enhance the forensic data extraction of FBE-encrypted Android disks, we present a new encryption key recovery method tailored to FBE, given a raw memory image. Furthermore, we extend The Sleuth Kit to automatically decrypt file names and file contents when working on FBE-enabled ext4 images, as well as the Plaso framework to extract events from encrypted ext4 partitions. Last but not least, we argue that the recovery of master keys from FBE partitions was straightforward due to a flaw in Google’s encryption key derivation method.
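The metadata-matching idea can be illustrated with a toy fingerprint lookup. The fingerprints below are invented, not drawn from the thesis's 3903-app dataset: with FBE, file names and contents are encrypted, but per-directory file sizes stay visible and can be matched against fingerprints built from known APKs to identify installed apps.

```python
# Invented fingerprints: (app name, version) -> sorted tuple of the file
# sizes found in that app's install directory.
APP_FINGERPRINTS = {
    ("ExampleChat", "2.21.4"): (512, 2048, 4096),
    ("ExampleMaps", "10.1"): (1024, 1024, 8192),
}

def identify_app(observed_sizes):
    """Match the file sizes visible in one FBE-encrypted app directory."""
    observed = tuple(sorted(observed_sizes))
    for (name, version), fingerprint in APP_FINGERPRINTS.items():
        if fingerprint == observed:
            return name, version
    return None  # no known app matches these sizes

print(identify_app([4096, 512, 2048]))  # ('ExampleChat', '2.21.4')
print(identify_app([1, 2, 3]))          # None: no fingerprint matches
```

The same principle scales to timestamps and directory structure, which is how behaviour within a single app can be reconstructed without any key material.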
On server and desktop systems, Microsoft has released ReFS, whose internal structures are not officially documented. We therefore reverse-engineered these internal structures and the driver's behaviour, and documented them. Based on these structures and the access processes that modify them, we show approaches to recover (deleted) files and older file states from ReFS-formatted partitions. We also evaluate our implementation and the allocation strategy of the ReFS driver with respect to accuracy, runtime, and the ability to recover older file states. Finally, with the knowledge of the internal ReFS structures and the threat of flaws in forensic software in mind, we implemented a structure-aware, coverage-guided fuzzing framework explicitly tailored to ReFS to find undetected security-critical flaws. With the new, complex features of ReFS, the driver grows ever larger and more complex, increasing the attack surface of the Windows kernel. Attackers can often use security-critical bugs in file system drivers to escalate privileges by mounting a well-prepared file system. Such an attack is also relevant to forensic data extraction, because criminals can use it to prepare malicious disks that hamper or completely circumvent the extraction process by compromising the analysis environment. We demonstrate the effectiveness of our fuzzing approach by finding 27 unique payloads that panic the Windows kernel when the ReFS partitions are mounted or accessed. Microsoft confirmed those bugs and acknowledged eight unique CVEs which allow remote code execution attacks. With our overall findings, the forensic community should be well prepared for extracting data from the modern file systems used by mobile and desktop systems.
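A minimal mutation-fuzzing loop in the spirit of the framework described above can be sketched as follows. This is illustrative only: the real target is the Windows ReFS driver, mocked here by a toy parser with one planted bug, and the real framework is structure-aware and coverage-guided rather than exhaustively single-byte.

```python
def toy_refs_parser(image):
    """Stand-in for a file system driver parsing a disk image."""
    if image[:4] != b"ReFS":
        raise ValueError("bad magic")       # clean rejection, not a crash
    if image[4] == 0xFF:
        raise RuntimeError("kernel panic")  # the planted driver bug

def fuzz(seed):
    """Try every single-byte mutation of the seed; collect crashing inputs."""
    crashes = []
    for pos in range(len(seed)):
        for value in range(256):
            candidate = seed[:pos] + bytes([value]) + seed[pos + 1:]
            try:
                toy_refs_parser(candidate)
            except RuntimeError:
                crashes.append(candidate)   # would be a kernel panic
            except ValueError:
                pass                        # parser rejected the input safely
    return crashes

payloads = fuzz(b"ReFS\x00payload")
print(len(payloads))  # 1: the single mutation that triggers the planted bug
```

Structure awareness replaces the blind byte loop with mutations of parsed on-disk structures, and coverage guidance keeps only mutants that reach new driver code, which is what makes fuzzing a target as large as a kernel file system driver tractable.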
With the help of the newly proposed fuzzing framework, forensic software can be hardened against severe anti-forensic methods by patching the discovered flaws.