Moving Target Defense Using Live Migration of Docker Containers
abstract: Today, information technology systems keep their addresses, software stacks, and other configuration unchanged for long periods of time. This paves the way for malicious attacks that exploit unknown vulnerabilities, since an attacker can take advantage of the static configuration and plan an attack with ample time. To protect systems from this threat, Moving Target Defense is required: the attack surface is changed dynamically, making the system difficult to strike.
In this thesis, I incorporate live migration of Docker containers using CRIU (Checkpoint/Restore In Userspace) for moving target defense. There are 460K Dockerized applications, a 3100% growth over two years [1]. Over 4 billion containers have been pulled so far from Docker Hub. Docker is supported by a large and fast-growing community of contributors and users; as an example, there are 125K Docker Meetup members worldwide. As industry adopts Docker rapidly, a moving target defense solution involving containers benefits from being robust and fast. A proof-of-concept implementation is included for studying the performance attributes of Docker migration.
Attack detection uses a scenario built on definitions of normal events on servers. By defining normal system activities and collecting syslog on a centralized server, an attack can be detected by extracting abnormal activities, and this detection can serve as the trigger for the Docker migration.
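The detection-triggers-migration loop described above can be sketched as follows. The "normal event" patterns, the container name `web-1`, and the checkpoint name `mtd-ckpt` are illustrative assumptions; `docker checkpoint create` is Docker's experimental CRIU-backed checkpointing command, and restoring the checkpoint on another host would complete the migration:

```python
import re

# Hypothetical whitelist of "normal" server events; the thesis's detection
# scenario defines normal activities up front and flags everything else.
NORMAL_PATTERNS = [
    re.compile(r"sshd\[\d+\]: Accepted publickey for deploy"),
    re.compile(r"CRON\[\d+\]: \(root\) CMD"),
    re.compile(r"systemd\[1\]: Started"),
]

def is_abnormal(syslog_line: str) -> bool:
    """A line is abnormal if it matches no known-normal pattern."""
    return not any(p.search(syslog_line) for p in NORMAL_PATTERNS)

def migration_command(container: str, checkpoint: str) -> list:
    """Build the (experimental) Docker CLI call that checkpoints a running
    container with CRIU; restoring it elsewhere completes the move."""
    return ["docker", "checkpoint", "create", container, checkpoint]

def scan_and_trigger(lines):
    """Return the migration command if any abnormal event is seen."""
    if any(is_abnormal(line) for line in lines):
        return migration_command("web-1", "mtd-ckpt")  # names are illustrative
    return None
```

In a deployment, the returned command list would be executed (e.g., via `subprocess.run`) against the checkpointed container, and the checkpoint directory shipped to the destination host for restore.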
The Proceedings of 14th Australian Digital Forensics Conference, 5-6 December 2016, Edith Cowan University, Perth, Australia
Conference Foreword
This is the fifth year that the Australian Digital Forensics Conference has been held under the banner of the Security Research Institute, which is in part due to the success of the security conference program at ECU. As with previous years, the conference continues to attract quality papers from local and international authors. 11 papers were submitted and, following a double-blind peer review process, 8 were accepted for final presentation and publication. Conferences such as these are simply not possible without willing volunteers who follow through with the commitment they have initially made, and I would like to take this opportunity to thank the conference committee for their tireless efforts in this regard. These efforts have included, but not been limited to, the reviewing and editing of the conference papers, and helping with the planning, organisation and execution of the conference. Particular thanks go to those international reviewers who took the time to review papers for the conference, irrespective of the fact that they are unable to attend this year.
To our sponsors and supporters, a vote of thanks for both the financial and moral support provided to the conference. Finally, to the student volunteers and staff of the ECU Security Research Institute, your efforts as always are appreciated and invaluable. Yours sincerely, Conference Chair Professor Craig Valli Director, Security Research Institute
Securing unikernels in cloud infrastructures
PhD Thesis
Cloud computing adoption has seen an increase during the last few years.
However, cloud tenants are still concerned about the security that the Cloud
Service Provider (CSP) offers. Recent security incidents in cloud infrastructures that exploit vulnerabilities in the software layer highlight
the need to develop new protection mechanisms. A recent direction in
cloud computing is toward massive consolidation of resources by using
lightweight Virtual Machines (VMs) called unikernels. Unikernels are
specialised VMs that eliminate the Operating System (OS) layer and offer the advantages of a small footprint, minimal attack surface, near-instant boot times and multi-platform deployment. Even though using
unikernels has certain advantages, unikernels also suffer from a number of shortcomings. First, unikernels do not employ context switching from user to
kernel mode. A malicious user could exploit this shortcoming to escape
the isolation boundaries that the hypervisor provides. Second, having a
large number of unikernels in a single virtualised host creates complex security policies that are difficult to manage and can introduce exploitable
misconfigurations. Third, malicious insiders, such as disgruntled system
administrators can use privileged software to exfiltrate data from unikernels. In this thesis, we divide our research into two parts, concerning the
development of software and hardware-based protection mechanisms for
cloud infrastructures that focus on unikernels. In each part, we propose
a new protection mechanism for cloud infrastructures, where tenants develop their workloads using unikernels.
In the first part, we propose a software-based protection mechanism that
controls access to resources, which results in creating least-privileged
unikernels. Current access-control mechanisms that reside in hypervisors
do not confine unikernels to accepted behaviour and are susceptible to
privilege escalation and Virtual Machine escape attacks. Therefore, current hypervisors need to take into account the possibility of having one or
more malicious unikernels and rethink their access-control mechanisms.
We designed and implemented VirtusCap, a capability-based access control mechanism that acts as a lower layer of regulating access to resources
in cloud infrastructures. Consequently, unikernels are only assigned the
privileges required to perform their task. This ensures that the access-control mechanism that resides in the hypervisor will only grant access to
resources specified with capabilities. In addition, capabilities are easier to
delegate to other unikernels when needed, and the security policies are
less complex. Our performance evaluation shows that, up to a request rate of
7,000 requests/sec, our prototype’s response time is identical to that of XSM-Flask.
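A minimal sketch of the capability idea described above, with invented resource and operation names; the real VirtusCap operates inside the hypervisor, so this only illustrates least-privilege assignment, delegation, and access checking:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Capability:
    resource: str   # e.g. "vif0", "blkdev1" (illustrative names)
    operation: str  # e.g. "read", "write", "map"

@dataclass
class Unikernel:
    name: str
    caps: set = field(default_factory=set)

def grant(uk: Unikernel, resource: str, operation: str) -> None:
    """Assign only the privileges a unikernel needs (least privilege)."""
    uk.caps.add(Capability(resource, operation))

def delegate(src: Unikernel, dst: Unikernel, cap: Capability) -> None:
    """A capability can be handed to another unikernel, but only by a
    holder of that capability."""
    if cap not in src.caps:
        raise PermissionError("cannot delegate a capability you do not hold")
    dst.caps.add(cap)

def check(uk: Unikernel, resource: str, operation: str) -> bool:
    """Hypervisor-side reference monitor: grant access only if a matching
    capability was explicitly assigned."""
    return Capability(resource, operation) in uk.caps
```

Because each unikernel carries an explicit, small capability set, the resulting policy is easier to audit than a monolithic rule base, which mirrors the "less complex security policies" claim above.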
In the second part, we address the following problem: how to guarantee
the confidentiality and integrity of computations executing in a unikernel
even in the presence of privileged software used by malicious insiders?
A research prototype was designed and implemented called UniGuard,
which aims to protect unikernels from an untrusted cloud, by executing
the sensitive computations inside secure enclaves. This approach provides
confidentiality and integrity guarantees for unikernels against software and
certain physical attacks. We show how we integrated Intel SGX with
unikernels and added the ability to spawn enclaves that execute the sensitive computations. We conduct experiments to evaluate the performance
of UniGuard, which show that UniGuard exhibits acceptable performance
overhead in comparison to when the sensitive computations are not executed inside an enclave. To the best of our knowledge, UniGuard is the first
solution that protects the confidentiality and integrity of computations that
execute inside unikernels using Intel SGX.
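The enclave-partitioning pattern behind UniGuard can be illustrated conceptually. This is not the Intel SGX SDK API; it is a simulation in which a private key plays the role of enclave-internal state, so the trust boundary and the integrity check on results are visible:

```python
import hashlib
import hmac
import os

class SimulatedEnclave:
    """Conceptual stand-in for an SGX enclave: the key never leaves this
    object, just as enclave memory is inaccessible to privileged software.
    Real enclaves use ECALLs and hardware attestation instead."""

    def __init__(self):
        self._key = os.urandom(32)  # enclave-private state (simulated)

    def ecall_compute(self, data: bytes):
        """Entry point: run the sensitive computation inside the boundary
        and return the result with an integrity tag."""
        result = hashlib.sha256(data).digest()  # stand-in computation
        tag = hmac.new(self._key, result, hashlib.sha256).digest()
        return result, tag

    def verify(self, result: bytes, tag: bytes) -> bool:
        """Detect tampering by untrusted host software."""
        expected = hmac.new(self._key, result, hashlib.sha256).digest()
        return hmac.compare_digest(tag, expected)
```

The untrusted unikernel code would only ever see `(result, tag)` pairs; any modification by a malicious insider fails verification, which is the confidentiality/integrity guarantee the abstract describes, here reduced to its simplest observable form.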
Currently, unikernels drive the next generation of virtualisation software,
especially in cooperation with other virtualisation technologies, such
as containers, to form hybrid virtualisation workloads. Thus, it is paramount
to scrutinise the security of unikernels in cloud infrastructures and to propose
novel protection mechanisms that will drive the next cloud evolution.
Intel TDX Demystified: A Top-Down Approach
Intel Trust Domain Extensions (TDX) is a new architectural extension in the
4th Generation Intel Xeon Scalable Processor that supports confidential
computing. TDX allows the deployment of virtual machines in the
Secure-Arbitration Mode (SEAM) with encrypted CPU state and memory, integrity
protection, and remote attestation. TDX aims to enforce hardware-assisted
isolation for virtual machines and minimize the attack surface exposed to host
platforms, which are considered to be untrustworthy or adversarial in the
confidential computing's new threat model. TDX can be leveraged by regulated
industries or sensitive data holders to outsource their computations and data
with end-to-end protection in public cloud infrastructure.
This paper aims to provide a comprehensive understanding of TDX to potential
adopters, domain experts, and security researchers looking to leverage the
technology for their own purposes. We adopt a top-down approach, starting with
high-level security principles and moving to low-level technical details of
TDX. Our analysis is based on publicly available documentation and source code,
offering insights from security researchers outside of Intel.
Infrastructural Security for Virtualized Grid Computing
The goal of the grid computing paradigm is to make computer power as easy to access as an electrical power grid. Unlike the power grid, the computer grid uses remote resources located at a service provider. Malicious users can abuse the provided resources, which not only affects their own systems but also those of the provider and others.
Resources are utilized in an environment where sensitive programs and data from competitors are processed on shared resources, creating again the potential for misuse. This is one of the main security issues, since in a business environment competitors distrust each other, and the fear of industrial espionage is always present. Currently, human trust is the strategy used to deal with these threats. The relationship between grid users and resource providers ranges from highly trusted to highly untrusted. This wide trust relationship occurs because grid computing itself changed from a research topic with few users to a widely deployed product that included early commercial adoption. The traditional open research communities have very low security requirements, while in contrast, business customers often operate on sensitive data that represents intellectual property; thus, their security demands are very high. In traditional grid computing, most users share the same resources concurrently. Consequently, information regarding other users and their jobs can usually be acquired quite easily. For example, a user can see which processes are running on another user's system. For business users, this is unacceptable since even the meta-data of their jobs is classified. As a consequence, most commercial customers are not convinced that their intellectual property in the form of software and data is protected in the grid.
This thesis proposes a novel infrastructural security solution that advances the concept of virtualized grid computing. The work started back in 2007 and led to the development of the XGE, a virtual grid management software. The XGE itself uses operating system virtualization to provide a virtualized landscape. Users' jobs are no longer executed in a shared manner; they are executed within special sandboxed environments. To satisfy the requirements of a traditional grid setup, the solution can be coupled with an installed scheduler and grid middleware on the grid head node. To protect the prominent grid head node, a novel dual-laned demilitarized zone is introduced to make attacks more difficult. In a traditional grid setup, the head node and the computing nodes are installed in the same network, so a successful attack could also endanger the user's software and data. While the zone complicates attacks, it is, like all security solutions, not a perfect one. Therefore, a network intrusion detection system is enhanced with grid-specific signatures. A novel software called Fence is introduced that supports end-to-end encryption, which means that all data remains encrypted until it reaches its final destination. It transfers data securely between the user's computer, the head node and the nodes within the shielded, internal network. A lightweight kernel rootkit detection system assures that only trusted kernel modules can be loaded; it is no longer possible to load untrusted modules such as kernel rootkits. Furthermore, a malware scanner for virtualized grids scans for signs of malware in all running virtual machines. Using virtual machine introspection, the scanner remains invisible to most types of malware and has full access to all system calls on the monitored system. To speed up detection, the load is distributed to multiple detection engines simultaneously.
To enable multi-site service-oriented grid applications, the novel concept of public virtual nodes is presented. This is a virtualized grid node with a public IP address shielded by a set of dynamic firewalls. It is possible to create a set of connected, public nodes, either present on one or more remote grid sites. A special web service allows users to modify their own rule set in both directions and in a controlled manner.
The main contribution of this thesis is the presentation of solutions that strengthen the security of grid computing infrastructures. This includes the XGE, a software that transforms a traditional grid into a virtualized grid. Design and implementation details, including experimental evaluations, are given for all approaches. Nearly all parts of the software are available as open source software. A summary of the contributions and an outlook to future work conclude this thesis.
A Secure and Formally Verified Commodity Multiprocessor Hypervisor
Commodity hypervisors are widely deployed to support virtual machines on multiprocessor server hardware. Modern hypervisors are complex and often integrated with an operating system kernel, posing a significant security risk as writing large, multiprocessor systems software is error-prone. Attackers that successfully exploit hypervisor vulnerabilities may gain unfettered access to virtual machine data, compromising its confidentiality and integrity. Theoretically, formal verification offers a solution to this problem, by proving that the hypervisor implementation contains no vulnerabilities and protects virtual machine data under all circumstances. However, it remains unknown how one might feasibly verify the entire codebase of a complex, multiprocessor commodity system. My thesis is that modest changes to a commodity system can reduce the required proof effort such that it becomes possible to verify the security properties of the entire system.
This dissertation introduces microverification, a new approach for formally verifying the security properties of commodity systems. Microverification reduces the proof effort for a commodity system by retrofitting the system into a small core and a set of untrusted services, thus making it possible to reason about properties of the entire system by verifying the core alone. To verify the multiprocessor hypervisor core, we introduce security-preserving layers to modularize the proof without hiding information leakage, so that we can prove that each layer of the implementation refines its specification and that the top-layer specification is refined by all layers of the core implementation. To verify commodity hypervisor features that require dynamically changing information flow, we incorporate data oracles to mask intentional information flow. We can then prove noninterference at the top-layer specification and guarantee that the resulting security properties hold for the entire hypervisor implementation. Using microverification, we retrofitted the Linux KVM hypervisor with only modest modifications to its codebase. Using Coq, we proved that the hypervisor protects the confidentiality and integrity of VM data, including correctly managing tagged TLBs, shared multi-level page tables, and caches. Our work is the first machine-checked security proof for a commodity multiprocessor hypervisor. Experimental results with real application workloads demonstrate that verified KVM retains KVM’s functionality and performance.
Transiently Powered Computers
Demand for compact, easily deployable, energy-efficient computers has driven the development of general-purpose transiently powered computers (TPCs) that lack both batteries and wired power, operating exclusively on energy harvested from their surroundings.
TPCs' dependence solely on transient, harvested power offers several important design-time benefits. For example, omitting batteries saves board space and weight while obviating the need to make devices physically accessible for maintenance. However, transient power may provide an unpredictable supply of energy that makes operation difficult. A predictable energy supply is a key abstraction underlying most electronic designs. TPCs discard this abstraction in favor of opportunistic computation that takes advantage of available resources. A crucial question is: how should a software-controlled computing device operate if it depends completely on external entities for power and other resources? The question poses challenges for computation, communication, storage, and other aspects of TPC design.
The main idea of this work is that software techniques can make energy harvesting a practicable form of power supply for electronic devices. Its overarching goal is to facilitate the design and operation of usable TPCs.
This thesis poses a set of challenges that are fundamental to TPCs, then pairs these challenges with approaches that use software techniques to address them. To address the challenge of computing steadily on harvested power, it describes Mementos, an energy-aware state-checkpointing system for TPCs. To address the dependence of opportunistic RF-harvesting TPCs on potentially untrustworthy RFID readers, it describes CCCP, a protocol and system for safely outsourcing data storage to RFID readers that may attempt to tamper with data. Additionally, it describes a simulator that facilitates experimentation with the TPC model, and a prototype computational RFID that implements the TPC model.
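The checkpoint-before-power-loss idea behind Mementos can be sketched as follows. The voltage thresholds, the dictionary standing in for non-volatile memory, and the toy computation are all assumptions for illustration, not Mementos' actual parameters or mechanism:

```python
# Energy-aware checkpointing sketch in the spirit of Mementos: when the
# harvested supply voltage sags toward brown-out, copy volatile state to
# non-volatile storage so computation resumes after the next power-up.

CHECKPOINT_THRESHOLD_V = 2.2   # save state below this voltage (assumed)
BROWNOUT_V = 1.8               # device loses power below this (assumed)

nonvolatile = {}               # stands in for flash/FRAM

def maybe_checkpoint(voltage, state):
    """Called at loop boundaries, as Mementos instruments trigger points."""
    if voltage <= CHECKPOINT_THRESHOLD_V:
        nonvolatile["ckpt"] = dict(state)  # snapshot of volatile state
        return True
    return False

def restore_or_fresh():
    """On power-up, resume from the last checkpoint if one exists."""
    return dict(nonvolatile.get("ckpt", {"i": 0, "acc": 0}))

def run_until_dead(voltages):
    """Simulate a loop that accumulates work across a power outage."""
    state = restore_or_fresh()
    for v in voltages:
        if v < BROWNOUT_V:     # power lost; work since last ckpt is gone
            break
        state["acc"] += state["i"]
        state["i"] += 1
        maybe_checkpoint(v, state)
    return state
```

Running the loop over a decaying voltage trace shows the essential property: progress made before the last checkpoint survives the outage, while uncheckpointed work is repeated after restore.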
To show that TPCs can improve existing electronic devices, this thesis describes applications of TPCs to implantable medical devices (IMDs), a challenging design space in which some battery-constrained devices completely lack protection against radio-based attacks. TPCs can provide security and privacy benefits to IMDs by, for instance, cryptographically authenticating other devices that want to communicate with the IMD before allowing the IMD to use any of its battery power. This thesis describes a simplified IMD that lacks its own radio, saving precious battery energy and therefore size. The simplified IMD instead depends on an RFID-scale TPC for all of its communication functions.
TPCs are a natural area of exploration for future electronic design, given the parallel trends of energy harvesting and miniaturization. This work aims to establish and evaluate basic principles by which TPCs can operate.
Virtual Machine Image Management for Elastic Resource Usage in Grid Computing
Grid Computing has evolved from an academic concept to a powerful paradigm in the area of high performance computing (HPC). Over the last few years, powerful Grid computing solutions were developed that allow the execution of computational tasks on distributed computing resources. Grid computing has recently attracted many commercial customers. To enable commercial customers to be able to execute sensitive data in the Grid, strong security mechanisms must be put in place to secure the customers' data.
In contrast, the development of Cloud Computing, which entered the scene in 2006, was driven by industry: it was designed with security in mind from the beginning. Virtualization technology is used to separate the users, e.g., by putting each user of a system inside a separate virtual machine, which prevents them from accessing other users' data.
The use of virtualization in the context of Grid computing has been examined early and was found to be a promising approach to counter the security threats that have appeared with commercial customers.
One main part of the work presented in this thesis is the Image Creation Station (ICS), a component which allows users to administer their virtual execution environments (virtual machines) themselves and which is responsible for managing and distributing the virtual machines in the entire system.
In contrast to Cloud computing, which was designed to allow even inexperienced users to easily execute their computational tasks in the Cloud, Grid computing is much more complex to use. The ICS makes the Grid easier to use by overcoming traditional limitations such as having to install needed software on the compute nodes on which users execute their computational tasks. This allows users to bring commercial software to the Grid for the first time, without the need for local administrators to install the software on computing nodes that are accessible by all users. Moreover, the administrative burden is shifted from the local Grid site's administrator to the users or to experienced software providers, which allows the provision of individually tailored virtual machines to each user. But the ICS is not only responsible for enabling users to manage their virtual machines themselves; it also ensures that the virtual machines are available on every site that is part of the distributed Grid system.
A second aspect of the presented solution focuses on the elasticity of the system by automatically acquiring free external resources depending on the system's current workload. In contrast to existing systems, the presented approach allows the system's administrator to add or remove resource sets during runtime without needing to restart the entire system. Moreover, the presented solution allows users not only to use existing Grid resources but also to scale out to Cloud resources and use them on demand. By ensuring that unused resources are shut down as soon as possible, the computational costs of a given task are minimized. In addition, the presented solution allows each user to specify which resources can be used to execute a particular job. This is useful when a job processes sensitive data, e.g., data that is not allowed to leave the company. To obtain a comparable function in today's systems, a user must submit her computational task to a particular resource set, losing the ability to schedule automatically when more than one set of resources could be used.
In addition, the proposed solution prioritizes each set of resources by taking different metrics into account (e.g., the level of trust or computational costs) and tries to schedule the job to the resources with the highest priority first. Notably, the priority often mimics the physical distance from the resources to the user: a locally available cluster usually has a higher priority due to its high level of trust and its computational costs, which are usually lower than the costs of using Cloud resources. Therefore, this scheduling strategy minimizes the costs of job execution while improving security at the same time, since data is not necessarily transferred to remote resources and the probability of attacks by malicious external users is minimized.
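The priority-based selection described above can be sketched as follows; the trust/cost weighting and the resource-set names are invented for the example, and a real scheduler would combine more metrics:

```python
from dataclasses import dataclass

@dataclass
class ResourceSet:
    name: str
    trust: float   # 0..1, e.g. a local cluster approaches 1.0
    cost: float    # normalized cost per CPU-hour, lower is cheaper

    @property
    def priority(self) -> float:
        # Higher trust and lower cost yield higher priority; the local
        # cluster therefore usually wins over remote Cloud resources.
        return self.trust - self.cost

def schedule(job_allowed, resource_sets):
    """Pick the highest-priority resource set the user permitted for
    this job (e.g. sensitive data may exclude external Clouds)."""
    candidates = [r for r in resource_sets if r.name in job_allowed]
    if not candidates:
        raise RuntimeError("no permitted resources for this job")
    return max(candidates, key=lambda r: r.priority)
```

With this shape, restricting a sensitive job is just shrinking `job_allowed`, while unrestricted jobs automatically fall through to cheaper or more trusted resources first.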
Bringing both components together results in a system that adapts automatically to the current workload by using external (e.g., Cloud) resources together with existing locally available resources or Grid sites, and that provides individually tailored virtual execution environments to the system's users.