SGXIO: Generic Trusted I/O Path for Intel SGX
Application security traditionally relies heavily on the security of the underlying operating system. However, operating systems often fall victim to software attacks, compromising the security of applications as well. To overcome this dependency, Intel introduced SGX, which allows application code to be protected against a subverted or malicious OS by running it in a hardware-protected enclave. However, SGX lacks support for generic trusted I/O paths to protect user input and output between enclaves and I/O devices.
This work presents SGXIO, a generic trusted path architecture for SGX that allows user applications to run securely on top of an untrusted OS while supporting trusted paths to generic I/O devices. To achieve this, SGXIO combines the benefits of SGX's easy programming model with traditional hypervisor-based trusted path architectures. Moreover, SGXIO can tweak insecure debug enclaves to behave like secure production enclaves. SGXIO goes beyond traditional cloud-computing use cases and makes SGX technology usable for protecting user-centric, local applications against kernel-level keyloggers and similar threats. It is compatible with unmodified operating systems and works on a modern commodity notebook out of the box. Hence, SGXIO is particularly promising for the broad x86 community, to which SGX is readily available.
Comment: To appear in CODASPY'17
Poster: Userland Containers for Mobile Systems
Mobile platforms are not rising to their potential as ubiquitous computers, in large part because of the constraints we impose on their apps in the name of security. Mobile operating systems have long struggled with the challenge of isolating untrusted apps. In pursuit of a secure runtime environment, Android and iOS isolate apps inside a gulag of platform-imposed programming languages and runtime libraries, leaving few design decisions to the application developers. These thick layers of custom software undermine app portability and maintainability, as development teams must continually tweak their apps to support modifications to the OS's runtime libraries. Nonstandard and ever-changing interfaces to those APIs invite bugs in the operating system and apps alike.
Mobile-only APIs have bifurcated the population of software running on our devices. On one side sit the conventional PC and server programs: compilers, shells, servers, daemons, and many others that use standard libraries and programming models to interface with the computer and the outside world. On the other side live the apps: mobile-only and purpose-built, they often serve as user interfaces to some larger cloud-based system. Under the weight of the numerous OS-imposed platform constraints, it is difficult for app developers to innovate: large classes of applications are simply impossible to port to mobile devices because the required APIs are unsupported. Dealing with these cross-platform dependencies requires maintaining multiple code bases. In the past, dependency issues have typically been solved with containers. However, deploying containers on mobile systems presents unique challenges: to maintain security, mobile operating systems do not give users permission to launch Docker containers.
To solve this issue, we revisit an older idea known as userland containerization, which lets regular unprivileged users launch containers on any Linux- or Android-based system. It works by inserting a modified operating system kernel between the host kernel and the guest processes.
We have conducted an in-depth study of the performance of user-mode containers such as the User-Mode Linux (UML) kernel [1], repurposing it as a userland hypervisor between the host kernel and the guest processes. We prototype a proof-of-concept usermode kernel whose implementation is guided by the findings of our empirical study. Our kernel introduces a new technique, similar to paravirtualization, that optimizes the syscall interface between the guest process and the usermode kernel to improve its I/O performance. The redesigned syscall interface provides I/O performance approaching that of conventional virtualization techniques. Our paravirtualization strategies outperform UML by a factor of 3--6X for I/O-bound workloads. Furthermore, we achieve 3.5--5X more network throughput than VMware Workstation and equal disk write speed. Although there is still ample opportunity for performance improvements, our approach demonstrates the promise of a usable userland virtualization platform that balances security with performance.
Hardware Monitors for Secure Processing in Embedded Operating Systems
Embedded processors are increasingly used in our daily lives and have become an important part of many types of infrastructure worldwide. As people depend more on embedded systems for personal and business processing, attacks on these systems have also been on the rise. Existing defense mechanisms targeted at desktop and server processors cannot be used to defend embedded systems, as these systems face tight constraints on processing performance, power, and energy. Embedded systems therefore require low-overhead security approaches to ensure they are protected from attacks.
This thesis describes a hardware-based approach to monitoring the operation of an embedded processor instruction by instruction, detecting deviations from expected program behavior within the time associated with executing a single instruction. Previous work in this area has focused on monitoring a single task on a CPU; here, a novel hardware monitoring system is presented that can monitor multiple active tasks on an operating-system-based platform. The approach requires no changes to application binary code. The hardware monitor tracks context switches in the operating system and ensures that monitoring continues across them, thus ensuring system security.
This thesis describes the design of the system as well as results obtained from a prototype implementation on an Altera DE4 FPGA board. It is demonstrated in hardware that applications can be monitored at the instruction level without execution slowdown and that buffer overflow attacks can be defeated. When an attack occurs, it is detected within a cycle and the attacking task is killed before it can harm the system. The system uses off-chip DRAM to store the application binaries and the operating system kernel. A centralized graph memory is implemented on-chip to store all monitoring graphs associated with the system. MiBench benchmarks such as qsort, bitcount, stringmatch, basicmath, and dijkstra are used to evaluate the system.
Direct User Calls from the Kernel: Design and Implementation
Traditional, general-purpose operating systems strictly separate user processes from the kernel. Processes can communicate with the kernel only through system calls, which, as a means of ensuring system security, inevitably involve performance overhead. Direct User Callback from the Kernel, or DUCK, is a framework that improves the performance of network-centric applications by executing part of the application code directly inside the kernel. Because the code runs in kernel mode and can access kernel memory directly, DUCK eliminates two important sources of system call overhead: mode switches and data copying. One issue with DUCK is how to design an application programming interface (API) that is general, efficient, and easy to use. In this thesis, we present the design of the DUCK API, which includes functions for both direct user code execution and zero-copy buffer management. We have implemented DUCK prototypes on the Solaris/SPARC platform. An efficient way to implement direct user code invocation is through memory sharing between the kernel and user processes. However, because Solaris/SPARC separates the user and kernel address spaces, achieving memory sharing is difficult. We study the SPARC architecture and the Solaris virtual memory subsystem, and discuss three potential approaches to supporting the memory sharing DUCK requires. We then present micro-benchmark experiments demonstrating that our DUCK prototype improves the peak throughput of a simple UDP forwarder by 28% to 44%.
Glider: A GPU Library Driver for Improved System Security
Legacy device drivers implement both device resource management and isolation. This results in a large code base with a wide high-level interface, making the driver vulnerable to security attacks. This is particularly problematic for increasingly popular accelerators such as GPUs, which have large, complex drivers. We solve this problem with library drivers, a new driver architecture. A library driver implements resource management as an untrusted library in the application's address space, and implements isolation as a kernel module that is smaller and has a narrower, lower-level interface (i.e., closer to hardware) than a legacy driver. We articulate a set of device and platform hardware properties required to retrofit a legacy driver into a library driver. To demonstrate the feasibility and advantages of library drivers, we present Glider, a library driver implementation for two popular GPU brands, Radeon and Intel. Glider reduces the TCB size and attack surface by about 35% and 84% respectively for a Radeon HD 6450 GPU, and by about 38% and 90% respectively for an Intel Ivy Bridge GPU. Moreover, it incurs no performance cost. Indeed, Glider outperforms a legacy driver for applications requiring intensive interactions with the device driver, such as applications using the OpenGL immediate mode API.