RAS-Models: A Building Block for Self-Healing Benchmarks
To evaluate the efficacy of self-healing systems, a rigorous, objective, quantitative benchmarking methodology is needed. However, developing such a benchmark is a non-trivial task given the many evaluation issues to be resolved, including but not limited to: quantifying the impacts of faults, analyzing various styles of healing (reactive, preventative, proactive), accounting for partially automated healing, and accounting for incomplete/imperfect healing. We posit, however, that it is possible to realize a self-healing benchmark using a collection of analytical techniques and practical tools as building blocks. This paper highlights the flexibility of one analytical tool, the Reliability, Availability and Serviceability (RAS) model, and illustrates its power and relevance to the problem of evaluating self-healing mechanisms/systems when combined with practical tools for fault injection.
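As a rough illustration of the kind of quantitative building block such a benchmark rests on, the sketch below computes the classic steady-state availability of a single component from its mean time to failure and mean time to repair. This is our own minimal example of standard RAS arithmetic, not code from the paper; the rates are invented for illustration.

```python
# Minimal sketch of a two-state (up/down) RAS availability model.
# The numeric rates below are illustrative assumptions, not the paper's data.

def steady_state_availability(mttf_hours: float, mttr_hours: float) -> float:
    """Classic RAS result: A = MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A component that fails on average every 1000 hours and heals in
# 2 hours is available ~99.8% of the time.
print(f"{steady_state_availability(1000.0, 2.0):.4%}")
```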
The Role of Reliability, Availability and Serviceability (RAS) Models in the Design and Evaluation of Self-Healing Systems
In an idealized scenario, self-healing systems predict, prevent, or diagnose problems and take the appropriate actions to mitigate their impact with minimal human intervention. To determine how close we are to reaching this goal, we require analytical techniques and practical approaches that allow us to quantify the effectiveness of a system's remediation mechanisms. In this paper we apply analytical techniques based on Reliability, Availability and Serviceability (RAS) models to evaluate individual remediation mechanisms of select system components and their combined effects on the system. We demonstrate the applicability of RAS models to the evaluation of self-healing systems by using them to analyze various styles of remediation (reactive, preventative, etc.), quantify the impact of imperfect remediations, identify sub-optimal (less effective) remediations, and quantify the combined effects of all the activated remediations on the system as a whole.
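To make the notion of an imperfect remediation concrete, one common modeling device is a coverage factor: with probability c the automated remediation succeeds quickly, and otherwise the system falls back to slower manual repair. The sketch below is our own assumed model in this style, not the paper's; all parameter values are hypothetical.

```python
# Hedged sketch: imperfect remediation via a coverage factor.
# Parameter values are assumptions for illustration only.

def effective_mttr(coverage: float, auto_hours: float, manual_hours: float) -> float:
    """Blend automated and manual repair times by the probability
    (coverage) that the automated remediation actually succeeds."""
    return coverage * auto_hours + (1.0 - coverage) * manual_hours

def availability(mttf_hours: float, mttr_hours: float) -> float:
    return mttf_hours / (mttf_hours + mttr_hours)

# Perfect healing vs. 90%-effective healing with a 4-hour manual fallback.
print(availability(1000.0, effective_mttr(1.0, 0.1, 4.0)))  # ~0.9999
print(availability(1000.0, effective_mttr(0.9, 0.1, 4.0)))  # ~0.9995
```

Comparing the two availabilities quantifies how much a remediation's imperfection costs the system, which is exactly the kind of question the paper uses RAS models to answer.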
Linux Kernel Vulnerabilities: State-of-the-Art Defenses and Open Problems
Avoiding kernel vulnerabilities is critical to achieving the security of many systems, because the kernel is often part of the trusted computing base. This paper evaluates the current state of the art in kernel protection techniques by presenting two case studies of Linux kernel vulnerabilities. First, this paper presents data on 141 Linux kernel vulnerabilities discovered from January 2010 to March 2011, and second, it examines how well state-of-the-art techniques address these vulnerabilities. The main findings are that techniques often protect against certain exploits of a vulnerability but leave other exploits of the same vulnerability open, and that no effective techniques exist to handle semantic vulnerabilities: violations of high-level security invariants.
Faults in Linux 2.6
In August 2011, Linux entered its third decade. Ten years before, Chou et al.
published a study of faults found by applying a static analyzer to Linux
versions 1.0 through 2.4.1. A major result of their work was that the drivers
directory contained up to 7 times more of certain kinds of faults than other
directories. This result inspired numerous efforts on improving the reliability
of driver code. Today, Linux is used in a wider range of environments, provides
a wider range of services, and has adopted a new development and release model.
What has been the impact of these changes on code quality? To answer this
question, we have transported Chou et al.'s experiments to all versions of
Linux 2.6; released between 2003 and 2011. We find that Linux has more than
doubled in size during this period, but the number of faults per line of code
has been decreasing. Moreover, the fault rate of drivers is now below that of
other directories, such as arch. These results can guide further development
and research efforts for the decade to come. To allow updating these results as
Linux evolves, we define our experimental protocol and make our checkers
available
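The study's central metric, faults per line of code, is straightforward to recompute once checker reports are in hand. The sketch below is our own illustration of that normalization; the input format and every number in it are hypothetical, not the paper's data.

```python
# Hypothetical checker output: (directory, faults_found, lines_of_code).
# All figures below are invented for illustration.
reports = [
    ("drivers", 450, 5_000_000),
    ("arch",    120,   900_000),
    ("fs",       80,   700_000),
]

for directory, faults, loc in reports:
    density = faults / loc * 1000  # faults per thousand lines of code
    print(f"{directory:8s} {density:.3f} faults/KLOC")
```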
Robusta: Taming the Native Beast of the JVM
Java applications often need to incorporate native-code components for efficiency and for reusing legacy code. However, it is well known that the use of native code defeats Java's security model. We describe the design and implementation of Robusta, a complete framework that provides safety and security to native code in Java applications. Starting from software-based fault isolation (SFI), Robusta isolates native code into a sandbox where dynamic linking/loading of libraries is supported and unsafe system modification and confidentiality violations are prevented. It also mediates native system calls according to a security policy by connecting to Java's security manager. Our prototype implementation of Robusta is based on Native Client and OpenJDK. Experiments with this prototype demonstrate that Robusta is effective and efficient, with modest runtime overhead on a set of JNI benchmark programs. Robusta can be used to sandbox native libraries used in Java's system classes to prevent attackers from exploiting bugs in the libraries. It can also enable trustworthy execution of mobile Java programs with native libraries. The design of Robusta should also be applicable when other type-safe languages (e.g., C#, Python) want to ensure safe interoperation with native libraries.
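The core SFI idea Robusta starts from is to confine every computed memory address to the sandbox region by masking its high bits before use. The sketch below illustrates that classic technique in isolation; the constants and layout are our own assumptions, not Robusta's (or Native Client's) actual memory layout.

```python
# Illustrative SFI-style address confinement. SANDBOX_BASE and
# SANDBOX_MASK are assumed values, not Robusta's real layout.
SANDBOX_BASE = 0x2000_0000   # assumed start of the sandbox region
SANDBOX_MASK = 0x0FFF_FFFF   # keeps only the in-sandbox offset bits

def confine(addr: int) -> int:
    """Force any computed address into [SANDBOX_BASE, SANDBOX_BASE + 256 MiB)."""
    return SANDBOX_BASE | (addr & SANDBOX_MASK)

# A wild pointer produced by native code lands inside the sandbox
# instead of reaching JVM memory.
print(hex(confine(0xDEAD_BEEF_CAFE)))  # -> 0x2eefcafe
```

In real SFI systems this masking is emitted as machine instructions before each memory access, so safety does not depend on trusting the native code itself.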
Combining Control-Flow Integrity and Static Analysis for Efficient and Validated Data Sandboxing
In many software attacks, inducing an illegal control-flow transfer in the target system is one common step. Control-Flow Integrity (CFI) protects a software system by enforcing a pre-determined control-flow graph. In addition to providing strong security, CFI enables static analysis on low-level code. This paper evaluates whether CFI-enabled static analysis can help build efficient and validated data sandboxing. Previous systems generally sandbox memory writes for integrity, but avoid protecting confidentiality due to the high overhead of sandboxing memory reads. To reduce overhead, we have implemented a series of optimizations that remove sandboxing instructions if they are proven unnecessary by static analysis. On top of CFI, our system adds only 2.7% runtime overhead on SPECint2000 for sandboxing memory writes and adds a modest 19% for sandboxing both reads and writes. We have also built a principled data-sandboxing verifier based on range analysis. The verifier checks the safety of the results of the optimizer, which removes the need to trust the rewriter and optimizer. Our results show that the combination of CFI and static analysis has the potential to bring down the cost of general inlined reference monitors, while maintaining strong security.
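The optimization the abstract describes, eliding a sandboxing instruction when analysis proves the access safe, can be sketched as a simple range check. This is our own simplified illustration of the idea, not the paper's implementation; the bounds and data structures are assumptions.

```python
# Simplified sketch of guard elision via range analysis.
# Sandbox bounds are assumed values for illustration.
SANDBOX_LO, SANDBOX_HI = 0x2000_0000, 0x3000_0000

def needs_guard(addr_range: tuple[int, int]) -> bool:
    """A memory access keeps its sandboxing mask only if static range
    analysis cannot prove the whole access range lies in the sandbox."""
    lo, hi = addr_range
    return not (SANDBOX_LO <= lo and hi < SANDBOX_HI)

# Statically-bounded accesses lose their guards; unknown pointers keep them.
print(needs_guard((0x2000_1000, 0x2000_1040)))  # False: guard removed
print(needs_guard((0x0000_0000, 0xFFFF_FFFF)))  # True: guard kept
```

CFI matters here because the pre-determined control-flow graph is what makes such static reasoning about low-level code sound in the first place.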
Bringing Reconfigurability to the Network Stack
Reconfiguring the network stack allows applications to specialize the implementations of communication libraries depending on where they run, the requests they serve, and the performance they need to provide. Specializing applications in this way is challenging because developers need to choose the libraries they use when writing a program and cannot easily change them at runtime. This paper introduces Bertha, which allows these choices to be changed at runtime without limiting developer flexibility in the choice of network and communication functions. Bertha allows applications to safely use optimized communication primitives (including ones with deployment limitations) without limiting deployability. Our evaluation shows cases where this results in 16x higher throughput and 63% lower latency than current portable approaches, while imposing minimal overheads when compared to hand-optimized versions that use deployment-specific communication primitives.
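One way to picture the kind of runtime choice Bertha enables is dispatch through an abstract transport that can be swapped while the application keeps running. The sketch below is our own illustration of that general pattern; the class and method names are assumptions and do not reflect Bertha's actual API.

```python
# Illustration of runtime-swappable communication primitives.
# Names and methods are hypothetical, not Bertha's API.
from abc import ABC, abstractmethod

class Transport(ABC):
    @abstractmethod
    def send(self, payload: bytes) -> None: ...

class PortableTcp(Transport):
    def send(self, payload: bytes) -> None:
        print(f"TCP send of {len(payload)} bytes")     # portable fallback

class KernelBypass(Transport):
    def send(self, payload: bytes) -> None:
        print(f"bypass send of {len(payload)} bytes")  # deployment-specific fast path

class Channel:
    """Applications talk to the channel; the transport behind it can be
    replaced at runtime without changing application code."""
    def __init__(self, transport: Transport) -> None:
        self.transport = transport
    def reconfigure(self, transport: Transport) -> None:
        self.transport = transport
    def send(self, payload: bytes) -> None:
        self.transport.send(payload)

chan = Channel(PortableTcp())
chan.send(b"hello")
chan.reconfigure(KernelBypass())  # swap the primitive at runtime
chan.send(b"hello")
```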