
    Lockdown: Dynamic Control-Flow Integrity

    Full text link
    Applications written in low-level languages without type or memory safety are especially prone to memory corruption. Attackers gain code execution capabilities through such applications despite all currently deployed defenses by exploiting memory corruption vulnerabilities. Control-Flow Integrity (CFI) is a promising defense mechanism that restricts indirect control-flow transfers to a static set of well-known locations. We present Lockdown, an approach to dynamic CFI that protects legacy, binary-only executables and libraries. Lockdown adaptively learns the control-flow graph of a running process using information from a trusted dynamic loader. The sandbox component of Lockdown restricts interactions between different shared objects to imported and exported functions by enforcing fine-grained CFI checks. Our prototype implementation shows that dynamic CFI results in low performance overhead. Comment: ETH Technical Report
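
    The core mechanism is easiest to see in miniature. Below is an illustrative C sketch of a fine-grained CFI check, not Lockdown's actual implementation: before an indirect call, the target is validated against a table of allowed entry points, which Lockdown would populate from the loader's import/export information (the table and registration function here are hypothetical stand-ins).

        #include <stdio.h>
        #include <stdlib.h>

        /* Hypothetical whitelist of valid indirect-call targets for one
         * shared object; a system like Lockdown derives this from the
         * dynamic loader's tables, here it is hard-coded for illustration. */
        #define MAX_TARGETS 16
        static void *allowed_targets[MAX_TARGETS];
        static int n_targets;

        static void cfi_register(void *target) {
            if (n_targets < MAX_TARGETS)
                allowed_targets[n_targets++] = target;
        }

        /* Check an indirect-call target before transferring control;
         * abort the process on a control-flow violation. */
        static void cfi_check(void *target) {
            for (int i = 0; i < n_targets; i++)
                if (allowed_targets[i] == target)
                    return;
            fprintf(stderr, "CFI violation: %p is not a valid target\n", target);
            abort();
        }

        static void exported_fn(void) { puts("legitimate call"); }

        int main(void) {
            void (*fp)(void) = exported_fn;
            cfi_register((void *)exported_fn);
            cfi_check((void *)fp);   /* passes: fp points at a registered target */
            fp();
            return 0;
        }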

    Characterizing the Power of Moving Target Defense via Cyber Epidemic Dynamics

    Full text link
    Moving Target Defense (MTD) can enhance the resilience of cyber systems against attacks. Although there have been many MTD techniques, there is no systematic understanding and quantitative characterization of the power of MTD. In this paper, we propose to use a cyber epidemic dynamics approach to characterize the power of MTD. We define and investigate two complementary measures that are applicable when the defender aims to deploy MTD to achieve a certain security goal. One measure emphasizes the maximum portion of time during which the system can afford to stay in an undesired configuration (or posture), without considering the cost of deploying MTD. The other measure emphasizes the minimum cost of deploying MTD, while accommodating that the system has to stay in an undesired configuration (or posture) for a given portion of time. Our analytic studies lead to algorithms for optimally deploying MTD. Comment: 12 pages; 4 figures; HotSoS '14, 2014
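
    As a hedged illustration of the kind of measure involved (the paper's actual model is richer), consider a simple SIS-style epidemic dynamics in which i(t) is the fraction of compromised nodes and the attack and recovery rates depend on the current configuration c(t):

        \[
        \dot{i}(t) \;=\; \beta_{c(t)}\, i(t)\bigl(1 - i(t)\bigr) \;-\; \delta_{c(t)}\, i(t),
        \qquad c(t) \in \{d \text{ (desired)},\; u \text{ (undesired)}\}.
        \]
        % Illustrative reading of the first measure: if the system spends a
        % fraction \pi of its time in the undesired configuration, the largest
        % tolerable \pi is the one for which the time-averaged growth rate near
        % i = 0 stays negative, roughly
        \[
        \pi^{\ast} \;=\; \max \bigl\{ \pi :
          \pi\,(\beta_u - \delta_u) + (1-\pi)\,(\beta_d - \delta_d) < 0 \bigr\},
        \]
        % while the complementary measure fixes \pi and minimizes the cost of
        % the MTD schedule that still achieves the security goal.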

    A Review on Present State-of-the-Art of Self Adaptive Dynamic Software Architecture

    Get PDF
    Enterprises across the world are increasingly depending on software to drive their businesses, all the more so with distributed computing technologies in place that pave the way for seamless business integration. At the same time, these complex software systems are expected to adapt to changes dynamically without causing administrative overhead. Moreover, software systems should exhibit fault tolerance, location transparency, availability, scalability, and self-adaptive capabilities to fit present enterprise business use cases. To cope with such expectations, software systems need to be built on a dynamic and self-adaptive software architecture that delivers the required quality of service. The point made here is that software systems face unprecedented levels of complexity and must be capable of self-adaptation. It is therefore essential to have technical know-how pertaining to self-adaptive dynamic software architecture. Towards this end, we explore the present state of the art of this area in the software engineering domain. The review sheds light on dynamic software architectures, distributed component technologies for realizing such architectures, dynamic software composition, and metrics to evaluate the quality of dynamic adaptation.
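
    One enabling mechanism the review touches on, dynamic software composition, can be sketched with POSIX dynamic loading: a component is loaded, bound, and unloaded at run time without restarting the system. The plugin filename and entry-point symbol below are hypothetical; only dlopen/dlsym/dlclose themselves are standard POSIX.

        #include <dlfcn.h>
        #include <stdio.h>

        /* Minimal sketch of run-time component swapping via POSIX dlopen.
         * "./plugin.so" and "component_run" are hypothetical names. */
        int main(void) {
            void *handle = dlopen("./plugin.so", RTLD_NOW);
            if (!handle) {
                fprintf(stderr, "load failed: %s\n", dlerror());
                return 1;
            }
            void (*run)(void) = (void (*)(void))dlsym(handle, "component_run");
            if (run)
                run();               /* invoke the dynamically bound component */
            dlclose(handle);         /* unload so a newer version can replace it */
            return 0;
        }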

    Mats: MultiCore Adaptive Trace Selection

    Get PDF
    Dynamically optimizing programs is worthwhile only if the overhead created by the dynamic optimizer is less than the benefit gained from the optimization. Program trace selection is one of the most important, yet time consuming, components of many dynamic optimizers. The dynamic application of monitoring and profiling can often result in an execution slowdown rather than a speedup, and achieving significant performance gain from dynamic optimization has proven to be quite challenging. However, current technological advances, namely multicore architectures, enable us to design new approaches to meet this challenge. Selecting traces in current dynamic optimizers is typically achieved through the use of instrumentation to collect control flow information from a running application. Using instrumentation for runtime analysis requires the trace selection algorithms to be lightweight, and this limits how sophisticated these algorithms can be. This is problematic because the quality of the traces can determine the potential benefits that can be gained from optimizing the traces. In many cases, even when using a lightweight approach, the overhead incurred is more than the benefit of the optimizations. In this paper we exploit the multicore architecture to design an aggressive trace selection approach that produces better traces and does not perturb the running application.
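
    To make the trade-off concrete, here is a hedged C sketch (not Mats itself) of the counter-based trace-head selection that instrumentation-based selectors typically perform in the application thread: a candidate head becomes hot once its execution count crosses a threshold, at which point trace recording begins. The threshold and the per-head counter table are illustrative; the point of Mats is to move exactly this bookkeeping onto a separate core.

        #include <stdio.h>

        #define HOT_THRESHOLD 50    /* executions before a head counts as hot */
        #define MAX_HEADS 1024

        /* Hypothetical per-target execution counters, as a lightweight
         * instrumentation-based selector would maintain them. */
        static unsigned long counts[MAX_HEADS];

        /* Called at each backward-branch target (a candidate trace head).
         * Returns 1 once the head is hot and trace recording should start. */
        static int profile_trace_head(unsigned head_id) {
            return ++counts[head_id] >= HOT_THRESHOLD;
        }

        int main(void) {
            /* Simulate a loop whose backward-branch target is head 7. */
            for (int iter = 0; iter < 100; iter++) {
                if (profile_trace_head(7)) {
                    printf("head 7 hot at iteration %d: begin trace\n", iter);
                    break;
                }
            }
            return 0;
        }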

    Aikido: Accelerating shared data dynamic analyses

    Get PDF
    Despite a burgeoning demand for parallel programs, the tools available to developers working on shared-memory multicore processors have lagged behind. One reason for this is the lack of hardware support for inspecting the complex behavior of these parallel programs. Inter-thread communication, which must be instrumented for many types of analyses, may occur with any memory operation. To detect such thread communication in software, many existing tools require the instrumentation of all memory operations, which leads to significant performance overheads. To reduce this overhead, some existing tools resort to random sampling of memory operations, which introduces false negatives. Unfortunately, neither of these approaches provides the speed and accuracy programmers have traditionally expected from their tools. In this work, we present Aikido, a new system and framework that enables the development of efficient and transparent analyses that operate on shared data. Aikido uses a hybrid of existing hardware features and dynamic binary rewriting to detect thread communication with low overhead. Aikido runs a custom hypervisor below the operating system, which exposes per-thread hardware protection mechanisms not available in any widely used operating system. This hybrid approach allows us to benefit from the low cost of detecting memory accesses with hardware, while maintaining the word-level accuracy of a software-only approach. To evaluate our framework, we have implemented an Aikido-enabled vector clock race detector. Our results show that the Aikido-enabled race detector outperforms existing techniques that provide similar accuracy by up to 6.0x, and by 76% on average, on the PARSEC benchmark suite. Funding: National Science Foundation (U.S.) (NSF grant CCF-0832997); National Science Foundation (U.S.) (DOE SC0005288); United States Defense Advanced Research Projects Agency (DARPA HR0011-10-9-0009).
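
    The race detector built on Aikido uses vector clocks; the following is a small self-contained C sketch of the underlying happens-before test, independent of Aikido's hypervisor machinery. Two accesses to the same location race if neither access's clock dominates the other's (the two-thread setup and the example clocks are assumptions for illustration).

        #include <stdio.h>

        #define NTHREADS 2

        /* A vector clock: one logical-time entry per thread. */
        typedef struct { unsigned long t[NTHREADS]; } vclock;

        /* Returns 1 if clock a happens-before clock b, i.e. every
         * component of a is <= the corresponding component of b. */
        static int happens_before(const vclock *a, const vclock *b) {
            for (int i = 0; i < NTHREADS; i++)
                if (a->t[i] > b->t[i])
                    return 0;
            return 1;
        }

        /* Two same-location accesses race if they are causally unordered. */
        static int is_race(const vclock *a, const vclock *b) {
            return !happens_before(a, b) && !happens_before(b, a);
        }

        int main(void) {
            vclock w = {{2, 0}};   /* write by thread 0 at its local time 2 */
            vclock r = {{0, 3}};   /* read by thread 1, never synchronized */
            printf("race: %s\n", is_race(&w, &r) ? "yes" : "no");   /* yes */
            return 0;
        }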

    Transparent dynamic instrumentation

    Get PDF
    Process virtualization provides a virtual execution environment within which an unmodified application can be monitored and controlled while it executes. The provided layer of control can be used for purposes ranging from sandboxing to compatibility to profiling. The additional operations required for this layer are performed clandestinely alongside regular program execution. Software dynamic instrumentation is one method for implementing process virtualization which dynamically instruments an application such that the application's code and the inserted code are interleaved together. DynamoRIO is a process virtualization system implemented using software code cache techniques that allows users to build customized dynamic instrumentation tools. There are many challenges to building such a runtime system. One major obstacle is transparency: in order to support executing arbitrary applications, DynamoRIO must be fully transparent, so that an application cannot distinguish between running inside the virtual environment and native execution. In addition, any desired extra operations for a particular tool must avoid interfering with the behavior of the application. Transparency has historically been provided on an ad hoc basis, as a reaction to observed problems in target applications. This paper identifies a necessary set of transparency requirements for running mainstream Windows and Linux applications. We discuss possible solutions to each transparency issue, evaluate tradeoffs between different choices, and identify cases where maintaining transparency is not practically solvable. We believe this will provide a guideline for better design and implementation of transparent dynamic instrumentation, as well as other similar process virtualization systems using software code caches.
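
    One transparency requirement this line of work makes concrete is heap transparency: the runtime must never satisfy its own allocations from the application's malloc heap, or it perturbs allocator state the application may observe. Below is a minimal C sketch of the standard remedy, a private mmap-backed bump allocator; it is illustrative only and not DynamoRIO's actual allocator.

        #include <stdio.h>
        #include <sys/mman.h>

        /* Heap-transparency sketch: the tool carves memory out of a private
         * mmap region instead of calling malloc, so the application's heap
         * (layout, fragmentation, brk position) is left untouched. */
        #define POOL_SIZE (1 << 20)
        static unsigned char *pool;
        static size_t used;

        static void *tool_alloc(size_t n) {
            if (!pool) {
                pool = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (pool == MAP_FAILED)
                    return NULL;
            }
            n = (n + 15) & ~(size_t)15;      /* 16-byte alignment */
            if (used + n > POOL_SIZE)
                return NULL;
            void *p = pool + used;
            used += n;
            return p;
        }

        int main(void) {
            char *buf = tool_alloc(64);
            if (buf) {
                /* tool-internal bookkeeping lives here, invisible to the
                 * application's own heap */
                printf("tool buffer at %p\n", (void *)buf);
            }
            return 0;
        }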