
    Using Execution Transactions To Recover From Buffer Overflow Attacks

    We examine the problem of containing buffer overflow attacks in a safe and efficient manner. Briefly, we automatically augment source code to dynamically catch stack and heap-based buffer overflow and underflow attacks, and recover from them by allowing the program to continue execution. Our hypothesis is that we can treat each code function as a transaction that can be aborted when an attack is detected, without affecting the application's ability to correctly execute. Our approach allows us to selectively enable or disable components of this defensive mechanism in response to external events, allowing for a direct tradeoff between security and performance. We combine our defensive mechanism with a honeypot-like configuration to detect previously unknown attacks and automatically adapt an application's defensive posture at a negligible performance cost, as well as help determine a worm's signature. The main benefits of our scheme are its low impact on application performance, its ability to respond to attacks without human intervention, its capacity to handle previously unknown vulnerabilities, and the preservation of service availability. We implemented a stand-alone tool, DYBOC, which we use to instrument a number of vulnerable applications. Our performance benchmarks indicate a slow-down of 20% for Apache in full-protection mode, and 1.2% with partial protection. We validate our transactional hypothesis via two experiments: first, by applying our scheme to 17 vulnerable applications, successfully fixing 14 of them; second, by examining the behavior of Apache when each of 154 potentially vulnerable routines is made to fail, resulting in correct behavior in 139 of the 154 cases.
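
    The transactional recovery idea can be sketched in a few lines of C, assuming the instrumentation delivers SIGSEGV when a guarded buffer boundary is overwritten; the routine name and the recovery policy below are illustrative assumptions, not DYBOC's actual source transformation.

    ```c
    /* Minimal sketch of the "function as a transaction" idea: abort the
     * current function when a memory fault is detected and force an error
     * return so the program keeps running. Illustration only. */
    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>

    static sigjmp_buf txn;              /* rollback point for the current "transaction" */

    static void fault_handler(int sig) {
        (void)sig;
        siglongjmp(txn, 1);             /* abort the transaction instead of crashing */
    }

    /* Hypothetical vulnerable routine: the instrumented version returns an
     * error code when the overflow is caught, letting the caller continue. */
    static int process_request(const char *input) {
        char buf[64];
        if (sigsetjmp(txn, 1) != 0)
            return -1;                  /* transaction aborted: report failure */
        strcpy(buf, input);             /* an overflow here triggers the handler */
        return 0;
    }

    int main(void) {
        signal(SIGSEGV, fault_handler);
        if (process_request("benign input") == 0)
            puts("request handled");
        /* A malicious, oversized input would abort the transaction and the
         * surrounding server loop would keep running instead of dying. */
        return 0;
    }
    ```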

    A Network Worm Vaccine Architecture

    The ability of worms to spread at rates that effectively preclude human-directed reaction has elevated them to a first-class security threat to distributed systems. We present the first reaction mechanism that seeks to automatically patch vulnerable software. Our system employs a collection of sensors that detect and capture potential worm infection vectors. We automatically test the effects of these vectors on appropriately-instrumented sandboxed instances of the targeted application, trying to identify the exploited software weakness. Our heuristics allow us to automatically generate patches that can protect against certain classes of attack, and test the resistance of the patched application against the infection vector. We describe our system architecture, discuss the various components, and propose directions for future research.
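
    The overall reaction loop might be sketched as follows; every function name here (capture_vector, replay_in_sandbox, generate_patch, resists_vector) is a hypothetical stand-in for an architectural component, not an interface from the paper.

    ```c
    /* Illustrative control loop for the vaccine architecture: sensors feed
     * suspected infection vectors, each vector is replayed against a
     * sandboxed instrumented instance, and a candidate patch is kept only
     * if the patched application resists the vector. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { const char *payload; } vector_t;   /* captured infection vector */

    /* Stubs standing in for the architectural components. */
    static bool capture_vector(vector_t *v)          { v->payload = "exploit bytes"; return true; }
    static bool replay_in_sandbox(const vector_t *v) { (void)v; return true; }  /* true = weakness identified */
    static bool generate_patch(const vector_t *v)    { (void)v; return true; }  /* heuristic patch built */
    static bool resists_vector(const vector_t *v)    { (void)v; return true; }  /* patched app survives replay */

    int main(void) {
        vector_t v;
        while (capture_vector(&v)) {                 /* sensors deliver suspected vectors */
            if (!replay_in_sandbox(&v))
                continue;                            /* not an exploit: ignore */
            if (generate_patch(&v) && resists_vector(&v))
                puts("deploy patched binary");       /* patch verified against the vector */
            else
                puts("fall back to manual analysis");
            break;                                   /* single iteration for the sketch */
        }
        return 0;
    }
    ```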

    Automatic Error Elimination by Multi-Application Code Transfer

    We present pDNA, a system for automatically transferring correct code from donor applications into recipient applications to successfully eliminate errors in the recipient. Experimental results using six donor applications to eliminate nine errors in six recipient applications highlight the ability of pDNA to transfer code across applications to eliminate otherwise fatal integer and buffer overflow errors. Because pDNA works with binary donors with no need for source code or symbolic information, it supports a wide range of use cases. To the best of our knowledge, pDNA is the first system to eliminate software errors via the successful transfer of correct code across applications.
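
    The kind of fix such a transfer produces can be illustrated with a hypothetical recipient routine; the donor-derived overflow check shown here is an assumption for illustration, since pDNA infers the actual condition from the donor binary and re-expresses it at the corresponding point in the recipient.

    ```c
    /* Illustration of a transferred check: the recipient multiplies two
     * untrusted sizes, so the allocation can wrap and the later write runs
     * off the end of the block; the donor's overflow check, once
     * transferred, rejects wrapping sizes. */
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Recipient before transfer: width * height can overflow 32 bits. */
    void *load_image_unpatched(uint32_t width, uint32_t height) {
        unsigned char *pixels = malloc(width * height);      /* may wrap around */
        if (pixels) memset(pixels, 0, (size_t)width * height); /* writes past the allocation */
        return pixels;
    }

    /* Recipient after transfer: the donor's check guards the allocation. */
    void *load_image_patched(uint32_t width, uint32_t height) {
        if (height != 0 && width > UINT32_MAX / height)
            return NULL;                                     /* transferred overflow check */
        unsigned char *pixels = malloc(width * height);
        if (pixels) memset(pixels, 0, (size_t)width * height);
        return pixels;
    }
    ```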

    Automatic Discovery and Patching of Buffer and Integer Overflow Errors

    We present Targeted Automatic Patching (TAP), an automatic buffer and integer overflow discovery and patching system. Starting with an application and a seed input that the application processes correctly, TAP dynamically analyzes the execution of the application to locate target memory allocation sites and statements that access dynamically or statically allocated blocks of memory. It then uses targeted error-discovery techniques to automatically generate inputs that trigger integer and/or buffer overflows at the target sites. When it discovers a buffer or integer overflow error, TAP automatically matches and applies patch templates to generate patches that eliminate the error. Our experimental results show that TAP successfully discovers and patches two buffer and six integer overflow errors in six real-world applications.
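
    A sketch of what an applied patch template might look like at one target write site; the routine and the clamping template are illustrative assumptions, not TAP's generated output.

    ```c
    /* Illustration of a buffer-overflow patch template applied at a target
     * site: the copy length is bounded by the size of the destination
     * block so the discovered overflow can no longer occur. */
    #include <stddef.h>
    #include <string.h>

    #define RECORD_MAX 128

    /* Target site before patching: len comes straight from the input, so a
     * large value overflows the statically allocated destination buffer. */
    void copy_record_unpatched(char *dst /* RECORD_MAX bytes */,
                               const char *src, size_t len) {
        memcpy(dst, src, len);
    }

    /* After the template is matched and applied at this site. */
    void copy_record_patched(char *dst /* RECORD_MAX bytes */,
                             const char *src, size_t len) {
        if (len > RECORD_MAX)
            len = RECORD_MAX;       /* template: never write past the allocation */
        memcpy(dst, src, len);
    }
    ```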

    Horizontal Code Transfer via Program Fracture and Recombination

    We present a new horizontal code transfer technique, program fracture and recombination, for automatically replacing, deleting, and/or combining code from multiple applications. Benefits include automatic generation of new applications incorporating the best or most desirable functionality developed anywhere, the automatic elimination of security vulnerabilities, effective software rejuvenation, the automatic elimination of obsolete or undesirable functionality, and improved performance, simplicity, analyzability, and clarity.

    Automatic Error Elimination by Multi-Application Code Transfer

    We present Code Phage (CP), a system for automatically transferring correct code from donor applications into recipient applications to successfully eliminate errors in the recipient. Experimental results using six donor applications to eliminate nine errors in six recipient applications highlight the ability of CP to transfer code across applications to eliminate otherwise fatal integer and buffer overflow errors. Because CP works with binary donors with no need for source code or symbolic information, it supports a wide range of use cases. To the best of our knowledge, CP is the first system to eliminate software errors via the successful transfer of correct code across applications.

    Program fracture and recombination for efficient automatic code reuse

    We present a new code transfer technique, program fracture and recombination, for automatically replacing, deleting, and/or combining code from multiple applications. Benefits include automatic generation of new applications incorporating the best or most desirable functionality developed anywhere, the automatic elimination of errors and security vulnerabilities, effective software rejuvenation, the automatic elimination of obsolete or undesirable functionality, and improved performance, energy efficiency, simplicity, analyzability, and clarity. The technique may be particularly appropriate for high performance computing. The field has devoted years of effort to developing efficient (but complex) implementations of standard linear algebra operations with good numerical properties. At the same time, these operations also have very simple but inefficient implementations, often with poor numerical properties. Program fracture and recombination allows developers to work with the simple implementation during development and testing, then automatically find and deploy the most appropriate implementation for the hardware platform at hand. The benefits include reduced implementation effort, increased code clarity, and the ability to automatically search for and find efficient implementations with good numerical properties.
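
    The linear-algebra scenario can be illustrated with the kind of simple reference kernel a developer might write during development; the function name is hypothetical, and recombination would later splice in a tuned donor implementation (for example a vendor BLAS dgemm) behind the same interface.

    ```c
    /* Simple, clear, but slow reference kernel used during development and
     * testing: C = A * B for n x n row-major matrices. Illustration only. */
    #include <stddef.h>

    void matmul_reference(size_t n, const double *A, const double *B, double *C) {
        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; j < n; j++) {
                double sum = 0.0;
                for (size_t k = 0; k < n; k++)
                    sum += A[i * n + k] * B[k * n + j];
                C[i * n + j] = sum;
            }
    }

    /* After recombination, calls to matmul_reference would be redirected to
     * an optimized donor kernel with better performance and numerical
     * behavior, without changing the calling code. */
    ```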

    Managing performance vs. accuracy trade-offs with loop perforation

    Many modern computations (such as video and audio encoders, Monte Carlo simulations, and machine learning algorithms) are designed to trade off accuracy in return for increased performance. To date, such computations typically use ad-hoc, domain-specific techniques developed specifically for the computation at hand. Loop perforation provides a general technique to trade accuracy for performance by transforming loops to execute a subset of their iterations. A criticality testing phase filters out critical loops (whose perforation produces unacceptable behavior) to identify tunable loops (whose perforation produces more efficient and still acceptably accurate computations). A perforation space exploration algorithm perforates combinations of tunable loops to find Pareto-optimal perforation policies. Our results indicate that, for a range of applications, this approach typically delivers performance increases of over a factor of two (and up to a factor of seven) while changing the result that the application produces by less than 10%.
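
    A minimal sketch of the transformation on one tunable loop, assuming a perforation rate of one half; the loop body is illustrative, and in practice the compiler applies the transformation and checks the resulting accuracy loss automatically.

    ```c
    /* Loop perforation illustrated on an averaging loop: the perforated
     * version executes only a subset of the iterations. */
    #include <stddef.h>

    /* Original (exact) loop: averages over all samples. */
    double mean_exact(const double *x, size_t n) {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++)
            sum += x[i];
        return n ? sum / n : 0.0;
    }

    /* Perforated loop: skip every other iteration (perforation rate 1/2),
     * roughly halving the work while approximating the same result. */
    double mean_perforated(const double *x, size_t n) {
        double sum = 0.0;
        size_t used = 0;
        for (size_t i = 0; i < n; i += 2) {   /* stride 2 = perforated */
            sum += x[i];
            used++;
        }
        return used ? sum / used : 0.0;
    }
    ```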

    Using Code Perforation to Improve Performance, Reduce Energy Consumption, and Respond to Failures

    Many modern computations (such as video and audio encoders, Monte Carlo simulations, and machine learning algorithms) are designed to trade off accuracy in return for increased performance. To date, such computations typically use ad-hoc, domain-specific techniques developed specifically for the computation at hand. We present a new general technique, code perforation, for automatically augmenting existing computations with the capability of trading off accuracy in return for performance. In contrast to existing approaches, which typically require the manual development of new algorithms, our implemented SpeedPress compiler can automatically apply code perforation to existing computations with no developer intervention whatsoever. The result is a transformed computation that can respond almost immediately to a range of increased performance demands while keeping any resulting output distortion within acceptable user-defined bounds. We have used SpeedPress to automatically apply code perforation to applications from the PARSEC benchmark suite. The results show that the transformed applications can run as much as two to three times faster than the original applications while distorting the output by less than 10%. Because the transformed applications can operate successfully at many points in the performance/accuracy tradeoff space, they can (dynamically and on demand) navigate the tradeoff space to either maximize performance subject to a given accuracy constraint, or maximize accuracy subject to a given performance constraint. We also demonstrate the SpeedGuard runtime system, which uses code perforation to enable applications to automatically adapt to challenging execution environments such as multicore machines that suffer core failures or machines that dynamically adjust the clock speed to reduce power consumption or to protect the machine from overheating.
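
    A hedged sketch of the kind of runtime adaptation SpeedGuard performs: when the measured time per work item misses a performance target (for example after a core failure or a clock-speed reduction), the perforation knob is raised, and when there is slack it is lowered to regain accuracy; the names and the control policy below are illustrative assumptions, not SpeedGuard's implementation.

    ```c
    /* Illustrative runtime controller that adjusts a perforation knob to
     * meet a per-item performance target within a user-chosen accuracy bound. */
    #include <stddef.h>

    typedef struct {
        double target_seconds;   /* desired time per work item */
        int    skip_factor;      /* 1 = run every iteration, k = run 1 in k */
        int    max_skip;         /* accuracy bound chosen by the user */
    } perforation_ctl;

    /* Adjust the perforation knob after each timed work item. */
    void adapt_perforation(perforation_ctl *ctl, double measured_seconds) {
        if (measured_seconds > ctl->target_seconds && ctl->skip_factor < ctl->max_skip)
            ctl->skip_factor++;          /* too slow: trade more accuracy for speed */
        else if (measured_seconds < 0.8 * ctl->target_seconds && ctl->skip_factor > 1)
            ctl->skip_factor--;          /* ample slack: restore accuracy */
    }

    /* A perforated loop would then execute iterations i where
     * i % ctl->skip_factor == 0. */
    ```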