
    Identifying and analyzing pointer misuses for sophisticated memory-corruption exploit diagnosis

    Software exploits are one of the major threats to internet security. To respond quickly to these attacks, it is critical to diagnose such exploits automatically and determine how they circumvent existing defense mechanisms.

    DETECTION, DIAGNOSIS AND MITIGATION OF MALICIOUS JAVASCRIPT WITH ENRICHED JAVASCRIPT EXECUTIONS

    Malicious JavaScript has become an important attack vector for software exploitation and poses a severe threat to computer security. In particular, three major classes of problems, namely malware detection, exploit diagnosis, and exploit mitigation, bring considerable challenges to security researchers. Although substantial research effort has been devoted to these threats, existing approaches have fundamental limitations. Analysis techniques fall into two general categories: static analysis and dynamic analysis. Static analysis tends to produce inaccurate results (both false positives and false negatives) and is vulnerable to a wide range of obfuscation techniques; dynamic analysis has therefore been gaining popularity for exposing the typical features of malicious JavaScript. However, existing dynamic analysis techniques suffer from limited code coverage and incomplete environment setup, leaving a broad attack surface for evading detection. Once a zero-day exploit is captured, it is critical to quickly pinpoint the JavaScript statements that uniquely characterize the exploit and the location of the payload within it. Current diagnosis techniques are inadequate because they approach the problem either from a JavaScript perspective, failing to account for “implicit” data flow invisible at the JavaScript level, or from a binary-execution perspective, failing to present a JavaScript-level view of the exploit. Although software vendors have deployed mitigations such as ASLR and sandboxing, hacking contests (e.g., PWN2OWN, GeekPWN) have demonstrated that even the latest software (e.g., Chrome, IE, Edge, Safari) can still be exploited. An ideal JavaScript exploit mitigation solution should be flexible and deployable without requiring code changes.

    To combat malicious JavaScript, this dissertation addresses these problems through enriched executions, which explore arbitrary paths for detection, preserve JS-binary semantics for diagnosis, and perturb memory with chaff code for mitigation. First, JSForce, a forced execution engine for JavaScript, is proposed and developed to improve the detection results of current malicious JavaScript detection techniques. It drives an arbitrary JavaScript snippet to execute along different paths without any input or environment setup, and while increasing code coverage it tolerates invalid object accesses without introducing runtime errors. Second, JScalpel, a system that uses JavaScript-level context information to perform context-aware binary analysis, is presented for JavaScript exploit diagnosis. In essence, it performs JS-binary analysis to (1) generate a minimized exploit script, which in turn helps to generate a signature for the exploit, and (2) precisely locate the payload within the exploit; it then replaces the malicious payload with a benign one and generates a proof of vulnerability (PoV) for the exploit. Third, ChaffyScript, a vulnerability-agnostic mitigation system, is introduced to block JavaScript exploits by undermining the memory-preparation stage. Specifically, given suspicious JavaScript, ChaffyScript rewrites the code to insert memory-perturbation code and generates semantically equivalent output. JavaScript exploits fail because of the unexpected memory states introduced by the perturbation code, while benign JavaScript still behaves as expected, since the perturbation code does not change the original semantics.
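
    As an illustration of the tolerance idea behind forced execution, the following Python sketch mimics what a forced execution engine does when a lookup would fail: it substitutes a faked value so that execution continues along the forced path instead of aborting. This is only an analogy (JSForce itself operates on JavaScript inside an engine); FakedObject and tolerant_getattr are hypothetical names.

        # Illustrative sketch, not JSForce itself: tolerate invalid object
        # accesses by faking the missing value, so no runtime error stops
        # the forced path.

        class FakedObject:
            """Stands in for any missing value; every use yields another fake."""
            def __getattr__(self, name):
                return FakedObject()          # undefined property -> another fake
            def __call__(self, *args, **kwargs):
                return FakedObject()          # calling a fake "function" succeeds
            def __bool__(self):
                return True                   # keeps forced branches executable

        def tolerant_getattr(obj, name):
            """Attribute lookup that never raises, mirroring forced execution."""
            try:
                return getattr(obj, name)
            except AttributeError:
                return FakedObject()

        # Even a deep chain over a missing "document" runs to completion:
        doc = FakedObject()
        doc.body.appendChild(doc.createElement("div"))   # no runtime error
        tolerant_getattr({}, "missing")                  # a fake, not an exception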

    Binary Program Integrity Models for Defeating Code-Reuse Attacks

    During a cyber-attack, an adversary executes offensive maneuvers to target computer systems. In particular, an attacker often exploits a vulnerability within a program, hijacks control flow, and executes malicious code. Data Execution Prevention (DEP), a hardware-enforced security feature, prevents an attacker from directly executing injected malicious code. Attackers have therefore resorted to code-reuse attacks, wherein carefully chosen fragments of code within existing code sections of a program are executed in sequence to accomplish malicious logic. Code-reuse attacks are ubiquitous and account for the majority of attacks in the wild. On one hand, due to the wide use of closed-source software, binary-level solutions are essential. On the other hand, without access to source code and debug information, defending raw binaries is hard. A majority of defenses against code-reuse attacks enforce control-flow integrity, a program property that requires the runtime execution of a program to adhere to a statically determined control-flow graph (CFG) -- a graph that captures the intended flow of control within the program. While defenses against code-reuse attacks have focused on reducing the attack space, the lack of high-level semantics in binaries limits their precision, which in turn leaves a smaller yet still significant attack space. This dissertation presents program integrity models aimed at narrowing the attack space available for code-reuse attacks. First, we take a semantic-recovery approach to restrict the targets of indirect branches in a binary. Then, we further improve precision by recovering C++-level semantics and enforcing a strict integrity model for virtual function calls in the binary. Finally, to reduce the attack space further, we take a different perspective on defense against code-reuse attacks and introduce Stack-Pointer Integrity -- a novel integrity model targeted at ensuring the integrity of the stack pointer, as opposed to the instruction pointer. Our results show that the semantic-recovery-based approaches can significantly reduce the attack space by improving the precision of the underlying CFG: function-level semantic recovery eliminates 99.47% of inaccurate targets, whereas recovering virtual callsites and VTables at the C++ level eliminates 99.99% of inaccurate targets.
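
    To make the control-flow integrity policy concrete, here is a minimal Python model (hypothetical code; the defenses described operate on raw binaries, not Python): each indirect callsite carries a statically determined set of allowed targets, and a runtime check aborts any transfer outside that set. ALLOWED_TARGETS and checked_indirect_call are illustrative names.

        # Minimal model of CFI for indirect calls: the callsite's target must
        # belong to its statically computed set of CFG successors.

        def open_file(path): print("open", path)
        def close_file(path): print("close", path)
        def system_shell(cmd): print("shell:", cmd)      # gadget CFI must exclude

        # Statically computed edge sets: callsite id -> permitted targets.
        ALLOWED_TARGETS = {
            "callsite_1": {open_file, close_file},
        }

        def checked_indirect_call(callsite, target, *args):
            """Enforce that `target` is a legal successor of `callsite`."""
            if target not in ALLOWED_TARGETS.get(callsite, set()):
                raise RuntimeError("CFI violation: illegal indirect branch target")
            return target(*args)

        checked_indirect_call("callsite_1", open_file, "/tmp/a")     # allowed
        try:
            checked_indirect_call("callsite_1", system_shell, "sh")  # hijack attempt
        except RuntimeError as err:
            print(err)                                               # blocked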

    A self-healing framework for general software systems

    Modern systems must guarantee high reliability, availability, and efficiency. Their complexity, exacerbated by dynamic integration with other systems, the use of third-party services, and the various environments in which they run, challenges development practices, tools, and testing techniques. Testing cannot identify and remove all possible faults, so faulty conditions may escape verification and validation activities and manifest themselves only after system deployment. To cope with such failures, researchers have proposed the concept of self-healing systems: systems with the ability to examine their failures and automatically take corrective actions. The idea is to create software systems that integrate the knowledge needed to compensate for the effects of their imperfections. This knowledge is usually codified into the systems in the form of redundancy. Redundancy can be deliberately added to systems as part of the design and development process, as occurs in many fault-tolerance techniques. Although this kind of redundancy is widely applied, especially in safety-critical systems, it is generally too expensive for everyday software systems. We have evidence that modern software systems are characterized by a different type of redundancy, which is not deliberately introduced but is naturally present due to modern modular software design. We call it intrinsic redundancy. This thesis proposes a way to use the intrinsic redundancy of software systems to increase their reliability at low cost. We first study the nature of intrinsic redundancy to demonstrate that it actually exists. We then propose a way to express and encode such redundancy, and an approach, Java Automatic Workaround, to exploit it automatically at runtime to avoid system failures. Fundamentally, the Java Automatic Workaround approach replaces failing operations with alternative operations that are semantically equivalent in terms of the expected results and the developer’s intent, but whose syntactic differences can ultimately overcome the failure. We qualitatively discuss the reasons for the presence of intrinsic redundancy, and we quantitatively study four large libraries to show that such redundancy is indeed a characteristic of modern software systems. We then develop the approach into a prototype and evaluate it with four open-source applications. Our studies show that the approach effectively exploits intrinsic redundancy to avoid failures automatically at runtime.
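
    The following Python sketch illustrates the automatic-workaround idea in miniature (the actual approach targets Java programs; EQUIVALENCES, run_with_workarounds, and broken_extend are hypothetical names): intrinsic redundancy is encoded as a table of semantically equivalent variants of an operation, and when the primary operation fails, the variants are tried in turn.

        # Sketch of runtime workarounds driven by intrinsic redundancy.

        def broken_extend(seq, items):
            raise NotImplementedError("simulated fault in the primary operation")

        # Each entry pairs a primary operation with semantically equivalent
        # alternatives expected to produce the same result.
        EQUIVALENCES = {
            "add_all": [
                broken_extend,                                      # primary (faulty)
                lambda seq, items: [seq.append(x) for x in items],  # workaround 1
                lambda seq, items: seq.extend(items),               # workaround 2
            ],
        }

        def run_with_workarounds(op_name, *args):
            """Run an operation; on failure, fall back to equivalent variants."""
            last_error = None
            for variant in EQUIVALENCES[op_name]:
                try:
                    return variant(*args)
                except Exception as exc:    # a failure triggers the next variant
                    last_error = exc
            raise last_error

        data = [1, 2]
        run_with_workarounds("add_all", data, (3, 4))
        print(data)                         # [1, 2, 3, 4], despite the fault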

    TOWARDS REDESIGNING WEB BROWSERS WITH SECURITY PRINCIPLES


    Semantic discovery and reuse of business process patterns

    Patterns currently play an important role in modern information systems (IS) development, although their use has mainly been restricted to the design and implementation phases of the development lifecycle. Given the increasing significance of business modelling in IS development, patterns have the potential to provide a viable means of promoting the reuse of recurrent generalized models in the very early stages of development. As a statement of research in progress, this paper focuses on business process patterns and proposes an initial methodological framework for the discovery and reuse of business process patterns within the IS development lifecycle. The framework borrows ideas from the domain engineering literature and proposes the use of semantics to drive both the discovery of patterns and their reuse.

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing, custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed at every step. The discussion also covers language and tool support and the challenges arising from the transformation.

    Advanced Techniques for Search-Based Program Repair

    Debugging and repairing software defects costs the global economy hundreds of billions of dollars annually and accounts for as much as 50% of programmers' time. To tackle the burgeoning expense of repair, researchers have proposed novel techniques to automatically localise and repair such defects, collectively referred to as automated program repair. Despite promising early results, recent studies have demonstrated that existing automated program repair techniques are considerably less effective than previously believed. Current approaches are limited in the number and kinds of bugs they can fix, the size of patches they can produce, or the programs to which they can be applied. To become economically viable, automated program repair needs to overcome all of these limitations. Search-based repair is the only approach to program repair that can be applied to any bug or program without assuming the existence of formal specifications. Despite its generality, current search-based techniques are restricted: they are either efficient or capable of fixing multiple-line bugs---no existing technique is both. Furthermore, most techniques rely on the assumption that the material necessary to craft a repair already exists within the faulty program. By using existing code to craft repairs, the size of the search space is vastly reduced compared to generating code from scratch. However, recent results, which show that almost all repairs generated by a number of search-based techniques can be explained as deletion, lead us to question whether this assumption is valid. In this thesis, we identify the challenges facing search-based program repair and demonstrate ways of tackling them. We explore if and how knowledge of candidate patch evaluations can be used to locate the source of bugs. We use software repository mining techniques to discover the form of a better repair model capable of addressing a greater number of bugs. We conduct a theoretical and empirical analysis of existing search algorithms for repair, before demonstrating a more effective alternative inspired by greedy algorithms. To ensure reproducibility, we propose and use a methodology for conducting high-quality automated program repair research. Finally, we assess our progress towards solving the challenges of search-based program repair and reflect on the future of the field.
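
    The generate-and-validate search loop underlying such techniques can be sketched in a few lines of Python. The toy below is illustrative only (not a tool from the thesis): a program is a list of statements, candidate patches are produced by deletion edits or by replacement with donor code, and a test suite provides the fitness signal.

        # Toy search-based repair: mutate, evaluate against tests, keep the best.

        import random

        buggy = ["r = a - b"]                 # faulty program: should add
        donor = ["r = a + b", "r = a * b"]    # repair ingredients mined elsewhere

        def run(program, a, b):
            env = {"a": a, "b": b}
            for stmt in program:
                exec(stmt, {}, env)           # unsafe on untrusted input; toy only
            return env.get("r")

        TESTS = [((1, 2), 3), ((5, 5), 10)]   # (inputs, expected output)

        def fitness(program):
            return sum(run(program, *inp) == out for inp, out in TESTS)

        def mutate(program):
            patched = list(program)
            i = random.randrange(len(patched))
            if len(patched) > 1 and random.random() < 0.5:
                del patched[i]                        # deletion edit
            else:
                patched[i] = random.choice(donor)     # donor-code replacement
            return patched

        random.seed(0)
        best = buggy
        for _ in range(50):
            candidate = mutate(best)
            if fitness(candidate) > fitness(best):
                best = candidate
            if fitness(best) == len(TESTS):           # all tests pass: repaired
                break
        print(best)                                   # e.g. ['r = a + b']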

    The drivers of Corporate Social Responsibility in the supply chain. A case study.

    Purpose: The paper studies the way in which an SME integrates CSR into its corporate strategy, the practices it puts in place, and how its CSR strategies reflect on its supplier and customer relations. Methodology/Research limitations: A qualitative case study methodology is used. The use of a single case study limits the generalizing capacity of the findings. Findings: The entrepreneur’s ethical beliefs and value system play a fundamental role in shaping sustainable corporate strategy. Furthermore, the chosen competitive strategy, based on innovation, quality, and responsibility, clearly emerges both in well-defined management procedures and in supply chain relations as a whole, which aim to involve partners in the process of sustainable innovation. Originality/value: The paper presents an SME that has devised an original, innovative business model. The study pivots on the issues of innovation and eco-sustainability in the context of drivers for CSR and business ethics. These values are considered fundamental at the international level; the United Nations declared 2011 the “International Year of Forests”.

    Manager’s and citizen’s perspective of positive and negative risks for small probabilities

    So far, “risk” has mostly been defined as the expected value of a loss, mathematically P · L, where P is the probability of an adverse event and L the loss incurred as a consequence of that event. The so-called risk matrix is based on this definition. Analogously, for favorable events one usually refers to the expected gain P · G, where G is the gain incurred as a consequence of the positive event. These “measures” are routinely violated in practice. The case of insurance (on the side of losses, negative risk) and the case of lotteries (on the side of gains, positive risk) are the most obvious: in these cases a single person is willing to pay a higher price than the mathematical expected value, according to (more or less theoretically justified) measures. The higher the risk, the higher the unfair price accepted. The definition of risk as an expected value is justified in a long-term “manager’s” perspective, in which it is conceivable to distribute the effects of an adverse event over a large number of subjects or a large number of recurrences; in other words, this definition is mostly justified in frequentist terms. Moreover, according to this definition, in two extreme situations (high-probability/low-consequence and low-probability/high-consequence) the estimated risk is low. This logic runs against the principles of sustainability and continuous improvement, which should instead impose both a continuous search for lower probabilities of adverse events (higher and higher reliability) and a continuous search for lower impact of adverse events (in accordance with the fail-safe principle). In this work a different definition of risk is proposed, which stems from the idea of safeguard: (1 - Risk) = (1 - P)(1 - L). According to this definition, the risk level can be considered low only when both the probability of the adverse event and the loss are small. Such a perspective, in which the calculation of safeguard is privileged over the calculation of risk, would help avoid exposing society to catastrophic consequences, sometimes caused by wrong or oversimplified use of probabilistic models. It can therefore be seen as the citizen’s perspective on the definition of risk.
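
    A quick numeric check of this argument, using hypothetical values: the classical measure P · L scores both extreme situations low, whereas the proposed measure, obtained by solving (1 - Risk) = (1 - P)(1 - L) for Risk = P + L - P · L, scores both high.

        # Compare the classical expected-value risk with the proposed
        # safeguard-based risk on the two extreme situations from the text.

        def classical(p, loss):
            return p * loss                   # expected loss P * L

        def proposed(p, loss):
            return 1 - (1 - p) * (1 - loss)   # from (1 - Risk) = (1 - P)(1 - L)

        cases = {
            "high-probability / low-consequence": (0.9, 0.01),
            "low-probability / high-consequence": (0.001, 0.95),
        }
        for name, (p, loss) in cases.items():
            print(f"{name}: classical={classical(p, loss):.4f} "
                  f"proposed={proposed(p, loss):.4f}")
        # Both extremes score low under the classical measure but high under
        # the proposed one, which is small only when both P and L are small.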