
    Enhancing dynamic symbolic execution via loop summarisation, segmented memory and pending constraints

    Software has become ubiquitous and its impact is still increasing. The more software is created, the more bugs are introduced into it. With software's increasing omnipresence, these bugs have a high probability of negatively affecting everyday life. There are many efforts aimed at improving software correctness, among which is symbolic execution, a program analysis technique that aims to systematically explore all program paths. In this thesis we present three techniques for enhancing symbolic execution. We first present a counterexample-guided inductive synthesis approach to summarise a class of loops, called memoryless loops, using standard library functions. Our approach can summarise two thirds of the memoryless loops we gathered from a set of open-source programs. These loop summaries can be used to: 1) enhance symbolic execution, 2) optimise native code, and 3) refactor code. We then propose a technique that avoids expensive forking by using a segmented memory model. In this model, we split memory into segments using pointer alias analysis, so that each symbolic pointer refers to objects in a single segment, which reduces the forking caused by symbolic pointer dereferences. We evaluate our segmented memory model on benchmarks such as SQLite, m4 and make, and observe significant decreases in execution time and memory usage. Finally, we present pending constraints, which can enhance the scalability of symbolic execution by aggressively prioritising execution paths that are already known to be feasible, either via cached solver solutions or seeds. The execution of other paths is deferred until no paths are known to be feasible without using the constraint solver. We evaluate our technique on nine applications, including SQLite3, make and tcpdump, and show it can achieve higher coverage for both seeded and non-seeded exploration.
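    To make the first contribution concrete, the following is a minimal sketch (ours, not the thesis's code) of a memoryless loop and the standard-library summary a counterexample-guided synthesiser might produce for it; the function names are illustrative:

```c
#include <stdio.h>
#include <string.h>

/* A "memoryless" scanning loop: each iteration inspects one input
 * element and keeps no state beyond the index. On a symbolic string,
 * such a loop forces one symbolic-execution fork per character. */
size_t scan_loop(const char *s) {
    size_t i = 0;
    while (s[i] != '\0')
        i++;
    return i;
}

/* The library-call summary a synthesiser could produce: semantically
 * equivalent, but handled by the engine as a single operation. */
size_t scan_summary(const char *s) {
    return strlen(s);
}

int main(void) {
    printf("%zu == %zu\n", scan_loop("hello"), scan_summary("hello"));
    return 0;
}
```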

    Concolic Execution Tailored for Hybrid Fuzzing

    Recently, hybrid fuzzing, which combines fuzzing and concolic execution, has been highlighted as a way to overcome the limitations of both techniques. Despite its success on contrived programs such as those of the DARPA Cyber Grand Challenge (CGC), it still falls short in finding bugs in real-world software due to the low performance of existing concolic executors. To address this issue, this dissertation proposes and demonstrates concolic execution tailored for hybrid fuzzing with two systems: QSYM and Hybridra. First, we present QSYM, a binary-only concolic executor tailored for hybrid fuzzing. It significantly improves on the performance of conventional concolic executors by removing redundant symbolic emulation of a binary. Moreover, to efficiently produce test cases for fuzzing, even at the cost of soundness, QSYM introduces two key techniques: optimistic solving and basic block pruning. As a result, QSYM outperforms state-of-the-art fuzzers and, more importantly, found 13 new bugs in eight real-world programs, including file, ffmpeg, and OpenJPEG. Extending the key idea of QSYM, we discuss Hybridra, a new concolic executor for file systems. To apply hybrid fuzzing to file systems, which are gigantic and convoluted, Hybridra employs compilation-based concolic execution, leveraging the availability of source code to boost concolic execution. Moreover, Hybridra introduces a new technique called staged reduction, which combines existing heuristics to efficiently generate test cases for file systems. Consequently, Hybridra outperforms Hydra, a state-of-the-art file system fuzzer, by achieving higher code coverage, and successfully discovered four new bugs in btrfs, which has been heavily tested by other fuzzers.
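    The following toy sketch illustrates the idea behind optimistic solving, using an invented interval "solver" in place of a real SMT solver; it is our illustration of the concept, not QSYM's code:

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy model for illustration: a "constraint" bounds one input byte.
 * Real concolic executors use an SMT solver; all names are invented. */
typedef struct { int lo, hi; } Constraint;

/* Solve a conjunction of interval constraints over a single byte. */
static bool solve(const Constraint *cs, int n, int *out) {
    int lo = 0, hi = 255;
    for (int i = 0; i < n; i++) {
        if (cs[i].lo > lo) lo = cs[i].lo;
        if (cs[i].hi < hi) hi = cs[i].hi;
    }
    if (lo > hi) return false;
    *out = lo;
    return true;
}

/* Optimistic solving: if the full path condition is unsatisfiable
 * (e.g., because emulation over-constrained it), retry with only the
 * last constraint, the branch we want to flip. The result may not
 * follow the intended path, but the fuzzer validates it cheaply by
 * simply executing it. */
static bool optimistic_solve(const Constraint *path, int n, int *out) {
    if (solve(path, n, out))
        return true;                     /* sound solution found */
    return solve(&path[n - 1], 1, out);  /* optimistic fallback */
}

int main(void) {
    Constraint path[] = { {10, 20}, {200, 255} };  /* contradictory */
    int x;
    if (optimistic_solve(path, 2, &x))
        printf("candidate input byte: %d\n", x);   /* prints 200 */
    return 0;
}
```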

    CACEE: Context Aware Concolic Execution Engine for Malware Analysis

    An emerging pattern in malware is the use of public web services for command and control (C&C) infrastructure. This new trend, combined with the short lifespan of malware in the wild, makes extracting behaviors from malware in an automated fashion a difficult problem. The Context-Aware Concolic Execution Engine (CACEE) is a tool designed to recreate the original execution context, forcing Windows 32-bit malware to execute their payloads as if they were still operational. CACEE monitors the flow of data as the payload executes, and uses this information to synthesize the behaviors the malware exhibits. Three malware case studies that abuse public web services are analyzed with CACEE, and the results are compared against manual reverse engineering.
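    As a rough illustration of the kind of data-flow tracking such behavior synthesis relies on, here is a toy taint-propagation sketch; all types, names and sinks are invented for the example and do not reflect CACEE's implementation:

```c
#include <stdio.h>

/* Toy data-flow (taint) tracking: values derived from the recreated
 * C&C input are marked tainted, and a sink reached with tainted data
 * is reported as an observed behavior. */
typedef struct { int value; int tainted; } Tracked;

static Tracked from_c2(int v)        { return (Tracked){ v, 1 }; }
static Tracked add(Tracked a, int k) { return (Tracked){ a.value + k, a.tainted }; }

static void sink(const char *name, Tracked t) {
    if (t.tainted)
        printf("behavior: %s reached with C&C-derived data (%d)\n",
               name, t.value);
}

int main(void) {
    Tracked cmd = from_c2(42);        /* data from the recreated C&C */
    sink("CreateFileA", add(cmd, 8)); /* taint propagates to the sink */
    return 0;
}
```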

    Hybrid Differential Software Testing

    Differential software testing is important for software quality assurance, as it aims to automatically generate test inputs that reveal behavioral differences in software. The concrete analysis procedure depends on the targeted result: differential testing can reveal divergences between two execution paths (1) of different program versions or (2) within the same program. The first analysis type executes different program versions with the same input, while the second executes the same program with different inputs. Detecting regression bugs in software evolution, analyzing side-channels in programs, maximizing the execution cost of a program over multiple executions, and evaluating the robustness of neural networks are all instances of differential software analysis whose goal is to generate diverging executions of program paths. The key challenge of differential software testing is to reason simultaneously about multiple program paths, often across program variants, in an efficient way. Existing work in differential testing is often not (specifically) directed at revealing differing behavior, or is limited to a subset of the search space. This PhD thesis proposes the concept of Hybrid Differential Software Testing (HyDiff) as a hybrid analysis technique to generate difference-revealing inputs. HyDiff consists of two components that operate in a parallel setup: (1) a search-based technique that inexpensively generates inputs and (2) a systematic exploration technique that also exercises deeper program behaviors. HyDiff's search-based component uses differential fuzzing directed by differential heuristics. HyDiff's systematic exploration component is based on differential dynamic symbolic execution, which can incorporate concrete inputs into its analysis. HyDiff is evaluated experimentally with applications specific to differential testing. The results show that HyDiff is effective in all considered categories and outperforms its components in isolation.
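    A toy sketch of the differential-fuzzing heuristic at the heart of HyDiff's search-based component is shown below; the two "versions" and the scoring are invented for illustration:

```c
#include <stdio.h>

/* Two hypothetical program versions under differential test. */
static int version_a(int x) { return x > 100 ? x / 2 : x + 1; }
static int version_b(int x) { return x > 100 ? x / 2 : x + 2; }  /* regression */

/* Differential heuristic in the spirit of HyDiff's fuzzing component:
 * an input is interesting when the two versions diverge. A real
 * implementation would also score coverage and cost differences. */
static int divergence(int input) {
    return version_a(input) != version_b(input);
}

int main(void) {
    for (int i = 0; i < 200; i++)    /* stand-in for mutated inputs */
        if (divergence(i))
            printf("difference-revealing input: %d\n", i);
    return 0;
}
```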

    Improving Scalability of Symbolic Execution for Software with Complex Environment Interfaces

    Manual software testing is laborious and prone to human error. Yet, among practitioners, it is the most popular method for quality assurance. Automating test case generation promises better effectiveness, especially for exposing corner-case bugs. Symbolic execution stands out as an automated testing technique that has no false positives, eventually enumerates all feasible program executions, and can prioritize executions of interest. However, path explosion, the fact that the number of program executions is typically at least exponential in the size of the program, hinders the applicability of symbolic execution in the real world, where software commonly reaches millions of lines of code. In practice, large systems can be efficiently executed symbolically by exploiting their modularity and symbolically executing the different parts of the system separately. However, a component typically depends on its environment to perform its task, so a symbolic execution engine needs to provide an environment interface that is efficient while maintaining accuracy and completeness. This conundrum is known as the environment problem. Systematically addressing the environment problem is challenging, as its instantiation depends on the nature of the environment and its interface. This thesis addresses two instances of the environment problem in symbolic execution, which lie at opposite ends of the spectrum of interface stability: (1) system software interacting with an operating system with stable and well-documented semantics (e.g., POSIX), and (2) high-level programs written in dynamic languages, such as Python, Ruby, or JavaScript, whose semantics and interfaces are continuously evolving. To address the environment problem for stable operating system interfaces, this thesis introduces the idea of splitting an operating system model into a core set of primitives built into the engine at host level and, on top of it, the full operating system interface emulated inside the guest. As few as two primitives are sufficient to support an interface as complex as POSIX: threads with synchronization, and address spaces with shared memory. We prototyped this idea in the Cloud9 symbolic execution platform. Cloud9's accurate and efficient POSIX model exposes hard-to-reproduce bugs in systems such as UNIX utilities, web servers, and distributed systems. Cloud9 is available at http://cloud9.epfl.ch. For programs written in high-level interpreted languages, this thesis introduces the idea of using the language interpreter as an "executable language specification". The interpreter runs inside a low-level (e.g., x86) symbolic execution engine while it executes the target program; the aggregate system acts as a high-level symbolic execution engine for the program. To manage the complexity of symbolically executing the entire interpreter, this thesis introduces Class-Uniform Path Analysis (CUPA), an algorithm for prioritizing paths that groups them into equivalence classes according to a coverage goal. We built a prototype of these ideas in the form of Chef, a symbolic execution platform for interpreted languages that generates up to 1000 times more tests for popular Python and Lua packages compared to a plain execution of the interpreters. Chef is available at http://dslab.epfl.ch/proj/chef/
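    The following toy sketch illustrates the class-uniform selection idea behind CUPA, with invented types: paths are bucketed by a coverage goal, a class is picked uniformly at random, and only then a path within it, so a class with many paths cannot starve a class with few:

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy sketch of class-uniform path selection. In Chef, classes could
 * group low-level interpreter paths by the high-level program
 * location they cover; here classes are just arrays of path IDs. */
typedef struct { int id; } Path;
typedef struct { Path *paths; int count; } Class;

static const Path *cupa_select(const Class *classes, int nclasses) {
    int c;
    do {                                  /* pick a non-empty class */
        c = rand() % nclasses;            /* uniformly at random */
    } while (classes[c].count == 0);
    return &classes[c].paths[rand() % classes[c].count];
}

int main(void) {
    Path a[] = {{1},{2},{3},{4},{5}};     /* many paths, one class */
    Path b[] = {{6}};                     /* one path, another class */
    Class cs[] = { {a, 5}, {b, 1} };
    /* Path 6 is 1/6 of all paths but is selected ~50% of the time. */
    for (int i = 0; i < 4; i++)
        printf("selected path %d\n", cupa_select(cs, 2)->id);
    return 0;
}
```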

    Optimisation of LHCb Applications for Multi- and Manycore Job Submission

    Nowadays, the Worldwide LHC Computing Grid consists mainly of multi- and manycore processors. This thesis investigates how such resources can be used more efficiently, using the LHCb experiment as an example. It analyses how to improve software in terms of memory requirements and concurrency. The research involves the implementation of a moldable job scheduler and a supervised learning algorithm that helps to better predict LHCb workloads
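    As a hedged illustration of the moldable-scheduling idea (a moldable job's core count is chosen once, at start time), the sketch below picks the largest core allocation that still meets an efficiency threshold under a simple Amdahl-style model; all parameters are invented, not taken from the thesis:

```c
#include <stdio.h>

/* Amdahl-style speedup for a job with a given serial fraction. */
static double speedup(double serial_frac, int cores) {
    return 1.0 / (serial_frac + (1.0 - serial_frac) / cores);
}

int main(void) {
    const double serial_frac = 0.1;     /* assumed 10% serial part */
    const double min_efficiency = 0.7;  /* speedup per core */
    int best = 1;
    for (int c = 1; c <= 16; c++)
        if (speedup(serial_frac, c) / c >= min_efficiency)
            best = c;                   /* largest count still efficient */
    printf("allocate %d cores (speedup %.2f)\n",
           best, speedup(serial_frac, best));
    return 0;
}
```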

    Efficient runtime systems for speculative parallelization

    Manual parallelization is time consuming and error-prone. Automatic parallelization, on the other hand, is often unable to extract substantial parallelism. Using speculation, however, most of the available parallelism can be exploited even for complex programs. Speculatively parallelized programs always need a runtime system during execution in order to validate the speculative assumptions and to ensure correct semantics in the case of misspeculation. These runtime systems should influence the execution time of the parallel program as little as possible. In this thesis, we investigate to what extent state-of-the-art systems that track memory accesses explicitly in software fulfill this requirement, and we describe and implement changes that improve their performance substantially. We also design two new systems that use the virtual memory abstraction to track memory changes implicitly, thus causing less overhead during execution. One of the new systems is integrated into the Linux kernel as a kernel module, providing the best possible performance. Furthermore, it provides stronger soundness guarantees than any state-of-the-art system by also capturing system calls, hence including, for example, file I/O in the speculative isolation. In a number of benchmarks we show the performance improvements of our virtual-memory-based systems over the state of the art. All our extensions and newly developed speculation systems are available as open source
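    The core mechanism of the virtual-memory-based systems can be illustrated with a minimal Linux-only sketch (ours, not the thesis's implementation): speculative data is write-protected, and the first write to a page is observed in a fault handler before the page is unprotected and the access retried:

```c
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Sketch only: error handling omitted, 4 KiB pages assumed, and a
 * real runtime would log the page for conflict checking/rollback. */
static char *page;

static void on_fault(int sig, siginfo_t *si, void *ctx) {
    (void)sig; (void)ctx;
    write(2, "tracked write\n", 14);             /* async-signal-safe */
    mprotect((void *)((long)si->si_addr & ~0xfffL),
             4096, PROT_READ | PROT_WRITE);      /* allow the access */
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_flags = SA_SIGINFO;
    sa.sa_sigaction = on_fault;
    sigaction(SIGSEGV, &sa, NULL);

    page = mmap(NULL, 4096, PROT_READ,           /* write-protected */
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    page[0] = 'x';                               /* faults exactly once */
    printf("page[0] = %c\n", page[0]);
    return 0;
}
```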

    Investigating tools and techniques for improving software performance on multiprocessor computer systems

    The availability of modern commodity multicore processors and multiprocessor computer systems has resulted in the widespread adoption of parallel computers in a variety of environments, ranging from the home to workstation and server environments in particular. Unfortunately, parallel programming is harder and requires more expertise than the traditional sequential programming model. The variety of tools and parallel programming models available to the programmer further complicates the issue. The primary goal of this research was to identify and describe a selection of parallel programming tools and techniques to aid novice parallel programmers in the process of developing efficient parallel C/C++ programs for the Linux platform. This was achieved by highlighting and describing the key concepts and hardware factors that affect parallel programming, providing a brief survey of commonly available software development tools and parallel programming models and libraries, and presenting structured approaches to software performance tuning and parallel programming. Finally, the performance of several parallel programming models and libraries was investigated, along with the programming effort required to implement solutions using the respective models. A quantitative research methodology was applied to the investigation of the performance and programming effort associated with the selected parallel programming models and libraries, which included automatic parallelisation by the compiler, Boost Threads, Cilk Plus, OpenMP, POSIX threads (Pthreads), and Threading Building Blocks (TBB). Additionally, the performance of the GNU C/C++ and Intel C/C++ compilers was examined. The results revealed that the choice of parallel programming model or library depends on the type of problem being solved and that there is no overall best choice for all classes of problem. However, the results also indicate that parallel programming models with higher levels of abstraction require less programming effort and provide similar performance compared to explicit threading models. The principal conclusion was that problem analysis and parallel design are important factors in the selection of the parallel programming model and tools, but that models with higher levels of abstraction, such as OpenMP and Threading Building Blocks, are favoured
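    The abstraction gap measured in the study can be seen in how little code OpenMP needs to parallelise a loop that would require explicit thread creation, work partitioning and joining under Pthreads; a minimal example:

```c
#include <omp.h>
#include <stdio.h>

/* One directive parallelises the loop and handles the reduction.
 * Compile with: gcc -fopenmp example.c */
int main(void) {
    const int n = 1000000;
    double sum = 0.0;

    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= n; i++)
        sum += 1.0 / i;               /* partial harmonic series */

    printf("sum = %f\n", sum);
    return 0;
}
```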