
    Enabling Program Analysis Through Deterministic Replay and Optimistic Hybrid Analysis

    As software continues to evolve, software systems increase in complexity. With software systems composed of many distinct but interacting components, today's system programmers, users, and administrators find themselves requiring automated ways to find, understand, and handle system misbehavior. Recent information breaches, such as the Equifax breach of 2017 and the Heartbleed vulnerability of 2014, show the need to understand and debug prior states of computer systems. In this thesis I focus on enabling practical entire-system retroactive analysis, allowing programmers, users, and system administrators to diagnose and understand the impact of these devastating mishaps. I focus primarily on two techniques. First, I discuss a novel deterministic record and replay system which enables fast, practical recollection of the state of entire computer systems. Second, I discuss optimistic hybrid analysis, a novel optimization method capable of dramatically accelerating retroactive program analysis.

    Record and replay systems greatly aid in solving a variety of problems, such as fault tolerance, forensic analysis, and information provenance. These solutions, however, assume ubiquitous recording of any application which may have a problem. Current record and replay systems are forced to trade off between disk space and replay speed, and this trade-off has historically made it impractical to both record and replay large histories of system-level computation. I present Arnold, a novel record and replay system which efficiently records years of computation on a commodity hard drive and can efficiently replay any recorded information. Arnold combines caching with a unique process-group granularity of recording to produce recordings that are both small and quickly recalled. My experiments show that under a desktop workload, Arnold could store 4 years of computation on a commodity 4TB hard drive.

    Dynamic analysis is used to retroactively identify and address many forms of system misbehavior, including programming errors, data races, private information leakage, and memory errors. Unfortunately, the runtime overhead of dynamic analysis has precluded its adoption in many instances. I present a new dynamic analysis methodology called optimistic hybrid analysis (OHA). OHA uses knowledge of the past to predict program behaviors in the future. These predictions, or likely invariants, are speculatively assumed true by a static analysis, yielding a static analysis that can be far more accurate than its traditional counterpart. This predicated static analysis is then speculatively used to optimize a final dynamic analysis, creating a far more efficient dynamic analysis than otherwise possible. I demonstrate the effectiveness of OHA by creating an optimistic hybrid backward slicer, OptSlice, and an optimistic data-race detector, OptFT. OptSlice and OptFT are just as accurate as their traditional hybrid counterparts, but run on average 8.3x and 1.6x faster, respectively. In this thesis I demonstrate that Arnold's ability to record and replay entire computer systems, combined with optimistic hybrid analysis's ability to quickly analyze prior computation, enables a practical and useful entire-system retroactive analysis that was previously unrealized.

    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/144052/1/ddevec_1.pd
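    To make the OHA recipe concrete, the sketch below models the speculate-then-fall-back loop in miniature: invariants observed in every profiled run are assumed by a (stubbed) static analysis, which elides dynamic checks, while a guard catches violated speculation and reruns the conservative analysis. All names and the string-based representation are illustrative assumptions, not the thesis's actual interfaces.

```python
# Minimal sketch of the OHA speculate-then-fall-back loop. Invariants,
# runs, and checks are plain strings here; all names are illustrative.

def likely_invariants(profiled_runs):
    """'Knowledge of the past': invariants that held in every profiled run."""
    runs = [set(r) for r in profiled_runs]
    return set.intersection(*runs) if runs else set()

def predicated_static_analysis(invariants):
    """Stand-in for a static analysis that, assuming the likely invariants,
    proves some dynamic checks redundant (here: one check per invariant)."""
    return {"check:" + inv for inv in invariants}

class InvariantViolated(Exception):
    pass

def dynamic_analysis(run, all_checks, elided, guarded):
    """Replay with cheap instrumentation: skip elided checks, but guard
    every speculated invariant so a violation is caught."""
    for inv in guarded:
        if inv not in run:                      # speculation was wrong
            raise InvariantViolated(inv)
    return f"performed {len(all_checks - elided)}/{len(all_checks)} checks"

def oha(run, profiled_runs, all_checks):
    invs = likely_invariants(profiled_runs)
    try:
        return dynamic_analysis(run, all_checks,
                                predicated_static_analysis(invs), invs)
    except InvariantViolated:
        # Speculation failed: rerun the conservative (full) analysis.
        return dynamic_analysis(run, all_checks, set(), set())

# A run where the profiled invariant still holds elides one of two checks:
print(oha({"p!=null"}, [{"p!=null", "x<len"}, {"p!=null"}],
          {"check:p!=null", "check:x<len"}))    # performed 1/2 checks
```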

    Improving Software Reliability for Event-Driven Mobile Systems

    Mobile platforms commonly support an event-driven model of concurrent programming. In an event-driven system, the flow of a program is controlled by asynchronous events. Events processed sequentially in the same thread can be logically concurrent with each other, as they may not be ordered by any programmer-specified ordering operations. The lack of programmer-defined order between multiple non-commutative concurrent events (that is, events for which only certain execution orders yield correct results) leads to a unique class of concurrency bugs in event-driven programs. Unfortunately, the state of the art for detecting concurrency errors in event-driven systems is significantly weaker than that in traditional thread-based systems. This thesis aims to fill this important gap by developing models, algorithms, and tools that help programmers analyze and diagnose event-driven programs to improve software reliability. Specifically, this thesis presents the following three techniques to detect concurrency errors in event-driven programs:

    1. A new causality model for event-driven programs that infers ordering invariants between events across different executions.
    2. An efficient and scalable single-pass algorithm to identify concurrent asynchronous events that may lead to concurrency errors.
    3. A statistical commutativity analysis to find likely non-commutative events that contain concurrency bugs.

    The techniques we have developed are broadly applicable to many event-driven platforms. To translate our techniques into real-world impact, we develop a set of tools in the context of Android to help build a more robust and reliable platform for event-driven computing.

    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/138565/1/chhsiao_1.pd
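    As a rough illustration of what "logically concurrent" means here, the sketch below flags pairs of events that are unordered under a given causality relation and touch overlapping state. The dissertation's algorithm achieves this in a single efficient pass; this quadratic version, with invented input shapes, only demonstrates the property being checked.

```python
# Flags pairs of asynchronous events that are unordered by the causality
# relation and access overlapping state. Input shapes are invented; the
# dissertation's single-pass algorithm avoids this quadratic pairing.

from itertools import combinations

def ordered_before(order, a, b):
    """Is event a ordered before event b under the causality edges?"""
    stack, seen = [a], set()
    while stack:
        n = stack.pop()
        if n == b:
            return True
        if n not in seen:
            seen.add(n)
            stack.extend(order.get(n, ()))
    return False

def concurrent_conflicts(events, order, accesses):
    """events: ids in dispatch order; order: id -> ids it causally precedes;
    accesses: id -> set of locations the event's handler touches."""
    for a, b in combinations(events, 2):
        if not ordered_before(order, a, b) and not ordered_before(order, b, a):
            shared = accesses[a] & accesses[b]
            if shared:          # unordered and conflicting: candidate bug
                yield a, b, shared

# e1 posts e2 (ordered); e3 has no causal link to either:
events = ["e1", "e2", "e3"]
order = {"e1": ["e2"]}
accesses = {"e1": {"x"}, "e2": {"y"}, "e3": {"x"}}
print(list(concurrent_conflicts(events, order, accesses)))
# [('e1', 'e3', {'x'})]
```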

    Faster and More Precise Pointer Analysis Algorithms for Automatic Bug Detection

    Pointer analysis is a fundamental technique with numerous applications, such as value-flow analysis and bug detection, and it is a prerequisite of many compiler optimizations. However, despite decades of research, the scalability and precision of pointer analysis remain open problems. In this dissertation, I describe my research effort to apply pointer analysis to detect vulnerabilities in software and, more importantly, to design and implement a faster and more precise pointer analysis algorithm. I present my work on improving both the precision and the performance of inclusion-based pointer analysis. I propose two fundamental algorithms, origin-sensitive pointer analysis and the partial update solver (PUS), and show their practicality by building two tools, O2 and XRust, on top of them. Origin-sensitive pointer analysis unifies two widely used concurrent programming models, events and threads, and analyzes data sharing (which is essential for static data race detection) with thread/event spawning sites as the context. PUS, a new solving algorithm for inclusion-based pointer analysis, advances the state of the art by operating on a small subgraph of the entire points-to constraint graph at each iteration while still guaranteeing correctness. Our experimental results show that PUS is 2x faster in solving context-insensitive points-to constraints and 7x faster in solving context-sensitive constraints. Meanwhile, O2, the tool backed by origin-sensitive pointer analysis, detected many previously unknown data races in real-world applications including Linux, Redis, and memcached; XRust can also isolate memory errors in unsafe Rust from safe Rust, utilizing data sharing information computed by pointer analysis, with negligible overhead.
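    For context, the loop below is a textbook Andersen-style (inclusion-based) worklist solver restricted to address-of and copy constraints; load/store constraints, which add edges during solving, are omitted for brevity. PUS improves on exactly this kind of loop by re-solving only a small changed subgraph per iteration, so nothing here is PUS's actual implementation.

```python
# Textbook Andersen-style worklist solver over address-of and copy
# constraints only (loads/stores, which add edges mid-solve, are omitted).

from collections import defaultdict

def solve(addr_of, copies):
    """addr_of: (p, o) pairs for p = &o; copies: (dst, src) for dst = src."""
    pts = defaultdict(set)            # variable -> points-to set
    succ = defaultdict(set)           # inclusion edges: pts(src) <= pts(dst)
    for dst, src in copies:
        succ[src].add(dst)
    work = []
    for p, o in addr_of:
        pts[p].add(o)
        work.append(p)
    while work:
        n = work.pop()
        for m in succ[n]:             # propagate along inclusion edges
            if not pts[n] <= pts[m]:
                pts[m] |= pts[n]
                work.append(m)        # m changed: revisit its successors
    return pts

pts = solve(addr_of=[("p", "o1"), ("q", "o2")],
            copies=[("r", "p"), ("r", "q"), ("s", "r")])
print(sorted(pts["s"]))               # ['o1', 'o2']
```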

    Active Learning of Points-To Specifications

    When analyzing programs, large libraries pose significant challenges to static points-to analysis. A popular solution is to have a human analyst provide points-to specifications that summarize relevant behaviors of library code, which can substantially improve precision and handle missing code such as native code. We propose ATLAS, a tool that automatically infers points-to specifications. ATLAS synthesizes unit tests that exercise the library code, and then infers points-to specifications based on observations from these executions. ATLAS automatically infers specifications for the Java standard library, and on a benchmark of 46 Android apps it produces better results for a client static information flow analysis than existing handwritten specifications.
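    A hypothetical miniature of the test-then-observe idea: call a library method with distinguishable marker objects and record which arguments can flow to the return value. ATLAS synthesizes the unit tests automatically and targets Java; this Python stand-in hard-codes both the test and the reachability check.

```python
# Hypothetical test-then-observe loop: call a method with marker objects
# and record which arguments can flow to the return value. ATLAS both
# synthesizes the tests and targets Java; this stand-in hard-codes both.

def infer_spec(method, arity):
    markers = [object() for _ in range(arity)]
    result = method(*markers)
    flows = [i for i, m in enumerate(markers)
             if result is m
             or (isinstance(result, (list, tuple, set)) and m in result)]
    return {"method": method.__name__, "ret_may_alias_args": flows}

# Stand-in for a library method whose return aliases its first argument:
def head(x, xs):
    return x

print(infer_spec(head, 2))   # {'method': 'head', 'ret_may_alias_args': [0]}
```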

    Decompose and Conquer: Addressing Evasive Errors in Systems on Chip

    Modern computer chips comprise many components, including microprocessor cores, memory modules, on-chip networks, and accelerators. Such system-on-chip (SoC) designs are deployed in a variety of computing devices: from internet-of-things, to smartphones, to personal computers, to data centers. In this dissertation, we discuss evasive errors in SoC designs and how these errors can be addressed efficiently. In particular, we focus on two types of errors: design bugs and permanent faults. Design bugs originate from the limited amount of time allowed for design verification and validation; thus, they are often found in functional features that are rarely activated. Complete functional verification, which could eliminate design bugs, is extremely time-consuming and thus impractical in modern complex SoC designs. Permanent faults are caused by failures of fragile transistors in nano-scale semiconductor manufacturing processes: weak transistors may wear out unexpectedly within the lifespan of the design. Hardware structures that reduce the occurrence of permanent faults incur significant silicon area or performance overheads, so they are infeasible for most cost-sensitive SoC designs.

    To tackle and overcome these evasive errors efficiently, we propose to leverage the principle of decomposition to lower the complexity of the software analysis or the hardware structures involved. To this end, we present several decomposition techniques, specific to major SoC components. We first focus on microprocessor cores, presenting a lightweight bug-masking analysis that decomposes a program into individual instructions to identify whether a design bug would be masked by the program's execution. We then move to memory subsystems: there, we offer an efficient memory consistency testing framework to detect buggy memory-ordering behaviors, which decomposes the memory-ordering graph into small components based on incremental differences. We also propose a microarchitectural patching solution for memory subsystem bugs, which augments each core node with small distributed programmable logic instead of a global patching module. In the context of on-chip networks, we propose two routing reconfiguration algorithms that bypass faulty network resources: the first computes short-term routes in a distributed fashion, localized to the fault region; the second decomposes application-aware routing computation into simple routing rules so as to quickly find deadlock-free, application-optimized routes in a fault-ridden network. Finally, we consider general accelerator modules in SoC designs. When a system includes many accelerators, the many interactions among them must be verified to catch buggy behavior. To this end, we decompose such inter-module communication into basic interaction elements, which can be reassembled into new, interesting tests.

    Overall, we show that the decomposition of complex software algorithms and hardware structures can significantly reduce overheads: up to three orders of magnitude in the bug-masking analysis and the application-aware routing, approximately 50 times in the routing reconfiguration latency, and 5 times on average in the memory-ordering graph checking. These overhead reductions come with losses in error coverage: 23% undetected bug-masking incidents, 39% non-patchable memory bugs, and occasionally overlooked rare patterns of multiple faults. In this dissertation, we discuss these ideas and their trade-offs, and present future research directions.

    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/147637/1/doowon_1.pd
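    One way to picture the incremental memory-ordering check described above: a consistency violation shows up as a cycle in the memory-ordering graph, so instead of re-checking the whole graph after each test, the cycle search can be restricted to the component touched by the newly changed edges. The sketch below uses invented data structures and names; it is not the dissertation's framework.

```python
# Illustrative incremental consistency check: a buggy memory ordering
# shows up as a cycle, so search only the component around changed edges.

def component_of(graph, seeds):
    """Nodes connected (ignoring edge direction) to the changed edges."""
    undirected = {}
    for u, vs in graph.items():
        for v in vs:
            undirected.setdefault(u, set()).add(v)
            undirected.setdefault(v, set()).add(u)
    comp, stack = set(), list(seeds)
    while stack:
        n = stack.pop()
        if n not in comp:
            comp.add(n)
            stack.extend(undirected.get(n, ()))
    return comp

def has_cycle(graph, nodes):
    """DFS cycle check restricted to the given component."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = dict.fromkeys(nodes, WHITE)

    def dfs(n):
        color[n] = GRAY
        for m in graph.get(n, ()):
            if m in color and (color[m] == GRAY
                               or (color[m] == WHITE and dfs(m))):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in nodes)

def check_after(graph, changed_edges):
    seeds = {n for edge in changed_edges for n in edge}
    return has_cycle(graph, component_of(graph, seeds))

# Adding the edge st2 -> ld1 closes a cycle with the existing ld1 -> st2:
graph = {"ld1": {"st2"}, "st2": {"ld1"}, "ld9": {"st9"}}
print(check_after(graph, [("st2", "ld1")]))   # True
```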

    Formalization and Detection of Host-Based Code Injection Attacks in the Context of Malware

    A Host-Based Code Injection Attack (HBCIA) is a technique that malicious software utilizes in order to avoid detection or steal sensitive information. In a nutshell, it is a local attack in which code is injected across process boundaries and executed in the context of a victim process. Malware employs HBCIAs on several operating systems, including Windows, Linux, and macOS. This thesis investigates the topic of HBCIAs in the context of malware. First, we conduct basic research on this topic. We formalize HBCIAs in the context of malware and show in several measurements, among other things, the high prevalence of HBCIA-utilizing malware. Second, we present Bee Master, a platform-independent approach to dynamically detect HBCIAs. This approach applies the honeypot paradigm to operating system processes: Bee Master deploys fake processes as honeypots, which are attacked by malicious software. We show that Bee Master reliably detects HBCIAs on Windows and Linux. Third, we present Quincy, a machine learning-based system to detect HBCIAs in post-mortem memory dumps. It utilizes up to 38 features, including memory region sparseness, memory region protection, and the occurrence of HBCIA-related strings. We evaluate Quincy against two contemporary detection systems, Malfind and Hollowfind, and show that Quincy outperforms them both, increasing detection performance by more than eight percent.
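    As a hedged sketch of the Quincy-style pipeline, the snippet below computes three of the described feature families for a memory region (sparseness, protection, HBCIA-related strings) in a form that could feed any off-the-shelf classifier. The exact feature definitions and the suspicious-string list are assumptions, not Quincy's real feature set of up to 38 features.

```python
# Sketch of Quincy-style per-region features (sparseness, protection,
# HBCIA-related strings). Feature definitions and the string list are
# assumptions; Quincy computes up to 38 features on real memory dumps.

SUSPICIOUS = [b"LoadLibrary", b"GetProcAddress", b"VirtualAlloc"]

def region_features(data, executable, writable):
    return [
        data.count(0) / max(len(data), 1),        # sparseness (zero ratio)
        int(executable and writable),             # writable+executable region
        sum(data.count(s) for s in SUSPICIOUS),   # injection-related strings
    ]

region = b"\x00" * 900 + b"...GetProcAddress...VirtualAlloc..." + b"\x00" * 80
print(region_features(region, executable=True, writable=True))
# [~0.97, 1, 2] -> rows like this would feed a learned classifier
# (e.g. sklearn's RandomForestClassifier), given labeled memory dumps.
```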

    Computer Aided Verification

    This open access two-volume set, LNCS 10980 and 10981, constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking; program analysis using polyhedra; synthesis; learning; runtime verification; hybrid and timed systems; tools; probabilistic systems; static analysis; theory and security; SAT, SMT, and decision procedures; concurrency; and CPS, hardware, and industrial applications.

    User-centered Program Analysis Tools

    The research and industrial communities have made great strides in developing advanced software defect detection tools based on program analysis. Most of the work in this area has focused on developing novel program analysis algorithms to find bugs more efficiently or accurately, or to find more sophisticated kinds of bugs. However, the focus on algorithms often leads to tools that are complex and difficult to actually use to debug programs. We believe that we can design better, more useful program analysis tools by taking a user-centered approach. In this dissertation, we present three possible elements of such an approach.

    First, we improve the user interface by designing Path Projection, a toolkit for visualizing program paths, such as call stacks, that are commonly used to explain errors. We evaluated Path Projection in a user study and found that programmers were able to verify error reports more quickly with similar accuracy, and strongly preferred Path Projection to a standard code viewer.

    Second, we make it easier for programmers to combine different algorithms to customize the precision or efficiency of a tool for their target programs. We designed Mix, a framework that allows programmers to apply either type checking, which is fast but imprecise, or symbolic execution, which is precise but slow, to different parts of their programs. Mix keeps its design simple by making no modifications to the constituent analyses. Instead, programmers use Mix annotations to mark blocks of code that should be type checked or symbolically executed, and Mix automatically combines the results. We evaluated the effectiveness of Mix by implementing a prototype called Mixy for C and using it to check for null pointer errors in vsftpd.

    Finally, we integrate program analysis more directly into the debugging process. We designed Expositor, an interactive dynamic program analysis and debugging environment built on top of scripting and time-travel debugging. In Expositor, programmers write program analyses as scripts that analyze entire program executions, using list-like operations such as map and filter to manipulate execution traces. For efficiency, Expositor uses lazy data structures throughout its implementation to compute results on demand, enabling a more interactive user experience. We developed a prototype of Expositor using GDB and UndoDB, and used it to debug a stack overflow and to unravel a subtle data race in Firefox.
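    The trace-scripting idea lends itself to a small sketch: model the execution as a lazy stream and compose filter-style operations so only the demanded prefix is ever computed. This uses Python generators as a stand-in for Expositor's actual GDB/UndoDB-backed lazy lists.

```python
# Generator-based stand-in for the lazy trace scripting described above;
# Expositor's real implementation sits atop GDB/UndoDB, not Python lists.

def trace(snapshots):
    """Lazily yield (time, state) pairs, as if from a time-travel debugger."""
    for t, state in enumerate(snapshots):
        yield t, state

def tfilter(pred, tr):
    """A filter over a trace that still computes states only on demand."""
    return ((t, s) for t, s in tr if pred(s))

# Find the first point where stack depth explodes, without materializing
# the rest of the (possibly enormous) execution:
snapshots = [{"depth": d} for d in (1, 2, 5, 9000, 3)]
when, state = next(tfilter(lambda s: s["depth"] > 8000, trace(snapshots)))
print(when, state)   # 3 {'depth': 9000}
```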