
    Software Testing and Metrics for Concurrent Computation

    Verification and validation are two important technologies for assuring the reliability and quality of software, and software testing and metrics are two approaches to carrying them out. In sequential computation, a fairly mature process exists, with various methodologies and tools available for building and demonstrating the correctness of the program under test. The emergence of concurrent computation in recent years, however, introduces new testing problems and difficulties that cannot be solved by traditional sequential program testing techniques. Many concurrent program testing methodologies have been proposed to address controlled execution and determinism, but there has been little discussion of concurrent software testing from the inter-task viewpoint, even though the common characteristics of concurrent programming are the explicit identification of large-grain parallel computation units (tasks) and explicit inter-task communication via a rendezvous-style mechanism. In this paper, we focus on testing concurrent programs through task decomposition. We propose four testing criteria for testing a concurrent program; a programmer can choose an appropriate testing strategy depending on the properties of the concurrent program. Associated with these strategies, four equations are provided to measure the complexity of concurrent programs. (International conference, 4 December 1996, Seoul, Korea)
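    The four criteria and complexity equations themselves are not reproduced in the abstract. As a rough, purely illustrative sketch of the inter-task viewpoint described above (the task names and the coverage measure below are hypothetical, not the paper's criteria), one could enumerate the statically possible rendezvous pairs and measure which of them a test run exercises:

# Toy illustration of inter-task (rendezvous) coverage; not the paper's criteria.
possible_rendezvous = {("producer", "consumer"),   # statically possible
                       ("producer", "logger"),     # caller/acceptor pairs
                       ("consumer", "logger")}     # (hypothetical tasks)

def rendezvous_coverage(observed_pairs):
    """Fraction of statically possible caller/acceptor pairs seen in a test run."""
    exercised = possible_rendezvous & set(observed_pairs)
    return len(exercised) / len(possible_rendezvous)

# One test execution might log these caller/acceptor interactions:
trace = [("producer", "consumer"), ("producer", "consumer"), ("consumer", "logger")]
print(f"rendezvous coverage: {rendezvous_coverage(trace):.2f}")   # 0.67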

    Selective Dynamic Analysis of Virtualized Whole-System Guest Environments

    Dynamic binary analysis is a prevalent and indispensable technique in program analysis. While several dynamic binary analysis tools and frameworks have been proposed, all suffer from one or more of: prohibitive performance degradation, a semantic gap between the analysis code and the execution under analysis, architecture/OS specificity, being user-mode only, and a lack of flexibility and extensibility. This dissertation describes the design of the Dynamic Executable Code Analysis Framework (DECAF), a virtual machine-based, multi-target, whole-system dynamic binary analysis framework. In short, DECAF seeks to address the shortcomings of existing whole-system dynamic analysis tools and extend the state of the art by utilizing a combination of novel techniques to provide rich analysis functionality without crippling amounts of execution overhead. DECAF extends the mature QEMU whole-system emulator, a type-2 hypervisor capable of emulating every instruction that executes within a complete guest system environment. DECAF provides a novel, hardware event-based method of just-in-time virtual machine introspection (VMI) to address the semantic gap problem. It also implements a novel instruction-level taint tracking engine at bit-level granularity, ensuring that taint propagation is sound and highly precise throughout the guest environment. A formal analysis of the taint propagation rules is provided to verify that most instructions introduce neither false positives nor false negatives. DECAF's design also provides a plugin architecture with a simple-to-use, event-driven programming interface that makes it both flexible and extensible for a variety of analysis tasks. The implementation of DECAF consists of 9550 lines of C++ code and 10270 lines of C code. Its performance is evaluated using the SPEC CPU2006 benchmarks, which show an average overhead of 605% for system-wide tainting and 12% for VMI. Three platform-neutral DECAF plugins - Instruction Tracer, Keylogger Detector, and API Tracer - are described and evaluated in this dissertation to demonstrate the ease of use and effectiveness of DECAF in writing cross-platform and system-wide analysis tools. This dissertation also presents the Virtual Device Fuzzer (VDF), a scalable fuzz testing framework for discovering bugs within the virtual devices implemented as part of QEMU. Such bugs could be used by malicious software executing within a guest under analysis by DECAF, so the discovery, reproduction, and diagnosis of such bugs help to protect DECAF against attack while improving QEMU and any analysis platforms built upon QEMU. VDF uses selective instrumentation to perform targeted fuzz testing, which explores only the branches of execution belonging to virtual devices under analysis. By leveraging record and replay of memory-mapped I/O activity, VDF quickly cycles virtual devices through an arbitrarily large number of states without requiring a guest OS to be booted or present. Once a test case is discovered that triggers a bug, VDF reduces the test case to the minimum number of reads/writes required to trigger the bug and generates source code suitable for reproducing the bug during debugging and analysis. VDF is evaluated by fuzz testing eighteen QEMU virtual devices, generating 1014 crash or hang test cases that reveal bugs in six of the tested devices. Over 80% of the crashes and hangs were discovered within the first day of testing. VDF covered an average of 62.32% of virtual device branches during testing, and the average test case was minimized to a reproduction test case only 18.57% of its original size.
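    DECAF's actual bit-level propagation rules are not given in the abstract; the following is only a generic sketch of what bit-granular taint propagation for two operations can look like (the 8-bit width and rule shapes are assumptions, not DECAF's implementation):

# Generic sketch of bit-level taint propagation (not DECAF's engine).
# Each value is paired with a taint mask; a result bit is marked tainted only if
# some tainted input bit can influence it, which keeps propagation precise.
WIDTH = 8
MASK = (1 << WIDTH) - 1

def taint_xor(ta, tb):
    # XOR: every input bit always influences the corresponding output bit.
    return (ta | tb) & MASK

def taint_and(a, ta, b, tb):
    # AND: a tainted bit of one operand influences the output bit only if the
    # other operand's bit is 1 (or is itself tainted, hence unknown).
    return ((ta & (b | tb)) | (tb & (a | ta))) & MASK

# Example: only the low nibble of `a` is tainted, and `b` masks it with 0x0C,
# so just bits 2-3 of the AND result remain tainted.
a, ta, b, tb = 0x5A, 0x0F, 0x0C, 0x00
print(hex(taint_and(a, ta, b, tb)))   # 0xc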

    Doctor of Philosophy

    A modern software system is a composition of parts that are themselves highly complex: operating systems, middleware, libraries, servers, and so on. In principle, compositionality of interfaces means that we can understand any given module independently of the internal workings of other parts. In practice, however, abstractions are leaky, and with every generation, modern software systems grow in complexity. Traditional ways of understanding failures, explaining anomalous executions, and analyzing performance are reaching their limits in the face of emergent behavior, unrepeatability, cross-component execution, software aging, and adversarial changes to the system at run time. Deterministic systems analysis has the potential to change the way we analyze and debug software systems. Recorded once, the execution of the system becomes an independent artifact, which can be analyzed offline. The availability of the complete system state, the guaranteed behavior of re-execution, and the absence of limitations on the run-time complexity of analysis collectively enable the deep, iterative, and automatic exploration of the dynamic properties of the system. This work creates a foundation for making deterministic replay a ubiquitous system analysis tool. It defines design and engineering principles for building fast and practical replay machines capable of capturing complete execution of the entire operating system with an overhead of a few percent, on a realistic workload, and with minimal installation costs. To provide an intuitive interface for constructing replay analysis tools, this work implements a powerful virtual machine introspection layer that enables an analysis algorithm to be programmed against the state of the recorded system in terms of familiar source-level variable and type names. To support performance analysis, the replay engine provides a faithful performance model of the original execution during replay.
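    As a minimal sketch of the kind of source-level introspection interface described above (not the dissertation's actual layer; the symbol table, variable name, and snapshot API here are hypothetical), an analysis could resolve a variable name to an address via debug information and decode its bytes from the recorded memory image:

# Toy illustration: resolving a source-level variable name against a recorded
# memory snapshot, assuming a hypothetical symbol table (name -> (address, size))
# extracted from debug information.
import struct

symbols = {"jiffies": (0xFFFF888000A1C000, 8)}    # hypothetical address/size

def read_variable(snapshot, name):
    """Look up a variable by source-level name and decode it from the snapshot."""
    addr, size = symbols[name]
    raw = snapshot.read(addr, size)               # snapshot API is assumed
    return int.from_bytes(raw, "little")

class DictSnapshot:
    """Toy snapshot backed by an in-memory dict of address -> bytes."""
    def __init__(self, mem):
        self.mem = mem
    def read(self, addr, size):
        return self.mem[addr][:size]

snap = DictSnapshot({0xFFFF888000A1C000: struct.pack("<Q", 4295000000)})
print(read_variable(snap, "jiffies"))             # 4295000000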

    Transparent and Precise Malware Analysis Using Virtualization: From Theory to Practice

    Dynamic analysis is an important technique used in malware analysis and is complementary to static analysis. Thus far, virtualization has been widely adopted for building fine-grained dynamic analysis tools, and this trend is expected to continue. Unlike user/kernel-space malware analysis platforms that essentially co-exist with malware, virtualization-based platforms benefit from isolation and fine-grained instrumentation support. Isolation makes it more difficult for malware samples to disrupt analysis, and fine-grained instrumentation provides analysts with low-level details, such as those at the machine instruction level. This in turn supports the development of advanced analysis tools such as dynamic taint analysis and symbolic execution for automatic path exploration. The major disadvantage of virtualization-based malware analysis is the loss of semantic information, also known as the semantic gap problem. To put it differently, since analysis takes place at the virtual machine monitor, where only the raw system state (e.g., CPU and memory) is visible, higher-level constructs such as processes and files must be reconstructed from the low-level information. The collection of techniques used to bridge semantic gaps is known as Virtual Machine Introspection. Virtualization-based analysis platforms can be further separated into emulation and hardware virtualization. Emulators have the advantages of flexibility in analysis tool development and efficiency for fine-grained analysis; however, emulators suffer from the transparency problem. That is, malware can employ methods to determine whether it is executing in an emulated environment versus real hardware and cease operations to disrupt analysis if the machine is emulated. In brief, emulation-based dynamic analysis has advantages over user/kernel-space and hardware virtualization-based techniques, but it suffers from the semantic gap and transparency problems. These problems have been exacerbated by recent discoveries of anti-emulation malware that detects emulators, and of Android malware that spans two semantic gaps, Java and native. It is also foreseeable that malware authors will have a similar response to taint analysis. In other words, once taint analysis becomes widely used to understand how malware operates, authors will create new malware that attacks the imprecisions in taint analysis implementations and induces false positives and false negatives in an effort to frustrate analysts. This dissertation addresses these problems by presenting concepts, methods, and techniques that can be used to transparently and precisely analyze both desktop and mobile malware using virtualization. This is achieved in three parts. First, precise heterogeneous record and replay is presented as a means to help emulators benefit from the transparency characteristics of hardware virtualization. This technique is implemented in a tool called V2E that uses KVM for recording and TEMU for replaying and analysis. It was successfully used to analyze real-world anti-emulation malware that evaded analysis using TEMU alone. Second, the design of an emulation-based Android malware analysis platform that uses virtual machine introspection to bridge both the Java and native-level semantic gaps, as well as seamlessly bind the two views together into a single view, is presented. The core introspection and instrumentation techniques were implemented in a new analysis platform called DroidScope that is based on the Android emulator. It was successfully used to analyze two real-world Android malware samples that have cooperating Java and native-level components. Taint analysis was also used to study their information exfiltration behaviors. Third, formal methods for studying the sources of false positives and false negatives in dynamic taint analysis designs, and for verifying the correctness of manually defined taint propagation rules, are presented. These definitions and methods were successfully used to analyze and compare previously published taint analysis platforms in terms of false positives and false negatives.
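    The formal definitions themselves are not included in the abstract. As a small illustration of the general idea of checking a manually defined propagation rule (the candidate rule and the 3-bit width below are assumptions, not the dissertation's method), one can exhaustively test that flipping tainted input bits never changes an output bit the rule leaves untainted, i.e. that the rule has no false negatives:

# Exhaustive false-negative check for a candidate bitwise-AND taint rule
# (illustrative only; a 3-bit width keeps the enumeration tiny).
from itertools import product

WIDTH = 3
MASK = (1 << WIDTH) - 1

def and_taint_rule(a, ta, b, tb):
    # Candidate rule: an output bit is tainted if a tainted bit of one operand
    # can influence it, i.e. the other operand's bit is 1 or itself tainted.
    return ((ta & (b | tb)) | (tb & (a | ta))) & MASK

def has_false_negative():
    vals = range(1 << WIDTH)
    for a, b, ta, tb in product(vals, repeat=4):
        t_out = and_taint_rule(a, ta, b, tb)
        for da, db in product(vals, repeat=2):
            a2, b2 = a ^ (da & ta), b ^ (db & tb)   # perturb only tainted bits
            changed = (a & b) ^ (a2 & b2)
            if changed & ~t_out & MASK:             # untainted output bit changed
                return True
    return False

print("false negatives found:", has_false_negative())   # expect False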

    Concurrent Checkpointing for Embedded Real-Time Systems

    The Internet of Things ecosystem has spawned a wide variety of embedded real-time systems that complicate the identification and resolution of software bugs. Concurrent checkpointing provides a means to monitor application state, with the ability to replay the execution on like hardware and software, without holding off or delaying the execution of application threads. In this thesis, this is accomplished by monitoring the application's physical memory using a soft-dirty page tracker and by measuring the various types of overhead incurred when employing concurrent checkpointing. The solution presented is an advancement of Checkpoint/Restore In Userspace (CRIU) that eliminates the large stalls and parasitic operation of each successive checkpoint. Impact and performance are measured using the PARSEC 3.0 benchmark suite and the 4.11.12-rt16+ Linux kernel on a MinnowBoard Turbot quad-core board. (Master's thesis, Computer Engineering)
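    For readers unfamiliar with the mechanism, the sketch below shows the Linux soft-dirty interface the thesis builds on (a generic illustration, not the thesis's tracker): writing "4" to /proc/<pid>/clear_refs resets the soft-dirty bits, the application runs, and a scan of /proc/<pid>/pagemap then reveals the pages written since the reset, marked by bit 55 of each 8-byte entry.

# Minimal sketch of Linux soft-dirty page tracking (requires root privileges).
import struct

PAGE_SIZE = 4096
SOFT_DIRTY = 1 << 55          # bit 55 of a pagemap entry is the soft-dirty flag

def clear_soft_dirty(pid):
    with open(f"/proc/{pid}/clear_refs", "w") as f:
        f.write("4")          # "4" resets the soft-dirty bits for all pages

def dirty_pages(pid, start_addr, num_pages):
    """Return virtual addresses of pages dirtied since the last clear."""
    dirty = []
    with open(f"/proc/{pid}/pagemap", "rb") as pm:
        pm.seek((start_addr // PAGE_SIZE) * 8)    # one 8-byte entry per page
        for i in range(num_pages):
            (entry,) = struct.unpack("<Q", pm.read(8))
            if entry & SOFT_DIRTY:
                dirty.append(start_addr + i * PAGE_SIZE)
    return dirty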

    A Trace/Replay Method Based on a Host/Target Architecture

    The inherent nondeterminism of real-time embedded systems makes their execution non-reproducible, so faults encountered during debugging and testing may be impossible to reproduce. This paper proposes a trace/replay method based on a host/target architecture to address the non-reproducibility of real-time embedded system execution. The method instruments probes to trace information such as task scheduling, inter-task communication and synchronization, and I/O operations, and automatically saves the execution information on the host side; a task control module then forces the tasks in the system to execute in their original order, so that the execution of the real-time embedded system is replayed correctly. The method has been implemented on an ML505 development board running the uC/OS-II operating system and has been successfully applied to an IC image-capture system. Experimental analysis shows that the method can trace and replay the execution of a real-time embedded system with small time and space overhead.
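    As a language-level illustration of the replay idea described above (the event names and API are hypothetical; the paper's implementation targets uC/OS-II on an ML505 board, not Python), a task-control module can force traced events to occur in their recorded order during replay:

# Toy sketch of order-enforced replay: each instrumented task calls wait_turn()
# at its traced events, and is blocked until the recorded log says it is its turn.
import threading

class ReplayController:
    def __init__(self, recorded_order):
        self.order = list(recorded_order)   # e.g. ["taskA:send", "taskB:recv", ...]
        self.idx = 0
        self.cv = threading.Condition()

    def wait_turn(self, event):
        """Called from a probe before each traced operation; blocks until the
        recorded log reaches this event, reproducing the original ordering."""
        with self.cv:
            while self.order[self.idx] != event:
                self.cv.wait()
            self.idx += 1
            self.cv.notify_all()

# During replay, a task would call controller.wait_turn("taskA:send") just
# before performing its send, and similarly for every other traced event.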

    Real-TV: RTP/L3's visual monitor

    Today many real-world activities are monitored and controlled by some type of microprocessor-based system, with growing emphasis on manufacturing and control through computers. The inherent nature of these real-world applications makes them difficult to control and monitor: they must be built to meet strict timing constraints and must be predictable. We need simple, reliable, and cost-effective methods for building, evaluating, monitoring, and controlling these complex systems, and such tools should provide the user with system insight. At the New Jersey Institute of Technology Real-Time Computing Laboratory, we are building a system that will provide these tools in a complete real-time environment. This paper gives the reader a brief overview of the system currently under construction and a detailed look at the component called Real-TV. Real-TV is a software component that will be used as a monitoring tool to provide users with system insight; processes and other system statistics occurring within the simulated, predictable real-time software/hardware environment can be tracked and monitored.

    Towards Real-Time, On-Board, Hardware-Supported Sensor and Software Health Management for Unmanned Aerial Systems

    For unmanned aerial systems (UAS) to be successfully deployed and integrated within the national airspace, it is imperative that they possess the capability to effectively complete their missions without compromising the safety of other aircraft, as well as persons and property on the ground. This necessity creates a natural requirement for UAS that can respond to uncertain environmental conditions and emergent failures in real time, with robustness and resilience comparable to those of manned systems. We introduce a system that meets this requirement with the design of a real-time onboard system health management (SHM) capability to continuously monitor sensors, software, and hardware components. This system can detect and diagnose failures and violations of safety or performance rules during the flight of a UAS. Our approach to SHM is three-pronged, providing: (1) real-time monitoring of sensor and software signals; (2) signal analysis, preprocessing, and advanced on-the-fly temporal and Bayesian probabilistic fault diagnosis; and (3) an unobtrusive, lightweight, read-only, low-power realization using Field Programmable Gate Arrays (FPGAs) that avoids overburdening limited computing resources or costly re-certification of flight software. We call this approach rt-R2U2, a name derived from its requirements. Our implementation provides a novel approach that combines modular building blocks, integrating responsive runtime monitoring of temporal-logic safety requirements with model-based diagnosis and Bayesian-network-based probabilistic analysis. We demonstrate this approach using actual flight data from the NASA Swift UAS.
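    The FPGA realization is the core of rt-R2U2 and is not reproducible in a few lines; as a software-only sketch of the kind of bounded past-time temporal rule such a monitor evaluates (the signal names, window length, and threshold below are hypothetical), a sliding window suffices:

# Toy stream monitor for a bounded past-time rule: "in the last N samples, the
# barometric and GPS altitude readings never disagreed by more than MAX_DIFF".
from collections import deque

class BoundedHistorically:
    """Outputs True at each step iff the predicate held at every one of the
    last n observed steps (a bounded 'historically' operator)."""
    def __init__(self, n):
        self.window = deque(maxlen=n)

    def step(self, predicate_holds: bool) -> bool:
        self.window.append(predicate_holds)
        return all(self.window)

MAX_DIFF = 15.0                       # hypothetical threshold in meters
monitor = BoundedHistorically(n=10)   # hypothetical window length

def on_sample(baro_alt, gps_alt):
    ok = monitor.step(abs(baro_alt - gps_alt) <= MAX_DIFF)
    if not ok:
        print("safety rule violated within the last 10 samples")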