5 research outputs found

    Profiling, extracting, and analyzing dynamic software metrics

    This thesis presents a methodology for the analysis of software executions aimed at profiling software, extracting dynamic software metrics, and then analyzing those metrics with the goal of assisting software quality researchers. The methodology is implemented in a toolkit, DynaMEAT, which consists of an event-based profiler that collects more accurate data than existing profilers, and a program called MetricView that derives and extracts dynamic metrics from the generated profiles. The toolkit was designed to be modular and flexible, allowing analysts and developers to easily extend its functionality to derive new or custom dynamic software metrics. We demonstrate the effectiveness and usefulness of DynaMEAT by applying it to several open-source projects of varying sizes.
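
    The abstract describes a two-stage architecture: an event-based profiler produces a profile, and a separate tool derives metrics from it. As a rough illustration of that separation, the sketch below records call events and derives one dynamic metric from the stream; the class names, event format, and the dynamic-coupling metric are our assumptions, not DynaMEAT's actual interfaces.

```python
# Illustrative sketch only: the abstract does not publish DynaMEAT's API, so
# the event format, class names, and the example metric are assumptions.
from collections import defaultdict

class EventProfiler:
    """Records (event, caller, callee) tuples as the program runs."""
    def __init__(self):
        self.events = []

    def on_call(self, caller_class, callee_class):
        self.events.append(("call", caller_class, callee_class))

class MetricExtractor:
    """Derives a dynamic metric from a recorded event stream."""
    def dynamic_coupling(self, events):
        # Dynamic coupling: for each class, the number of distinct classes
        # it actually invoked at runtime (as opposed to statically could).
        coupled = defaultdict(set)
        for kind, caller, callee in events:
            if kind == "call" and caller != callee:
                coupled[caller].add(callee)
        return {cls: len(targets) for cls, targets in coupled.items()}

profiler = EventProfiler()
profiler.on_call("OrderService", "Database")
profiler.on_call("OrderService", "Logger")
print(MetricExtractor().dynamic_coupling(profiler.events))  # {'OrderService': 2}
```

    The point of the split is that a new or custom metric can be added as one more extractor function over the same event stream, which matches the abstract's emphasis on modularity.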

    Reducing Instrumentation Overhead when Reverse-Engineering Object Interactions

    Reverse-engineering object interactions from source code can be done through static, dynamic, or hybrid (static plus dynamic) analyses. In the latter two, monitoring a program and collecting runtime information translates into some overhead during program execution. Depending on the type of application, the imposed overhead can reduce the precision and accuracy of the reverse-engineered object interactions (the larger the overhead, the less precise or accurate the reverse-engineered interactions), to such an extent that the reverse-engineered interactions may not be correct, especially when reverse-engineering a multithreaded software system. One therefore seeks an instrumentation strategy that is as unintrusive as possible. In our past work, we showed that a hybrid approach is one step towards such a solution, compared to a purely dynamic approach, and that there is room for improvement. In this paper, we uncover, in a systematic way, other aspects of the dynamic analysis that can be improved to further reduce runtime overhead, and study alternative solutions. Our experiments show effective overhead reduction thanks to a modified procedure to collect runtime information.
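
    The abstract does not spell out its probe design, but the kind of change it studies, altering how runtime information is collected so each probe does less inline work, can be illustrated generically. The buffering scheme, names, and record layout below are hypothetical; they show one common way to keep probes cheap in a multithreaded program, not the authors' procedure.

```python
# Hypothetical illustration: keep each probe to a minimal append of a compact
# record, and defer reconstruction of object interactions to an offline phase.
import threading

_events = []                      # (thread id, method id, receiver id) records
_lock = threading.Lock()

def probe(method_id, receiver_id):
    # Inline work is a single locked append; no string formatting, no I/O.
    with _lock:
        _events.append((threading.get_ident(), method_id, receiver_id))

def reconstruct(method_names):
    # Offline: resolve numeric ids to names using the static (hybrid) model.
    with _lock:
        return [(tid, method_names[mid], rid) for tid, mid, rid in _events]

probe(method_id=1, receiver_id=42)
probe(method_id=2, receiver_id=42)
print(reconstruct({1: "Order.place", 2: "Order.confirm"}))
```

    Recording the thread id with each event is what lets a later analysis keep interleaved interactions from different threads separate, which matters for the multithreaded case the paper highlights.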

    Analyzing the Combined Effects of Measurement Error and Perturbation Error on Performance Measurement

    Dynamic performance analysis of executing programs commonly relies on statistical profiling techniques to provide performance measurement results. When a program execution is sampled, we learn something about the examined program, but we also change, to some extent, the program's interaction with the underlying system and thus its behavior. The amount we learn diminishes (statistically) with each sample taken, while the change we effect with the intrusive sampling risks growing larger. Effectively sampling programs is challenging largely because of the opposing effects of the decreasing sampling error and the increasing perturbation error. Achieving the highest overall level of confidence in measurement results requires striking an appropriate balance between the tensions inherent in these two types of errors. Despite the popularity of statistical profiling, published material typically explains only in general, qualitative terms the motivation for the sampling rates used. Given the importance of sampling, we argue in favor of the general principle of deliberate sample size selection, and we have developed and tested a technique for doing so. We present our idea of sample rate selection based on abstract and mathematical performance measurement models we developed that incorporate the effect of sampling on both measurement accuracy and perturbation. Our mathematical model predicts the sample size at which the combination of the residual measurement error and the accumulating perturbation error is minimized. Our evaluation of the model with simulation, calibration programs, and selected programs from the SPEC CPU 2006 and SPEC OMP 2001 benchmark suites indicates that this idea has promise. Our results show that the predicted sample size is generally close to the best sampling rate and effectively avoids bad choices. Most importantly, adaptive sample rate selection is shown to perform better than a single selected rate in most cases.
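
    The trade-off the abstract describes can be made concrete with a toy model. This is our illustration of the error structure, not the paper's published equations: assume sampling error shrinks with the number of samples n while perturbation error accumulates linearly, with constants a and b standing in for the program-specific calibration the authors perform.

```latex
% Toy model, not the paper's actual formulas: total error as a function of
% the number of samples n, with a, b calibrated per program and system.
\[
  E(n) \;=\; \underbrace{\frac{a}{\sqrt{n}}}_{\text{sampling error}}
        \;+\; \underbrace{b\,n}_{\text{perturbation error}}, \qquad
  \frac{dE}{dn} \;=\; -\frac{a}{2\,n^{3/2}} + b \;=\; 0
  \;\Longrightarrow\;
  n^{*} = \left(\frac{a}{2b}\right)^{2/3}.
\]
```

    Whatever the model's exact form, the shape is the same: one term falls and one rises in n, so a unique interior minimum n* exists, which is what makes deliberate (and adaptive) sample size selection possible in the first place.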

    Parallel, Cross-Platform Unit Testing for Real-Time Embedded Systems

    Embedded systems are used in a wide variety of applications (e.g., automotive, agricultural, home security, industrial, medical, military, and aerospace) due to their small size, low energy consumption, and the ability to control real-time peripheral devices precisely. These systems, however, differ from each other in many aspects: processors, memory size, development environments/OSs, hardware interfaces, and software loading methods. Unit testing is a fundamental part of software development and the lowest level of software testing, as it tests individual or groups of functions, methods, and classes to increase confidence that the developed software satisfies both software specifications and user requirements. Although hundreds of unit testing frameworks exist, none of them address the diverse properties of real-time embedded platforms. This inspired us to introduce XEUnit, a cross-platform unit testing framework for real-time embedded systems. XEUnit provides scalability by supporting parallel execution on multiple embedded platforms simultaneously. To address the time constraints in real-time embedded systems, we evaluate the impact of runtime overhead from traditional instrumentation through a case study of time-sensitive algorithms. Then, we introduce iterative instrumentation, a code coverage technique without runtime overhead, along with a case study demonstrating the effectiveness of this technique. Although iterative instrumentation can measure code coverage effectively in time-sensitive applications, its total execution cost is much higher than that of traditional instrumentation, due to the execution of multiple variants of the system under test. This leads to scalability and performance issues, especially in large applications. To solve these issues, we use two approaches: reducing the number of variants and executing them simultaneously. To reduce the number of variants, we present cluster iterative instrumentation, a graph clustering technique that reduces the number of nodes in a control flow graph, resulting in lower execution time. We also provide a case study of node coverage of control software to show the efficiency of cluster iterative instrumentation compared to iterative instrumentation. In addition to reducing the number of variants, the other method is to execute multiple variants at the same time. Because all executions are independent of each other, we can use parallel execution on multiple embedded platforms. Thus, we design and implement a parallel unit testing framework for real-time embedded systems, along with a case study comparing the execution times for different numbers of embedded platforms (executing nodes). Our final contribution is a cross-platform unit testing framework using the concepts of runtime adapters and a runtime protocol that enables testers to run code across different embedded platforms. We also demonstrate the effectiveness of this framework by testing black-box test cases on seven different embedded platforms. Overall, our results indicate that cluster iterative instrumentation with parallel unit testing can address the scalability and performance issues, and the case studies demonstrate that XEUnit can effectively test the same code on a variety of embedded platforms.
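
    Because the abstract leans on iterative instrumentation, a sketch of the idea as we read it may help: instead of one instrumented build that carries every probe (and its runtime overhead), build one variant per coverage target, each containing a single probe, run the same test against every variant, and take the union of what the probes report. Everything below is an illustrative reconstruction; XEUnit's actual variant mechanism (separately built binaries on real targets) and its CFG handling are not shown in the abstract.

```python
# Illustrative reconstruction of iterative instrumentation as we read it:
# one variant (and one test run) per node, a single probe per variant, and
# coverage as the union over variants. The flag-based "variant" is a stand-in
# for separately built binaries on a real embedded target.

def make_variant(program_nodes, probed_node, hits):
    """Return a runnable variant that reports only `probed_node`."""
    def run(test_input):
        for node in execute(program_nodes, test_input):
            if node == probed_node:
                hits.add(node)   # the only instrumentation in this variant
    return run

def execute(program_nodes, test_input):
    # Stand-in for real control flow: pretend even inputs skip node "C".
    return [n for n in program_nodes if not (n == "C" and test_input % 2 == 0)]

def iterative_coverage(program_nodes, test_input):
    hits = set()
    for node in program_nodes:           # one variant per node in the CFG
        make_variant(program_nodes, node, hits)(test_input)
    return hits

print(iterative_coverage(["A", "B", "C", "D"], test_input=2))  # {'A', 'B', 'D'}
```

    On the same reading, cluster iterative instrumentation would probe one cluster of CFG nodes per variant rather than one node, and parallel execution across boards attacks the remaining cost, since each variant run is independent of the others.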

    Source level debugging of dynamically translated programs

    The capability to debug a program at the source level is useful and often indispensable. Debuggers use sophisticated techniques to provide a source view of a program, even though what is executing on the hardware is machine code. Debugging techniques evolve with significant changes in programming languages and execution environments. Recently, software dynamic translation (SDT) has emerged as a new execution mechanism. SDT inserts a run-time software layer between the program and the host machine, providing flexibility in execution and program monitoring. Increasingly popular technologies that use this mechanism include dynamic optimization, dynamic instrumentation, security checking, binary translation, and host machine virtualization. However, the run-time program modifications in an SDT environment pose significant challenges to a source level debugger. Currently, debugging techniques do not exist for software dynamic translators. This thesis is the first to provide techniques for source level debugging of dynamically translated programs. The thesis proposes a novel debugging framework, called Tdb, that addresses the difficult challenge of maintaining and providing source level information for programs whose binary code changes as the program executes. The proposed framework has a number of important features. First, it does not require or induce changes in the program being debugged. In other words, programs are debugged in their deployment environment. Second, the framework is portable and can be applied to virtually any SDT system. The framework requires minimal changes to an SDT implementation, usually just a few lines of code. Third, the framework can be integrated with existing debuggers, such as Gdb, and does not require changes to these debuggers. This improves usability and adoption, eliminating the learning curve associated with a new debugging environment. Finally, the proposed techniques are efficient. The runtime overhead of the debugged programs is low and comparable to that of existing debuggers. Tdb's techniques have been implemented for three different dynamic translators, on two different hardware platforms. The experimental results demonstrate that source level debugging of dynamically translated programs is feasible, and our implemented systems are portable, usable, and efficient.
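
    The core bookkeeping problem such a framework faces can be illustrated with a small sketch: as the translator emits code, record a mapping from translated addresses back to original addresses, so a debugger can still answer source level questions about a program counter that points into translated code. The class, hook, and table layout below are our assumptions for illustration; Tdb's actual data structures and its Gdb integration are described in the thesis, not here.

```python
# Hedged sketch of the address-mapping problem: translated pc -> original pc,
# then ordinary line-number info on the original pc. Names are assumptions.

class TranslationMap:
    def __init__(self):
        self._to_original = {}          # translated pc -> original pc

    def record(self, original_pc, translated_pc):
        # Called from the SDT's code-emission path; per the abstract, the
        # changes needed inside the translator are only a few lines like this.
        self._to_original[translated_pc] = original_pc

    def source_location(self, translated_pc, debug_info):
        # Map back to the original pc, then consult standard debug line info.
        original_pc = self._to_original.get(translated_pc)
        return debug_info.get(original_pc, "<unknown>")

tmap = TranslationMap()
tmap.record(original_pc=0x4005D0, translated_pc=0x7F000010)
debug_info = {0x4005D0: "main.c:42"}
print(tmap.source_location(0x7F000010, debug_info))  # main.c:42
```

    The abstract's claim that an SDT needs only minimal changes fits this shape: the translator only has to call something like record() at emit time, while the rest of the machinery lives on the debugger side.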