11 research outputs found

    A proposed Java forward slicing approach

    Get PDF
    Many organizations, programmers, and researchers need to debug, test, and maintain segments of their source code to improve their systems, and program slicing is one of the most effective techniques for doing so. Several slicing techniques address such problems, including static slicing, dynamic slicing, and amorphous slicing. In this paper, we develop a tool that supports multiple slicing techniques. The proposed tool provides flexible ways to process simple segments of Java code and generates the slices a user needs; in particular, it reports the direct and indirect dependencies of each variable in the code segment. The tool runs under various operating systems and does not require a particular environment, making it useful for debugging, testing, education, and other activities
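
    The direct and indirect variable dependencies the tool reports can be illustrated with a small worked example. The sketch below is not the paper's implementation: it assumes a toy representation in which each statement is reduced to the variable it defines and the variables it uses, and it computes a forward slice for a chosen variable by repeatedly propagating affected definitions.

```python
# Minimal sketch of forward slicing over a toy program representation.
# Each statement records the variable it defines and the variables it uses;
# this is an illustrative model, not the tool described in the paper.

statements = [
    (1, "a", set()),          # a = readInt();
    (2, "b", {"a"}),          # b = a * 2;
    (3, "c", set()),          # c = 5;
    (4, "d", {"b", "c"}),     # d = b + c;
    (5, "e", {"c"}),          # e = c - 1;
]

def forward_slice(criterion_var):
    """Return statement numbers directly or indirectly affected by criterion_var."""
    affected_vars = {criterion_var}
    slice_lines = set()
    changed = True
    while changed:                      # iterate until no new dependences appear
        changed = False
        for line, defined, used in statements:
            if used & affected_vars and line not in slice_lines:
                slice_lines.add(line)   # statement uses an affected variable
                affected_vars.add(defined)
                changed = True
    return sorted(slice_lines)

print(forward_slice("a"))  # -> [2, 4]  (b depends on a directly, d indirectly)
```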

    A review of slicing techniques in software engineering

    Get PDF
    A program slice is the part of a program that can affect the values computed at some point of interest during execution; that point, typically a program location paired with a subset of the program's variables, is known as the slicing criterion. The process of computing program slices is called program slicing. Weiser gave the original definition of a program slice in 1979, and since then many ideas related to program slicing have been formulated, along with numerous techniques for computing slices; the distinction between static and dynamic slices was also established. Program slicing is now among the most useful techniques for extracting exactly those elements of a program that are relevant to a particular computation. A large number of slicing variants have been analyzed, together with algorithms to compute them. Model-based slicing splits large software architectures into smaller sub-models during the early stages of the software development life cycle (SDLC). Software testing, in turn, is the activity of evaluating the functionality and features of a system to verify whether it meets its requirements, and a common practice is to extract sub-models from large models based on a slicing criterion; model-based slicing is used to extract the desired portion of a model. This survey focuses on slicing techniques across numerous programming paradigms, including web applications, object-oriented programs, and component-based systems. Owing to the efforts of many researchers, slicing has been extended to many other applications, including program debugging, program integration and analysis, software testing and maintenance, reengineering, and reverse engineering. The survey also discusses the role of model-based slicing and the various techniques used to compute such slices
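
    To make the notion of a slicing criterion concrete, the following invented example (not taken from the survey) shows a criterion and the static slice it induces.

```python
# Illustrative example of a slicing criterion (line, {variables}); the program
# is invented for exposition and is not taken from the survey.

# 1: n = int(input())
# 2: total = 0
# 3: product = 1
# 4: for i in range(1, n + 1):
# 5:     total = total + i
# 6:     product = product * i
# 7: print(total)      # slicing criterion: (7, {total})
#
# Static backward slice for (7, {total}): lines 1, 2, 4, 5, 7.
# Lines 3 and 6 only affect 'product' and are removed from the slice.
# A dynamic slice for a concrete input (e.g., n = 0) could be smaller still,
# since the loop body is never executed on that run.
```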

    ORBS and the limits of static slicing

    Get PDF
    Observation-based slicing is a recently introduced, language-independent slicing technique based on the dependencies observable from program behaviour. Due to the well-known limits of dynamic analysis, we may only compute an under-approximation of the true observation-based slice. However, because the observation-based slice captures all dependences that can be observed, even such approximations can yield insight into the limitations of static slicing. For example, a static slice S that is strictly smaller than the corresponding observation-based slice is guaranteed to be unsafe. We present the results of three sets of experiments on 12 different programs, including benchmarks and larger programs, which investigate the relationship between static and observation-based slicing. We show that, in extreme cases, observation-based slices can find the true minimal static slice where static techniques cannot. For more typical cases, our results illustrate the potential for observation-based slicing to highlight unsafe static slices. Finally, we report on the sensitivity of observation-based slicing to test quality
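
    The deletion loop underlying observation-based slicing can be sketched as follows. This is a simplified, hypothetical driver rather than the authors' ORBS implementation: the `compile_and_run` helper is assumed (it builds the candidate program, runs the test inputs, and returns the values observed at the slicing criterion), and it deletes one line at a time instead of the multi-line deletion windows ORBS uses.

```python
# Simplified sketch of an observation-based deletion loop (not the ORBS tool itself).
# compile_and_run(lines) is an assumed helper: it writes the candidate source,
# compiles it, executes the test inputs, and returns the observed values at the
# slicing criterion (or None if the candidate fails to compile or run).

def observation_based_slice(lines, compile_and_run):
    expected = compile_and_run(lines)           # behaviour of the original program
    changed = True
    while changed:                              # keep passing over the file until stable
        changed = False
        for i in range(len(lines)):
            if lines[i] is None:                # already deleted
                continue
            candidate = lines[:i] + [None] + lines[i + 1:]
            observed = compile_and_run([l for l in candidate if l is not None])
            if observed is not None and observed == expected:
                lines = candidate               # deletion preserved observed behaviour
                changed = True
    return [l for l in lines if l is not None]
```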

    Tree-oriented vs. line-oriented Observation-Based Slicing

    Get PDF
    Observation-based slicing is a recently introduced, language-independent slicing technique based on the dependencies observable from program behavior. The original algorithm processed traditional source code at the line-of-text level. A recent variation was developed to slice the tree-based XML representation of executable models. We ported the model slicer to source code by using srcML to construct a tree-based representation of traditional source code. We present the results of a comparison of the two slicers using four experiments involving seventeen different programs, including classic benchmarks and larger production systems. The resulting slices had essentially the same size and quite often the same content. Where they differ, the use of the tree structure traded the ability to remove unnecessary parts of a statement for the requirement of maintaining aspects of the code structure. Comparing the slicers shows that each has its advantages. For example, when the tree representation facilitates the deletion of large chunks of code, the tree slicer was over eight times faster. In contrast, when slicing C++ code it was over nine times slower because of the multitude of small trees created to support C++ syntax. Given the pros and cons of the two, the results suggest the value of a hybrid combination
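
    The tree-oriented variant deletes whole subtrees of a parse-tree representation (such as the one srcML produces) instead of lines of text. The sketch below uses a generic in-memory tree and an assumed `still_behaves_the_same` oracle; it only illustrates how subtree deletion replaces line deletion and does not mirror the actual slicer.

```python
# Sketch of subtree deletion on a generic parse tree (illustrative only;
# the real slicer works on the srcML XML representation of the source).

class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children if children is not None else []

def slice_tree(root, still_behaves_the_same):
    """Greedily delete child subtrees whose removal keeps the observed
    behaviour at the slicing criterion unchanged (assumed oracle)."""
    changed = True
    while changed:
        changed = False
        for node in iter_nodes(root):
            for i in range(len(node.children) - 1, -1, -1):
                child = node.children.pop(i)         # tentatively delete the subtree
                if still_behaves_the_same(root):
                    changed = True                   # keep the deletion
                else:
                    node.children.insert(i, child)   # restore it in place
    return root

def iter_nodes(node):
    yield node
    for child in node.children:
        yield from iter_nodes(child)
```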

    A comparison of tree- and line-oriented observational slicing

    Get PDF
    Observation-based slicing and its generalization observational slicing are recently-introduced, language-independent dynamic slicing techniques. They both construct slices based on the dependencies observed during program execution, rather than static or dynamic dependence analysis. The original implementation of the observation-based slicing algorithm used lines of source code as its program representation. A recent variation, developed to slice modelling languages (such as Simulink), used an XML representation of an executable model. We ported the XML slicer to source code by constructing a tree representation of traditional source code through the use of srcML. This work compares the tree- and line-based slicers using four experiments involving twenty different programs, ranging from classic benchmarks to million-line production systems. The resulting slices are essentially the same size for the majority of the programs and are often identical. However, structural constraints imposed by the tree representation sometimes force the slicer to retain enclosing control structures. It can also “bog down” trying to delete single-token subtrees. This occasionally makes the tree-based slices larger and the tree-based slicer slower than a parallelised version of the line-based slicer. In addition, a Java versus C comparison finds that the two languages lead to similar slices, but Java code takes noticeably longer to slice. The initial experiments suggest two improvements to the tree-based slicer: the addition of a size threshold, for ignoring small subtrees, and subtree replacement. The former enables the slicer to run 3.4 times faster while producing slices that are only about 9% larger. At the same time the subtree replacement reduces size by about 8–12% and allows the tree-based slicer to produce more natural slices
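
    A rough sketch of how the two suggested improvements could plug into such a subtree-deletion loop is given below, reusing the toy `Node` representation from the earlier sketch; the threshold value and the `replacement_candidates` helper are illustrative assumptions, not details from the paper.

```python
# Illustrative candidate generation with the two suggested tweaks
# (size threshold and subtree replacement); names and the threshold
# value are assumptions, not the paper's implementation.

MIN_SUBTREE_TOKENS = 3   # assumed threshold: skip very small subtrees

def candidate_edits(node, child_index, child):
    """Yield tentative edits for one child subtree."""
    if subtree_size(child) < MIN_SUBTREE_TOKENS:
        return                                   # size threshold: not worth a build/run
    yield ("delete", node, child_index, None)    # classic deletion candidate
    for replacement in replacement_candidates(child):
        yield ("replace", node, child_index, replacement)

def subtree_size(node):
    return 1 + sum(subtree_size(c) for c in node.children)

def replacement_candidates(child):
    # Assumed helper: e.g., propose each direct child of the subtree as a
    # smaller replacement for the whole subtree.
    return list(child.children)
```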

    srcSlice: very efficient and scalable forward static slicing

    Full text link
    A highly efficient, lightweight forward static slicing approach is presented and evaluated. The approach does not compute the program/system dependence graph; instead, dependence and control information is computed as needed while computing the slice for a variable. The result is a list of line numbers, dependent variables, aliases, and function calls that are part of the slice for all variables (both local and global) in the entire system. The method is implemented as a tool, called srcSlice, on top of srcML, an XML representation of source code. The approach is highly scalable and can generate the slices for all variables of the Linux kernel in approximately 20 minutes on a typical desktop. Benchmark results are compared with the CodeSurfer slicing tool from GrammaTech Inc., and the approach compares well with regard to the accuracy of slices
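
    The dependence-graph-free style of slicing described here can be approximated as a single pass that accumulates a slice profile per variable. The sketch below is a rough illustration under a toy statement model, not srcSlice itself: for each variable it collects the line numbers it appears on, the variables defined from it, and the functions called on statements that use it.

```python
# Rough single-pass approximation of a per-variable slice profile
# (illustrative only; srcSlice itself works on the srcML representation).
from collections import defaultdict

# (line, defined_var, used_vars, called_functions) -- toy statement model
statements = [
    (1, "n",   set(),        set()),            # n = atoi(argv[1]);
    (2, "sum", set(),        set()),            # sum = 0;
    (3, "i",   {"n"},        set()),            # for (i = 0; i < n; i++)
    (4, "sum", {"sum", "i"}, set()),            #     sum += i;
    (5, None,  {"sum"},      {"printf"}),       # printf("%d", sum);
]

def build_profiles(statements):
    profiles = defaultdict(lambda: {"lines": set(), "dvars": set(), "calls": set()})
    for line, defined, used, calls in statements:
        for var in used:
            profiles[var]["lines"].add(line)          # var is used here
            if defined and defined != var:
                profiles[var]["dvars"].add(defined)   # defined var depends on it
            profiles[var]["calls"].update(calls)
        if defined:
            profiles[defined]["lines"].add(line)      # definition site
    return profiles

profile = build_profiles(statements)
print(sorted(profile["n"]["lines"]), profile["n"]["dvars"])   # [1, 3] {'i'}
```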

    Developer-Centric Automated Debugging

    Get PDF
    Software debugging is an expensive activity that is responsible for a significant part of software maintenance cost. In particular, locating faulty code (i.e., fault localization) is one of the most challenging parts of software debugging. In recent years, researchers have proposed many techniques that aim at fully automating the task of fault localization. Although these techniques have been shown to be effective in reducing the amount of code developers need to inspect to locate faults, there is growing evidence that they provide developers with limited help in realistic debugging scenarios. I believe that a practical automated debugging technique should have human developers at the center of the debugging process rather than trying to completely replace them. In this dissertation, I present three techniques that support developer-centric automated debugging. First, I present ENLIGHTEN, an interactive, feedback-driven fault localization technique. ENLIGHTEN supports and automates developers’ debugging workflow as follows: it 1) uses traditional statistical fault localization (SFL) to formulate an initial hypothesis of where the fault may be; 2) identifies a relevant subset of the execution that can help support or refute the formulated hypothesis; 3) presents the developer with a query about the identified execution subset, in the form of a correctness question about the input-output relation of the partial execution; 4) refines its hypothesis of the fault by using the developer’s feedback; and 5) repeats these steps until the fault is found. Second, I discuss my work on improving the accuracy of dynamic dependence analysis, which is a powerful tool for developers to investigate program behavior in an interactive debugging setting and a foundation that many automated debugging techniques leverage to model dynamic execution semantics. I present my finding that existing dynamic dependence analysis techniques can miss the cause-effect relations between faults and observed failures if the faulty program states propagate via incorrect computation of memory addresses. To address this limitation, I define the concept of potential memory-address dependence, which explicitly represents this type of causal relation, and describe an algorithm that computes it. Third, I present TESSERACT, a technique that improves the scalability of dynamic dependence analysis in the context of interactive debugging. Many existing dependence-based debugging techniques have been shown to work well on short executions but fail to scale to larger ones. TESSERACT has the potential to address this limitation by utilizing a record-and-replay system to efficiently recreate the failing execution, break it down into small time slices, and analyze these slices in a parallelized, on-demand fashion
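
    The feedback-driven workflow described for ENLIGHTEN can be sketched as a loop around a statistical fault localization ranking. The skeleton below is hypothetical: `sfl_scores`, `select_query`, and `ask_developer` are assumed helpers, and the refinement rule is a placeholder rather than the dissertation's actual algorithm.

```python
# Skeleton of a feedback-driven fault-localization loop (illustrative only;
# sfl_scores, select_query, and ask_developer are assumed helpers).

def feedback_driven_localization(program, failing_tests, passing_tests, budget=20):
    scores = sfl_scores(program, failing_tests, passing_tests)   # 1) initial SFL hypothesis
    for _ in range(budget):
        suspect = max(scores, key=scores.get)                    # current top-ranked entity
        query = select_query(suspect, failing_tests)             # 2) relevant execution subset
        verdict = ask_developer(query)                           # 3) correctness question
        if verdict == "faulty":
            return suspect                                       # fault found
        scores = refine(scores, query, verdict)                  # 4) refine the hypothesis
    return max(scores, key=scores.get)                           # best guess if budget is spent

def refine(scores, query, verdict):
    # Assumed refinement: if the queried partial execution was judged correct,
    # down-weight the entities it covers; otherwise up-weight them.
    factor = 0.5 if verdict == "correct" else 2.0
    return {e: (s * factor if e in query.covered_entities else s) for e, s in scores.items()}
```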

    Modellbasiertes Regressionstesten von Varianten und Variantenversionen

    Get PDF
    The quality assurance of software product lines (SPLs) achieved via testing is a crucial and challenging activity of SPL engineering. In general, applying single-software testing techniques to SPL testing is not practical, as it leads to the individual testing of a potentially vast number of variants. Testing each variant in isolation further results in redundant testing processes, i.e., redundant test-case executions caused by the shared commonality. Existing techniques for SPL testing cope with these challenges, e.g., by identifying samples of variants to be tested. However, each variant is still tested separately, without taking the explicit knowledge about the shared commonality and variability into account to reduce the overall testing effort. Furthermore, due to the increasing longevity of software systems, their development has to face software evolution. Hence, quality assurance also has to be ensured after SPL evolution by testing the respective versions of variants. In this thesis, we tackle the challenges of testing redundancy and evolution by proposing a framework for model-based regression testing of evolving SPLs. The framework facilitates efficient incremental testing of variants and versions of variants by exploiting the commonality and reuse potential of test artifacts and test results. Our contribution is divided into three parts. First, we propose a test-modeling formalism capturing the variability and version information of evolving SPLs in an integrated fashion. The formalism builds the basis for the automatic derivation of reusable test cases and for the application of change impact analysis to guide retest test selection. Second, we introduce two techniques for incremental change impact analysis to identify (1) changing execution dependencies to be retested between subsequently tested variants and versions of variants, and (2) the impact of an evolution step on the variant set in terms of modified, new, and unchanged versions of variants. Third, we define a coverage-driven retest test selection based on a new retest coverage criterion that incorporates the results of the change impact analysis. The retest test selection facilitates the reduction of redundantly executed test cases during incremental testing of variants and versions of variants. The framework is prototypically implemented and evaluated by means of three evolving SPLs, showing that it reduces the overall effort for testing evolving SPLs.

    Testing is an essential part of software product line (SPL) engineering. Because of the potentially very large number of variants of an SPL, testing each variant individually is generally impractical and additionally results in redundant test-case executions caused by the commonality shared between variants. Existing SPL testing approaches address these challenges, e.g., by reducing the number of variants to be tested. However, each variant is still tested independently, without exploiting the knowledge about commonality and variability to reduce the testing effort. Furthermore, SPL development has to cope with software evolution, which poses additional challenges for SPL testing, since quality must be assured not only for variants but also for their versions. In this thesis, we present a framework for model-based regression testing of evolving SPLs that addresses the challenges of redundant testing and software evolution. The framework combines test modeling, change impact analysis, and automatic test-case selection to define an incremental testing process that efficiently tests variants and versions of variants by exploiting knowledge about shared functionality and the reuse potential of test artifacts and test results. For test modeling, we develop an approach that incorporates both the variability and the version information of evolving SPLs. For change impact analysis, we define two techniques: one identifies changed execution dependencies between the variants and versions of variants to be tested, and the other determines and classifies the impact of an evolution step on the set of variants. For test-case selection, we propose a coverage criterion that incorporates the results of the impact analysis in order to make automated retest decisions for reusable test cases. The coverage-driven test-case selection thus enables a reduction of redundant test-case executions during the incremental testing of variants and versions of variants. The framework is prototypically implemented and evaluated on three evolving SPLs; the results show that a reduction of the effort for testing evolving SPLs is achieved
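
    The coverage-driven retest selection can be roughly illustrated as follows; the data model (test cases annotated with the execution dependencies they cover) and the greedy selection are assumptions made for the sketch, not the thesis' formalism.

```python
# Rough sketch of coverage-driven retest selection (illustrative assumptions,
# not the thesis' test-modeling formalism).

def select_retests(test_cases, changed_dependencies):
    """Select previously reusable test cases that cover at least one execution
    dependency reported as changed by the change impact analysis."""
    retests, uncovered = [], set(changed_dependencies)
    for test in sorted(test_cases, key=lambda t: len(t["covers"]), reverse=True):
        newly_covered = test["covers"] & uncovered
        if newly_covered:                     # test exercises a changed dependency
            retests.append(test["name"])
            uncovered -= newly_covered
        if not uncovered:
            break                             # retest coverage criterion satisfied
    return retests, uncovered                 # uncovered deps need new test cases

# Example: dependencies d2 and d5 changed between two variant versions.
tests = [
    {"name": "t1", "covers": {"d1", "d2"}},
    {"name": "t2", "covers": {"d3"}},
    {"name": "t3", "covers": {"d2", "d5"}},
]
print(select_retests(tests, {"d2", "d5"}))    # -> (['t1', 't3'], set())
```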

    Incremental Slicing Based on Data-Dependences Types

    Get PDF
    Program slicing is useful for assisting with software-maintenance tasks, such as program understanding, debugging, impact analysis, and regression testing. The presence and frequent usage of pointers, in languages such as C, causes complex data dependences. To function effectively on such programs, slicing techniques must account for pointer-induced data dependences. Although many existing slicing techniques function in the presence of pointers, none of those techniques distinguishes data dependences based on their types. This paper presents a new slicing technique, in which slices are computed based on types of data dependences. This new slicing technique offers several benefits and can be exploited in different ways, such as identifying subtle data dependences for debugging purposes, computing reduced-size slices quickly for complex programs, and performing incremental slicing. In particular, this paper describes an algorithm for incremental slicing that increases the scope of a slice in steps, by incorporating different types of data dependences at each step. The paper also presents empirical results to illustrate the performance of the technique in practice. The experimental results show how the sizes of the slices grow for different small- and medium-sized subjects. Finally, the paper presents a case study that explores a possible application of the slicing technique for debugging
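
    The incremental scheme described here, which grows a slice by admitting more data-dependence types at each step, can be sketched as follows; the dependence categories and the toy dependence table are illustrative assumptions, not the paper's classification of pointer-induced dependences.

```python
# Sketch of incremental slicing over typed data dependences (illustrative;
# the dependence categories and the toy table are assumptions, not the
# paper's classification of pointer-induced dependences).

# (source_line, target_line, dependence_type)
dependences = [
    (1, 4, "direct"),        # scalar def-use
    (2, 4, "direct"),
    (3, 5, "pointer"),       # def-use through a dereferenced pointer
    (5, 7, "direct"),
    (4, 7, "pointer"),
]

def backward_slice(criterion_line, allowed_types):
    """Transitively follow only dependences whose type is currently admitted."""
    in_slice, worklist = {criterion_line}, [criterion_line]
    while worklist:
        line = worklist.pop()
        for src, tgt, dep_type in dependences:
            if tgt == line and dep_type in allowed_types and src not in in_slice:
                in_slice.add(src)
                worklist.append(src)
    return sorted(in_slice)

# Incremental slicing: widen the set of admitted dependence types step by step.
print(backward_slice(7, {"direct"}))             # -> [5, 7]
print(backward_slice(7, {"direct", "pointer"}))  # -> [1, 2, 3, 4, 5, 7]
```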