
    VADA: A transformation-based system for variable dependence analysis

    Variable dependence is an analysis problem in which the aim is to determine the set of input variables that can affect the values stored in a chosen set of intermediate program variables. This paper shows the relationship between the variable dependence analysis problem and slicing, and describes VADA, a system that implements variable dependence analysis. In order to cover the full range of C constructs and features, a transformation to a core language is employed. Thus, the full analysis is required only for the core language, which is relatively simple; this reduces the overall effort required for dependence analysis. The transformations used need only preserve the variable dependence relation, and therefore need not be meaning preserving in the traditional sense. The paper describes how this relaxed notion of meaning further simplifies the transformation phase of the approach. Finally, the results of an empirical study into the performance of the system are presented.
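
    As a minimal, hypothetical illustration of what such an analysis computes (the function and variable names below are invented, not taken from the paper), consider which inputs can affect each variable in a small C fragment:

        /* Hypothetical fragment: which inputs can affect which variables? */
        #include <stdio.h>

        int main(void) {
            int a = 3, b = 4, c = 5;     /* treat a, b, c as the inputs */
            int x = a + 1;               /* x depends on {a}            */
            int y = x * b;               /* y depends on {a, b}         */
            int z = (y > 0) ? y : -y;    /* z depends on {a, b}         */
            printf("z=%d c=%d\n", z, c); /* c never influences z        */
            return 0;
        }

    A slicer answers the dual question: the backward slice on z contains exactly the statements involving a, b, x, and y, which is why the paper relates the two problems.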

    An Overview of Recent Trends in Software Testing

    In the field of search-based software testing, genetic-algorithm-based testing has received a major share of attention among researchers during the last few years. Although this type of testing has advantages, there are also practical difficulties that can make the technique less attractive to the software testing industry. Moreover, the potential of program slicing in testing has not been fully exploited, and works that explicitly demonstrate the application of slicing to testing are rare. Our paper aims to analyze existing techniques for software testing and to introduce an approach to software testing that uses program slicing. A systematic review of genetic-algorithm-based work reveals that fitness function design, population initialization, and parameter settings all affect the quality of the solution obtained when testing with a genetic algorithm. Based on the conclusions from the existing literature, we probe deeper into the issues in these areas; an unbiased review of this kind may help resolve the open issues in genetic-algorithm-based software testing. We also give clear directions on how slicing can be used as a practical tool for software testing. In addition, a set of research questions is framed that may be answered by reviewing the study made in this work; this may guide future research in the area and lead to major breakthroughs in the software testing field.
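
    To make concrete why fitness function design matters so much here, the sketch below shows the widely used branch-distance style of fitness for search-based test generation; the target predicate and all names are assumptions for illustration, not taken from the paper:

        /* Sketch: branch-distance fitness guiding a genetic algorithm
           toward inputs covering the true branch of a hypothetical
           condition `if (x == 42 && y > 100)` in the program under test.
           Lower is fitter; 0 means the branch is covered. */
        #include <stdlib.h>

        double branch_distance(int x, int y) {
            double d1 = abs(x - 42);               /* distance for x == 42 */
            double d2 = (y > 100) ? 0 : 101 - y;   /* distance for y > 100 */
            return d1 + d2;
        }

    A fitness that merely reports branch covered/not covered gives the search no gradient; the distance formulation is what lets the genetic algorithm improve candidate inputs incrementally.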

    A method for re-modularising legacy code

    This thesis proposes a method for the re-modularisation of legacy COBOL. Legacy code often performs a number of functions that, if split, would improve software maintainability; for instance, program comprehension would benefit from a reduction in the size of the code modules. The method aims to identify potential reuse candidates among the re-modularised functions and to ensure that clear interfaces are present between the new modules. Furthermore, functionality is often replicated across applications, so the re-modularisation process can also seek to reduce this replication and hence the overall amount of a company's code requiring maintenance. A 10-step method is devised which assembles a number of new and existing techniques into an approach suitable for use by staff without significant reengineering experience. Three main approaches are used throughout the method: the analysis of the PERFORM structure, the analysis of the data, and the use of graphical representations. Both top-down and bottom-up strategies to program comprehension are incorporated within the method, as are automatable and user-controlled processes for reuse-candidate selection. Three industrial case studies, ranging in size to give an indication of the scalability of the method, are used to demonstrate and evaluate it step by step; both strong points and deficiencies are identified, as well as potential solutions to the deficiencies. A review is also presented that assesses the three main approaches of the method using the process of software evolution: the method is retrospectively applied to the earliest of several successive versions of a COBOL system, and the known changes identified from the following versions are used to evaluate the re-modularisations. Within the evaluation chapters a new link within the dominance tree is proposed, as is an approach for dealing with multiple dominance trees. The results show that each approach provides an important contribution to the method, as well as giving a useful insight (in the form of graphical representations) into the process of software evolution.
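
    To make the dominance-tree idea concrete, the sketch below computes dominators over a tiny, invented PERFORM call graph using the classic iterative data-flow formulation; the graph shape and the reuse interpretation are assumptions for illustration, not the thesis's algorithm:

        /* Sketch: dominators over a hypothetical PERFORM call graph.
           Node 0 is the main section; adj[i][j] = 1 iff section i
           PERFORMs section j. dom[i] is a bitset: bit j set iff
           section j dominates section i. */
        #include <stdio.h>

        #define N 5
        static const int adj[N][N] = {
            {0,1,1,0,0},   /* main PERFORMs A, B */
            {0,0,0,1,0},   /* A PERFORMs C       */
            {0,0,0,1,1},   /* B PERFORMs C, D    */
            {0,0,0,0,0},
            {0,0,0,0,0},
        };

        int main(void) {
            unsigned dom[N];
            dom[0] = 1u;
            for (int i = 1; i < N; i++) dom[i] = (1u << N) - 1;
            for (int changed = 1; changed; ) {
                changed = 0;
                for (int i = 1; i < N; i++) {
                    unsigned meet = (1u << N) - 1;
                    for (int p = 0; p < N; p++)
                        if (adj[p][i]) meet &= dom[p];  /* intersect preds */
                    unsigned next = meet | (1u << i);
                    if (next != dom[i]) { dom[i] = next; changed = 1; }
                }
            }
            for (int i = 0; i < N; i++)
                printf("dominators of section %d: 0x%x\n", i, dom[i]);
            return 0;
        }

    Here section C (node 3) is PERFORMed from both A and B, so its only dominator besides itself is main; such shared sections are exactly where interface and reuse-candidate decisions arise in the method.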

    The ArgoNeuT Detector in the NuMI Low-Energy beam line at Fermilab

    The ArgoNeuT liquid argon time projection chamber has collected thousands of neutrino and antineutrino events during an extended run period in the NuMI beam line at Fermilab. This paper focuses on the main aspects of the detector layout and related technical features, including the cryogenic equipment, time projection chamber, read-out electronics, and off-line data treatment. The detector commissioning phase, physics run, and first neutrino event displays are also reported. The characterization of the main working parameters of the detector during data-taking, namely the ionization electron drift velocity and lifetime in liquid argon as obtained from through-going muon data, completes the report.
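
    For readers unfamiliar with liquid argon TPC calibration, the electron lifetime mentioned above is conventionally extracted by fitting the attenuation of the collected charge as a function of drift time on through-going muon tracks; a standard parameterization (not a formula quoted from this paper) is

        \[ Q(t) = Q_0 \, e^{-t/\tau_e}, \qquad v_d = \frac{L}{t_{\max}} \]

    where $Q_0$ is the charge at production, $t$ the drift time, and $\tau_e$ the electron lifetime, while the drift velocity $v_d$ follows from the known drift length $L$ and the maximum observed drift time $t_{\max}$.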

    Model-Based Regression Testing of Variants and Variant Versions (Modellbasiertes Regressionstesten von Varianten und Variantenversionen)

    The quality assurance of software product lines (SPLs), achieved via testing, is a crucial and challenging activity of SPL engineering. In general, the application of single-software testing techniques to SPL testing is not practical, as it leads to the individual testing of a potentially vast number of variants. Testing each variant in isolation further results in redundant testing processes, in the form of redundant test-case executions caused by the shared commonality. Existing techniques for SPL testing cope with these challenges, e.g., by identifying samples of variants to be tested. However, each variant is still tested separately, without taking the explicit knowledge about the shared commonality and variability into account to reduce the overall testing effort. Furthermore, due to the increasing longevity of software systems, their development has to face software evolution; hence, quality also has to be assured after SPL evolution, by testing the respective versions of variants. In this thesis, we tackle the challenges of testing redundancy and evolution by proposing a framework for model-based regression testing of evolving SPLs. The framework facilitates efficient incremental testing of variants and versions of variants by exploiting the commonality and reuse potential of test artifacts and test results. Our contribution is divided into three parts. First, we propose a test-modeling formalism capturing the variability and version information of evolving SPLs in an integrated fashion. The formalism builds the basis for the automatic derivation of reusable test cases and for the application of change impact analysis to guide retest test selection. Second, we introduce two techniques for incremental change impact analysis to identify (1) changing execution dependencies to be retested between subsequently tested variants and versions of variants, and (2) the impact of an evolution step on the variant set in terms of modified, new, and unchanged versions of variants. Third, we define a coverage-driven retest test selection based on a new retest coverage criterion that incorporates the results of the change impact analysis. The retest test selection facilitates the reduction of redundantly executed test cases during incremental testing of variants and versions of variants. The framework is prototypically implemented and evaluated on three evolving SPLs, showing that it reduces the overall effort of testing evolving SPLs.
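
    A minimal sketch of the kind of change-impact-driven retest selection described above, with invented test and dependency identifiers (this is an illustration of the general idea, not the thesis's coverage criterion): a reusable test case is re-executed for a new variant version only if it covers at least one execution dependency flagged as changed.

        /* Sketch: select test cases for retest when their covered
           execution dependencies intersect the changed set. */
        #include <stdio.h>

        #define NTESTS 3
        #define NDEPS  4

        /* covers[t][d] = 1 iff test t exercises dependency d */
        static const int covers[NTESTS][NDEPS] = {
            {1,0,0,1},
            {0,1,0,0},
            {0,0,1,0},
        };

        int main(void) {
            /* dependencies flagged by the change impact analysis */
            const int changed[NDEPS] = {0,1,0,0};
            for (int t = 0; t < NTESTS; t++) {
                int retest = 0;
                for (int d = 0; d < NDEPS; d++)
                    if (covers[t][d] && changed[d]) retest = 1;
                printf("test %d: %s\n", t, retest ? "retest" : "reuse result");
            }
            return 0;
        }

    In this toy run only test 1 is re-executed; tests 0 and 2 reuse their previous results, which is exactly the redundancy reduction the framework targets.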

    Memory Footprint Reduction for Operating System Kernels (Reductie van het geheugengebruik van besturingssysteemkernen)

    Embedded systems often have only a limited amount of memory available. Much attention is therefore paid to producing compact programs for these systems, and a variety of techniques have been developed to automatically reduce the memory footprint of programs. Until now, these techniques have focused mainly on the application software running on the system, while the operating system was overlooked. This dissertation describes a number of techniques that make it possible to substantially reduce the memory footprint of an operating system kernel in an automated way. The first step applies compaction transformations at link time. If the hardware and software composing the system are known, further reductions can be achieved: the kernel is specialized for a particular hardware-software combination, superfluous functionality is detected and removed from the kernel, and the remaining functionality is adapted to the specific usage patterns that can be derived from the hardware and software. Finally, techniques are presented that allow rarely or never executed code (for example, code handling only rarely occurring error conditions) to be removed from memory; this code is then loaded only at the moment it is actually needed. For our test system, the combined techniques reduce the memory footprint of a Linux 2.4 kernel by more than 48%.
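
    A toy illustration of the specialization step (the driver names and dispatch are invented, not taken from the dissertation): once the hardware configuration is known at link time, indirect dispatch can be collapsed to a direct call, making whole drivers unreachable so link-time compaction can delete them.

        #include <stdio.h>

        static int uart_write(const char *s) { return printf("uart: %s\n", s); }
        static int usb_write(const char *s)  { return printf("usb: %s\n", s); }

        /* Before specialization: the indirect call keeps both drivers
           reachable, so neither can be removed from the kernel image. */
        static int (*console_write)(const char *) = uart_write;

        /* After specializing for a board known to have only a UART, the
           call becomes direct; usb_write is then unreachable and can be
           eliminated by link-time compaction. */
        static int console_print(const char *s) { return uart_write(s); }

        int main(void) {
            console_write("via generic dispatch");
            console_print("via specialized kernel");
            (void)usb_write;  /* referenced only to silence the toy's warning */
            return 0;
        }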

    A Compiler-based Framework For Automatic Extraction Of Program Skeletons For Exascale Hardware/software Co-design

    The design of high-performance computing architectures requires performance analysis of large-scale parallel applications to derive various parameters concerning hardware design and software development. Performance analysis and benchmarking of an application can be done in several ways, with varying degrees of fidelity. One of the most cost-effective ways is a coarse-grained study of large-scale parallel applications through the use of program skeletons. The “program skeleton” discussed in this paper is an abstracted program derived from a larger program by removing source code that is determined to be irrelevant for the purposes of the skeleton. Extracting such a skeleton from a large-scale parallel program by hand requires a substantial amount of manual effort and often introduces human errors. In this work, we develop a semi-automatic approach for extracting program skeletons based on compiler program analysis, which reduces this cost and eliminates errors inherent in manual approaches. We demonstrate the correctness of our skeleton extraction process by comparing details from communication traces, and we show the performance speedup of using skeletons by running simulations in the SST/macro simulator. Our skeleton generation approach is based on the extensible and open-source ROSE compiler infrastructure, which allows us to perform flow and dependency analysis on large programs in order to determine what code can be removed to generate a skeleton.
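
    To illustrate what a skeleton preserves and what it elides, here is a hypothetical before/after pair (a compilable fragment invented for illustration, not code from the paper): the communication call and message size survive, while the numerical kernel is replaced by a stand-in so a simulator such as SST/macro still sees realistic traffic.

        /* Sketch: skeletonizing one timestep of a hypothetical MPI code. */
        #include <mpi.h>
        #include <unistd.h>

        #define N 1024

        void step_original(double *u, int peer) {
            for (int i = 0; i < N; i++)      /* expensive compute, irrelevant */
                u[i] = 0.5 * u[i] + 1.0;     /* to network co-design studies  */
            MPI_Send(u, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
        }

        void step_skeleton(double *u, int peer) {
            usleep(100);   /* stand-in for the elided compute; a simulator  */
                           /* substitutes a calibrated cost model here       */
            MPI_Send(u, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
        }

    The compiler analysis decides mechanically which statements fall into the "elide" category, which is precisely where the flow and dependency analysis mentioned above comes in.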

    Automatic Removal of Flaws in Embedded System Software

    Master's thesis (Tese de mestrado), Segurança Informática, Universidade de Lisboa, Faculdade de Ciências, 2022. Currently, embedded systems are present in a myriad of devices, such as the Internet of Things, drones, and cyber-physical systems. The security of these devices can be critical, depending on the context in which they are integrated and the role they play (e.g., a water plant, a car). C is the core language used to develop the software for these devices and is known for not enforcing the bounds of its data types, which leads to vulnerabilities such as buffer overflows. These vulnerabilities, when exploited, can cause severe damage and put human life in danger; therefore, the software of these devices must be secure. One concern with vulnerable C programs is correcting the code automatically, employing secure code that removes the existing vulnerabilities and avoids attacks. Beyond finding the vulnerabilities, such a task faces several challenges: determining what code is needed to remove them and where to insert that code, maintaining the correct behavior of the application after applying the correction, and verifying that the generated correction is secure and effectively removes the vulnerabilities. A further challenge is to accomplish all of this automatically. This work studies diverse types of buffer overflow vulnerabilities in the C programming language and ways to build secure code that invalidates such vulnerabilities, including C library functions that can be used to remove flaws. Based on this knowledge, we propose an approach that, after discovering and confirming potential vulnerabilities in an application, automatically applies code corrections to fix the verified vulnerable code and validates the new code with fuzzing/attack injection. We implemented our approach and evaluated it with a set of test cases and with real applications. The experimental results showed that the tool detected the intended vulnerabilities and generated corrections capable of removing the vulnerabilities found.
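
    A minimal before/after example of the kind of correction such a tool might generate (replacing an unbounded strcpy with a bounded snprintf is a standard remedy used here for illustration; it is an assumption, not the tool's exact output):

        #include <stdio.h>
        #include <string.h>

        /* Vulnerable: strcpy writes past dst when input is too long. */
        void greet_vulnerable(const char *input) {
            char dst[16];
            strcpy(dst, input);         /* buffer overflow if          */
            printf("hello %s\n", dst);  /* strlen(input) >= 16         */
        }

        /* Corrected: the bounded copy truncates instead of overflowing. */
        void greet_fixed(const char *input) {
            char dst[16];
            snprintf(dst, sizeof dst, "%s", input);
            printf("hello %s\n", dst);
        }

        int main(void) {
            greet_fixed("a deliberately long argument string");
            return 0;
        }

    Note the two hard parts the abstract calls out: the fix must be inserted at the right place without changing intended behavior (here, truncation must be acceptable), and the patched program must be re-validated, e.g., by fuzzing/attack injection.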