255 research outputs found

    Test Sequence Generation for Java7 Fork/Join Using Interference Dependence

    Get PDF
    Test sequence generation from code is usually driven by some form of flow graph, such as a Control Flow Graph (CFG), a Concurrent Control Flow Graph (CCFG), or an Event Graph. Approaches that start from UML also need a flow graph as an intermediate representation before the final test sequences are produced. In the present approach, a flow graph is constructed for Java7 Fork/Join programs, and test sequences are generated by traversing this graph under all-path and all-node coverage criteria while taking interference dependence into account. The interference dependencies are additionally represented as a directed graph to aid the analysis of Java7 fork/join programs.
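
    A minimal sketch of the underlying idea, not the paper's tool: a small fork/join-style flow graph is traversed to enumerate complete paths (all-path coverage), and a subset of those paths is then kept as test sequences for all-node coverage. The node names, graph shape, and selection strategy below are invented for illustration.

        // Illustrative only: the graph, node names, and selection strategy are
        // assumptions, not taken from the paper.
        import java.util.*;

        public class FlowGraphCoverage {

            // Adjacency list of a small acyclic flow graph with a fork/join shape.
            static final Map<String, List<String>> GRAPH = Map.of(
                    "start", List.of("fork"),
                    "fork",  List.of("taskA", "taskB"),   // fork spawns two subtasks
                    "taskA", List.of("join"),
                    "taskB", List.of("join"),
                    "join",  List.of("end"),
                    "end",   List.of());

            // Depth-first enumeration of all simple paths from 'from' to 'to'.
            static void allPaths(String from, String to, Deque<String> path, List<List<String>> out) {
                path.addLast(from);
                if (from.equals(to)) {
                    out.add(new ArrayList<>(path));
                } else {
                    for (String next : GRAPH.getOrDefault(from, List.of())) {
                        if (!path.contains(next)) {           // ignore cycles
                            allPaths(next, to, path, out);
                        }
                    }
                }
                path.removeLast();
            }

            public static void main(String[] args) {
                List<List<String>> paths = new ArrayList<>();
                allPaths("start", "end", new ArrayDeque<>(), paths);

                // All-path coverage: every enumerated path is a test sequence.
                System.out.println("All-path test sequences: " + paths);

                // All-node coverage: greedily keep paths until every node is covered.
                Set<String> uncovered = new HashSet<>(GRAPH.keySet());
                List<List<String>> selected = new ArrayList<>();
                for (List<String> p : paths) {
                    if (!Collections.disjoint(uncovered, p)) {
                        selected.add(p);
                        uncovered.removeAll(p);
                    }
                }
                System.out.println("All-node test sequences: " + selected);
            }
        }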

    Efficiency Improvements in the Quality Assurance Process for Data Races

    Get PDF
    As the use of concurrency in software has gained importance in recent years, and is still rising, new types of defects have increasingly appeared in software. One of the most prominent and critical of these new defect types is the data race. Although research has increased the effectiveness of dynamic quality assurance for data races, the efficiency of the quality assurance process is still a factor preventing widespread practical application. First, the dynamic quality assurance techniques used to detect data races are inefficient: too much effort is needed to conduct dynamic quality assurance. Second, the techniques used to analyze reported data races are inefficient: too much effort is needed to analyze reported data races and identify the underlying issues in the source code. The goal of this thesis is to enable efficiency improvements in the quality assurance process for data races by: (1) analyzing a representation of the dynamic behavior of the system under test; the results are used to focus the instrumentation of this system, resulting in a lower runtime overhead during test execution compared to full instrumentation. (2) Analyzing the characteristics of reported data races and preprocessing them; the results of the preprocessing are provided to developers and quality assurance personnel, enabling an analysis and debugging process that is more efficient than the traditional analysis of data race reports. Apart from dynamic data race detection itself, which the solution complements, all steps in the process of dynamic quality assurance for data races are discussed in this thesis. The solution for analyzing UML Activities for nodes that may execute in parallel with other nodes or with themselves rests on a formal, graph-theoretic foundation. A major problem solved in this thesis is the handling of cycles within UML Activities: a dynamic limit for the number of cycle traversals is derived from the elements of each analyzed UML Activity and their semantics. Formal proofs are provided for the creation of directed acyclic graphs and for their analysis to identify elements that may execute in parallel with other elements. Based on an examination of the characteristics of data races and data race reports, the results of dynamic data race detection are preprocessed, and the outcome of this preprocessing is presented to users for further analysis. The thesis further provides an exemplary application of the solution idea and of the results of the UML Activity analysis, together with an exemplary examination of the efficiency improvement of dynamic data race detection, which showed a 44% reduction in runtime overhead when using focused instrumentation compared to full instrumentation. Finally, a controlled experiment was set up and conducted to examine the effects of preprocessing reported data races on the efficiency of analyzing data race reports. The results show that the solution presented in this thesis enables efficiency improvements in the analysis of data race reports of between 190% and 660% compared to traditional approaches.
    Opportunities for future work are also outlined, which may enable broader use of the results of this thesis and further efficiency improvements in quality assurance for data races.
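
    For intuition only (this is not the thesis's algorithm): the core of such an analysis can be phrased as a reachability question on a directed acyclic graph derived from a UML Activity: two nodes may execute in parallel when neither can reach the other, and only those node pairs need race-checking instrumentation. The graph below is invented for this sketch.

        // Assumed example, not the thesis's implementation: naive may-happen-in-
        // parallel check on a small DAG via pairwise reachability.
        import java.util.*;

        public class MayHappenInParallel {

            static final Map<String, List<String>> DAG = Map.of(
                    "init",   List.of("fork"),
                    "fork",   List.of("writeX", "readX"),  // two branches after a fork
                    "writeX", List.of("join"),
                    "readX",  List.of("join"),
                    "join",   List.of());

            // All nodes reachable from 'start', including 'start' itself.
            static Set<String> reachable(String start) {
                Set<String> seen = new HashSet<>();
                Deque<String> work = new ArrayDeque<>(List.of(start));
                while (!work.isEmpty()) {
                    String n = work.pop();
                    if (seen.add(n)) {
                        work.addAll(DAG.getOrDefault(n, List.of()));
                    }
                }
                return seen;
            }

            public static void main(String[] args) {
                List<String> nodes = new ArrayList<>(DAG.keySet());
                for (int i = 0; i < nodes.size(); i++) {
                    for (int j = i + 1; j < nodes.size(); j++) {
                        String a = nodes.get(i), b = nodes.get(j);
                        if (!reachable(a).contains(b) && !reachable(b).contains(a)) {
                            // Pairs such as (writeX, readX) are potentially concurrent;
                            // only these would be instrumented for data race detection.
                            System.out.println(a + " may run in parallel with " + b);
                        }
                    }
                }
            }
        }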

    FLACOS’08 Workshop proceedings

    Get PDF
    The 2nd Workshop on Formal Languages and Analysis of Contract-Oriented Software (FLACOS’08) was held in Malta. The aim of the workshop was to bring together researchers and practitioners working on language-based solutions to contract-oriented software development. The workshop was partially funded by the Nordunet3 project “COSoDIS” (Contract-Oriented Software Development for Internet Services) and attracted 25 participants. The program consisted of 4 regular papers and 10 invited participant presentations.

    SAVCBS 2004 Specification and Verification of Component-Based Systems: Workshop Proceedings

    Get PDF
    This is the proceedings of the 2004 SAVCBS workshop. The workshop is concerned with how formal (i.e., mathematical) techniques can be or should be used to establish a suitable foundation for the specification and verification of component-based systems. Component-based systems are a growing concern for the software engineering community. Specification and reasoning techniques are urgently needed to permit composition of systems from components. Component-based specification and verification is also vital for scaling advanced verification techniques, such as extended static analysis and model checking, to the size of real systems. The workshop considers formalization of both functional and non-functional behavior, such as performance or reliability.

    Analysis and Verification of Service Contracts

    Get PDF

    Model based test suite minimization using metaheuristics

    Get PDF
    Software testing is one of the most widely used methods for quality assurance and fault detection. However, it is also one of the most expensive, tedious and time-consuming activities in the software development life cycle. Code-based and specification-based testing have been practiced for almost four decades. Model-based testing (MBT) is a relatively new approach in which software models, as opposed to other artifacts such as source code, are used as the primary source of test cases. Models are simplified representations of a software system and are cheaper to execute than the original or deployed system. The main objective of the research presented in this thesis is the development of a framework for improving the efficiency and effectiveness of test suites generated from UML models. It focuses on three activities: transformation of an Activity Diagram (AD) model into a Colored Petri Net (CPN) model, generation and evaluation of an AD-based test suite, and optimization of that test suite. The Unified Modeling Language (UML) is a de facto standard for software system analysis and design. UML models can be categorized into structural and behavioral models. The AD is a behavioral UML model and, since the major revision in UML version 2.x, has a Petri-Net-like semantics. It has a wide application scope including embedded, workflow and web-service systems, which is why this thesis concentrates on AD models. The informal semantics of UML in general, and of ADs in particular, is a major challenge in the development of UML-based verification and validation tools. One solution is to transform a UML model into an executable formal model. In this thesis, a three-step transformation methodology is proposed that resolves ambiguities in an AD model and then transforms it into a CPN representation, a well-known formal language with extensive tool support. Test case generation is one of the most critical and labor-intensive activities in the testing process. The flow-oriented semantics of ADs suits the modeling of both sequential and concurrent systems. The thesis presents a novel technique to generate test cases from ADs using a stochastic algorithm. To determine whether the generated test suite is adequate, two adequacy analysis techniques based on structural coverage and mutation are proposed. For structural coverage, two separate coverage criteria evaluate the adequacy of the test suite from the sequential and the concurrent perspective. Mutation analysis is a fault-based technique that determines whether the test suite is adequate for detecting particular types of faults; four categories of mutation operators are defined to seed specific faults into the mutant model. Another focus of the thesis is to improve test suite efficiency without compromising effectiveness. One way of achieving this is to identify and remove redundant test cases. Test suite minimization by removing redundant test cases has been shown to be a combinatorial optimization problem. An evolutionary-computation-based minimization technique is developed to address it, and its performance is empirically compared with other well-known heuristic algorithms. Additionally, statistical analysis is performed to characterize the fitness landscape of test suite minimization problems. The proposed minimization solution is extended to include multi-objective minimization. As redundancy is contextual, different criteria and their combinations can significantly change the resulting test suite. Therefore, the last part of the thesis describes an investigation into multi-objective test suite minimization and optimization algorithms. The proposed framework is demonstrated and evaluated using prototype tools and case-study models. Empirical results show that the techniques developed within the framework are effective for model-based test suite generation and optimization.
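
    As a point of reference only (this is the classic greedy baseline, not the evolutionary technique proposed in the thesis): a greedy heuristic repeatedly picks the test case that covers the most not-yet-covered elements. Test names and coverage sets are invented for illustration.

        // Assumed example data; coverage sets would normally come from executing
        // the test suite against the model or the system under test.
        import java.util.*;

        public class GreedyMinimization {

            public static void main(String[] args) {
                // Which model elements (e.g. activity edges) each test case covers.
                Map<String, Set<String>> coverage = Map.of(
                        "t1", Set.of("e1", "e2", "e3"),
                        "t2", Set.of("e2", "e4"),
                        "t3", Set.of("e3", "e4", "e5"),
                        "t4", Set.of("e1", "e5"));

                Set<String> uncovered = new HashSet<>();
                coverage.values().forEach(uncovered::addAll);

                List<String> minimized = new ArrayList<>();
                while (!uncovered.isEmpty()) {
                    // Pick the test case covering the most still-uncovered elements.
                    String best = null;
                    int bestGain = -1;
                    for (String t : coverage.keySet()) {
                        Set<String> gain = new HashSet<>(coverage.get(t));
                        gain.retainAll(uncovered);
                        if (gain.size() > bestGain) {
                            bestGain = gain.size();
                            best = t;
                        }
                    }
                    minimized.add(best);
                    uncovered.removeAll(coverage.get(best));
                }
                // Two of the four tests suffice to cover e1..e5 (exact picks may vary).
                System.out.println("Minimized suite: " + minimized);
            }
        }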

    Bringing Model Checking Closer To Practical Software Engineering

    Get PDF
    Bal, H.E. [Promotor], Templon, J.A. [Copromotor], Willemse, T.A.C. [Copromotor]

    Formal Object Interaction Language: Modeling and Verification of Sequential and Concurrent Object-Oriented Software

    Get PDF
    As software systems become larger and more complex, developers require the ability to model abstract concepts while ensuring consistency across the entire project. The internet has changed the nature of software by increasing the desire for software deployment across multiple distributed platforms. Finally, increased dependence on technology requires assurance that designed software will perform its intended function. This thesis introduces the Formal Object Interaction Language (FOIL). FOIL is a new object-oriented modeling language specifically designed to address the cumulative shortcomings of existing modeling techniques. FOIL graphically displays software structure, sequential and concurrent behavior, process, and interaction in a simple unified notation, and has an algebraic representation based on a derivative of the π-calculus. The thesis documents how FOIL software models can be mathematically verified to anticipate deadlocks, ensure consistency, and determine object state reachability. Scalability is offered through the concept of behavioral inheritance, and FOIL’s inherent support for modeling concurrent behavior and all known workflow patterns is demonstrated. The concepts of process achievability, process complete achievability, and process determinism are introduced, together with an algorithm for simulating the execution of a FOIL object model using a FOIL process model. Finally, a technique for using a FOIL process model as a constraint on FOIL object system execution is offered as a method to ensure that object-oriented systems modeled in FOIL will complete their process-based activities. FOIL’s capabilities are compared and contrasted with an extensive array of current software modeling techniques. FOIL is ideally suited for data-aware, behavior-based systems such as interactive or process management software.
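
    For illustration only (not FOIL's actual algorithm): the kind of check such a verifier performs can be sketched as exploring the reachable states of a small hand-written transition system and flagging reachable non-final states with no outgoing transitions as potential deadlocks. The states, actions, and the designated final state are assumptions made for this sketch.

        // Assumed example: a tiny labeled transition system checked for
        // reachability and for stuck (deadlocked) states.
        import java.util.*;

        public class ReachabilityCheck {

            // State -> outgoing transitions (action -> successor state).
            static final Map<String, Map<String, String>> LTS = Map.of(
                    "Idle",    Map.of("start", "Running"),
                    "Running", Map.of("pause", "Paused", "finish", "Done"),
                    "Paused",  Map.of("resume", "Running"),
                    "Done",    Map.of());   // intended final state

            public static void main(String[] args) {
                // Breadth-first exploration of the states reachable from "Idle".
                Set<String> reachable = new LinkedHashSet<>();
                Deque<String> work = new ArrayDeque<>(List.of("Idle"));
                while (!work.isEmpty()) {
                    String s = work.pop();
                    if (reachable.add(s)) {
                        work.addAll(LTS.getOrDefault(s, Map.of()).values());
                    }
                }
                System.out.println("Reachable states: " + reachable);

                // A reachable state with no outgoing transitions that is not the
                // intended final state would be reported as a potential deadlock.
                for (String s : reachable) {
                    if (LTS.getOrDefault(s, Map.of()).isEmpty() && !s.equals("Done")) {
                        System.out.println("Potential deadlock in state: " + s);
                    }
                }
            }
        }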