
    Ideals : evolvability of software-intensive high-tech systems : a collaborative research project on maintaining complex embedded systems


    Bridging formal models : an engineering perspective

    The thesis presents different techniques that can be used to build formal behavioral models. If modal properties are formulated, the models can be subjected to verification techniques to determine whether a model possesses the desired properties. However, many native environments do not provide tools or techniques to verify them. Hence, these models need to be transformed into other models that provide suitable techniques for formal analysis. The transformations are classified into two engineering approaches, namely syntactically engineered models and semantically engineered models. Syntactically engineered models are constructed from input specifications without explicitly considering the semantics; semantically engineered models are constructed from input specifications by explicitly considering the semantics. The syntactic engineering approach presents four dedicated modeling techniques that construct formal models or disseminate verification results for them. The first modeling technique describes a way to create models from system descriptions that specify concurrent behavior. Here, we model three variations of a 2×2 switch, for which the models are subsequently compared to models created in the specification languages TLA+, Bluespec, Statecharts, and ACP. The comparison validates that mCRL2 is a suitable specification language for modeling system descriptions and specifying the behavior of prototype systems. The second syntactic technique constructs an mCRL2 model from a software implementation that operates a printer for printing Printed Circuit Boards. The model is used to advise (other) software engineers on dangerous language constructs in the control software; to this end, the model is model checked for various safety properties. The implementation is modeled through an over-approximation of its behavior by abstracting from program variables, such that only interface calls between processes and non-deterministic choices in procedures remain. The third modeling technique describes a language transformation from the Chi 2.0 language to the mCRL2 language. The purpose of the transformation is to make model checking techniques available for the discrete part of the Chi 2.0 language.
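    The over-approximation used in the second technique can be illustrated with a small sketch (hypothetical procedure and call names; the thesis itself targets mCRL2): program variables are abstracted away, each procedure reduces to the interface-call sequences it may perform, data-dependent branches become nondeterministic choices, and a safety property is then checked over all abstract traces.

```python
# Minimal sketch of the over-approximation idea (illustrative names,
# not the thesis tooling).
from itertools import product

# Each procedure maps to the alternative interface-call sequences it
# may perform; data-dependent branches become nondeterministic choice.
PROCEDURES = {
    "load_board":  [["acquire_gripper", "move_to_feeder"],
                    ["acquire_gripper", "signal_error"]],
    "print_board": [["start_print", "stop_print"]],
}

def abstract_traces(order):
    """All traces of the abstracted procedures run in sequence."""
    for choice in product(*(PROCEDURES[p] for p in order)):
        yield [call for seq in choice for call in seq]

def violates_safety(trace):
    """Safety property: 'stop_print' never occurs before 'start_print'."""
    started = False
    for call in trace:
        started = started or call == "start_print"
        if call == "stop_print" and not started:
            return True
    return False

# Because the model over-approximates the implementation, a property
# that holds on all abstract traces also holds on the real system.
assert not any(violates_safety(t)
               for t in abstract_traces(["load_board", "print_board"]))
```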

    Verification of Model Transformations


    Automated specification-based testing of graphical user interfaces

    Doctoral thesis in Electronic and Computer Engineering, 2006. Faculdade de Engenharia, Universidade do Porto; Departamento de Informática, Escola de Engenharia, Universidade do Minho.

    Resource-based Verification for Robust Composition of Aspects

    Aspect Oriented Software Development has been proposed as a means to improve the modularization of software in the presence of crosscutting concerns. Compared to object-oriented or procedural approaches, Aspect Oriented Programming (AOP) has not yet been applied in many industrial applications. In this thesis we investigate the application of AOP within an industrial context and propose a novel solution to the problem of behavioral conflicts between aspects. We report on our experience transferring an aspect-oriented solution to a company called Advanced Semiconductor Material Lithography (ASML). We investigate the acceptance criteria for AOP in industry, based on two industrial case studies. We present a process that includes quantification of the benefits of AOP and elicitation of key worries expressed by stakeholders. We conducted a controlled experiment to assess the advantages and disadvantages of an aspect-based approach using a tracing example. Twenty developers from ASML were asked to carry out five maintenance scenarios. The experiment showed that, when the tracing concern is implemented using an AOP implementation instead of a procedural language, the development effort for maintaining code related to tracing is reduced by 6% on average, while the impact of errors is reduced by 77%. For a subset of the scenarios, the results were statistically significant at a 95% confidence level. The so-called aspect interference problem is one of the major concerns in introducing AOP. Aspects can be developed independently and behave correctly in isolation; however, due to intended or unintended composition of aspects, undesired behavior can emerge. In this thesis we focus on behavioral conflicts between aspects at shared join points, illustrated by a realistic example based on crosscutting concerns from ASML. We present an approach for the detection of behavioral interference that is based on a novel abstraction of the behavior of aspects, using resources and operations. This abstraction enables the expression of behavior in a simple manner that is suitable for automated detection of interference among aspects. The approach employs a set of conflict detection rules that can be used to detect both generic and domain-specific conflicts. Our approach is general for AOP languages; its application to one specific AOP language, Composition Filters, is also illustrated in this thesis. The application to Composition Filters demonstrates how the use of a declarative advice language can be exploited for automated conflict detection. We detail the analysis process and discuss what information is required from the aspect developer to be able to perform the analysis. We also discuss when static analysis is insufficient for detecting behavioral conflicts, and present a runtime extension aimed at detecting dynamic conflicts, together with optimizations that exploit the static verification results. Finally, we propose three improvements to the Composition Filters model to support automated and manual reasoning even further. The first improvement separates what behavior is executed from when this behavior is executed. Secondly, we introduce atomic filters that can be used to build more complex filters; the semantics of these filters are well defined. Although this approach has clear benefits from an automated reasoning perspective, the introduction of atomic filters results in the definition of numerous filters for specifying more complex behavior. Therefore, as the third improvement, we introduce a filter composition language that enables the declarative composition of (atomic) filters, such that composed filter behavior can be reused elsewhere.
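    The resource/operation abstraction lends itself to a compact illustration (a minimal sketch with hypothetical aspects, not the thesis implementation): each aspect's advice at a shared join point is reduced to a sequence of (resource, operation) pairs, and a generic detection rule flags compositions whose outcome depends on aspect ordering.

```python
# Minimal sketch of resource/operation-based conflict detection
# (illustrative names only).
from itertools import permutations

# Hypothetical advice at a shared join point: a logging aspect reads
# the message argument, an encryption aspect replaces (writes) it.
ASPECTS = {
    "logging":    [("message", "read")],
    "encryption": [("message", "write")],
}

def conflicts(sequence):
    """Generic rule: a read and a write on the same resource by two
    different aspects is order-dependent, hence a potential conflict."""
    for i, (aspect_a, (res_a, op_a)) in enumerate(sequence):
        for aspect_b, (res_b, op_b) in sequence[i + 1:]:
            if (aspect_a != aspect_b and res_a == res_b
                    and {op_a, op_b} == {"read", "write"}):
                yield aspect_a, aspect_b, res_a

# Check every possible composition order of the aspects.
for order in permutations(ASPECTS):
    sequence = [(a, step) for a in order for step in ASPECTS[a]]
    for a, b, resource in conflicts(sequence):
        print(f"order {order}: {a} and {b} interfere on '{resource}'")
```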

    A formal framework for specification-based embedded real-time system engineering

    Thesis (Ph.D.) -- Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2008. Includes bibliographical references (v. 2, p. 517-545).

    The increasing size and complexity of modern software-intensive systems present novel challenges when engineering high-integrity artifacts within aggressive budgetary constraints. Among these challenges, ensuring confidence in the engineered system, through validation and verification activities, represents the highest-cost item on many projects. The expensive nature of engineering high-integrity systems using traditional approaches can be partly attributed to the lack of analysis facilities during the early phases of the lifecycle, causing the validation and verification activities to begin too late in the engineering lifecycle. Other challenges include the management of complexity, opportunities for reuse without compromising confidence, and the ability to trace system features across lifecycle phases. The use of models as a specification mechanism provides an approach to mitigate complexity through abstraction. Furthermore, if the specification approach has formal underpinnings, the use of models can be leveraged to automate engineering activities such as formal analysis and test case generation. The research presented in this thesis proposes an engineering framework which addresses the high cost of validation and verification activities through specification-based system engineering. More specifically, the framework provides an integrated approach to embedded real-time system engineering which incorporates specification, simulation, formal verification, and test case generation. The framework aggregates the state of the art in individual software engineering disciplines to provide an end-to-end approach to embedded real-time system engineering. The key aspects of the framework include:

    * A novel specification language, the Timed Abstract State Machine (TASM) language, which extends the theory of Abstract State Machines (ASM). The TASM language is a literate formal specification language which can be applied at multiple levels of abstraction and which can express the three key aspects of embedded real-time systems: function, time, and resources.

    * Automated verification capabilities achieved through the integration of mature analysis engines, namely the UPPAAL tool suite and the SAT4J SAT solver. The verification capabilities provided by the framework include completeness and consistency verification, model checking, execution time analysis, and resource consumption analysis.

    * Bi-directional traceability of model features across levels of abstraction and lifecycle phases. Traceability is achieved syntactically through archetypical refinement types; each refinement type provides correctness criteria which, if met, guarantee semantic integrity through the refinement.

    * Automated test case generation capabilities for unit testing, integration testing, and regression testing. Unit test cases are generated to achieve TASM specification coverage through the rule coverage criterion. Integration test case generation is achieved through the hierarchical composition of unit test cases. Regression test case generation is achieved by leveraging the bi-directional traceability of model features.

    The framework is implemented in an integrated tool suite, the TASM toolset, which incorporates the UPPAAL tool suite and the SAT4J SAT solver. The toolset and framework are evaluated through experimentation on three industrial case studies: an automated manufacturing system, a "drive-by-wire" system used at a major automotive manufacturer, and a scripting environment used on the International Space Station. By Martin Ouimet, Ph.D.
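    The rule-based, time- and resource-annotated execution style described above can be illustrated with a small sketch (illustrative only: the names, rules, and semantics here are not TASM syntax or the TASM toolset): each rule carries a guard, an update, a duration, and a resource annotation, and a run records elapsed time, peak resource use, and which rules fired, mirroring the rule coverage criterion used for unit test generation.

```python
# Minimal sketch of a timed, resource-annotated abstract state machine
# (hypothetical rules; not the TASM language or toolset).

RULES = [
    # (name, guard, update, duration, resources)
    ("start",  lambda s: s["mode"] == "idle",
               lambda s: s.update(mode="busy"), 2, {"cpu": 40}),
    ("finish", lambda s: s["mode"] == "busy",
               lambda s: s.update(mode="idle", done=s["done"] + 1), 3, {"cpu": 10}),
]

def run(state, steps):
    clock, peak_cpu, fired = 0, 0, set()
    for _ in range(steps):
        enabled = [r for r in RULES if r[1](state)]
        if not enabled:
            break                                   # no rule enabled: halt
        name, _, update, duration, resources = enabled[0]
        update(state)                               # apply the update set
        clock += duration                           # time annotation
        peak_cpu = max(peak_cpu, resources["cpu"])  # resource annotation
        fired.add(name)                             # rule coverage bookkeeping
    return clock, peak_cpu, fired

clock, peak_cpu, fired = run({"mode": "idle", "done": 0}, steps=4)
print(clock, peak_cpu, sorted(fired))  # 10 40 ['finish', 'start']
```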

    Process mining : conformance and extension

    Today’s business processes are realized by a complex sequence of tasks that are performed throughout an organization, often involving people from different departments and multiple IT systems. For example, an insurance company has a process to handle insurance claims for their clients, and a hospital has processes to diagnose and treat patients. Because there are many activities performed by different people throughout the organization, there is a lack of transparency about how exactly these processes are executed. However, understanding the process reality (the "as is" process) is the first necessary step to save cost, increase quality, or ensure compliance. The field of process mining aims to assist in creating process transparency by automatically analyzing processes based on existing IT data. Most processes are supported by IT systems nowadays. For example, Enterprise Resource Planning (ERP) systems such as SAP log all transaction information, and Customer Relationship Management (CRM) systems are used to keep track of all interactions with customers. Process mining techniques use this low-level log data (so-called event logs) to automatically generate process maps that visualize the process reality from different perspectives. For example, it is possible to automatically create process models that describe the causal dependencies between activities in the process. So far, process mining research has mostly focused on the discovery aspect (i.e., the extraction of models from event logs). This dissertation broadens the field of process mining to include the aspects of conformance and extension. Conformance aims at the detection of deviations from documented procedures by comparing the real process (as recorded in the event log) with an existing model that describes the assumed or intended process. Conformance is relevant for two reasons:

    1. Most organizations document their processes in some form. For example, process models are created manually to understand and improve the process, comply with regulations, or for certification purposes. In the presence of existing models, it is often more important to point out the deviations from these existing models than to discover completely new models. Discrepancies emerge because business processes change, or because the models did not accurately reflect the real process in the first place (due to the manual and subjective creation of these models). If the existing models do not correspond to the actual processes, then they have little value.

    2. Automatically discovered process models typically do not completely "fit" the event logs from which they were created. These discrepancies are due to noise and/or limitations of the used discovery techniques. Furthermore, in the context of complex and diverse process environments, the discovered models often need to be simplified to obtain useful insights. Therefore, it is crucial to be able to check how much a discovered process model actually represents the real process. Conformance techniques can be used to quantify the representativeness of a mined model before drawing further conclusions; they thus constitute an important quality measurement for effectively using process discovery techniques in a practical setting.

    Once one is confident in the quality of an existing or discovered model, extension aims at the enrichment of these models by integrating additional characteristics such as time, cost, or resource utilization.
By extracting additional information from an event log and projecting it onto an existing model, bottlenecks can be highlighted and correlations with other process perspectives can be identified. Such an integrated view on the process is needed to understand root causes for potential problems and actually make process improvements. Furthermore, extension techniques can be used to create integrated simulation models from event logs that resemble the real process more closely than manually created simulation models.

    In Part II of this thesis, we provide a comprehensive framework for the conformance checking of process models. First, we identify fitness, precision/generalization, and structure as the relevant conformance dimensions. We develop several Petri-net based approaches to measure conformance in these dimensions and describe five case studies in which we successfully applied these conformance checking techniques to real and artificial examples. Furthermore, we provide a detailed literature review of related conformance measurement approaches (Chapter 4). Then, we study existing model evaluation approaches from the field of data mining. We develop three data mining-inspired evaluation approaches for discovered process models: one based on Cross Validation (CV), one based on the Minimal Description Length (MDL) principle, and one using methods based on Hidden Markov Models (HMMs). We conclude that process model evaluation faces similar yet different challenges compared to traditional data mining; additional challenges emerge from the sequential nature of the data and the higher-level process models, which include concurrent dynamic behavior (Chapter 5). Finally, we point out current shortcomings and identify general challenges for conformance checking techniques. These challenges relate to the applicability of the conformance metric, the metric quality, and the bridging of different process modeling languages. We develop a flexible, language-independent conformance checking approach that provides a starting point to effectively address these challenges (Chapter 6).

    In Part III, we develop a concrete extension approach, provide a general model for process extensions, and apply our approach to the creation of simulation models. First, we develop a Petri-net based decision mining approach that aims at the discovery of decision rules at process choice points based on data attributes in the event log. While we leverage classification techniques from the data mining domain to actually infer the rules, we identify the challenges that relate to the initial formulation of the learning problem from a process perspective. We develop a simple approach to partially overcome these challenges, and we apply it in a case study (Chapter 7). Then, we develop a general model for process extensions to create integrated models including the process, data, time, and resource perspectives. We develop a concrete representation based on Coloured Petri-nets (CPNs) to implement and deploy this model for simulation purposes (Chapter 8). Finally, we evaluate the quality of automatically discovered simulation models in two case studies and extend our approach to allow for operational decision making by incorporating the current process state as a non-empty starting point in the simulation (Chapter 9). Chapter 10 concludes this thesis with a detailed summary of the contributions and a list of limitations and future challenges.
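    As an illustration of the fitness dimension used in Part II, one common token-based replay formulation can be sketched as follows (a minimal sketch over a toy sequential net; the thesis approach itself is Petri-net based and implemented in ProM): tokens are replayed through the model, missing and remaining tokens are counted as penalties, and fitness combines the two ratios.

```python
# Minimal token-replay fitness sketch (illustrative model and traces):
# fitness = 0.5*(1 - missing/consumed) + 0.5*(1 - remaining/produced)

# A sequential net: p0 -[A]-> p1 -[B]-> p2 -[C]-> p3.
TRANSITIONS = {"A": ("p0", "p1"), "B": ("p1", "p2"), "C": ("p2", "p3")}

def replay_fitness(trace):
    tokens = {"p0": 1, "p1": 0, "p2": 0, "p3": 0}  # initial marking
    produced, consumed, missing = 1, 0, 0          # env. produces start token
    for event in trace:
        src, dst = TRANSITIONS[event]
        if tokens[src] == 0:
            missing += 1      # input token missing: force-fire the transition
        else:
            tokens[src] -= 1
        consumed += 1
        tokens[dst] += 1
        produced += 1
    consumed += 1             # environment consumes the end token
    if tokens["p3"] == 0:
        missing += 1          # final marking was not reached
    else:
        tokens["p3"] -= 1
    remaining = sum(tokens.values())
    return 0.5 * (1 - missing / consumed) + 0.5 * (1 - remaining / produced)

print(replay_fitness(["A", "B", "C"]))  # 1.0: the trace fits the model
print(replay_fitness(["A", "C"]))       # about 0.67: skipping B is penalized
```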
The work presented in this dissertation is supported and accompanied by concrete implementations, which have been integrated in the ProM and ProMimport frameworks. Appendix A provides a comprehensive overview of the functionality of the developed software. The results presented in this dissertation have been published in more than twenty peer-reviewed scientific publications, including several high-quality journals.
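    The decision-mining idea from Part III can likewise be sketched (illustrative data; a standard scikit-learn decision tree stands in for the classification techniques mentioned above): each time a case passes a process choice point, its attribute values and the branch taken form one training instance, and the learned split is read back as a decision rule.

```python
# Minimal decision-mining sketch (hypothetical event-log data; assumes
# scikit-learn is available).
from sklearn.tree import DecisionTreeClassifier, export_text

# Attribute values observed at a claim-handling choice point, labeled
# with the branch each case actually took.
amounts  = [[120], [4500], [90], [7800], [310], [9900]]
branches = ["skip", "check", "skip", "check", "skip", "check"]

tree = DecisionTreeClassifier(max_depth=1).fit(amounts, branches)
print(export_text(tree, feature_names=["amount"]))
# Learned rule (illustrative): amount <= 2405.0 -> skip, else -> check
```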