
    A Systematic Approach to the Test of Combined HW/SW Systems

    Abstract: Today there is a significant gap between requirement specification and the generation of test environments. This article extends the Classification Tree Method for Embedded Systems (CTM/ES) to close this gap with new concepts for the precise specification of stimuli over the operational ranges of continuous control systems. It also introduces novel means for defining continuous acceptance criteria and functional coverage.
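    The notion of a continuous acceptance criterion can be made concrete with a small sketch. The Python fragment below is a minimal illustration, not the CTM/ES formalism itself: it checks a sampled response signal against a reference signal with an absolute tolerance band, one plausible reading of acceptance criteria over the operational range of a continuous control system. All names (check_tolerance_band, the sample count, the tolerance value) are hypothetical.

        # Minimal sketch of a continuous acceptance criterion: the system response
        # must stay within an absolute tolerance band around a reference signal.
        # All identifiers here are illustrative, not part of CTM/ES.

        def check_tolerance_band(reference, response, tolerance):
            """Return (passed, first_violation_index) for two equally sampled signals."""
            assert len(reference) == len(response), "signals must share a time base"
            for i, (ref, act) in enumerate(zip(reference, response)):
                if abs(act - ref) > tolerance:
                    return False, i
            return True, None

        # Example: a step response must track the reference within +/- 0.05
        # over the whole observed range.
        reference = [0.0] * 10 + [1.0] * 90                          # ideal step
        response  = [0.0] * 10 + [0.96 + 0.0004 * i for i in range(90)]
        passed, idx = check_tolerance_band(reference, response, tolerance=0.05)
        print("PASS" if passed else f"FAIL at sample {idx}")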

    Eine Technologie für das durchgängige und automatisierte Testen eingebetteter Software

    Embedded systems are subject to requirements that make testing their software necessary. Various technologies are used to automate this testing: testing close to the source code requires dedicated unit-test tools, while testing open- and closed-loop control functions requires Model-, Software- and Hardware-in-the-Loop tools. This thesis examines how these technologies can be integrated. The focus is on a new programming language for implementing test cases.
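    The integration problem the thesis addresses can be illustrated with a sketch: a single test case written against an abstract execution interface, runnable unchanged against different back ends (here a unit-level stub; a MIL/SIL/HIL driver would implement the same interface). This is a minimal illustration of the idea, not the thesis's actual test language; all class and method names are hypothetical.

        # Sketch: one test case, several execution back ends. Illustrative only.

        from abc import ABC, abstractmethod

        class TestBackend(ABC):
            @abstractmethod
            def set_input(self, name: str, value: float) -> None: ...
            @abstractmethod
            def step(self, dt: float) -> None: ...
            @abstractmethod
            def get_output(self, name: str) -> float: ...

        class UnitStubBackend(TestBackend):
            """Unit-level back end: calls the control function directly."""
            def __init__(self, control_fn):
                self.control_fn = control_fn
                self.inputs, self.outputs = {}, {}
            def set_input(self, name, value):
                self.inputs[name] = value
            def step(self, dt):
                self.outputs = self.control_fn(self.inputs, dt)
            def get_output(self, name):
                return self.outputs[name]

        def test_limiter(backend: TestBackend):
            """Level-independent test case: the output must saturate at 1.0."""
            backend.set_input("demand", 5.0)
            backend.step(0.01)
            assert backend.get_output("actuation") <= 1.0

        # Run against the unit-level stub; an in-the-loop back end would
        # subclass TestBackend the same way, leaving the test case untouched.
        test_limiter(UnitStubBackend(lambda ins, dt: {"actuation": min(ins["demand"], 1.0)}))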

    Model Based Test Generation and Optimization

    Abstract (Mohamed Mussa, Ph.D. thesis, Concordia University, 2015): Software testing is an essential activity in the software engineering process. It is used to enhance the quality of software products throughout the development process, inspecting different aspects of software quality such as correctness, performance and usability. Software testing consumes about 50% of the software development effort. Software products pass through several testing levels, the main ones being unit-level, component-level, integration-level, system-level and acceptance-level testing. Each level involves a sequence of tasks such as planning, modeling, execution and evaluation.

    Many systematic test generation approaches have been developed, using different languages and notations, but the majority target a single testing level; little effort has been directed toward systematic transition between testing levels. Given the incompatibility of these approaches, tailored compatibility tools are required between levels. Furthermore, several test models are usually generated to evaluate the implementation at each testing level, and there is redundancy among these models; reusing them efficiently is a significant challenge. The growing attention to model-driven methodologies binds development and testing activities together, but research is still required to link the testing levels.

    In this Ph.D. thesis, we propose a model-based testing framework that enables reusability and collaboration across the testing levels. Within this framework, we propose test generation and test optimization approaches that, at each level, consider artifacts generated at preceding levels: an approach for generating integration test models from component test models, and an approach for optimizing the acceptance test model using the integration test models. To conduct our research in a rigorous setting, we base the framework on a standard notation widely adopted for software development and testing, the Unified Modeling Language (UML).

    In the first approach, component test cases are examined to locate and select those that include an interaction among the integrated components. The selected test cases are merged to generate integration test cases, which tackles the theoretical research issue of merging test cases; the generated test cases are then mapped against each other to remove potential redundancies. In the second approach, acceptance test optimization, integration test models are compared to the acceptance test model in order to remove test cases that have already been exercised during integration-level testing. Not all integration test cases are suitable for this comparison: they must be examined to ensure that they do not include test stubs for system components.

    We have implemented prototypes of both approaches to demonstrate the effectiveness of our work. The first prototype implements the integration test generation approach: it accepts component test models and generates integration test models. The second implements the acceptance test optimization approach: it accepts integration test models along with the acceptance test model and generates an optimized acceptance test model.
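    The core idea of the first approach — selecting component test cases that exercise an interaction between the integrated components, merging them, and removing redundancies — can be sketched as follows. This is a simplified illustration under assumed data structures (test cases as message sequences), not the thesis's UML-based algorithm.

        # Sketch of integration-test generation: keep component test cases whose
        # message sequences cross the component boundary, then drop duplicates.
        # The data model (tuples of (sender, receiver, message)) is an assumption
        # for illustration; the thesis operates on UML test models.

        def touches_both(test_case, comp_a, comp_b):
            parties = {p for (sender, receiver, _msg) in test_case
                         for p in (sender, receiver)}
            return comp_a in parties and comp_b in parties

        def generate_integration_tests(component_tests, comp_a, comp_b):
            selected = [tc for tc in component_tests if touches_both(tc, comp_a, comp_b)]
            # Remove potential redundancies: identical sequences map to one test.
            unique = {tuple(tc): tc for tc in selected}
            return list(unique.values())

        component_tests = [
            [("Tester", "Ctrl", "init"), ("Ctrl", "Sensor", "read"), ("Sensor", "Ctrl", "value")],
            [("Tester", "Ctrl", "init"), ("Ctrl", "Sensor", "read"), ("Sensor", "Ctrl", "value")],
            [("Tester", "Ctrl", "selftest")],   # no Ctrl<->Sensor interaction: filtered out
        ]
        print(generate_integration_tests(component_tests, "Ctrl", "Sensor"))
        # -> one unique integration test covering the Ctrl<->Sensor interaction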

    Interactive Machine Learning for User-Innovation Toolkits – An Action Design Research approach

    Machine learning offers great potential to developers and end users in the creative industries. However, to better support creative software developers' needs and empower them as machine learning users and innovators, the usability of, and developer experience with, machine learning tools must be better understood. This thesis asks two research questions: How can we apply a user-centred approach to the design of developer tools for rapid prototyping with Interactive Machine Learning? And in what ways can we design better developer tools to accelerate and broaden innovation with machine learning?

    This thesis presents a three-year longitudinal action research study that I undertook within a multi-institutional consortium leading the EU H2020-funded Innovation Action RAPID-MIX. The scope of the research was the application of a user-centred approach to the design and evaluation of developer tools for rapid prototyping and product development with machine learning. The thesis presents my work in collaboration with other members of RAPID-MIX, including the design and deployment of a user-centred methodology for the project, interventions for gathering requirements with RAPID-MIX consortium stakeholders and end users, and the prototyping, development and evaluation of a software development toolkit for interactive machine learning.

    This thesis contributes new understanding of the consequences and implications of a user-centred approach to the design and evaluation of developer tools for rapid prototyping of interactive machine learning systems. This includes 1) new understanding of the goals, needs, expectations and challenges facing creative machine-learning non-expert developers, and 2) an evaluation of the usability and design trade-offs of a toolkit for rapid prototyping with interactive machine learning. It also contributes 3) a methods framework of User-Centred Design Actions for harmonising User-Centred Design with Action Research and for supporting collaboration between action researchers and practitioners working in rapid innovation actions, and 4) recommendations for applying Action Research and User-Centred Design in similar contexts and at similar scales.
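    As a rough illustration of the interactive-machine-learning workflow such a toolkit supports (record a few labelled examples, train near-instantly, classify live input), here is a minimal Python sketch using scikit-learn. It is not the RAPID-MIX toolkit's API; the sensor values and labels are invented for the example.

        # Minimal interactive-machine-learning loop: record a handful of labelled
        # examples, train in milliseconds, classify live input. Illustrative only;
        # this uses scikit-learn, not the RAPID-MIX toolkit's API.

        from sklearn.neighbors import KNeighborsClassifier

        # A designer records a few (sensor-vector, label) examples by demonstration.
        examples = [([0.1, 0.9], "open_hand"), ([0.2, 0.8], "open_hand"),
                    ([0.9, 0.1], "fist"),      ([0.8, 0.2], "fist")]
        X = [features for features, _ in examples]
        y = [label for _, label in examples]

        model = KNeighborsClassifier(n_neighbors=1).fit(X, y)   # near-instant training

        # Live loop: classify incoming sensor frames and drive the prototype.
        for frame in [[0.15, 0.85], [0.85, 0.15]]:
            print(frame, "->", model.predict([frame])[0])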

    TestML- A Test Exchange Language for Model-based Testing of Embedded Software

    Abstract: Test processes in the automotive industry are tool-intensive and affected by technologically heterogeneous test infrastructures. In industrial practice a product has to pass tests at several levels of abstraction, such as Model-in-the-Loop (MIL), Software-in-the-Loop (SIL) and Hardware-in-the-Loop (HIL) tests. Different test systems are applied for this purpose (e.g. dSPACE MTest, dSPACE Automation Desk, National Instruments TestStand), and almost every test system requires its own proprietary test description language. The exchange of tests between different test systems and the reuse of tests across test levels is normally not possible.

    Efforts to integrate these heterogeneous test environments, to address test exchange in a general manner, and to standardize and harmonize the existing language landscape are still in their early stages and are not tailored to the requirements of the automotive domain. To keep the whole development and test process efficient and manageable, an integrated and seamless approach is required.

    TestML, the test exchange language we present in this article, is defined to overcome the technological obstacles (different test-language syntax and semantics, different data formats and interface descriptions) that almost inevitably accompany the use of heterogeneous test tools and test infrastructures. TestML supports the exchange of tests between different test notations in a heterogeneous tool environment. In this paper, we introduce the XML schema of TestML and demonstrate the efficiency of the interchange format by giving examples from the model-based development of electronic control units. Tool support is illustrated by an application with Simulink/Stateflow.
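    Since the paper's contribution is an XML schema, a small sketch helps picture what an exchange document might look like. The element and attribute names below are invented for illustration — the actual TestML schema is defined in the paper — and the Python code uses only the standard library.

        # Sketch: emit a minimal TestML-like exchange document with the standard
        # library. Element and attribute names are illustrative assumptions; the
        # real TestML schema is defined by the paper, not reproduced here.

        import xml.etree.ElementTree as ET

        test = ET.Element("testml", version="0.1")
        case = ET.SubElement(test, "testcase", name="step_response", level="MIL")
        stim = ET.SubElement(case, "stimulus", signal="demand")
        ET.SubElement(stim, "step", time="0.0", value="1.0")
        expect = ET.SubElement(case, "expectation", signal="actuation")
        ET.SubElement(expect, "toleranceband", reference="demand", abs="0.05")

        print(ET.tostring(test, encoding="unicode"))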