7 research outputs found

    Spinal Test Suites for Software Product Lines

    A major challenge in testing software product lines is efficiency. In particular, testing a product line should take less effort than testing each and every product individually. We address this issue in the context of input-output conformance testing, which is a formal theory of model-based testing. We extend the notion of conformance testing on input-output featured transition systems with the novel concept of spinal test suites. We show how this concept dispenses with retesting the common behavior among different, but similar, products of a software product line. Comment: In Proceedings MBT 2014, arXiv:1403.704
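    To make the featured-transition-system idea concrete, the sketch below shows, under assumed names and a much-simplified data model (Transition, project, spine, none of which come from the paper), how transitions guarded by feature expressions can be projected onto individual products and how behavior shared by every product can be identified so it is exercised once rather than per product.

```python
# A minimal sketch of a featured transition system (FTS).
# Names and the data model are illustrative assumptions, not the paper's definitions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    source: str
    label: str        # e.g. "?insert_coin" (input) or "!dispense_tea" (output)
    target: str
    guard: frozenset  # features that must be selected; empty = common behavior

FTS = [
    Transition("idle", "?insert_coin", "paid", frozenset()),            # common
    Transition("paid", "!dispense_tea", "idle", frozenset({"Tea"})),    # variable
    Transition("paid", "!dispense_coffee", "idle", frozenset({"Coffee"})),
]

def project(fts, selected_features):
    """Transitions of the single product defined by a feature selection."""
    return [t for t in fts if t.guard <= selected_features]

def spine(fts, products):
    """Transitions shared by every product: exercise these once, not per product."""
    shared = None
    for features in products:
        transitions = set(project(fts, features))
        shared = transitions if shared is None else shared & transitions
    return shared or set()

products = [frozenset({"Tea"}), frozenset({"Coffee"})]
common = spine(FTS, products)
print(f"{len(common)} common transition(s) need not be retested per product")
```

    The intersection computed by spine corresponds, loosely, to the common behavior that a spinal test suite avoids retesting across similar products.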

    Uma solução de implantação auto-adaptativa para plataformas Android

    Advisor: Cecília Mary Fischer Rubira. Master's dissertation (mestrado), Universidade Estadual de Campinas, Instituto de Computação. Abstract: Mobile devices nowadays provide capabilities similar to those of a personal computer of a decade ago, allowing the development of complex applications. Consequently, these mobile applications may need to tolerate failures at runtime. However, most of today's mobile applications are deployed using static configurations, making it difficult to tolerate failures during their execution. We propose a self-adaptive deployment infrastructure to deal with this problem. Our solution offers an autonomic loop that manages the current configuration model of the application using a dynamic feature model associated with its architectural model. At runtime, according to the dynamic feature selection, the deployed architectural model is reconfigured to provide a new deployment solution. An Android application was implemented using the proposed solution; during its execution, the availability of services was altered so that its current configuration was changed dynamically in order to tolerate the unavailability of those services. Master's degree, Ciência da Computação (Computer Science). Grant 131830/2013-9, CNP
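    The sketch below is a rough, hypothetical illustration of the kind of autonomic loop described above: a monitor/analyze/plan/execute cycle that adapts the feature selection when a required service becomes unavailable and rederives the components to deploy from a feature-to-component mapping. The feature model, the alternatives, and the mapping are invented for illustration and are not the actual Android infrastructure.

```python
# Illustrative-only sketch of an autonomic loop that reconfigures a deployment
# when a service becomes unavailable. All names and mappings are invented examples.

FEATURE_TO_COMPONENTS = {
    "Core":       ["ui.main_module"],
    "CloudSync":  ["sync.cloud_client"],
    "LocalCache": ["sync.local_store"],
}
# Alternative features that can replace an unavailable one.
ALTERNATIVES = {"CloudSync": "LocalCache"}

def analyze(selection, available_services):
    """Return an adapted feature selection given which services are reachable."""
    adapted = set(selection)
    for feature in selection:
        if feature not in available_services and feature in ALTERNATIVES:
            adapted.discard(feature)
            adapted.add(ALTERNATIVES[feature])
    return adapted

def plan(selection):
    """Derive the architectural configuration (components) from the features."""
    return sorted(c for f in selection for c in FEATURE_TO_COMPONENTS[f])

def autonomic_step(selection, available_services, deploy):
    """One pass of the loop: adapt the selection and redeploy if it changed."""
    new_selection = analyze(selection, available_services)
    if new_selection != selection:
        deploy(plan(new_selection))   # execute: redeploy the new configuration
    return new_selection

# Example: the cloud service goes down, so the configuration switches to LocalCache.
current = {"Core", "CloudSync"}
current = autonomic_step(current, available_services={"Core"}, deploy=print)
```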

    Modellbasiertes Regressionstesten von Varianten und Variantenversionen

    The quality assurance of software product lines (SPL) achieved via testing is a crucial and challenging activity of SPL engineering. In general, the application of single-software testing techniques for SPL testing is not practical, as it leads to the individual testing of a potentially vast number of variants. Testing each variant in isolation further results in redundant test-case executions due to the shared commonality. Existing techniques for SPL testing cope with these challenges, e.g., by identifying samples of variants to be tested. However, each variant is still tested separately, without taking the explicit knowledge about the shared commonality and variability into account to reduce the overall testing effort. Furthermore, due to the increasing longevity of software systems, their development has to face software evolution. Hence, quality also has to be assured after SPL evolution by testing the respective versions of variants. In this thesis, we tackle the challenges of testing redundancy as well as evolution by proposing a framework for model-based regression testing of evolving SPLs. The framework facilitates efficient incremental testing of variants and versions of variants by exploiting the commonality and reuse potential of test artifacts and test results. Our contribution is divided into three parts. First, we propose a test-modeling formalism capturing the variability and version information of evolving SPLs in an integrated fashion. The formalism builds the basis for the automatic derivation of reusable test cases and for the application of change impact analysis to guide retest test selection. Second, we introduce two techniques for incremental change impact analysis to identify (1) changing execution dependencies to be retested between subsequently tested variants and versions of variants, and (2) the impact of an evolution step on the variant set in terms of modified, new, and unchanged versions of variants. Third, we define a coverage-driven retest test selection based on a new retest coverage criterion that incorporates the results of the change impact analysis. The retest test selection facilitates the reduction of redundantly executed test cases during incremental testing of variants and versions of variants. The framework is prototypically implemented and evaluated by means of three evolving SPLs, showing that it reduces the overall effort for testing evolving SPLs.
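    As a rough illustration of the retest-selection idea (not the thesis's formalism or tooling), the sketch below classifies the variants of an evolution step as new, modified, or unchanged, and selects for re-execution only those test cases that cover at least one execution dependency reported as changed by the impact analysis; all data structures and example data are assumptions.

```python
# Sketch of incremental retest selection between two subsequently tested
# variant versions. All data structures and example data are illustrative.

def classify_variants(old_versions, new_versions):
    """Classify the variants of an evolution step as new, modified, or unchanged."""
    classes = {}
    for variant, version in new_versions.items():
        if variant not in old_versions:
            classes[variant] = "new"
        elif old_versions[variant] != version:
            classes[variant] = "modified"
        else:
            classes[variant] = "unchanged"
    return classes

def select_retests(test_cases, changed_dependencies):
    """Re-execute only test cases covering at least one changed execution dependency."""
    retest, reusable = [], []
    for test, covered in test_cases.items():
        (retest if covered & changed_dependencies else reusable).append(test)
    return retest, reusable

# Hypothetical evolution step: variant A was modified, variant B is new.
print(classify_variants({"A": 1}, {"A": 2, "B": 1}))

# Hypothetical test coverage and impact-analysis result for the modified variant A.
test_cases = {
    "tc_pay_cash": {("idle", "pay"), ("pay", "dispense")},
    "tc_pay_card": {("idle", "card"), ("card", "dispense")},
}
changed = {("card", "dispense")}
retest, reusable = select_retests(test_cases, changed)
print("retest:", retest, "| reuse results of:", reusable)
```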

    Supporting Change in Product Lines Within the Context of Use Case-driven Development and Testing

    Product Line Engineering (PLE) is a crucial practice in many software development environments where systems are complex and developed for multiple customers with varying needs. At the same time, many business contexts are use case-driven, where use cases are the main artifacts driving requirements elicitation and many other development activities. In these contexts, variability information is often not explicitly represented, which leads to ad-hoc change management for use cases, domain models and test cases in product families. In this thesis, we address the problems of modeling variability in requirements with additional traceability to feature models, and of the manual and error-prone requirements configuration and regression testing in product families. We provide the following contributions:
    - A modeling method for capturing variability information in product line use case and domain models by relying exclusively on commonly used artifacts in use case-driven development, thus avoiding unnecessary modeling overhead.
    - An approach for automated configuration of product-specific use case and domain models that guides customers in making configuration decisions and automatically generates use case diagrams, use case specifications, and domain models for configured products.
    - A change impact analysis approach for evolving configuration decisions in product line use case models that automatically identifies the impact of decision changes on other decisions, and incrementally reconfigures product-specific use case diagrams and specifications for evolving decisions.
    - An approach for automated classification and prioritization of system test cases in a family of products that automatically classifies and prioritizes, for each new product, the system test cases of previous product(s) in a product line, and provides guidance in modifying existing system test cases to cover new use case scenarios that have not been tested in the product line before.
    All our approaches have been developed and evaluated in close collaboration with our industry partner IEE.
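    As a rough illustration of the test classification and prioritization idea, the sketch below compares the use case scenarios of a new product against those exercised by existing system test cases: tests for unchanged scenarios are reusable, tests for changed scenarios need modification, tests for scenarios the product does not include are not applicable, and scenarios without any test call for new test cases. The names, scenarios, and ordering rule are illustrative assumptions, not the approach's actual classification scheme.

```python
# Illustrative classification of existing system test cases for a new product
# in a product family, plus a simple prioritization. Not the tool's actual rules.

def classify_tests(existing_tests, product_scenarios, changed_scenarios):
    """existing_tests maps a test id to the use case scenario it exercises."""
    classes = {}
    for test, scenario in existing_tests.items():
        if scenario not in product_scenarios:
            classes[test] = "not applicable"
        elif scenario in changed_scenarios:
            classes[test] = "needs modification"
        else:
            classes[test] = "reusable"
    untested = product_scenarios - set(existing_tests.values())
    return classes, untested   # untested scenarios need newly written test cases

def prioritize(classes):
    """Run tests needing modification first, then reusable ones (illustrative order)."""
    order = {"needs modification": 0, "reusable": 1, "not applicable": 2}
    return sorted(classes, key=lambda t: order[classes[t]])

# Hypothetical data for a new product in the family.
existing = {"ST1": "recognize_occupant", "ST2": "detect_child_seat"}
product_scenarios = {"recognize_occupant", "classify_adult"}
changed = {"recognize_occupant"}
classes, untested = classify_tests(existing, product_scenarios, changed)
print(prioritize(classes), "| new tests needed for:", untested)
```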

    A Survey of Model-Based Software Product Lines Testing
