7 research outputs found

    Evolutionary Search Techniques with Strong Heuristics for Multi-Objective Feature Selection in Software Product Lines

    Software design is a process of trading off competing objectives. If the user objective space is rich, then we should use optimizers that can fully exploit that richness. For example, this study configures software product lines (expressed as feature models) using various search-based software engineering methods. Our main result is that, as we increase the number of optimization objectives, the methods in widespread use (e.g. NSGA-II, SPEA2) perform much worse than IBEA (Indicator-Based Evolutionary Algorithm). IBEA works best since it makes the most use of user preference knowledge. Hence it not only does better on the standard measures (hypervolume and spread) but also generates far more products with zero violations of domain constraints. We also present significant improvements to IBEA's performance by employing three strong heuristic techniques that we call PUSH, PULL, and seeding. The PUSH technique forces the evolutionary search to respect certain rules and dependencies defined by the feature models, while the PULL technique gives higher weight to constraint satisfaction as an optimization objective and thus achieves a higher percentage of fully compliant configurations within shorter runtimes. The seeding technique helps guide very large feature models to correct configurations very early in the optimization process. Our conclusion is that the methods we apply in search-based software engineering need to be carefully chosen, particularly when studying complex decision spaces with many optimization objectives. We also conclude that search methods must be customized to fit the problem at hand; specifically, the evolutionary search must respect domain constraints.
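    The PULL idea described above, treating constraint satisfaction as an explicit optimization objective alongside the product-line objectives, can be sketched minimally as follows. The toy feature model, constraint, and cost/value numbers are invented for illustration and are not taken from the study:

```python
# Minimal sketch of the PULL idea: constraint violations become an
# explicit objective next to the usual product-line objectives.
# The toy feature model and cost/value numbers are hypothetical.

def count_violations(config, constraints):
    """Number of domain constraints the configuration breaks."""
    return sum(0 if rule(config) else 1 for rule in constraints)

def evaluate(config, constraints, cost, value):
    """Objective vector: minimize cost and violations, maximize value."""
    total_cost = sum(c for f, c in zip(config, cost) if f)
    total_value = sum(v for f, v in zip(config, value) if f)
    return (total_cost, -total_value, count_violations(config, constraints))

# Toy example: three features; feature 2 requires feature 0.
constraints = [lambda cfg: cfg[0] or not cfg[2]]
cost, value = [4, 2, 3], [5, 1, 4]
print(evaluate([1, 0, 1], constraints, cost, value))  # (7, -9, 0)
```

    A multi-objective search driven by this vector can then favor configurations with fewer violations without discarding otherwise promising partial solutions outright.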

    An approach for module integration and testing based on clustering and multi-objective optimization algorithms

    Abstract: Integration testing is performed to find communication defects between different parts of a system: each developed module must be integrated and tested together with the existing modules. However, a module to be integrated and tested may require resources from another module still under development, creating the need to build a stub. Stubs are simulations of resources that are essential for testing but not yet available. A stub is not part of the system, so building stubs incurs additional cost. To minimize the need for stubs, and consequently reduce project cost, several strategies for integrating and testing modules have been proposed. None of these strategies, however, considers a characteristic present in most systems: modularization. Given this fact, this work proposes a strategy that takes clusters of modules into account when establishing integration and test orders. The strategy is implemented in an approach called MECBA-Clu, which is based on multi-objective optimization algorithms and uses different coupling measures to evaluate the various factors that influence stub construction cost. MECBA-Clu is evaluated through an experiment with eight real systems, four object-oriented and four aspect-oriented, in which three multi-objective evolutionary algorithms, NSGA-II, SPEA2, and PAES, were applied. The results show that the search space is restricted to certain areas where solutions can be found. Furthermore, according to the four quality indicators used, PAES obtained the best results, followed by NSGA-II and finally SPEA2. Examples of how to use the approach are also presented.
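    The stub cost that drives such integration-and-test orders can be illustrated with a minimal sketch: a stub is needed whenever a module under test depends on a module that has not yet been integrated. The module names and dependency graph below are invented for the example:

```python
# Sketch of the cost that integration-test ordering tries to minimize:
# a stub is needed whenever a module under test depends on a module
# that has not been integrated yet. Names and edges are hypothetical.

def stubs_needed(order, deps):
    """Count stubs required when integrating modules in `order`.
    `deps` maps each module to the set of modules it depends on."""
    integrated, stubs = set(), 0
    for module in order:
        stubs += len(deps.get(module, set()) - integrated)
        integrated.add(module)
    return stubs

# Toy dependency graph: A depends on B, and B depends on C.
deps = {"A": {"B"}, "B": {"C"}, "C": set()}
print(stubs_needed(["A", "B", "C"], deps))  # 2 (stub B for A, stub C for B)
print(stubs_needed(["C", "B", "A"], deps))  # 0
```

    In practice the approach weighs several coupling measures rather than a single stub count, but the ordering objective has this same shape.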

    A survey of search-based refactoring for software maintenance

    This survey reviews published materials relating to the specific area of Search Based Software Engineering concerning software maintenance. 99 papers are selected from online databases to analyze and review the area of Search Based Software Maintenance. The literature addresses different methods to automate the software maintenance process: there are studies that analyze different software metrics, studies that experiment with multi-objective techniques, and papers that propose refactoring tools. This survey also suggests papers from related areas of research and introduces some of the concepts and techniques used in the area. The current state of the research is analyzed in order to assess opportunities for future research. This survey is beneficial as an introduction for any researcher aiming to work in the area of Search Based Software Maintenance, allowing them to gain an understanding of the current landscape of the research and the insights gathered. The papers reviewed, as well as the refactoring tools introduced, are tabulated in order to aid researchers in quickly referencing studies.

    A decomposition-based selection hyper-heuristic for establishing module sequences for software testing

    Advisor: Prof. Dr. Silvia Regina Vergilio. Master's dissertation, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defended: Curitiba, 03/12/2015. Includes references: f. 82-88. Abstract: Multi-objective algorithms have been widely applied to find solutions to many problems, and more specifically to solve software engineering problems in the field called SBSE (Search Based Software Engineering). However, as these applications intensify, it becomes difficult to determine which algorithm, or which operators, are the most suitable for a given problem. In this scenario, hyper-heuristics are used to guide the search process so that the best operator for the problem is chosen automatically. In this context stands out the hyper-heuristic called HITO (Hyper-heuristic for the Integration and Test Order problem), proposed to solve the problem of establishing a sequence of modules for integration testing (ITO, the Integration and Test Order problem). In experiments, HITO obtained good results; however, it is hard to adapt HITO to work with decomposition-based algorithms such as MOEA/D and MOEA/D-DRA, which have shown competitive results in the literature. Motivated by this fact, this work introduces a hyper-heuristic called HITO-DA (Hyper-heuristic for the Integration and Test Order Problem using Decomposition Approach) that adapts HITO to work with decomposition-based algorithms when searching for solutions to the ITO problem. HITO-DA was instantiated with the meta-heuristic MOEA/D-DRA using the selection algorithm FRRMAB (Fitness Rate Rank Multi Armed Bandit) and a new selection algorithm proposed in this work, FRRCF (Fitness Rate Rank with Choice Function), which combines characteristics of FRRMAB and CF (Choice Function). In the conducted empirical study, HITO-DA obtained better results than the meta-heuristic MOEA/D in all cases, and performed better than HITO on larger systems.
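    The operator-selection step that such hyper-heuristics automate can be sketched as a multi-armed bandit in the spirit of FRRMAB: each low-level operator is an "arm", and a UCB-style score balances observed reward against exploration. The operator names, rewards, and scaling constant below are illustrative, not the values used by HITO-DA:

```python
import math

# Bandit-style operator selection in the spirit of FRRMAB: each
# low-level operator is an "arm"; a UCB score trades off average
# reward against exploration. Rewards and the constant c are
# hypothetical, not taken from the dissertation.

def select_operator(uses, rewards, c=2.0):
    """Pick the operator with the best UCB score; try unused ones first."""
    total = sum(uses.values())
    for op, n in uses.items():
        if n == 0:
            return op
    return max(uses, key=lambda op: rewards[op] / uses[op]
               + c * math.sqrt(2 * math.log(total) / uses[op]))

ops = ["crossover_a", "crossover_b", "mutation"]
uses = {op: 0 for op in ops}
rewards = {op: 0.0 for op in ops}
for _ in range(100):
    op = select_operator(uses, rewards)
    uses[op] += 1
    # Hypothetical reward: pretend crossover_b improves solutions most.
    rewards[op] += {"crossover_a": 0.2, "crossover_b": 0.8, "mutation": 0.4}[op]
print(max(uses, key=uses.get))  # crossover_b is selected most often
```

    FRRMAB additionally ranks and decays recent rewards in a sliding window; the sketch keeps only the credit-assignment-plus-UCB core of that idea.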

    A multi-objective evolutionary approach for automatic generation of test cases from state machines

    Advisor: Eliane Martins. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação. Abstract: Automated test case generation can improve productivity as well as reduce effort and cost in the software development process. In this work an approach named MOST (Multi-Objective Search-based Testing approach from EFSM) is proposed to generate test cases from an Extended Finite State Machine (EFSM) using an optimization technique. In EFSM-based testing, an input sequence must be found that sensitizes a path in the model in order to cover a test criterion (e.g. all transitions). Since the sequences can have different lengths, this motivated the development of the M-GEOvsl (Multi-Objective Generalized Extremal Optimization with variable string length) algorithm, which makes it possible to generate solutions of different lengths. Moreover, as a multi-objective algorithm, M-GEOvsl also allows more than one criterion to be used to evaluate the solutions. Using this algorithm in MOST, both the coverage of the target transition and the sequence length are taken into account during test case generation. Information about the model's dependencies is used to guide the search. The algorithm generates the input sequences, including the values of their parameters. In MOST, an executable model of the EFSM receives as input the data generated by M-GEOvsl and dynamically produces the traversed paths. Since the control and data aspects of the model are considered during model execution, the problem of infeasible path generation is avoided: a path can be syntactically possible but semantically infeasible due to data conflicts in the model. To evaluate the proposed approach, experiments were performed with models from the literature and real-world applications. The results were also compared to the test cases obtained in a related work.
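    The variable-string-length idea behind M-GEOvsl can be sketched with a mutation operator that may grow, shrink, or alter a candidate input sequence. The event alphabet and mutation rates below are invented for the example and are not part of the approach:

```python
import random

# Sketch of the variable-string-length idea behind M-GEOvsl: a
# candidate test case is an input sequence whose length can change
# during the search. The event alphabet and rates are hypothetical.

EVENTS = ["connect", "send", "ack", "disconnect"]

def mutate(seq, rng, p_grow=0.3, p_shrink=0.3):
    """Return a mutated copy: grow, shrink, or alter the sequence."""
    seq = list(seq)
    r = rng.random()
    if r < p_grow:
        seq.insert(rng.randrange(len(seq) + 1), rng.choice(EVENTS))
    elif r < p_grow + p_shrink and len(seq) > 1:
        del seq[rng.randrange(len(seq))]
    else:
        seq[rng.randrange(len(seq))] = rng.choice(EVENTS)
    return seq

rng = random.Random(42)
s = ["connect", "send"]
for _ in range(5):
    s = mutate(s, rng)
print(s)  # a sequence whose length may differ from the original
```

    Because candidates can change length, objectives such as sequence size become meaningful alongside coverage of the target transition.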

    Reducing Object-Oriented Testing Cost Through the Analysis of Antipatterns

    ABSTRACT Our modern society is highly computer dependent; thus, the availability and reliability of programs are crucial. Although expensive, software testing remains the primary means to ensure software availability and reliability. Unfortunately, the main features of the object-oriented (OO) paradigm, one of the most popular paradigms, complicate testing activities. This thesis is a contribution to the global effort to reduce OO software testing cost and to increase its reliability. Our first contribution is an empirical study that gathers evidence on the impact of antipatterns on OO unit testing. Antipatterns are recurring but poor design or implementation choices. Past and recent studies showed that antipatterns negatively impact many software quality attributes, such as maintainability and understandability. Other studies also report that antipattern classes are more change- and defect-prone than other classes. However, our study is the first regarding the impact of antipatterns on the cost of OO unit testing. The results show that antipatterns indeed have a negative effect on OO unit testing cost: antipattern (AP) classes are in general more expensive to test than other classes. They also reveal that testing AP classes in priority may be cost-effective and may allow detecting most of the defects, and detecting them early. Our second contribution builds on the first and proposes a new approach to the class integration test order (CITO) problem with the goals of minimizing the cost related to the order and increasing early defect detection. The CITO problem is one of the major challenges when integrating classes in OO programs: the order in which classes are tested during integration determines the stubbing cost but also the order in which defects are detected. Most approaches proposed to solve the CITO problem focus only on minimizing the cost of stubs. In addition to this goal, our approach aims to increase early defect detection capability, one of the most important objectives in testing, by prioritizing classes with a high probability of being defective, such as those participating in antipatterns. An empirical study shows the superiority of our approach over existing approaches in providing balanced orders: orders that minimize stubbing cost while maximizing early defect detection. In our third contribution, we analyze and improve the usability of Madum testing, one of the unit testing strategies proposed to overcome the limitations of traditional testing, such as white-box or black-box testing, when testing OO programs. Unlike other OO unit testing strategies, Madum testing requires only the source code to identify test cases. Madum testing is thus a good candidate for automation, which is one of the best ways to reduce testing cost and increase reliability. Automating Madum testing can help to test AP classes thoroughly while reducing the testing cost. However, Madum testing does not define coverage criteria, which are a prerequisite for using the strategy and for automatically generating test data. Moreover, one of the key factors in the cost of using Madum testing is the number of transformers (methods that modify a given attribute). To reduce testing cost and make Madum testing easier to use, we propose refactoring actions that reduce the number of transformers, and formal coverage criteria to guide the generation of Madum test data. We also formulate the problem of generating test data for Madum testing as a search-based problem.
    Thus, based on the evidence we gathered on the impact of antipatterns on OO testing, we reduce the cost of OO unit and integration testing.
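    The second objective that this approach adds to the classic stub-cost formulation of CITO, rewarding orders that integrate fault-prone (e.g. antipattern-participating) classes early, can be sketched as a simple position-weighted score. The class names and fault-proneness weights below are hypothetical:

```python
# Sketch of an early-defect-detection objective for CITO: classes
# with high fault-proneness (e.g. antipattern participants) should
# appear near the front of the integration order. Class names and
# fault-proneness weights are hypothetical.

def early_detection(order, fault_proneness):
    """Higher when fault-prone classes appear early in the order."""
    n = len(order)
    return sum(fault_proneness[c] * (n - i) for i, c in enumerate(order))

fp = {"A": 0.9, "B": 0.1, "C": 0.3}  # A participates in an antipattern
print(early_detection(["A", "C", "B"], fp)
      > early_detection(["B", "C", "A"], fp))  # True
```

    A multi-objective search over orders can then trade this score off against stub cost to produce the balanced orders the study evaluates.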