
    Uma hiper-heurística de seleção baseada em decomposição para estabelecer sequências de módulos para o teste de software

    Advisor: Prof. Dr. Silvia Regina Vergilio. Master's dissertation - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 03/12/2015. Includes references: f. 82-88.

    Abstract: Multi-objective algorithms have been widely applied to find solutions to several computing problems and, more specifically, to solve Software Engineering problems in the field known as SBSE (Search Based Software Engineering). However, as these applications intensify, it becomes difficult to determine which algorithm or which operators are the most suitable for a given problem. In this scenario, hyper-heuristics are used to guide the search process so that the best operator for the problem is chosen automatically. In this context, the hyper-heuristic HITO (Hyper-heuristic for the Integration and Test Order problem) was proposed to solve the problem of establishing a sequence of modules for integration testing (ITO - Integration and Test Order problem). In experiments, HITO obtained good results; however, it is hard to use HITO together with decomposition-based algorithms, such as MOEA/D and MOEA/D-DRA, which have shown competitive results in the literature. Motivated by this, this work introduces a new hyper-heuristic called HITO-DA (Hyper-heuristic for the Integration and Test Order Problem using Decomposition Approach), which adapts HITO to work with decomposition-based algorithms when searching for solutions to the ITO problem. HITO-DA was instantiated with the meta-heuristic MOEA/D-DRA using the selection algorithm FRRMAB (Fitness Rate Rank Multi Armed Bandit) and a new selection algorithm proposed in this work, FRRCF (Fitness Rate Rank with Choice Function), which combines characteristics of FRRMAB and CF (Choice Function). The empirical study conducted shows that HITO-DA obtained better results than the meta-heuristic MOEA/D in all cases and, compared with HITO, performed better on larger systems.
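    The abstract names bandit-based operator selection (FRRMAB) and its combination with a choice function (FRRCF) without spelling out the mechanics. The sketch below illustrates only the general FRRMAB idea of sliding-window credit assignment plus UCB-style selection; the window size, decay factor, scaling constant, operator names and minimization-oriented reward are illustrative assumptions, not the algorithms defined in the dissertation.

# Minimal sketch of bandit-style adaptive operator selection in the spirit of
# FRRMAB. All parameter values are hypothetical; fitness is assumed positive
# and minimized (smaller child fitness = improvement).
import math
import random
from collections import deque


class FRRMABSelector:
    def __init__(self, operators, window_size=50, decay=0.5, c=5.0):
        self.operators = list(operators)
        self.window = deque(maxlen=window_size)  # sliding window of (operator, reward)
        self.decay = decay                       # rank decay factor
        self.c = c                               # exploration/exploitation balance
        self.usage = {op: 0 for op in self.operators}

    def credit(self, op, parent_fitness, child_fitness):
        # Reward = fitness improvement rate of this application of op.
        fir = max(0.0, (parent_fitness - child_fitness) / max(abs(parent_fitness), 1e-12))
        self.window.append((op, fir))

    def _fitness_rate_ranks(self):
        # Sum rewards per operator over the window, rank them, apply a decay by
        # rank and normalise: this yields the FRR value of each operator.
        rewards = {op: 0.0 for op in self.operators}
        for op, fir in self.window:
            rewards[op] += fir
        ranked = sorted(self.operators, key=lambda op: rewards[op], reverse=True)
        decayed = {op: (self.decay ** rank) * rewards[op] for rank, op in enumerate(ranked)}
        total = sum(decayed.values()) or 1.0
        return {op: decayed[op] / total for op in self.operators}

    def select(self):
        # UCB-style rule: exploit operators with high FRR, explore rarely used ones.
        unused = [op for op in self.operators if self.usage[op] == 0]
        if unused:
            choice = random.choice(unused)
        else:
            frr = self._fitness_rate_ranks()
            n = sum(self.usage.values())
            choice = max(self.operators,
                         key=lambda op: frr[op]
                         + self.c * math.sqrt(2.0 * math.log(n) / self.usage[op]))
        self.usage[choice] += 1
        return choice


# Hypothetical usage with two low-level heuristics:
selector = FRRMABSelector(["crossover+swap_mutation", "crossover+insert_mutation"])
op = selector.select()                                        # pick an operator
selector.credit(op, parent_fitness=10.0, child_fitness=8.5)   # reward after evaluation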

    Uma abordagem para integração e teste de módulos baseada em agrupamento e algoritmos de otimização multiobjetivos

    Abstract: Integration testing is performed to find communication faults between different parts of a system: each developed module must be integrated and tested with the modules that already exist. However, a module to be integrated and tested may require resources from another module that is still under development, which leads to the need to build a stub. Stubs are simulations of resources that are essential for testing but not yet available. A stub is not part of the system, so building stubs implies additional cost. To minimize the need for stubs and consequently reduce project cost, several strategies for integrating and testing modules have been proposed. However, none of these strategies considers a characteristic present in most systems: modularization. Given this fact, this work proposes a strategy that takes groups (clusters) of modules into account when establishing integration and test orders. This strategy is implemented in an approach called MECBA-Clu, which is based on multi-objective optimization algorithms and different coupling measures to evaluate several factors that influence the cost of building stubs. The MECBA-Clu approach is evaluated through an experiment with eight real systems, four Object-Oriented and four Aspect-Oriented, in which three different multi-objective evolutionary algorithms, NSGA-II, SPEA2 and PAES, were applied. The results indicate that the search space is restricted to certain areas in which solutions can be found. Furthermore, according to the four quality indicators used, the PAES algorithm obtained the best result, followed by NSGA-II and finally SPEA2. Examples of using the approach are also presented.
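    The abstract describes test orders being evaluated by coupling measures that drive stub construction cost. As a rough illustration of that kind of objective function (not the MECBA-Clu cost model itself, and ignoring the cluster constraint), the sketch below scores a test order by the attribute and operation coupling of the stubs it would require; the dependency graph and weights are invented for the example.

# Hypothetical module dependency graph:
# deps[m] = {required_module: (attribute_coupling, operation_coupling)}
from itertools import permutations

deps = {
    "A": {"B": (2, 3)},
    "B": {"C": (1, 1)},
    "C": {"A": (4, 2)},   # cycle A -> B -> C -> A forces at least one stub
    "D": {},
}


def stub_cost(order):
    """Sum coupling of dependencies on modules not yet integrated (i.e. stubbed)."""
    position = {m: i for i, m in enumerate(order)}
    attr_cost = oper_cost = 0
    for module, requirements in deps.items():
        for required, (a, o) in requirements.items():
            if position[required] > position[module]:  # required module comes later
                attr_cost += a
                oper_cost += o
    return attr_cost, oper_cost


# Exhaustive search works only for tiny examples (the two objectives are simply
# summed here for the demo); the dissertation applies multi-objective evolutionary
# algorithms such as NSGA-II, SPEA2 and PAES instead.
best = min(permutations(deps), key=lambda order: sum(stub_cost(order)))
print(best, stub_cost(best))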

    Reducing Object-Oriented Testing Cost Through the Analysis of Antipatterns

    Abstract: Our modern society is highly computer dependent; thus, the availability and reliability of programs are crucial. Although expensive, software testing remains the primary means to ensure software availability and reliability. Unfortunately, the main features of the object-oriented (OO) paradigm, one of the most popular paradigms, complicate testing activities. This thesis is a contribution to the considerable effort researchers have invested over the past two decades to reduce the cost of testing OO programs and to increase its effectiveness.

    Our first contribution is an empirical study that gathers evidence on the impact of antipatterns on OO unit testing. Antipatterns are recurring poor solutions to design or implementation problems. Past and recent studies showed that antipatterns negatively impact many software quality attributes, such as maintainability and understandability, and that classes participating in antipatterns (AP classes) are more change- and defect-prone than other classes. However, our study is the first regarding the impact of antipatterns on the cost of OO unit testing. The results show that antipatterns indeed have a negative effect on OO unit testing cost: AP classes are in general more expensive to test, requiring more test cases than other classes. The study also reveals that testing AP classes in priority may be cost-effective, allowing most of the defects to be detected early.

    Our second contribution builds on the first and proposes a new approach to the class integration test order (CITO) problem, one of the major problems when integrating classes in OO programs: the order in which classes are tested during integration determines the stubbing cost but also the order in which defects are detected. Most approaches proposed to solve the CITO problem focus only on minimizing the cost of stubs; our approach additionally aims to increase early defect detection, one of the most important objectives in testing, by prioritizing classes with a high defect probability, such as those participating in antipatterns. An empirical study shows the superiority of our approach over existing approaches in providing balanced orders: orders that minimize stubbing cost while maximizing early defect detection.

    In our third contribution, we analyze and improve the usability of Madum testing, one of the unit testing strategies proposed to overcome the limitations of traditional strategies (such as white-box and black-box testing) when testing OO programs. Unlike other OO unit testing strategies, Madum testing requires only the source code to identify test cases; it is thus a good candidate for automation, one of the best ways to reduce testing cost and increase reliability. Automating Madum testing can help to test AP classes thoroughly while reducing the testing cost. However, Madum testing does not define coverage criteria, which are a prerequisite for using the strategy and for automatically generating test data. Moreover, one of the key factors in the cost of Madum-based testing is the number of transformers (methods that modify a given attribute). To reduce testing cost and make Madum testing easier to use, we propose refactoring actions that reduce the number of transformers, as well as formal coverage criteria that guide the generation of Madum test data; we also formulate the generation of Madum test data as a search-based problem.

    Thus, based on the evidence gathered on the impact of antipatterns on OO testing, we contribute to reducing the cost of OO unit and integration testing.
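    The second contribution above balances stubbing cost against early defect detection when ordering classes for integration. The sketch below is only a toy illustration of that trade-off: the class names, dependency graph, defect probabilities and the early-detection score are hypothetical, not the thesis's actual cost model.

# Bi-objective evaluation of a class integration order: number of stubs required
# versus how early defect-prone classes (e.g. antipattern participants) appear.
deps = {                # class -> classes it depends on (hypothetical example)
    "Order": {"Cart", "Payment"},
    "Cart": {"Item"},
    "Payment": {"Order"},   # cycle Order <-> Payment forces at least one stub
    "Item": set(),
}
defect_probability = {   # assumed higher for antipattern classes
    "Order": 0.7, "Cart": 0.2, "Payment": 0.6, "Item": 0.1,
}


def evaluate(order):
    position = {c: i for i, c in enumerate(order)}
    # A stub is needed for every dependency on a class integrated later.
    stubs = sum(1 for c in order for d in deps[c] if position[d] > position[c])
    # Early detection score: defect-prone classes should appear near the front.
    n = len(order)
    early = sum(defect_probability[c] * (n - position[c]) / n for c in order)
    return stubs, early  # minimize stubs, maximize early detection


print(evaluate(["Item", "Cart", "Order", "Payment"]))   # few stubs, late AP classes
print(evaluate(["Payment", "Order", "Cart", "Item"]))   # more stubs, early AP classes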