
    Reducing Object-Oriented Testing Cost Through the Analysis of Antipatterns

    Our modern society is highly computer dependent; the availability and reliability of programs are therefore crucial. Although expensive, software testing remains the primary means to ensure software availability and reliability. Unfortunately, the main features of the object-oriented (OO) paradigm, one of the most popular paradigms, complicate testing activities. This thesis is a contribution to the global effort to reduce OO software testing cost and to increase its effectiveness.

    Our first contribution is an empirical study that gathers evidence on the impact of antipatterns on OO unit testing. Antipatterns are recurring, poor design or implementation choices. Past and recent studies have shown that antipatterns negatively impact many software quality attributes, such as maintainability and understandability. Other studies also report that classes participating in antipatterns are more change- and defect-prone than other classes. However, our study is the first on the impact of antipatterns on the cost of OO unit testing. The results show that antipatterns indeed have a negative effect on OO unit testing cost: antipattern (AP) classes are in general more expensive to test than other classes. They also reveal that testing AP classes in priority may be cost-effective and may allow detecting most of the defects early.

    Our second contribution is a new approach to the class integration test order (CITO) problem with the goals of minimizing the cost related to the order and increasing early defect detection. The CITO problem is one of the major problems when integrating classes in OO programs: the order in which classes are tested during integration determines the stubbing cost, but also the order in which defects are detected. Most approaches proposed to solve the CITO problem focus on minimizing the cost of stubs. In addition to this goal, our approach aims to increase early defect detection capability, one of the most important objectives in testing because it increases testing's cost-effectiveness. An empirical study shows the superiority of our approach over existing approaches in providing balanced orders: orders that minimize stubbing cost while maximizing early defect detection.

    In our third contribution, we analyze and improve the usability of Madum testing, one of the unit testing strategies proposed to overcome the limitations of traditional strategies, such as black-box and white-box testing, on OO programs. Unlike other OO unit testing strategies, Madum testing requires only the source code to identify test cases. Madum testing is thus a good candidate for automation, which is one of the best ways to reduce testing cost and increase reliability. Automating Madum testing can help to thoroughly test AP classes while reducing the testing cost. However, Madum testing does not define coverage criteria, which are a prerequisite both for using the strategy and for automatically generating test data. Moreover, one of the key factors in the cost of using Madum testing is the number of transformers (methods that modify a given attribute). To reduce testing cost and make Madum testing easier to use, we propose refactoring actions that reduce the number of transformers, as well as formal coverage criteria to guide the generation of Madum test data. We also formulate the generation of Madum test data as a search-based problem. Thus, based on the evidence we gathered on the impact of antipatterns on OO testing, we reduce the cost of OO unit and integration testing.
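    The CITO trade-off described above lends itself to a small illustration. The sketch below scores candidate integration orders on both objectives: the number of stubs an order requires (a class integrated before one of its dependencies needs a stub for that dependency) and how early fault-prone classes, such as AP classes, are integrated. The dependency graph, the fault-proneness values, and the exhaustive enumeration are illustrative assumptions, not the thesis's actual multi-objective search.

```python
# Toy sketch of the CITO trade-off: score candidate integration orders by
# (a) how many stubs they require and (b) how early fault-prone classes are
# integrated. All names and numbers below are hypothetical.

from itertools import permutations

# Hypothetical system: each class maps to the classes it depends on.
# Order and Invoice form a cycle, so at least one stub is unavoidable.
DEPENDS_ON = {
    "Order":    {"Customer", "Invoice"},
    "Invoice":  {"Customer", "Order"},
    "Customer": set(),
    "Report":   {"Order"},
}

# Hypothetical fault-proneness estimates (e.g., from antipattern detection).
FAULT_PRONENESS = {"Order": 0.8, "Invoice": 0.5, "Customer": 0.2, "Report": 0.6}

def stub_count(order):
    """A stub is needed whenever a class is integrated before a dependency."""
    seen, stubs = set(), 0
    for cls in order:
        stubs += len(DEPENDS_ON[cls] - seen)
        seen.add(cls)
    return stubs

def early_detection_score(order):
    """Reward orders that integrate fault-prone classes early: weight each
    class's fault-proneness by how early it appears in the order."""
    n = len(order)
    return sum(FAULT_PRONENESS[cls] * (n - i) for i, cls in enumerate(order))

# Exhaustive search is only feasible for toy examples; the thesis instead
# uses a search-based multi-objective approach.
best = max(
    permutations(DEPENDS_ON),
    key=lambda o: (-stub_count(o), early_detection_score(o)),
)
print(best, "stubs:", stub_count(best),
      "score:", round(early_detection_score(best), 2))
# ('Customer', 'Order', 'Invoice', 'Report') needs only 1 stub and
# integrates the most fault-prone class (Order) early.
```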

    Empirical studies of structural phenomena using a curated corpus of Java code

    Contrary to 50 years' worth of advice in the instructional literature on software design, long cyclic dependencies are found to be widespread in a sizeable, curated corpus of real Java software. Among their causes may be overuse of static members, underuse of dependency injection, and poor tool support for avoiding them.
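    Such cyclic dependencies correspond to the strongly connected components (SCCs) of the class-level dependency graph: a cycle exists exactly when an SCC contains more than one class. The sketch below finds them with Tarjan's algorithm on a small hypothetical graph; it is a generic illustration, not the corpus tooling used in the paper.

```python
# Find cyclic dependencies as multi-node SCCs of a directed dependency
# graph, using Tarjan's linear-time algorithm. The example graph is
# hypothetical, not taken from the paper's corpus.

def tarjan_sccs(graph):
    """Return the SCCs of a graph given as {node: iterable_of_successors}."""
    index = {}            # discovery index of each node
    lowlink = {}          # smallest index reachable from the node's subtree
    stack, on_stack = [], set()
    sccs, counter = [], [0]

    def visit(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:        # v is the root of an SCC
            scc = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.add(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

# Hypothetical class dependency graph with one three-class cycle.
deps = {"A": ["B"], "B": ["C"], "C": ["A", "D"], "D": []}
cycles = [s for s in tarjan_sccs(deps) if len(s) > 1]
print(sorted(sorted(s) for s in cycles))  # [['A', 'B', 'C']]
```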

    A Bayesian Framework for Software Regression Testing

    Software maintenance reportedly accounts for much of the total cost associated with developing software. These costs occur because modifying software is a highly error-prone task. Changing software to correct faults or add new functionality can cause existing functionality to regress, introducing new faults. To avoid such defects, one can re-test software after modifications, a task commonly known as regression testing. Regression testing typically involves the re-execution of test cases developed for previous versions. Re-running all existing test cases, however, is often costly and sometimes even infeasible due to time and resource constraints. Re-running test cases that do not exercise changed or change-impacted parts of the program carries extra cost and gives no benefit. The research community has thus sought ways to optimize regression testing by lowering the cost of test re-execution while preserving its effectiveness. To this end, researchers have proposed selecting a subset of test cases according to a variety of criteria (test case selection) and reordering test cases for execution to maximize a score function (test case prioritization).

    This dissertation presents a novel framework for optimizing regression testing activities, based on a probabilistic view of regression testing. The proposed framework is built around predicting the probability that each test case finds faults in the regression testing phase, and optimizing the test suites accordingly. To predict such probabilities, we model regression testing using a Bayesian Network (BN), a powerful probabilistic tool for modeling uncertainty in systems. We build this model using information measured directly from the software system. Our proposed framework builds upon the existing research in this area in many ways. First, our framework incorporates different information extracted from software into one model, which helps reduce uncertainty by using more of the available information, and enables better modeling of the system. Moreover, our framework provides flexibility by enabling a choice of which sources of information to use. Research in software measurement has shown that dealing with different systems requires different techniques, and hence requires such flexibility. Using the proposed framework, engineers can customize their regression testing techniques to fit the characteristics of their systems, using the measurements most appropriate to their environment.

    We evaluate the performance of our proposed BN-based framework empirically. Although the framework can help both test case selection and prioritization, we propose using it primarily as a prioritization technique. We therefore compare our technique against other prioritization techniques from the literature. Our empirical evaluation examines a variety of objects and fault types. The results show that the proposed framework can outperform other techniques in some cases and performs comparably in the others. In sum, this thesis introduces a novel Bayesian framework for optimizing regression testing and shows that the proposed framework can help testers improve the cost effectiveness of their regression testing tasks.
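    The core probabilistic idea can be sketched in a few lines: estimate, for each test case, the probability that it reveals a fault, then run tests in decreasing order of that probability. The sketch below replaces the dissertation's Bayesian Network with a deliberately crude independence assumption over hypothetical per-class fault probabilities and coverage data, purely to make the prioritization step concrete.

```python
# Prioritize tests by an estimated probability of revealing a fault.
# The real framework infers these probabilities with a Bayesian Network
# over software measurements; here we assume independent per-class fault
# probabilities (hypothetical numbers) purely for illustration.

# Hypothetical P(class is faulty after the change), e.g. from change metrics.
P_FAULTY = {"Parser": 0.30, "Cache": 0.05, "Renderer": 0.15}

# Which classes each test exercises (hypothetical coverage data).
COVERAGE = {
    "test_parse_unicode": ["Parser"],
    "test_cache_eviction": ["Cache"],
    "test_end_to_end": ["Parser", "Cache", "Renderer"],
}

def p_detects_fault(test):
    """P(test reveals a fault) = 1 - P(every covered class is fault-free),
    under the (strong) independence assumption stated above."""
    p_all_clean = 1.0
    for cls in COVERAGE[test]:
        p_all_clean *= 1.0 - P_FAULTY[cls]
    return 1.0 - p_all_clean

prioritized = sorted(COVERAGE, key=p_detects_fault, reverse=True)
for t in prioritized:
    print(f"{t}: {p_detects_fault(t):.3f}")
# test_end_to_end comes first: it covers the most fault-prone classes.
```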

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed at every step. The discussion also covers language and tool support and the challenges arising from the transformation.

    Anthology

    Yearbook for the academic year 2004--2005.

    Einheitliche Gütemaße für Clusterings, Layouts und Orderings von Graphen, und deren Anwendung als Software-Entwurfskriterien

    How good is a given graph clustering, graph layout, or graph ordering -- specifically, how well does it group densely connected vertices and separate sparsely connected vertices? How good is a given software design -- specifically, how well does it minimize the interdependence of the subsystems? This work introduces and validates simple and uniform measures for these two properties. Together with existing optimization algorithms, the introduced measures enable the automatic computation of, e.g., communities in social networks and of design flaws in software systems.

    The first part derives, validates, and unifies quality measures for graph clusterings, graph layouts, and graph orderings, with the following results:
    - Identical quality measures can be applied to clusterings, layouts, and orderings; this enables the computation of consistent clusterings, layouts, and orderings.
    - Diverse existing and new measures can be unified into a few general measures; this facilitates their comparison and validation.
    - Many existing measures are biased towards certain clusterings, layouts, or orderings, even for graphs without particularly dense or sparse subgraphs, and thus do not (only) measure quality in the above sense.
    - In example graphs, the minimization of new, unbiased (or weakly biased) measures reveals nonobvious groups, e.g. communities in social networks, subject areas in hypertexts, or closely interlocked countries in international trade.

    The second part derives, validates, and unifies dependency-based indicators of software design quality. It applies two quality measures for graph clusterings as measures for the coupling of software subsystems -- specifically for the coupling indicated by common changes and for the coupling indicated by references -- and shows:
    - The measures quantify the dependency-caused development costs, under well-defined simplifying assumptions.
    - The minimization of the measures conforms to existing dependency-related design principles (like locality of change, acyclicity of references, and stability of references), design rules, and design patterns.
    - In example software systems, the incremental minimization of the measures reveals nonobvious design flaws, like the distribution of coherent responsibilities over several subsystems, or references from low-level to high-level subsystems.

    In summary, this work shows that simple measures can suffice to capture important aspects of graph clustering quality, graph layout quality, graph ordering quality, and software design quality, and that the optimization of simple measures can suffice to detect nonobvious and often useful structure in various real-world systems.
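    As a concrete (if simplified) instance of such a clustering-quality measure, the sketch below computes Newman-Girvan modularity: the fraction of edges that fall inside clusters, minus the fraction expected under random rewiring. The thesis derives its own, different family of measures; modularity and the toy graph here are familiar stand-ins for illustration.

```python
# Score a graph clustering by Newman-Girvan modularity: clusterings whose
# clusters contain more intra-cluster edges than expected by chance score
# higher. The example graph is hypothetical.

def modularity(edges, clustering):
    """edges: list of undirected (u, v) pairs; clustering: {node: cluster}."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    # Fraction of edges inside clusters ...
    intra = sum(clustering[u] == clustering[v] for u, v in edges) / m
    # ... minus the fraction expected if edges were rewired at random.
    expected = sum(
        (sum(d for n, d in degree.items() if clustering[n] == c) / (2 * m)) ** 2
        for c in set(clustering.values())
    )
    return intra - expected

# Two triangles joined by a single bridge edge.
edges = [("a", "b"), ("b", "c"), ("a", "c"),
         ("d", "e"), ("e", "f"), ("d", "f"),
         ("c", "d")]
good = {n: (0 if n in "abc" else 1) for n in "abcdef"}
bad = {n: (0 if n in "abd" else 1) for n in "abcdef"}
print(modularity(edges, good))  # ~0.357: dense triangles kept together
print(modularity(edges, bad))   # negative: the grouping cuts across them
```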

    Multimedia Development of English Vocabulary Learning in Primary School

    In this paper, we describe a prototype of a web-based intelligent handwriting education system for autonomous learning of Bengali characters. The Bengali language is used by more than 211 million people in India and Bangladesh. Owing to socio-economic constraints, not everyone in this population has the chance to go to school. This research project aimed to develop an intelligent Bengali handwriting education system. As an intelligent tutor, the system automatically checks handwriting errors, such as stroke production errors, stroke sequence errors, and stroke relationship errors, and immediately provides feedback so that students can correct themselves. The proposed system can be accessed from a smartphone or iPhone, allowing students to practice their Bengali handwriting anytime and anywhere. Bengali characters are multi-stroke, with extremely long cursive shapes and variability in both stroke order and stroke direction. Because of this structural complexity, recognition speed is a crucial issue when applying traditional online handwriting recognition algorithms to Bengali language learning. In this work, we adopted a hierarchical recognition approach to improve recognition speed, which makes our system suitable for web-based language learning. We combined a writing-speed-free recognition methodology with the hierarchical recognition algorithm, supporting learners of all ages, especially children and older adults. The experimental results showed that our proposed hierarchical recognition algorithm provides higher accuracy than a traditional multi-stroke recognition algorithm, even with more writing variability.
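    The hierarchical-recognition idea can be illustrated with a two-stage sketch: a cheap coarse feature first prunes the candidate set, and an expensive fine matcher then runs only on the survivors. The stroke-count feature, the toy distance, and the templates below are all assumptions chosen for illustration; the paper's actual features and matcher are not reproduced here.

```python
# Two-stage (hierarchical) recognition sketch: prune candidates with a
# cheap coarse feature (stroke count) before running a costlier fine
# matcher (a toy distance on stroke start points). Everything below is a
# hypothetical illustration, not the paper's algorithm.

# Hypothetical templates: character -> list of strokes, each stroke a list
# of (x, y) points normalized to a unit box.
TEMPLATES = {
    "ka":  [[(0.1, 0.9), (0.1, 0.1)], [(0.1, 0.5), (0.9, 0.5)]],
    "kha": [[(0.2, 0.9), (0.2, 0.1)], [(0.2, 0.6), (0.8, 0.6)]],
    "o":   [[(0.5, 0.9), (0.9, 0.5), (0.5, 0.1), (0.1, 0.5), (0.5, 0.9)]],
}

def fine_distance(a, b):
    """Toy matcher: summed distance between corresponding stroke starts."""
    return sum(
        ((sa[0][0] - sb[0][0]) ** 2 + (sa[0][1] - sb[0][1]) ** 2) ** 0.5
        for sa, sb in zip(a, b)
    )

def recognize(strokes):
    # Coarse stage: keep only templates with the same stroke count. This is
    # what buys the speed-up -- the fine matcher sees far fewer candidates.
    candidates = {c: t for c, t in TEMPLATES.items() if len(t) == len(strokes)}
    if not candidates:
        return None
    # Fine stage: pick the closest remaining template.
    return min(candidates, key=lambda c: fine_distance(candidates[c], strokes))

sample = [[(0.12, 0.88), (0.11, 0.12)], [(0.12, 0.52), (0.88, 0.5)]]
print(recognize(sample))  # "ka": two strokes, closest start points
```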

    A COMPARISON BETWEEN MOTIVATIONS AND PERSONALITY TRAITS IN RELIGIOUS TOURISTS AND CRUISE SHIP TOURISTS

    The purpose of this paper is to analyze the motivations and personality traits that characterize tourists who choose religious travel versus cruises. Participating in the research were 683 Italian tourists (345 males and 338 females, age range 18–63 years); 483 went on a pilgrimage and 200 chose a cruise in the Mediterranean Sea. Both groups of tourists completed the Travel Motivation Scale and the Big Five Questionnaire. Results show that different motivations and personality traits characterize the different types of tourists and, further, that motivations for traveling are predicted by specific personality traits, some similar and others divergent.