
    Regression Testing of Object-Oriented Software Based on Program Slicing

    As software evolves through a series of changes, these changes must be validated through regression testing. Regression testing becomes convenient if we can identify the program parts that are likely to be affected by the changes made to the program during maintenance. We propose a change impact analysis mechanism as an application of slicing. A new slicing method is proposed to decompose a Java program into the affected packages, classes, methods, and statements identified with respect to the modification made in the program. The decomposition is based on the hierarchical structure of Java programs. We propose a suitable intermediate representation for Java programs that captures all possible dependences among the program parts. This intermediate representation is used to perform the change impact analysis with our proposed slicing technique and to identify the program parts that are possibly affected by the change. The affected packages, classes, methods, and statements are identified by traversing the intermediate graph, first in the forward direction and then in the backward direction. Based on the change impact analysis results, we propose a regression test selection approach that selects a subset of the existing test suite. The approach maps the decomposed slice (comprising the affected program parts) onto the coverage information of the existing test suite to select the appropriate test cases for regression testing. The selected test cases are well suited for regression testing of the modified program because they execute the affected program parts and thus have a high probability of revealing the associated faults. The regression test selection approach promises to reduce the size of the regression test suite. However, the selected test suite can still be large, and strict timing constraints can hinder the execution of all its test cases. Hence, it is essential to minimize the test suite. Under constrained time and budget, it is difficult for testers to know how few test cases they can choose while still ensuring acceptable software quality. We therefore introduce novel approaches that formulate test suite minimization as an integer linear programming (ILP) problem, yielding optimal results. Existing research on software metrics has shown cohesion metrics to be good indicators of fault-proneness, but none of the proposed metrics are based on change impact analysis. We propose a change-based cohesion measure to compute the cohesiveness of the affected program parts. These cohesion values form the minimization criteria for minimizing the test suite. We formulate an integer linear programming model based on the cohesion values to optimize the test suite (a sketch of such a formulation follows). Software testers always face the challenge of increasing the likelihood of fault detection. Regression test case prioritization promises to detect faults early in the retesting process. Thus, finding an optimal execution order for the selected regression test cases maximizes the fault detection rate at lower time and cost. We propose a novel approach to identify a prioritized order of test cases in a given selected regression test suite that has a high fault-exposing capability.
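    To make the ILP formulation concrete, the following is a minimal set-cover-style sketch written with the open-source PuLP solver. The coverage map, cohesion values, and the exact way cohesion enters the objective are illustrative assumptions, not the thesis's exact model.

```python
# Minimal ILP sketch for test suite minimization (illustrative only).
# Requires: pip install pulp
import pulp

# Hypothetical data: affected program parts covered per test case, and a
# change-based cohesion value per affected part.
coverage = {"t1": {"p1", "p2"}, "t2": {"p2", "p3"}, "t3": {"p1", "p3"}}
cohesion = {"p1": 0.8, "p2": 0.5, "p3": 0.9}
affected_parts = set().union(*coverage.values())

prob = pulp.LpProblem("test_suite_minimization", pulp.LpMinimize)
x = {t: pulp.LpVariable(f"x_{t}", cat="Binary") for t in coverage}

# Objective: select as few tests as possible, with a lower cost for tests
# that cover highly cohesive (i.e. more fault-prone) affected parts.
prob += pulp.lpSum(
    (1.0 - sum(cohesion[p] for p in parts) / len(parts)) * x[t]
    for t, parts in coverage.items()
)

# Constraint: every affected program part must be executed by at least
# one selected test case.
for p in affected_parts:
    prob += pulp.lpSum(x[t] for t, parts in coverage.items() if p in parts) >= 1

prob.solve()
print("Minimized suite:", [t for t in coverage if x[t].value() == 1])
```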
It is very likely that some test cases execute program parts that are more prone to errors and therefore have a greater chance of detecting more errors early in the testing process. We identify the fault-proneness of the affected program parts by computing their coupling values. We propose a new coupling metric for the affected program parts, named affected change coupling, based on which the test cases are prioritized (see the sketch after this paragraph). Our analysis shows that test cases executing affected program parts with high affected change coupling have a higher potential of revealing faults early than the other test cases in the suite. Testing becomes convenient if we identify the changes that require rigorous retesting instead of placing equal focus on retesting all changes. We therefore propose an approach to save retesting effort and cost by identifying and quantifying the impact of crosscutting changes on other parts of the program. We propose several metrics in this regard that help testers decide early what to test more and what to test less.
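    As a rough illustration of coupling-driven prioritization, the sketch below orders test cases by the total affected change coupling of the parts they execute. The scoring function and data shapes are assumptions for illustration; the thesis's actual metric definition is richer.

```python
# Illustrative sketch: prioritize regression tests by the coupling values
# of the affected program parts they execute (hypothetical data).
def prioritize(coverage: dict[str, set[str]],
               coupling: dict[str, float]) -> list[str]:
    """Return test case IDs, highest total affected-change-coupling first."""
    def score(test: str) -> float:
        return sum(coupling.get(part, 0.0) for part in coverage[test])
    return sorted(coverage, key=score, reverse=True)

coverage = {"t1": {"p1"}, "t2": {"p2", "p3"}, "t3": {"p1", "p3"}}
coupling = {"p1": 0.9, "p2": 0.2, "p3": 0.7}
print(prioritize(coverage, coupling))  # ['t3', 't1', 't2']
```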

    Prioritizing Program Elements: A Pre-testing Effort To Improve Software Quality

    Test effort prioritization is a powerful technique that enables the tester to utilize limited test resources effectively by streamlining the test effort. The distribution of test effort is important to a test organization. We address prioritization-based testing strategies that do the best possible job with limited test resources. Our proposed techniques benefit the tester when applied under looming deadlines and limited resources. Some parts of a system are more critical and sensitive to bugs than others and should therefore be tested thoroughly. The rationale behind this thesis is to estimate the criticality of the various parts of a system and to prioritize them for testing according to their estimated criticality. We propose several prioritization techniques at different phases of the Software Development Life Cycle (SDLC). Different chapters of the thesis set test priority based on different factors of the system. The purpose is to identify and focus on the critical and strategic areas and to detect the important defects as early as possible, before the product release. Focusing on the critical and strategic areas helps to improve the reliability of the system within the available resources. We present code-based and architecture-based techniques to prioritize the testing tasks. In these techniques, we analyze the criticality of a component within a system using a combination of its internal and external factors. We conducted a set of experiments on case studies and observed that the proposed techniques are efficient and address the challenge of prioritization. We propose the novel idea of calculating the influence of a component, where influence refers to the contribution or usage of the component at every execution step. This influence value serves as a metric in test effort prioritization. We first calculate the influence through static analysis of the source code and then refine our work by calculating it through dynamic analysis. We experimentally show that decreasing the reliability of an element with a high influence value drastically increases the failure rate of the system, which is not true for an element with a low influence value. We estimate the criticality of a component within a system by considering both its internal and external factors, such as influence value, average execution time, structural complexity, severity, and business value. We prioritize the components for testing according to their estimated criticality. We compared our approach with a related approach in which components were prioritized on the basis of their structural complexity only. From the experimental results, we observed that our approach helps to reduce the failure rate in the operational environment; the consequences of the observed failures were also lower compared to the related approach. Priority should be established by order of importance or urgency. As the importance of a component may vary at different points of the testing phase, we propose a multi-cycle test effort prioritization approach in which we assign different priorities to the same component in different test cycles. Test effort prioritization at the initial phase of the SDLC has a greater impact than prioritization at a later phase. As the analysis and design stage is critical compared to other stages, detecting and correcting errors at this stage is less costly than at later stages of the SDLC.
Designing metrics at this stage helps the test manager make resource allocation decisions. We propose a technique to estimate the criticality of a use case at the design level. The criticality is computed on the basis of complexity and business value. We evaluate the complexity of a use case analytically through a set of data collected at the design level. We experimentally observed that assigning test effort to the various use cases according to their estimated criticality improves the reliability of the system under test. Test effort prioritization based on risk is a powerful technique for streamlining the test effort, since the tester can exploit the relationship between risk and testing effort. We propose a technique to estimate the risk associated with the various states at the component level and with use case scenarios at the system level; the estimated risks are used to improve resource allocation decisions. An intermediate graph called the Inter-Component State-Dependence Graph (ISDG) is introduced to obtain the complexity of a component state, which is used for risk estimation. We empirically evaluated the estimated risks and assigned test priority to the components and scenarios within a system according to those risks. In an experimental comparative analysis, we observed that a testing team guided by our technique achieved higher test efficiency than with a related approach.
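    The abstract does not give the exact rule for combining the internal and external factors into a criticality score, so the following is only a minimal sketch, assuming a normalized weighted sum; all weights, field names, and values are hypothetical.

```python
# Minimal sketch: component criticality as a normalized weighted sum of
# the factors named in the abstract (weights and data are hypothetical).
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    influence: float        # contribution/usage at each execution step
    avg_exec_time: float    # seconds per invocation
    complexity: float       # structural complexity measure
    severity: float         # consequence of failure, 0..1
    business_value: float   # stakeholder-assigned value, 0..1

WEIGHTS = {"influence": 0.30, "avg_exec_time": 0.15,
           "complexity": 0.20, "severity": 0.20, "business_value": 0.15}

def criticality(c: Component, maxima: dict[str, float]) -> float:
    """Weighted sum of factors, each normalized by its system-wide maximum."""
    return sum(w * getattr(c, f) / maxima[f] for f, w in WEIGHTS.items())

components = [Component("auth", 0.9, 0.02, 35, 0.9, 0.8),
              Component("report", 0.3, 0.50, 60, 0.4, 0.5)]
maxima = {f: max(getattr(c, f) for c in components) for f in WEIGHTS}
ranked = sorted(components, key=lambda c: criticality(c, maxima), reverse=True)
print([c.name for c in ranked])  # most critical component first
```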

    TEMPOS: QoS Management Middleware for Edge Cloud Computing FaaS in the Internet of Things

    Several classes of advanced Internet of Things (IoT) applications, e.g., in the industrial manufacturing domain, call for Quality of Service (QoS) management to guarantee and control performance indicators, even in the presence of many sources of "stochastic noise" in real deployment environments, from scarcely available bandwidth in a time window to concurrent usage of virtualized processing resources. This paper proposes a novel IoT-oriented middleware that i) considers and coordinates different aspects of QoS monitoring, control, and management for different kinds of virtualized resources (from networking to processing) in a holistic way, and ii) specifically targets deployment environments where edge cloud resources are employed to enable the Serverless paradigm in the cloud continuum. The reported experimental results show how the desired QoS differentiation can be achieved by coordinating heterogeneous mechanisms and technologies already available in the market. This demonstrates the feasibility of effective QoS-aware management of virtualized resources in the cloud-to-things continuum under a Serverless provisioning scenario, which, to the best of our knowledge, is completely original in the related literature.
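    The abstract does not describe TEMPOS's API. Purely as an illustration of QoS differentiation for serverless invocations at the edge, the sketch below dispatches function calls into per-QoS-class priority queues so that critical traffic is served first; every name here is a hypothetical stand-in, not the middleware's actual interface.

```python
# Illustrative sketch (not TEMPOS's actual API): per-QoS-class dispatch of
# FaaS invocations so edge workers serve critical traffic first.
import heapq
import itertools

QOS_PRIORITY = {"critical": 0, "interactive": 1, "best_effort": 2}
_counter = itertools.count()          # tie-breaker keeps FIFO within a class
_queue: list[tuple[int, int, str, dict]] = []

def submit(function_name: str, payload: dict, qos_class: str = "best_effort"):
    """Enqueue an invocation; a lower priority number is served first."""
    heapq.heappush(_queue, (QOS_PRIORITY[qos_class], next(_counter),
                            function_name, payload))

def next_invocation():
    """Pop the highest-priority pending invocation, or None if idle."""
    if not _queue:
        return None
    _, _, name, payload = heapq.heappop(_queue)
    return name, payload

submit("archive_log", {"day": "mon"})
submit("read_sensor", {"id": 7}, qos_class="critical")
print(next_invocation())  # ('read_sensor', {'id': 7})
```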

    View on 5G Architecture: Version 1.0

    This white paper presents the results produced after one year of research, mainly from 16 projects working in the above-mentioned domains. Over several months, representatives from these projects have worked together to identify the key findings of their projects and to capture the commonalities as well as the differing approaches and trends. They have also worked to determine the challenges that remain to be overcome in order to meet the 5G requirements. The goal of the 5G Architecture Working Group is to use the results captured in this white paper to help the participating projects reach a common reference framework. The work of this group will continue during the following year in order to capture the latest results produced by the projects and to further elaborate this reference framework. 5G networks will be built around people and things and will natively meet the requirements of three groups of use cases:
    • Extreme mobile broadband (xMBB), which delivers gigabytes of bandwidth on demand
    • Massive machine-type communication (mMTC), which connects billions of sensors and machines
    • Critical machine-type communication (uMTC), which allows immediate feedback with high reliability and enables, for example, remote control of robots and autonomous driving.
    The demand for mobile broadband will continue to increase in the coming years, largely driven by the need to deliver ultra-high-definition video. However, 5G networks will also be the platform enabling growth in many industries, ranging from the IT industry to the automotive, manufacturing, and entertainment industries. 5G will enable new applications such as autonomous driving, remote control of robots, and tactile applications, but these also bring many challenges to the network. Some of these are related to providing latency in the order of a few milliseconds and reliability comparable to fixed lines. But the biggest challenge for 5G networks will be to cater for a diverse set of services and their requirements. To achieve this, the goal for 5G networks will be to improve the flexibility of the architecture. The white paper is organized as follows. In Section 2 we discuss the key business and technical requirements that drive the evolution of 4G networks into 5G. In Section 3 we provide the key points of the overall 5G architecture, whereas in Section 4 we elaborate on the functional architecture. Issues related to the physical deployment of the 5G network in the access, metro, and core networks are discussed in Section 5, while in Section 6 we present software network enablers that are expected to play a significant role in future networks. Section 7 presents potential impacts on standardization, and Section 8 concludes the white paper.

    Mission-Critical Mobile Communications over LTE Networks

    Mission Critical Communications (MCC) have typically been provided by proprietary radio technologies, but in recent years interest in using commercial off-the-shelf mobile technologies has increased. In this thesis, we explore the use of LTE to support MCC. We analyse the feasibility of LTE networks using an experimental platform, PerformNetworks, extending the testbed to increase the number of possible scenarios and the available tooling. After exploring the Key Performance Indicators (KPIs) of LTE, we propose different architectures to support the performance and functional requirements demanded by MCC. We identified latency as one of the KPIs to improve, so we made several proposals to reduce it. These proposals follow the Mobile Edge Computing (MEC) paradigm, locating services in what we call the fog, close to the base station, to avoid the backhaul and transport networks. Our first proposal is the Fog Gateway, a MEC solution fully compatible with standard LTE networks that analyses the traffic coming from the base station to decide whether it should be routed to the fog or processed normally by the SGW (a sketch of this decision follows). Our second proposal is its natural evolution, the GTP Gateway, which requires modifications to the base station: the base station then transports over GTP only the traffic that is not destined for the fog. Both proposals have been validated in emulated scenarios and, in the case of the Fog Gateway, also with the implementation of different prototypes, proving its compatibility with standard LTE networks and its performance. The gateways can drastically reduce end-to-end latency, as they avoid the time consumed by the backhaul and transport networks, with very little overhead.
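    As a rough illustration of the Fog Gateway's forwarding decision, the sketch below inspects the inner IP destination of GTP-U-encapsulated uplink traffic and diverts packets addressed to local fog services, forwarding everything else toward the SGW. The fixed header offsets and helper names are simplifying assumptions, not the thesis's implementation.

```python
# Illustrative sketch of a Fog-Gateway-style forwarding decision
# (simplified; not the thesis's implementation). Assumes uplink GTP-U
# payloads whose 8-byte mandatory header is followed by the inner IPv4
# packet, with no GTP extension headers.
import ipaddress

FOG_SERVICES = {ipaddress.ip_address("10.0.0.42")}  # hypothetical fog host
GTPU_HEADER_LEN = 8  # mandatory GTP-U header length

def inner_dst(gtpu_payload: bytes) -> ipaddress.IPv4Address:
    """Extract the destination address of the user IP packet inside GTP-U."""
    inner_ip = gtpu_payload[GTPU_HEADER_LEN:]
    return ipaddress.ip_address(inner_ip[16:20])  # IPv4 destination field

def route(gtpu_payload: bytes) -> str:
    """Divert traffic for fog services locally; otherwise use the SGW."""
    if inner_dst(gtpu_payload) in FOG_SERVICES:
        return "fog"  # deliver to the edge service, skipping the backhaul
    return "sgw"      # normal LTE user-plane path
```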

    Evidence-driven testing and debugging of software systems

    Program debugging is the process of testing, exposing, reproducing, diagnosing, and fixing software bugs. Many techniques have been proposed to aid developers during software testing and debugging. However, researchers have found that developers hardly use or adopt the proposed techniques in practice. Evidently, this is because there is a gap between the proposed methods and the state of software practice: most methods fail to address the actual needs of software developers. In this dissertation, we pose the following scientific question: how can we bridge the gap between software practice and state-of-the-art automated testing and debugging techniques? To address this challenge, we put forward the following thesis: software testing and debugging should be driven by empirical evidence collected from software practice. In particular, we posit that feedback from software practice should shape and guide (the automation of) testing and debugging activities. In this thesis, we focus on gathering evidence from software practice by conducting several empirical studies on software testing and debugging activities in the real world. We then build tools and methods that are well-grounded in and driven by the empirical evidence obtained from these experiments. Firstly, we conduct an empirical study on the state of debugging in practice using a survey and a human study, in which we ask developers about their debugging needs and observe the tools and strategies they employ while testing, diagnosing, and repairing real bugs. Secondly, we evaluate the effectiveness of state-of-the-art automated fault localization (AFL) methods on real bugs and programs. Thirdly, we conduct an experiment to evaluate the causes of invalid inputs in software practice. Lastly, we study how to learn input distributions from real-world sample inputs using probabilistic grammars. To bridge the gap between software practice and the state of the art in software testing and debugging, we offer the following empirical results and techniques: (1) We collect evidence on the state of practice in program debugging and indeed find that there is a chasm between (available) debugging tools and developer needs. We elicit the actual needs and concerns of developers when testing and diagnosing real faults and provide a benchmark (called DBGBench) to aid the automated evaluation of debugging and repair tools. (2) We provide empirical evidence on the effectiveness of several state-of-the-art AFL techniques (such as statistical debugging formulas and dynamic slicing). Building on this evidence, we provide a hybrid approach that outperforms the state-of-the-art AFL techniques. (3) We evaluate the prevalence and causes of invalid inputs in software practice, and we build on the lessons learned from this experiment to develop a general-purpose algorithm (called ddmax) that automatically diagnoses and repairs real-world invalid inputs. (4) We provide a method to learn the distribution of input elements in software practice using probabilistic grammars, and we further employ the learned distribution to drive the generation of test inputs that are similar (or dissimilar) to sample inputs found in the wild. In summary, we propose an evidence-driven approach to software testing and debugging, which collects empirical evidence from software practice to guide and direct testing and debugging activities.
In our evaluation, we found that our approach is effective in improving several debugging activities in practice. In particular, using our evidence-driven approach, we elicit the actual debugging needs of developers, improve the effectiveness of several automated fault localization techniques, effectively debug and repair invalid inputs, and generate test inputs that are (dis)similar to real-world inputs. Our proposed methods are built on empirical evidence, and they improve over the state-of-the-art techniques in testing and debugging.
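    To illustrate the last contribution, here is a minimal sketch of generating inputs from a probabilistic grammar whose expansion weights would be learned by counting how often each alternative occurs in parsed sample inputs. The toy grammar and weights are hypothetical; the dissertation's actual learning pipeline is more involved.

```python
# Minimal sketch: sampling from a probabilistic grammar (toy example).
# In an evidence-driven setting, the weights below would be learned from
# real-world sample inputs rather than fixed by hand.
import random

PROB_GRAMMAR = {
    "<number>": [(["<digit>", "<number>"], 0.3), (["<digit>"], 0.7)],
    "<digit>": [([d], 0.1) for d in "0123456789"],
}

def produce(symbol: str) -> str:
    """Recursively expand a nonterminal according to its learned weights."""
    if symbol not in PROB_GRAMMAR:
        return symbol  # terminal symbol
    alternatives = PROB_GRAMMAR[symbol]
    expansion = random.choices([alt for alt, _ in alternatives],
                               weights=[w for _, w in alternatives])[0]
    return "".join(produce(s) for s in expansion)

random.seed(1)
print([produce("<number>") for _ in range(5)])  # inputs mimicking samples
```

    To generate inputs that are dissimilar to the samples instead, one could invert or flatten the learned weights before sampling.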