
    Towards Ex Vivo Testing of MapReduce Applications

    2017 IEEE International Conference on Software Quality, Reliability and Security (QRS), 25-29 July 2017, Prague (Czech Republic)

    Big Data programs are those that process data at a scale exceeding the capabilities of traditional technologies. Among newly proposed processing models, MapReduce stands out because it allows the analysis of schema-less data in large distributed environments with frequent infrastructure failures. Functional faults in MapReduce programs are hard to detect in a testing/preproduction environment because of these distributed characteristics. We propose an automatic test framework implementing a novel testing approach called Ex Vivo: the framework employs data from production but executes the tests in a laboratory to avoid side effects on the application. Faults are detected automatically, without human intervention, by checking whether the same data would generate different outputs under different infrastructure configurations. The framework, MrExist, is validated with a real-world program and can identify a fault in a few seconds; the program can then be stopped, not only avoiding incorrect output but also saving the money, time, and energy of production resources.
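    The oracle described above is a differential check, which a short sketch makes concrete: run the same job on the same production data under two infrastructure configurations and flag a fault on any disagreement. The toy run_mapreduce simulator, its configuration knobs (num_splits, use_combiner), and the averaging reducer below are invented for illustration; MrExist itself works on real Hadoop jobs.

```python
from collections import defaultdict

def run_mapreduce(records, map_fn, reduce_fn, num_splits=1, use_combiner=False):
    """Toy MapReduce run under one infrastructure configuration: the input is
    cut into num_splits splits, and a combiner (the reducer applied locally to
    each split) may or may not run."""
    splits = [records[i::num_splits] for i in range(num_splits)]
    grouped = defaultdict(list)
    for split in splits:
        local = defaultdict(list)
        for record in split:
            for key, value in map_fn(record):
                local[key].append(value)
        for key, values in local.items():
            if use_combiner:
                grouped[key].append(reduce_fn(key, values))  # pre-aggregate per split
            else:
                grouped[key].extend(values)
    return {key: reduce_fn(key, values) for key, values in grouped.items()}

def ex_vivo_oracle(records, map_fn, reduce_fn):
    """Differential oracle: same data, several configurations; any
    disagreement between the outputs signals a functional fault."""
    configs = [dict(num_splits=1, use_combiner=False),
               dict(num_splits=2, use_combiner=True)]
    outputs = [run_mapreduce(records, map_fn, reduce_fn, **c) for c in configs]
    return all(out == outputs[0] for out in outputs[1:])

# An averaging reducer is not associative, so the combiner changes its result:
records = [("sensor", 2.0), ("sensor", 4.0), ("sensor", 9.0)]
identity_map = lambda record: [record]
faulty_avg = lambda key, values: sum(values) / len(values)
print(ex_vivo_oracle(records, identity_map, faulty_avg))  # False -> fault found
```

    A correct reducer such as a sum would pass this check, because addition is associative and commutative and therefore insensitive to how the infrastructure splits and pre-aggregates the data.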

    An Extensible Framework for Online Testing of Choreographed Services

    Service choreographies present numerous engineering challenges that traditional design-time approaches cannot properly address, particularly with respect to testing. The proposed online testing solution offers an extensible framework for assessing service compositions while they operate, contributing to a more trustworthy and reliable service ecosystem.

    Live Testing of Cloud Services

    Service providers use the cloud for the dynamic infrastructure it offers at low cost. However, sharing the infrastructure with other service providers, as well as relying on remote services that may be inaccessible from the development environment, creates major limitations for development-time testing. Modern service providers therefore have an increasing need to test their services in the production environment. Such testing increases the reliability of test results and detects problems, such as the noisy-neighbor problem, that cannot be detected in the development environment. Furthermore, testing in production enables other software engineering activities, such as fault prediction and fault localization, and makes them more efficient.

    Test interferences are a major problem for testing in production, as they can have damaging effects ranging from unreliable test results and degraded performance to a malfunctioning or inaccessible system. The countermeasures taken to alleviate the risk of test interferences are called test isolation. Existing approaches to test isolation have limited applicability in the cloud context because the assumptions under which they operate are seldom satisfied there. Moreover, when tests run in production, failures can happen, and whether or not they are caused by the testing activity, the damage they do cannot be ignored. To deal with such issues and quickly bring the system back to a healthy state after a failure, human intervention in the orchestration and execution of testing activities in production should be reduced. Hence the need for a solution that automates the orchestration of tests in production while taking into account the particularities of a cloud system, such as the existence of multiple fault tolerance mechanisms.

    In this thesis, we define live testing as testing a system in its production environment, while it is serving, without causing any intolerable disruption to its usage. We propose an architecture that addresses the two major challenges of live testing, namely reducing human intervention and providing test isolation. The architecture consists of two building blocks: the Test Planner and the Test Execution Framework. To keep the solution independent of the technologies used in a cloud system, we use the UML Testing Profile (UTP) to model the artifacts involved in this architecture.

    To reduce human intervention in testing activities, we start by automating test execution and orchestration in production. To this end, we propose an execution semantics for the UTP concepts relevant to test execution; this semantics describes the behavior the Test Execution Framework exhibits while executing tests. We also propose a test case selection method and a test plan generation method to automate the activities performed by the Test Planner.

    To alleviate the risk of test interferences, we further propose a set of test methods that can be used for test isolation. Unlike existing test isolation techniques, our test methods make no assumptions about the parts of the system for which isolation can be provided or about the feature to be tested. These test methods are used in the design of test plans: the applicability of each method varies according to several factors, including the risk of test interference that parts of the system present, the availability of resources, and the impact of the method on the provisioning of the service. Selecting the right test method for each situation therefore requires information about the risk of test interference and the cost of test isolation. We propose the configured instance evaluation method to automate the process of obtaining this information: it evaluates the software involved in the realization of the system in terms of the risk of test interference it presents and the cost of providing test isolation for it.

    We also discuss the feasibility of the proposed methods and evaluate the provided solutions. We implemented a prototype for test plan generation and showcased it in a case study. We also implemented part of the configured instance evaluation method and show that it can help confirm the presence of a risk of test interference. We showcase one of our test methods in a case study with an application deployed in a Kubernetes-managed cluster, and we prove the soundness of our execution semantics. Furthermore, we evaluate the algorithms involved in test plan generation in terms of the resulting test plan's execution time: for two of the activities in our solution, our algorithms provide optimal solutions, and for one activity we identify the situations in which our algorithm does not yield the optimal solution. Finally, we prove that our test case selection method reduces the test suite without compromising its configuration-fault detection power.
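    The per-component choice of a test method, driven by interference risk, isolation cost, and resource availability, can be sketched as a simple decision rule. Everything below is hypothetical: the thesis derives this information from UTP models and the configured instance evaluation method, and its actual test methods are not named in the abstract.

```python
from dataclasses import dataclass

@dataclass
class ComponentProfile:
    """Illustrative inputs to test-method selection: interference risk and
    isolation cost (as produced by a configured-instance evaluation) plus
    current resource availability. Fields and thresholds are invented."""
    name: str
    interference_risk: float   # 0.0 (harmless) .. 1.0 (certain interference)
    isolation_cost: float      # normalized cost of isolating the component
    spare_capacity: bool       # can we afford an isolated copy right now?

def choose_test_method(profile: ComponentProfile) -> str:
    """Pick a test method for one component (hypothetical decision rule)."""
    if profile.interference_risk < 0.1:
        return "test-in-place"            # risk is low enough to tolerate
    if profile.spare_capacity and profile.isolation_cost < 0.5:
        return "test-on-isolated-copy"    # pay the isolation cost instead
    return "defer"                        # no safe, affordable option now

db = ComponentProfile("billing-db", interference_risk=0.8,
                      isolation_cost=0.3, spare_capacity=True)
print(choose_test_method(db))  # -> test-on-isolated-copy
```

    A test plan generator would apply such a rule component by component and combine the chosen methods into an executable plan.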

    Enhancing coverage adequacy of service compositions after runtime adaptation

    Runtime monitoring (or monitoring for short) is a key quality assurance technique for self-adaptive service compositions: it passively observes the runtime behavior of a service composition. Coverage criteria are extensively used for assessing the adequacy (or thoroughness) of software testing, as they specify requirements the testing must satisfy. Their importance in software testing has motivated researchers to adapt them to the monitoring of service compositions. However, the passive nature of monitoring and the adaptive nature of service compositions can negatively influence the adequacy of monitoring, thereby limiting the confidence in the quality of the service composition.

    To enhance the coverage adequacy of self-adaptive service compositions at runtime, this thesis investigates how to combine runtime monitoring and online testing, where online testing means testing a service composition in parallel to its actual usage and operation. First, we introduce an approach for determining valid execution traces of service compositions at runtime; it considers execution traces from both monitoring and (online) testing and accounts for modifications in both the workflow and the constituent services of a service composition. Second, we define coverage criteria for service compositions; the criteria take the execution plans of a service composition into account and assess coverage both for an abstract service and for the overall service composition. Third, we introduce online-test-case prioritization techniques to reach a given coverage level of a service composition faster; the techniques employ the coverage achieved by both monitoring and online testing, the execution time of test cases, and the usage model of the service composition. Fourth, we introduce PROSA, a framework for monitoring and online testing of services and service compositions, which provides technical support for the aforementioned contributions. We evaluate these contributions using service compositions frequently employed in service-oriented computing research.
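    The third contribution, online-test-case prioritization, lends itself to a short sketch. The greedy rule below is one plausible reading of the abstract, not PROSA's actual algorithm: order test cases by expected coverage gain per unit of execution time, weighting down elements that production usage, and hence monitoring, is likely to cover anyway. The data shapes are assumptions for illustration.

```python
def prioritize(test_cases, already_covered, usage_prob):
    """Greedy online-test-case prioritization (illustrative).

    test_cases:      list of (name, covered_elements: set, exec_time: float)
    already_covered: elements already covered by monitoring or earlier tests
    usage_prob:      element -> probability that production usage (and hence
                     monitoring) will exercise it anyway
    """
    covered = set(already_covered)
    pool, ordered = list(test_cases), []
    while pool:
        def gain_rate(test_case):
            _, elements, exec_time = test_case
            gain = sum(1.0 - usage_prob.get(e, 0.0) for e in elements - covered)
            return gain / exec_time
        best = max(pool, key=gain_rate)
        pool.remove(best)
        covered |= best[1]
        ordered.append(best[0])
    return ordered

# Execution plans rarely reached in production get tested first:
tests = [("t1", {"planA", "planB"}, 2.0),
         ("t2", {"planC"}, 1.0),
         ("t3", {"planB", "planC"}, 1.5)]
print(prioritize(tests, set(), {"planA": 0.9, "planB": 0.1, "planC": 0.2}))
# -> ['t3', 't1', 't2']
```

    Monitoring would keep already_covered and usage_prob current at runtime, reflecting the combination of monitoring and online testing the thesis investigates.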