
    Controllability problems in MSC-based testing

    This is a pre-copyedited, author-produced PDF of an article accepted for publication in The Computer Journal following peer review. The definitive publisher-authenticated version [Dan, H and Hierons, RM (2012), "Controllability Problems in MSC-Based Testing", The Computer Journal, 55(11), 1270-1287] is available online at: http://comjnl.oxfordjournals.org/content/55/11/1270. Copyright © The Authors 2011.
    In testing systems with distributed interfaces/ports, we may place a separate tester at each port. It is known that this approach can introduce controllability problems, which have received much attention in testing from finite state machines. Message sequence charts (MSCs) are an alternative, commonly used language for modelling distributed systems. However, controllability problems in testing from MSCs have not been thoroughly investigated. In this paper, controllability problems in MSC test cases are analysed under three notions of observability: local, tester and global. We identify two types of controllability problem in MSC-based testing, and it transpires that each type is related to a type of MSC pathology. Controllability problems of timing are caused by races, but not every race causes a controllability problem; controllability problems of choice are caused by non-local choices, but not every non-local choice causes a controllability problem. We show that some controllability problems of timing are avoidable and that some controllability problems of choice can be overcome when testers have greater observational power. Algorithms are provided to tackle both types of controllability problem. Finally, we show how one can overcome controllability problems using a coordination service with status messages, based on the algorithms developed in this paper.
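
    A minimal sketch (not the paper's algorithm) of the timing issue described above, under a toy encoding of a test run as a global sequence of (port, direction, message) events: a tester must send an input but the immediately preceding event happened at another port, so under local observability it has no cue that it is its turn.

        # Event = (port, direction, message); "in" marks tester-supplied inputs,
        # "out" marks outputs the system produces at that port.

        def timing_problems(events):
            """Return indices where a tester cannot know it is time to send its input."""
            problems = []
            for i in range(1, len(events)):
                port, direction, _ = events[i]
                prev_port, _, _ = events[i - 1]
                # If the tester at `port` must send an input but the previous event
                # occurred at a different port, it has no local observation to act on.
                if direction == "in" and prev_port != port:
                    problems.append(i)
            return problems

        # Hypothetical two-port exchange: port 2 must send without observing anything.
        trace = [(1, "in", "a"), (1, "out", "x"), (2, "in", "b")]
        print(timing_problems(trace))  # -> [2]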

    DiPerF: an automated DIstributed PERformance testing Framework

    We present DiPerF, a distributed performance testing framework aimed at simplifying and automating service performance evaluation. DiPerF coordinates a pool of machines that test a target service, collects and aggregates performance metrics, and generates performance statistics. The aggregated data provide information on service throughput, on service "fairness" when serving multiple clients concurrently, and on the impact of network latency on service performance. Furthermore, using these data, it is possible to build predictive models that estimate service performance given the service load. We have tested DiPerF on 100+ machines on two testbeds, Grid3 and PlanetLab, and explored the performance of job submission services (pre WS GRAM and WS GRAM) included with Globus Toolkit 3.2.
    Comment: 8 pages, 8 figures, will appear in IEEE/ACM Grid2004, November 200
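
    A minimal sketch, not DiPerF itself, of the kind of aggregation the abstract mentions: per-client response-time samples are combined into an overall throughput figure and a crude fairness ratio. The sample format and the fairness measure are assumptions for illustration only.

        from collections import defaultdict

        def summarise(samples):
            """samples: list of (client_id, start_time, duration) in seconds."""
            per_client = defaultdict(list)
            for client, start, duration in samples:
                per_client[client].append(duration)
            total = sum(len(v) for v in per_client.values())
            span = max(s + d for _, s, d in samples) - min(s for _, s, _ in samples)
            throughput = total / span if span > 0 else float("inf")
            # Crude "fairness": ratio of slowest to fastest mean response time.
            means = [sum(v) / len(v) for v in per_client.values()]
            fairness = max(means) / min(means)
            return {"throughput_rps": throughput, "fairness_ratio": fairness}

        samples = [("c1", 0.0, 0.2), ("c2", 0.1, 0.4), ("c1", 0.5, 0.3)]
        print(summarise(samples))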

    Conformance relations for distributed testing based on CSP

    Copyright © 2011 Springer Berlin Heidelberg.
    CSP is a well-established process algebra that provides comprehensive theoretical and practical support for refinement-based design and verification of systems. Recently, a testing theory for CSP has also been presented. In this paper, we explore the problem of testing from a CSP specification when observations are made by a set of distributed testers. We build on previous work on input-output transition systems, but the use of CSP leads to significant differences, since some of its conformance (refinement) relations consider failures as well as traces. In addition, we allow events to be observed by more than one tester. We show how the CSP notions of refinement can be adapted to distributed testing. We consider two contexts: when the testers are entirely independent and when they can cooperate. Finally, we give some preliminary results on test-case generation and the use of coordination messages. © 2011 IFIP International Federation for Information Processing
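
    A small illustrative sketch (not the paper's formal definitions) of why distributed observation weakens conformance checking: with independent testers, each one only sees the projection of a global trace onto its own observable events, so an observed run can look acceptable to every local tester even though the global ordering is not a specification trace. The specification traces and tester alphabets below are hypothetical.

        def project(trace, events):
            return tuple(e for e in trace if e in events)

        spec_traces = {("a1", "b1"), ("a1", "a2")}          # hypothetical specification
        alphabets = {"T1": {"a1", "a2"}, "T2": {"b1"}}      # events each tester observes

        def locally_acceptable(observed):
            # Every tester's projection must match the projection of some spec trace.
            return all(
                any(project(observed, evs) == project(s, evs) for s in spec_traces)
                for evs in alphabets.values()
            )

        print(locally_acceptable(("b1", "a1")))  # True locally, though globally reordered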

    What Causes My Test Alarm? Automatic Cause Analysis for Test Alarms in System and Integration Testing

    Driven by new software development processes and testing in clouds, system and integration testing nowadays tends to produce an enormous number of alarms. Such test alarms place an almost unbearable burden on software testing engineers, who have to manually analyse the causes of these alarms. The causes are critical because they decide which stakeholders are responsible for fixing the bugs detected during testing. In this paper, we present a novel approach that aims to relieve the burden by automating the procedure. Our approach, called Cause Analysis Model, exploits information retrieval techniques to efficiently infer test alarm causes from test logs. We have developed a prototype and evaluated our tool on two industrial datasets with more than 14,000 test alarms. Experiments on the two datasets show that our tool achieves an accuracy of 58.3% and 65.8%, respectively, which outperforms the baseline algorithms by up to 13.3%. Our algorithm is also extremely efficient, spending about 0.1s per cause analysis. Based on these results, our industrial partner, a leading information and communication technology company, has deployed the tool; it achieves an average accuracy of 72% after two months of running, nearly three times more accurate than a previous strategy based on regular expressions.
    Comment: 12 page
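
    A minimal sketch in the spirit of the information-retrieval idea described above, not the Cause Analysis Model itself: a new alarm's log is matched against historical, already-labelled logs by token overlap, and the closest match's cause label is predicted. The similarity measure and sample logs are illustrative assumptions.

        def tokens(log):
            return set(log.lower().split())

        def predict_cause(new_log, history):
            """history: list of (log_text, cause_label) pairs, assumed labelled."""
            def similarity(a, b):
                a, b = tokens(a), tokens(b)
                return len(a & b) / len(a | b) if a | b else 0.0
            _, best_cause = max(history, key=lambda h: similarity(new_log, h[0]))
            return best_cause

        history = [
            ("connection refused by host port 8080", "environment issue"),
            ("assertion failed expected 3 got 5", "product defect"),
        ]
        print(predict_cause("timeout connection refused port 9090", history))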

    Using schedulers to test probabilistic distributed systems

    This is the author's accepted manuscript. The final publication is available at Springer via http://dx.doi.org/10.1007/s00165-012-0244-5. Copyright © 2012, British Computer Society.
    Formal methods are one of the most important approaches to increasing confidence in the correctness of software systems. A formal specification can be used as an oracle in testing, since one can determine whether an observed behaviour is allowed by the specification. This is an important feature of formal testing: behaviours of the system observed in testing are compared with the specification and, ideally, this comparison is automated. In this paper we study a formal testing framework for systems that interact with their environment at physically distributed interfaces, called ports, and where choices between different possibilities are probabilistically quantified. Building on previous work, we introduce two families of schedulers to resolve nondeterministic choices among different actions of the system. The first type of schedulers, which we call global schedulers, resolves nondeterministic choices by representing the environment as a single global scheduler. The second type, which we call localised schedulers, models the environment as a set of schedulers, with one scheduler for each port. We formally define the application of schedulers to systems and provide and study different implementation relations in this setting.
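
    A minimal sketch of the distinction the abstract draws, under an assumed toy representation (not the paper's formal model): a global scheduler resolves a nondeterministic choice over all enabled actions, while localised schedulers only choose among the actions at their own port.

        import random

        enabled = [("p1", "a"), ("p1", "b"), ("p2", "c")]   # (port, action) pairs

        def global_scheduler(actions):
            # A single scheduler sees every enabled action in the system.
            return random.choice(actions)

        def localised_schedulers(actions, ports=("p1", "p2")):
            # One scheduler per port; each may pick only from its own port's actions.
            choices = {}
            for p in ports:
                local = [a for a in actions if a[0] == p]
                if local:
                    choices[p] = random.choice(local)
            return choices

        print(global_scheduler(enabled))
        print(localised_schedulers(enabled))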

    Scenarios-based testing of systems with distributed ports

    Copyright © 2011 John Wiley & Sons.
    Distributed systems are usually composed of several distributed components that communicate with their environment through specific ports. When testing such a system we separately observe sequences of inputs and outputs at each port rather than a global sequence, and potentially cannot reconstruct the global sequence that occurred. Typically, the users of such a system cannot synchronise their actions during use or testing. However, the use of the system might correspond to a sequence of scenarios, where each scenario involves a sequence of interactions with the system that, for example, achieves a particular objective. When this is the case there is the potential for a significant delay between two scenarios, and this effectively allows the users of the system to synchronise between scenarios. If we represent the specification of the global system using a state-based notation, we say that a scenario is any sequence of events that happens between two such synchronisation points. We can encode scenarios in two different ways. The first approach consists of marking some of the states of the specification to denote these synchronisation points. It transpires that there are two ways to interpret such models, and these lead to two implementation relations. The second approach consists of adding a set of traces to the specification to represent the traces that correspond to scenarios. We show that these two approaches have similar expressive power by providing an encoding from marked states to sets of traces. In order to assess the appropriateness of our new framework, we show that it represents a conservative extension of previous implementation relations defined in the context of the distributed test architecture: if we consider that all the states are marked then we simply obtain ioco (the classical relation for single-port systems), while if no state is marked then we obtain dioco (our previous relation for multi-port systems). Finally, we concentrate on the study of controllable test cases, that is, test cases such that each local tester knows exactly when to apply inputs. We give two notions of controllable test cases, define an implementation relation for each of these notions, and relate them. We also show how we can decide whether a test case satisfies these conditions.
    Research partially supported by the Spanish MEC project TESIS (TIN2009-14312-C02-01), the UK EPSRC project Testing of Probabilistic and Stochastic Systems (EP/G032572/1), and the UCM-BSCH programme to fund research groups (GR58/08 - group number 910606).
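
    A minimal sketch (toy encoding, not the paper's formal framework) of how scenario boundaries help: boundaries act as synchronisation points, so a local tester only needs a local cue for inputs that are not the first event of a scenario. The "SYNC" marker and event encoding are assumptions for illustration.

        # Event = (port, direction); the string "SYNC" marks a boundary between scenarios.

        def controllable(sequence):
            at_boundary = True
            prev_port = None
            for event in sequence:
                if event == "SYNC":
                    at_boundary = True
                    continue
                port, direction = event
                if direction == "in" and not at_boundary and prev_port != port:
                    return False   # the tester at `port` has no way to know it is its turn
                prev_port, at_boundary = port, False
            return True

        seq = [(1, "in"), (1, "out"), "SYNC", (2, "in"), (2, "out")]
        print(controllable(seq))                      # True: port 2 starts a new scenario
        print(controllable([(1, "in"), (2, "in")]))   # False: no cue for port 2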

    Overcoming controllability problems in distributed testing from an input output transition system

    This is the pre-print version of the article. The official published version can be accessed from the link below. Copyright © 2012 Springer Verlag.
    This paper concerns the testing of a system with physically distributed interfaces, called ports, at which it interacts with its environment. We place a tester at each port and the tester at port p observes events at p only. This can lead to controllability problems, where the observations made by the tester at a port p are not sufficient for it to be able to know when to send an input. It is known that there are test objectives, such as executing a particular transition, that cannot be achieved if we restrict attention to test cases that have no controllability problems. This has led to interest in schemes where the testers at the individual ports send coordination messages to one another through an external communications network in order to overcome controllability problems. However, such approaches have largely been studied in the context of testing from a deterministic finite state machine. This paper investigates the use of coordination messages to overcome controllability problems when testing from an input output transition system and gives an algorithm for introducing sufficient messages. It also proves that the problem of minimising the number of coordination messages used is NP-hard.
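
    A minimal sketch in the spirit of the abstract, not the paper's algorithm (which works on input output transition systems and addresses minimisation): walk a global test sequence and, whenever the next input would cause a controllability problem, insert a coordination message from the last active tester to the one that must act. This greedy insertion makes no attempt to minimise the number of messages.

        def add_coordination(sequence):
            """sequence: list of (port, direction, label); returns an annotated sequence."""
            result = []
            prev_port = None
            for port, direction, label in sequence:
                if direction == "in" and prev_port is not None and prev_port != port:
                    # The tester at `port` cannot know it is time to act; tell it.
                    result.append(("coord", prev_port, port))
                result.append((port, direction, label))
                prev_port = port
            return result

        seq = [(1, "in", "a"), (1, "out", "x"), (2, "in", "b")]
        for step in add_coordination(seq):
            print(step)
        # ('coord', 1, 2) is inserted before port 2's input.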

    Hybrid Simulation and Test of Vessel Traffic Systems on the Cloud

    This paper presents a cloud-based hybrid simulation platform to test large-scale distributed Systems-of-Systems (SoS) for the management and control of maritime traffic, the so-called Vessel Traffic Systems (VTS). A VTS consists of multiple, heterogeneous, distributed and interoperating systems, including radar, automatic identification systems, direction finders, electro-optical sensors, gateways to external VTSs, and information systems; identifying, representing and analysing their interactions is a challenge for evaluating the real risks to the safety and security of the marine environment. The need to reproduce in the laboratory the system behaviours that could occur in situ demands the ability to integrate emulated and simulated environments, in order to cope with the different testability requirements of the systems involved and to keep testing costs sustainable. The platform exploits hybrid simulation and virtualization technologies, and it is deployable on a private cloud, reducing the cost of setting up realistic and effective testing scenarios.
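
    A minimal, hypothetical sketch of the idea of mixing emulated and simulated components in one test scenario; the component names and fields below are illustrative assumptions, not the platform's actual configuration format.

        scenario = {
            "name": "harbour-approach",
            "components": [
                {"system": "radar", "mode": "simulated", "host": "sim-node-1"},
                {"system": "ais", "mode": "emulated", "host": "vm-ais-1"},
                {"system": "direction-finder", "mode": "simulated", "host": "sim-node-2"},
            ],
        }

        def hosts_by_mode(cfg, mode):
            # Group deployment targets by whether the component is simulated or emulated.
            return [c["host"] for c in cfg["components"] if c["mode"] == mode]

        print(hosts_by_mode(scenario, "simulated"))  # nodes to start in the simulator
        print(hosts_by_mode(scenario, "emulated"))   # virtual machines to provision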