109 research outputs found

    Scenarios-based testing of systems with distributed ports

    Copyright © 2011 John Wiley & Sons. Distributed systems are usually composed of several distributed components that communicate with their environment through specific ports. When testing such a system, we separately observe sequences of inputs and outputs at each port rather than a global sequence, and potentially cannot reconstruct the global sequence that occurred. Typically, the users of such a system cannot synchronise their actions during use or testing. However, use of the system might correspond to a sequence of scenarios, where each scenario involves a sequence of interactions with the system that, for example, achieves a particular objective. When this is the case, there can be a significant delay between two scenarios, which effectively allows the users of the system to synchronise between scenarios. If we represent the specification of the global system in a state-based notation, a scenario is any sequence of events that happens between two such synchronisation points. We can encode scenarios in two different ways. The first approach marks some of the states of the specification to denote synchronisation points. It transpires that there are two ways to interpret such models, and these lead to two implementation relations. The second approach adds a set of traces to the specification to represent the traces that correspond to scenarios. We show that these two approaches have similar expressive power by providing an encoding from marked states to sets of traces. To assess the appropriateness of our new framework, we show that it is a conservative extension of previous implementation relations defined in the context of the distributed test architecture: if we consider that all states are marked, then we simply obtain ioco (the classical relation for single-port systems), while if no state is marked, then we obtain dioco (our previous relation for multi-port systems).
Finally, we concentrate on the study of controllable test cases, that is, test cases in which each local tester knows exactly when to apply inputs. We give two notions of controllable test cases, define an implementation relation for each of these notions, and relate them. We also show how to decide whether a test case satisfies these conditions. Research partially supported by the Spanish MEC project TESIS (TIN2009-14312-C02-01), the UK EPSRC project Testing of Probabilistic and Stochastic Systems (EP/G032572/1), and the UCM-BSCH programme to fund research groups (GR58/08, group number 910606).
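A toy sketch can make the core difficulty concrete: under distributed observation, each tester sees only the subsequence of events at its own port, so distinct global traces can be locally indistinguishable. The port names and events below are purely illustrative and not taken from the paper.

```python
# Hypothetical example: project a global trace onto per-port local traces.

def project(trace, port):
    """Local observation at `port`: the subsequence of events at that port."""
    return [event for (p, event) in trace if p == port]

# Two different global orderings of the same events ...
t1 = [("p1", "?a"), ("p2", "!x"), ("p1", "!b")]
t2 = [("p2", "!x"), ("p1", "?a"), ("p1", "!b")]

# ... are indistinguishable to the local testers at p1 and p2:
assert all(project(t1, p) == project(t2, p) for p in ("p1", "p2"))
```

This is exactly the situation in which a relation that compares local projections, such as dioco, differs from one that compares global traces, such as ioco.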

    Canonical finite state machines for distributed systems

    There has been much interest in testing from finite state machines (FSMs) as a result of their suitability for modelling or specifying state-based systems. Where there are multiple ports/interfaces, a multi-port FSM is used and, in testing, a tester is placed at each port. If the testers cannot communicate with one another directly and there is no global clock, then we are testing in the distributed test architecture. It is known that the use of the distributed test architecture can affect the power of testing, and recent work has characterised this in terms of local s-equivalence: in the distributed test architecture we can distinguish two FSMs, such as an implementation and a specification, if and only if they are not locally s-equivalent. However, there may be many FSMs that are locally s-equivalent to a given FSM, and the nature of these FSMs has not been explored. This paper examines the set of FSMs that are locally s-equivalent to a given FSM M. It shows that there is a unique smallest FSM χmin(M) and a unique largest FSM χmax(M) that are locally s-equivalent to M. Here, smallest and largest refer to the set of traces defined by an FSM and thus to its semantics. We also show that, for a given FSM M, the set of FSMs that are locally s-equivalent to M defines a bounded lattice. Finally, we define an FSM that, amongst all FSMs locally s-equivalent to M, has fewest states. We thus give three alternative canonical FSMs that are locally s-equivalent to an FSM M: one that defines the smallest set of traces, one that defines the largest set of traces, and one with fewest states. All three provide valuable information, and the first two can be produced in time that is polynomial in the number of states of M. We prove that the problem of finding a locally s-equivalent FSM with fewest states is NP-hard in general but can be solved in polynomial time for the special case where there are two ports.
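The intuition behind local s-equivalence can be sketched with a toy example (the trace sets below are hypothetical, not from the paper): two machines can define different sets of global traces, one strictly larger than the other, yet induce identical observations at every port.

```python
# Illustrative sketch: compare two trace sets port by port.

def project(trace, port):
    """Project a global trace onto the events occurring at `port`."""
    return tuple(event for (p, event) in trace if p == port)

def locally_equivalent(traces_m, traces_n, ports):
    """True if both trace sets induce the same local observations at every port."""
    return all({project(t, port) for t in traces_m} ==
               {project(t, port) for t in traces_n}
               for port in ports)

# M admits one interleaving; N admits both, so N's trace set is strictly larger.
M = {(("1", "a"), ("2", "x"))}
N = {(("1", "a"), ("2", "x")), (("2", "x"), ("1", "a"))}

assert M < N                                  # different global semantics
assert locally_equivalent(M, N, ["1", "2"])   # yet locally indistinguishable
```

In this spirit, the smallest and largest canonical FSMs bound the range of trace sets that are observationally interchangeable in the distributed test architecture.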

    Control-level call differentiation in IMS-based 3G core networks

    The 3GPP-defined IP Multimedia Subsystem (IMS) is becoming the de facto standard for IP-based multimedia communication services. It consists of an overlay control and service layer that is deployed on top of IP-based mobile and fixed networks in order to enable the seamless provisioning of IP multimedia services to 3G users. Service differentiation, which implies the network's ability to distinguish between different classes of traffic (or service) and provide each class with the appropriate treatment, is an important aspect of 3G networks. In this article, we present a critical review of existing service differentiation solutions and propose a new control-level call differentiation solution for IMS-based 3G core networks. The solution consists of a novel call differentiation scheme enabling the definition of various categories of calls with different QoS profiles. To support such profiles, an extended IMS architecture relying on two adaptive resource management mechanisms is proposed. Furthermore, simulations are used to evaluate the system performance. Compared to existing service differentiation solutions, ours offers several benefits, such as flexible QoS negotiation mechanisms, control over many communication aspects as means for differentiation, and a dynamic and adaptive resource management strategy. © 2011 IEEE
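The idea of per-category QoS profiles can be sketched as a simple data structure. Everything below (category names, profile fields, and the admission rule) is hypothetical and not drawn from the article's actual scheme; only the nominal codec bit rates are standard values.

```python
# Hypothetical call categories and QoS profiles (names and values invented).
QOS_PROFILES = {
    "premium":     {"codec": "G.711", "max_setup_delay_ms": 200, "preemptable": False},
    "standard":    {"codec": "G.729", "max_setup_delay_ms": 500, "preemptable": True},
    "best_effort": {"codec": "G.729", "max_setup_delay_ms": None, "preemptable": True},
}

# Nominal payload bit rates for the two codecs, in kbit/s.
CODEC_RATE_KBPS = {"G.711": 64, "G.729": 8}

def admit_call(category, available_kbps):
    """Toy admission decision: admit if the category's codec fits the link."""
    profile = QOS_PROFILES[category]
    return available_kbps >= CODEC_RATE_KBPS[profile["codec"]]

assert admit_call("premium", 100)        # 64 kbit/s fits in 100
assert not admit_call("premium", 10)     # but not in 10
assert admit_call("standard", 10)        # G.729 needs only 8
```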

    Open virtual playground: Initial architecture and results

    Network virtualization is a promising and technically challenging concept that enables the dynamic creation of several co-existing logical network instances (or virtual networks) over a shared physical network infrastructure. There are several motivations behind this concept, including cost-effective sharing of resources, customizable networking solutions, and the convergence of existing network infrastructures. We have previously proposed a new business model for virtual networking environments. In this paper, we use this model, as well as concrete use cases, as the basis for the definition of the Open Virtual Playground: an open virtual multi-services networking architecture in which different levels of services (i.e. essential services, service enablers, service building blocks, and end-user services) offered by various players can be dynamically discovered, used, and composed. Furthermore, a QoS-enabled VoIP service scenario is used to demonstrate the system operation, and preliminary performance measurements are collected. © 2012 IEEE

    An Elastic Hybrid Sensing Platform: Architecture and Research Challenges

    © 2016 Published by Elsevier B.V. The dynamic provisioning of hybrid sensing services that integrate both WSN and MPS is a promising, yet challenging, concept. It not only widens the spatial sensing coverage, but also enables different types of sensing nodes to collaboratively perform sensing tasks and complement each other. Furthermore, it allows for the provisioning of a new category of services that was not possible to implement in pure WSN or MPS networks. Offering a hybrid sensing platform as a service brings several benefits, including, but not limited to, efficient sharing and dynamic management of sensing nodes, diversification and reuse of sensing services, and the combination of many sensing paradigms so that data can be collected from different sources. However, many challenges need to be resolved before such an architecture becomes feasible. Currently, the deployment of sensing applications and services is a costly and complex process that also lacks automation. This paper motivates the need for hybrid sensing, sketches an early architecture, and identifies the research issues, with a few hints on how to solve them. We argue that a sensing platform that reuses virtualization and cloud computing concepts will help address many of these challenges and overcome the limitations of today's deployment practices.

    Big data quality framework: a holistic approach to continuous quality management

    Big Data is an essential research area for governments, institutions, and private agencies to support their analytics decisions. Big Data refers to all aspects of data: how it is collected, processed, and analyzed to generate value-added, data-driven insights and decisions. Degradation in data quality may result in unpredictable consequences, with confidence in the data and its source being lost. In the Big Data context, data characteristics such as volume, multiple heterogeneous data sources, and fast data generation increase the risk of quality degradation and require efficient mechanisms to check data worthiness. However, ensuring Big Data Quality (BDQ) is a costly and time-consuming process, since excessive computing resources are required. Maintaining quality through the Big Data lifecycle requires quality profiling and verification before any processing decision. A BDQ management framework for enhancing the pre-processing activities while strengthening data control is proposed. The proposed framework uses a new concept called the Big Data Quality Profile, which captures the quality outline, requirements, attributes, dimensions, scores, and rules. Using the Big Data profiling and sampling components of the framework, a fast and efficient data quality estimation is initiated before and after an intermediate pre-processing phase. The exploratory profiling component of the framework plays an initial role in quality profiling; it uses a set of predefined quality metrics to evaluate important data quality dimensions. It generates quality rules by applying various pre-processing activities and their related functions. These rules mainly feed the Data Quality Profile and result in quality scores for the selected quality attributes.
The framework implementation and dataflow management across the various quality management processes are discussed; the paper concludes with ongoing work on framework evaluation and deployment to support quality evaluation decisions.
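As a rough illustration of sample-based quality profiling, the sketch below scores two toy quality dimensions (completeness and uniqueness) against per-dimension rules. The metric definitions, thresholds, and column names are invented for the example, not the framework's own.

```python
# Hypothetical sample-based quality profiling sketch.

def completeness(column):
    """Fraction of non-missing values in the column."""
    return sum(v is not None for v in column) / len(column)

def uniqueness(column):
    """Fraction of distinct values among the present (non-missing) ones."""
    present = [v for v in column if v is not None]
    return len(set(present)) / len(present) if present else 0.0

def quality_profile(sample, rules):
    """Score each column and flag the quality dimensions that violate a rule."""
    profile = {}
    for name, column in sample.items():
        scores = {"completeness": completeness(column),
                  "uniqueness": uniqueness(column)}
        scores["violations"] = [dim for dim, threshold in rules.items()
                                if scores[dim] < threshold]
        profile[name] = scores
    return profile

sample = {"user_id": [1, 2, 2, None], "email": ["a@x", "b@x", "c@x", "d@x"]}
rules = {"completeness": 0.9, "uniqueness": 0.95}
p = quality_profile(sample, rules)
assert p["email"]["violations"] == []
assert "completeness" in p["user_id"]["violations"]
```

A profile of this shape, computed on a sample before full pre-processing, is what lets quality checks stay cheap relative to the size of the data.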

    A multi-service multi-role integrated information model for dynamic resource discovery in virtual networks

    Network virtualization is considered a promising way to overcome the limitations and fight the gradual ossification of the current Internet infrastructure. The network virtualization concept consists of the dynamic creation of several co-existing logical network instances (or virtual networks) over a shared physical network infrastructure. One of the challenges associated with this concept is the dynamic discovery and selection of virtual resources that can be composed to form virtual networks. To achieve that task, there is a need for a formal and expressive information model that facilitates information representation and sharing between the various roles/entities involved. We have previously proposed a service-oriented hierarchical business model for virtual networking environments, as well as an architecture enabling its realization. In this paper, we build on this business model and architecture by proposing a multi-service, multi-role hierarchical information model for virtual networking environments. Furthermore, we demonstrate the usage of this information model using a secure content distribution scenario that is realized using REST interfaces. Unlike other proposals, our integrated information model enables the fine-grained description of virtual networks and virtual networking resources, in addition to the modeling of network services and roles and their relationships and hierarchy. © 2013 IEEE

    Overcoming controllability problems in distributed testing from an input output transition system

    This is the pre-print version of the article; the official published version can be accessed from the link below. Copyright © 2012 Springer Verlag. This paper concerns the testing of a system with physically distributed interfaces, called ports, at which it interacts with its environment. We place a tester at each port, and the tester at port p observes events at p only. This can lead to controllability problems, where the observations made by the tester at a port p are not sufficient for it to know when to send an input. It is known that there are test objectives, such as executing a particular transition, that cannot be achieved if we restrict attention to test cases that have no controllability problems. This has led to interest in schemes where the testers at the individual ports send coordination messages to one another through an external communications network in order to overcome controllability problems. However, such approaches have largely been studied in the context of testing from a deterministic finite state machine. This paper investigates the use of coordination messages to overcome controllability problems when testing from an input output transition system and gives an algorithm for introducing sufficient messages. It also proves that the problem of minimising the number of coordination messages used is NP-hard.
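A minimal sketch can show when coordination messages are needed. Under the usual simplification, the tester at port p can safely supply an input only if it observed the immediately preceding event; otherwise the tester at the previous event's port must notify it. The event and port names below are hypothetical, and this is not the paper's algorithm, only the underlying condition.

```python
# Toy detection of controllability problems in a distributed test sequence.

def coordination_messages(sequence):
    """Return (sender, receiver) coordination messages needed to execute
    `sequence` without controllability problems.
    Each event is a (port, kind, label) triple, kind in {"input", "output"}."""
    messages = []
    for prev, curr in zip(sequence, sequence[1:]):
        prev_port = prev[0]
        curr_port, kind = curr[0], curr[1]
        if kind == "input" and curr_port != prev_port:
            # The tester at curr_port cannot observe that prev has happened,
            # so the tester at prev_port must tell it when to proceed.
            messages.append((prev_port, curr_port))
    return messages

seq = [("p1", "input", "a"), ("p1", "output", "x"),
       ("p2", "input", "b"), ("p2", "output", "y")]
assert coordination_messages(seq) == [("p1", "p2")]
```

Here only the switch from p1 to p2 before input b needs a message; the other steps are locally observable by the tester that acts next.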

    Formal Specification and Automatic Verification of Conditional Commitments

    Developing and implementing a model checker dedicated to conditional logic, together with a user interface, are urgent requirements for determining whether agents comply with their commitment protocols.

    Conformance relations for distributed testing based on CSP

    Copyright © 2011 Springer Berlin Heidelberg. CSP is a well-established process algebra that provides comprehensive theoretical and practical support for refinement-based design and verification of systems. Recently, a testing theory for CSP has also been presented. In this paper, we explore the problem of testing from a CSP specification when observations are made by a set of distributed testers. We build on previous work on input-output transition systems, but the use of CSP leads to significant differences, since some of its conformance (refinement) relations consider failures as well as traces. In addition, we allow events to be observed by more than one tester. We show how the CSP notions of refinement can be adapted to distributed testing. We consider two contexts: when the testers are entirely independent and when they can cooperate. Finally, we give some preliminary results on test-case generation and the use of coordination messages. © 2011 IFIP International Federation for Information Processing