Opaque Service Virtualisation: A Practical Tool for Emulating Endpoint Systems
Large enterprise software systems engage in many complex interactions with other services in their environment. Developing and testing under production-like conditions is therefore very challenging. Current approaches emulate dependent services using either explicit modelling or record-and-replay. Models require deep knowledge of the target services, while record-and-replay is limited in accuracy; both face development and scaling issues. We present a new technique that improves the accuracy of record-and-replay approaches without requiring prior knowledge of the service protocols. The approach uses Multiple Sequence Alignment to derive message prototypes from recorded system interactions, and a scheme to match incoming request messages against prototypes to generate response messages. We use a modified Needleman-Wunsch algorithm for distance calculation during message matching. Our approach has shown greater than 99% accuracy for four evaluated enterprise system messaging protocols and has been successfully integrated into the CA Service Virtualization commercial product to complement its existing techniques.
Comment: In Proceedings of the 38th International Conference on Software Engineering Companion (pp. 202-211). arXiv admin note: text overlap with arXiv:1510.0142
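The matching step described above can be sketched as a Needleman-Wunsch-style global alignment distance. The unit costs and byte-level granularity below are illustrative assumptions (with these weights the score reduces to edit distance), not the paper's exact scoring scheme:

```python
def needleman_wunsch_distance(a: bytes, b: bytes,
                              match: int = 0, mismatch: int = 1,
                              gap: int = 1) -> int:
    """Alignment distance between two messages; lower means more similar.

    Computes the minimum-cost global alignment with a rolling
    dynamic-programming row (O(len(a) * len(b)) time, O(len(b)) space).
    """
    n, m = len(a), len(b)
    prev = [j * gap for j in range(m + 1)]  # cost of aligning "" with b[:j]
    for i in range(1, n + 1):
        curr = [i * gap] + [0] * m
        for j in range(1, m + 1):
            sub = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            curr[j] = min(sub,             # align a[i-1] with b[j-1]
                          prev[j] + gap,   # gap in b
                          curr[j - 1] + gap)  # gap in a
        prev = curr
    return prev[m]

def closest_prototype(request: bytes, prototypes: list[bytes]) -> bytes:
    """Match an incoming request to the nearest recorded message prototype."""
    return min(prototypes, key=lambda p: needleman_wunsch_distance(request, p))
```

A virtualised endpoint would then synthesise its response from the interaction recorded for the winning prototype, substituting request-specific fields.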
A realistic evaluation: the case of protocol-based care
Background
'Protocol-based care' was envisioned by policy makers as a mechanism for delivering on the service improvement agenda in England. Realistic evaluation is an increasingly popular approach, but few published examples exist, particularly in implementation research. To fill this gap, in this paper we describe the application of a realistic evaluation approach to the study of protocol-based care, while sharing relevant findings about standardising care through the use of protocols, guidelines, and pathways.
Methods
Situated between positivism and relativism, realistic evaluation is concerned with the identification of underlying causal mechanisms, how they work, and under what conditions. Fundamentally it focuses attention on finding out what works, for whom, how, and in what circumstances.
Results
In this research, we were interested in understanding the relationships between the type and nature of particular approaches to protocol-based care (mechanisms), different clinical settings (context), and the impacts that resulted (outcomes). An evidence review using the principles of realist synthesis resulted in a number of propositions, i.e., context, mechanism, and outcome threads (CMOs). These propositions were then 'tested' through multiple case studies, using methods including non-participant observation, interviews, and document analysis, within an iterative analysis process. The initial propositions (conjectured CMOs) only partially corresponded to the findings that emerged during analysis. By iteratively scrutinising mechanisms, context, and outcomes, we were able to draw out some theoretically generalisable features about what works, for whom, how, and in what circumstances in relation to the use of standardised care approaches (refined CMOs).
Conclusions
As one of the first studies to apply realistic evaluation in implementation research, this work found the approach a good fit, particularly given the growing emphasis on understanding how context influences evidence-based practice. The strengths and limitations of the approach are considered, including how to operationalise it and some of the challenges involved. The approach provided a useful interpretive framework for making sense of the multiple factors that were simultaneously at play and being observed through various data sources, and for developing explanatory theory about using standardised care approaches in practice.
When should I use network emulation?
The design and development of a complex system requires an adequate methodology and efficient instrumental support in order to detect and correct anomalies early in the functional and non-functional properties of the tested protocols. Among the various tools used to provide experimental support for such developments, network emulation relies on real-time production of impairments on real traffic according to a communication model, whether realistic or not. This paper aims to present to newcomers to network emulation (students, engineers, etc.) the basic principles and practices, illustrated with a few commonly used tools. The motivation is to fill a gap in introductory and pragmatic papers in this domain. The study particularly considers centralized approaches, which allow cheap and easy implementation in research labs or industrial settings. In addition, an architectural model for emulation systems is proposed, defining three complementary levels: hardware, impairment, and model. With the help of this architectural framework, various existing tools are situated and described. Several approaches for modelling the emulation actions are studied, such as impairment-based scenarios, virtual architectures, real-time discrete simulation, and trace-based systems. These modelling approaches are described and compared in terms of the services they provide, and we study their ability to meet various designer needs, so as to assess when emulation is appropriate.
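The impairment level of such an architecture can be illustrated with a toy model that applies delay, jitter, and loss to individual packets. The class and its parameters below are invented for illustration and do not correspond to any specific emulation tool:

```python
import random

class ImpairmentChannel:
    """Toy packet-level impairment model: fixed delay, uniform jitter,
    and independent random loss. Illustrative only."""

    def __init__(self, delay_ms: float, jitter_ms: float,
                 loss_rate: float, seed=None):
        self.delay_ms = delay_ms
        self.jitter_ms = jitter_ms
        self.loss_rate = loss_rate
        self.rng = random.Random(seed)  # seeded for reproducible scenarios

    def send(self, packet: bytes):
        """Return (delivered, latency_ms) for one packet."""
        if self.rng.random() < self.loss_rate:
            return (False, None)  # packet dropped by the loss model
        latency = self.delay_ms + self.rng.uniform(-self.jitter_ms,
                                                   self.jitter_ms)
        return (True, max(latency, 0.0))
```

In practice, a centralized emulator often delegates this to the operating system; on Linux, for example, netem applies a comparable model with `tc qdisc add dev eth0 root netem delay 100ms 10ms loss 1%`.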
Network emulation focusing on QoS-Oriented satellite communication
This chapter presents network emulation basics and a complete case study of QoS-oriented satellite communication.
Architecting specifications for test case generation
The Specification and Description Language (SDL), together with its associated tool sets, can be used for the generation of Tree and Tabular Combined Notation (TTCN) test cases. Surprisingly, little documentation exists on the optimal way to specify systems so that they can best be used for the generation of tests. This paper elaborates on the different tool-supported approaches that can be taken for test case generation and highlights their advantages and disadvantages. A rule-based SDL specification style is then presented that facilitates the automatic generation of tests.
On the Automated Synthesis of Enterprise Integration Patterns to Adapt Choreography-based Distributed Systems
The Future Internet is becoming a reality, providing large-scale computing environments where a virtually infinite number of available services can be composed so as to fit users' needs. Modern service-oriented applications will increasingly be built by reusing and assembling distributed services. A key enabler for this vision is the ability to automatically compose and dynamically coordinate software services. Service choreographies are an emergent Service Engineering (SE) approach to composing and coordinating services in a distributed way. When mismatching third-party services are to be composed, obtaining the distributed coordination and adaptation logic required to suitably realize a choreography is a non-trivial and error-prone task, so automatic support is needed. In this direction, this paper leverages previous work on the automatic synthesis of choreography-based systems and describes our preliminary steps towards exploiting Enterprise Integration Patterns to deal with a form of choreography adaptation.
Comment: In Proceedings FOCLASA 2015, arXiv:1512.0694
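Two Enterprise Integration Patterns that such adaptation logic commonly relies on, the Message Translator and the Content Enricher, can be sketched as follows. The field names and defaults are invented for illustration and are not from the paper:

```python
def message_translator(payload: dict, mapping: dict) -> dict:
    """EIP 'Message Translator': rename fields so a message produced by
    one service matches the schema expected by a mismatching partner.

    `mapping` maps source field names to destination field names;
    unmapped fields are dropped.
    """
    return {dst: payload[src] for src, dst in mapping.items() if src in payload}

class ContentEnricher:
    """EIP 'Content Enricher': supply data a downstream choreography
    participant requires but the upstream message lacks."""

    def __init__(self, defaults: dict):
        self.defaults = defaults

    def enrich(self, msg: dict) -> dict:
        # Message fields take precedence over the enricher's defaults.
        return {**self.defaults, **msg}
```

An automatically synthesised adapter would chain such patterns between the send and receive ends of each mismatching interaction in the choreography.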