
    Analysis and Verification of Service Interaction Protocols - A Brief Survey

    Modeling and analysis of interactions among services is a crucial issue in Service-Oriented Computing. Composing Web services is a complicated task that requires techniques and tools to verify that the new system will behave correctly. In this paper, we first overview some formal models proposed in the literature to describe services. Second, we give a brief survey of verification techniques that can be used to analyse services and their interaction. Last, we focus on the realizability and conformance of choreographies. Comment: In Proceedings TAV-WEB 2010, arXiv:1009.330
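
    To make the realizability question concrete, here is a hedged Python sketch of one classic connectedness check: in a sequence of interactions, the sender of each message must have taken part in the previous interaction; otherwise no set of independent peers can enforce the specified order locally. The simplified sequence model and all names are illustrative assumptions, not the formalisms covered by the survey.

        from typing import List, Tuple

        # An interaction is a (sender, receiver, message) triple.
        Interaction = Tuple[str, str, str]

        def connected(choreography: List[Interaction]) -> bool:
            """True if each message's sender took part in the previous interaction."""
            for (s1, r1, _), (s2, _, _) in zip(choreography, choreography[1:]):
                if s2 not in (s1, r1):
                    return False
            return True

        # Realizable ordering: B learns of m1 before it has to send m2.
        print(connected([("A", "B", "m1"), ("B", "C", "m2")]))  # True
        # Not locally enforceable: C cannot know that m1 has happened.
        print(connected([("A", "B", "m1"), ("C", "D", "m2")]))  # False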

    Tau Be or not Tau Be? - A Perspective on Service Compatibility and Substitutability

    One of the main open research issues in Service-Oriented Computing is to propose automated techniques to analyse service interfaces. A first problem, called compatibility, aims at determining whether a set of services (two in this paper) can be composed together and interact with each other as expected. Another related problem is to check the substitutability of one service with another. These problems are especially difficult when behavioural descriptions (i.e., message calls and their ordering) are taken into account in service interfaces. Interfaces should capture the service behaviour as faithfully as possible, to make their automated analysis feasible while not exhibiting implementation details. In this position paper, we choose Labelled Transition Systems to specify the behavioural part of service interfaces. In particular, we show that internal behaviours (tau transitions) are necessary in these transition systems in order to detect subtle errors that may occur when composing a set of services together. We also show that tau transitions should be handled differently in the compatibility and substitutability problems: the former requires checking that compatibility is preserved every time a tau transition is traversed in one interface, whereas the latter requires a precise analysis of tau branchings so that the substitution preserves the properties (e.g., a compatibility notion) that were ensured before replacement. Comment: In Proceedings WCSI 2010, arXiv:1010.233
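
    As a rough illustration of the compatibility problem, the following Python sketch explores the synchronous product of two Labelled Transition Systems, letting each side take internal tau steps autonomously, and reports any reachable deadlock. The encoding and the deadlock-freedom notion of compatibility are simplifying assumptions rather than the paper's exact definitions; the example shows how one tau branch of the server reaches an error that the other branch avoids.

        from collections import deque

        # An LTS is (initial state, set of final states,
        # transitions as {state: [(label, target), ...]}).
        # Labels: "m!" sends m, "m?" receives m, "tau" is internal.

        def compatible(lts1, lts2):
            (i1, f1, t1), (i2, f2, t2) = lts1, lts2
            seen, queue = set(), deque([(i1, i2)])
            while queue:
                s1, s2 = state = queue.popleft()
                if state in seen:
                    continue
                seen.add(state)
                succs = []
                # Either side may take its internal tau steps on its own.
                succs += [(n1, s2) for (l, n1) in t1.get(s1, []) if l == "tau"]
                succs += [(s1, n2) for (l, n2) in t2.get(s2, []) if l == "tau"]
                # Synchronisation on complementary send/receive labels.
                for (l1, n1) in t1.get(s1, []):
                    for (l2, n2) in t2.get(s2, []):
                        if "tau" not in (l1, l2) and l1[:-1] == l2[:-1] \
                                and l1[-1] != l2[-1]:
                            succs.append((n1, n2))
                if not succs and not (s1 in f1 and s2 in f2):
                    return False  # reachable deadlock
                queue.extend(succs)
            return True

        client = ("c0", {"c2"}, {"c0": [("a!", "c1")], "c1": [("ok?", "c2")]})
        server = ("s0", {"s4"}, {"s0": [("tau", "s1"), ("tau", "s2")],
                                 "s1": [("a?", "s3")], "s2": [("b?", "s3")],
                                 "s3": [("ok!", "s4")]})
        # The tau branch to s2 makes the server expect "b", which the
        # client never sends, so the pair is incompatible.
        print(compatible(client, server))  # False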

    Conditions, constraints and contracts: on the use of annotations for policy modeling.

    Organisational policies express constraints on the generation and processing of resources. However, application domains rely on transformation processes, which are in principle orthogonal to policy specifications, and domain rules and policies may evolve in a non-synchronised way. In previous papers, we proposed annotations as a flexible way to model aspects of some policy, and showed how they can be used to impose constraints on domain configurations, to derive application conditions on transformations, and to annotate complex patterns. We extend the approach by: allowing domain model elements to be annotated with collections of elements, which can be collectively applied to individual resources or collections thereof; proposing an original construction to solve the problem of annotations remaining "orphan" when annotated resources are consumed; and introducing a notion of contract, by which a policy imposes additional pre-conditions and post-conditions on rules for deriving new resources. We discuss a concrete case study of linguistic resources annotated with information on the licences under which they can be used. The annotation framework allows forms of reasoning such as identifying conflicts among licences, enforcing the presence of licences, or ruling out some modifications of a licence configuration.
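
    As a minimal, hypothetical sketch of the kind of reasoning the framework enables, the Python code below annotates resources with licences and gives a derivation rule a contract: a pre-condition that rejects conflicting licences and a post-condition that annotates the derived resource. The two-attribute licence model is a deliberate simplification, not the paper's actual formalisation.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Licence:
            name: str
            allows_derivatives: bool
            share_alike: bool  # derivatives must keep this same licence

        def combine(a: Licence, b: Licence) -> Licence:
            # Pre-condition of the derivation rule: no licence conflicts.
            if not (a.allows_derivatives and b.allows_derivatives):
                raise ValueError("conflict: a source licence forbids derivatives")
            if a.share_alike and b.share_alike and a.name != b.name:
                raise ValueError("conflict: incompatible share-alike licences")
            # Post-condition: the derived resource carries the stricter licence.
            return a if a.share_alike else b

        cc_by = Licence("CC-BY", True, False)
        cc_by_sa = Licence("CC-BY-SA", True, True)
        cc_by_nd = Licence("CC-BY-ND", False, False)

        print(combine(cc_by, cc_by_sa).name)  # CC-BY-SA
        try:
            combine(cc_by, cc_by_nd)
        except ValueError as err:
            print(err)  # conflict: a source licence forbids derivatives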

    Multilevel Contracts for Trusted Components

    This article contributes to the design and verification of trusted components and services. Contracts are defined at several levels so as to cover different facets, such as component consistency, compatibility, or correctness. The article introduces multilevel contracts and a design-and-verification process for handling and analysing these contracts in component models. The approach is implemented with the COSTO platform, which supports the Kmelia component model. A case study illustrates the overall approach. Comment: In Proceedings WCSI 2010, arXiv:1010.233
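
    The following Python fragment is only a schematic reading of the multilevel idea, with invented names rather than Kmelia or COSTO syntax: a component carries one contract per level, and a verification pass checks each level independently.

        from typing import Callable, Dict

        class Component:
            def __init__(self, name: str):
                self.name = name
                # Maps a contract level to a predicate over the component.
                self.contracts: Dict[str, Callable[["Component"], bool]] = {}

        def verify(comp: Component) -> Dict[str, bool]:
            """Check every contract level and report a verdict per level."""
            return {level: check(comp) for level, check in comp.contracts.items()}

        stock = Component("StockManager")
        stock.contracts["consistency"] = lambda c: c.name.isidentifier()
        stock.contracts["compatibility"] = lambda c: True  # placeholder check
        print(verify(stock))  # {'consistency': True, 'compatibility': True}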

    Contract Aware Components, 10 years after

    The notion of contract-aware components was published roughly ten years ago and is now becoming mainstream in several fields where the usage of software components is seen as critical. The goal of this paper is to survey domains such as Embedded Systems or Service-Oriented Architecture where the notion of contract-aware components has been influential. For each of these domains we briefly describe what has been done with this idea and we discuss the remaining challenges. Comment: In Proceedings WCSI 2010, arXiv:1010.233

    Semantic Component Composition

    Building complex software systems necessitates the use of component-based architectures. In theory, of the set of components needed for a design, only a small portion are "custom"; the rest are reused or refactored existing pieces of software. Unfortunately, this is an idealized situation. Just because two components should work together does not mean that they will work together. The "glue" that holds components together is not just technology. The contracts that bind complex systems together implicitly define more than their explicit type. These "conceptual contracts" describe essential aspects of extra-system semantics: e.g., object models, type systems, data representation, interface action semantics, legal and contractual obligations, and more. Designers and developers spend inordinate amounts of time technologically duct-taping systems to fulfill these conceptual contracts because system-wide semantics have not been rigorously characterized or codified. This paper describes a formal characterization of the problem and discusses an initial implementation of the resulting theoretical system. Comment: 9 pages, submitted to GCSE/SAIG '0
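
    A hedged sketch of the central point: two components can agree on explicit types yet disagree on the conceptual contract behind them. Below, both ports exchange a float, but one means metres and the other feet; checking a semantic annotation alongside the type catches what type checking alone cannot. The names are illustrative, not the paper's formal system.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Port:
            type_: type
            semantics: str  # e.g. unit of measure, data representation

        def composable(provided: Port, required: Port) -> bool:
            return provided.type_ is required.type_ \
                and provided.semantics == required.semantics

        altitude_out = Port(float, "altitude in metres")
        altitude_in = Port(float, "altitude in feet")

        # Type-level glue would accept this composition;
        # the conceptual contract rejects it.
        print(altitude_out.type_ is altitude_in.type_)  # True
        print(composable(altitude_out, altitude_in))    # False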

    Threats Management Throughout the Software Service Life-Cycle

    Software services are inevitably exposed to a fluctuating threat picture. Unfortunately, not all threats can be handled with preventive measures during design and development alone; some also require adaptive mitigations at runtime. In this paper we describe an approach where we model composite services and threats together, which allows us to create preventive measures at design time. At runtime, our specification also allows the service runtime environment (SRE) to receive alerts about active threats that we have not handled, and to react to these automatically through adaptation of the composite service. A goal-oriented security requirements modelling tool is used to model business-level threats and analyse how they may impact goals. A process flow modelling tool, utilising Business Process Model and Notation (BPMN) and standard error boundary events, allows us to define how threats should be responded to during service execution on a technical level. Throughout the software life-cycle, we maintain threats in a centralised threat repository. Re-use of these threats extends further into monitoring alerts distributed through a cloud-based messaging service. To demonstrate our approach in practice, we have developed a proof-of-concept service for the Air Traffic Management (ATM) domain. In addition to the design-time activities, we show how this composite service duly adapts itself when a service component is exposed to a threat at runtime. Comment: In Proceedings GraMSec 2014, arXiv:1404.163
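
    A simplified Python sketch of the runtime side of this approach: on receiving a threat alert, the runtime adapts the composite service by rebinding the exposed component to a substitute. The alert format, component names, and substitution policy are illustrative assumptions; the paper itself relies on BPMN error boundary events and a cloud-based messaging service.

        # Current bindings of the composite service (hypothetical names).
        composite = {"flight-plan": "FlightPlanServiceA",
                     "weather": "WeatherServiceA"}
        substitutes = {"WeatherServiceA": "WeatherServiceB"}

        def on_threat_alert(alert: dict) -> None:
            """React to an active threat by swapping the affected component."""
            for role, component in composite.items():
                if component == alert["target"] and component in substitutes:
                    composite[role] = substitutes[component]
                    print(f"adapted: {role} now bound to {composite[role]}")

        on_threat_alert({"target": "WeatherServiceA", "threat": "denial-of-service"})
        # adapted: weather now bound to WeatherServiceB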

    Reasoning About a Service-oriented Programming Paradigm

    This paper is about a new way of programming distributed applications: the service-oriented one. It is a concept paper based upon our experience in developing a theory and a language for programming services. Both the theoretical formalization and the language interpreter gave us evidence that a new programming paradigm exists. In this paper we illustrate the basic features that characterize it.

    Automated robotic liquid handling assembly of modular DNA devices

    Recent advances in modular DNA assembly techniques have enabled synthetic biologists to test significantly more of the available "design space" represented by "devices" created as combinations of individual genetic components. However, manual assembly of such large numbers of devices is time-intensive, error-prone, and costly. The increasing sophistication and scale of synthetic biology research necessitates an efficient, reproducible way to accommodate large-scale, complex, and high-throughput device construction. Here, a DNA assembly protocol using the Type IIS restriction endonuclease-based Modular Cloning (MoClo) technique is automated on two liquid-handling robotic platforms. Automated liquid-handling robots require careful, often tedious optimization of pipetting parameters for liquids of different viscosities (e.g. enzymes, DNA, water, buffers), as well as explicit programming to ensure correct aspiration and dispensing of DNA parts and reagents. This makes manual script writing for complex assemblies just as problematic as manual DNA assembly, and necessitates a software tool that can automate script generation. To this end, we have developed a web-based software tool, http://mocloassembly.com, for generating combinatorial DNA device libraries from basic DNA parts uploaded as GenBank files. We provide access to the tool, and an export file from our liquid-handler software which includes optimized liquid classes, labware parameters, and deck layout. All DNA parts used are available through Addgene, and their digital maps can be accessed via the Boston University BDC ICE Registry. Together, these elements provide a foundation for other organizations to automate modular cloning experiments and similar protocols. The automated DNA assembly workflow presented here enables the repeatable, automated, high-throughput production of DNA devices, and reduces the risk of human error arising from repetitive manual pipetting. Sequencing data show that the automated DNA assembly reactions generated from this workflow are ~95% correct and require as little as 4% of the hands-on time needed for manual reaction preparation.
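
    To illustrate the combinatorial step such a tool automates, here is a small Python sketch that enumerates every device in a design space and emits a naive pipetting worklist per assembly reaction. The part names, volume, and worklist format are invented placeholders, not the exported liquid classes, labware parameters, or deck layout mentioned above.

        from itertools import product

        parts = {"promoter": ["J23100", "J23106"],
                 "cds": ["GFP", "RFP"],
                 "terminator": ["B0015"]}

        def worklist(parts: dict, part_volume_ul: float = 1.0):
            """Yield (device name, pipetting steps) for every combination."""
            positions = list(parts)
            for combo in product(*(parts[p] for p in positions)):
                device = "-".join(combo)
                steps = [f"aspirate {part_volume_ul} uL of {part}; "
                         f"dispense into destination well for {device}"
                         for part in combo]
                yield device, steps

        for device, steps in worklist(parts):
            print(device)  # e.g. J23100-GFP-B0015
            for step in steps:
                print("  " + step)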