
    Xcd - Modular, Realizable Software Architectures

    Connector-Centric Design (Xcd) is centred around a new formal architectural description language, focusing mainly on complex connectors. Inspired by Wright and BIP, Xcd aims to cleanly separate, in a modular manner, the high-level functional, interaction, and control behaviours of a system. This can aid in increasing both the understandability of architectural specifications and the reusability of the components and connectors themselves. Through the independent specification of control behaviours, Xcd allows designers to experiment more easily with different design decisions early on, without having to modify the functional behaviour specifications (components) or the interaction ones (connectors). At the same time, Xcd attempts to ease architectural specification by following (and extending) a Design-by-Contract approach, which is more familiar to software developers than process algebras like CSP or languages like BIP that are closer to synchronous/hardware specification languages. Xcd extends Design-by-Contract (i) by separating component contracts into functional and interaction sub-contracts, and (ii) by allowing service consumers to specify their own contractual clauses. Xcd connector specifications are completely decentralized, foregoing Wright’s connector glue, to ensure their realizability by construction.
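    The contract separation described above can be made concrete with a small sketch. The following Python mock-up is purely illustrative and uses none of Xcd's actual notation: FunctionalContract, InteractionContract, and the BankAccount example are hypothetical names, chosen only to show how a functional sub-contract (value pre/post-conditions) can be kept apart from an interaction sub-contract (when a call is allowed), so that either can change without touching the other.

```python
# Illustrative sketch only: Xcd has its own formal notation. This mock-up
# merely shows the *idea* of splitting a Design-by-Contract component
# contract into functional and interaction sub-contracts. All names here
# (FunctionalContract, InteractionContract, BankAccount) are hypothetical.

class FunctionalContract:
    """Pre/post-conditions on data values (what a service computes)."""
    def __init__(self, pre, post):
        self.pre, self.post = pre, post

class InteractionContract:
    """Constraints on when a service may be called (protocol/ordering)."""
    def __init__(self, may_call):
        self.may_call = may_call

class BankAccount:
    def __init__(self):
        self.balance = 0
        self.opened = False
        # Functional sub-contract: the deposited amount must be positive,
        # and afterwards the balance must have grown by exactly that amount.
        self.deposit_functional = FunctionalContract(
            pre=lambda amount: amount > 0,
            post=lambda old_balance, amount, new_balance:
                new_balance == old_balance + amount)
        # Interaction sub-contract: deposit is only acceptable once the
        # account has been opened; changing this protocol clause does not
        # touch the functional clauses above.
        self.deposit_interaction = InteractionContract(
            may_call=lambda account: account.opened)

    def open(self):
        self.opened = True

    def deposit(self, amount):
        assert self.deposit_interaction.may_call(self), "protocol violation"
        assert self.deposit_functional.pre(amount), "precondition violation"
        old = self.balance
        self.balance += amount
        assert self.deposit_functional.post(old, amount, self.balance)

acct = BankAccount()
acct.open()
acct.deposit(10)
print(acct.balance)  # 10
```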

    Specifying Logic Programs in Controlled Natural Language

    Writing specifications for computer programs is not easy since one has to take into account the disparate conceptual worlds of the application domain and of software development. To bridge this conceptual gap we propose controlled natural language as a declarative and application-specific specification language. Controlled natural language is a subset of natural language that can be accurately and efficiently processed by a computer, but is expressive enough to allow natural usage by non-specialists. Specifications in controlled natural language are automatically translated into Prolog clauses, and hence become formal and executable. The translation uses a definite clause grammar (DCG) enhanced by feature structures. Inter-text references of the specification, e.g. anaphora, are resolved with the help of discourse representation theory (DRT). The generated Prolog clauses are added to a knowledge base. We have implemented a prototypical specification system that successfully processes the specification of a simple automated teller machine.
    Comment: 16 pages, compressed, uuencoded PostScript; published in Proceedings of CLNLP 95, COMPULOGNET/ELSNET/EAGLES Workshop on Computational Logic for Natural Language Processing, Edinburgh, April 3-5, 1995
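    As a rough illustration of the translation idea, here is a toy sketch in Python. It is not the paper's DCG with feature structures and DRT; the grammar pattern, lexicon, and clause shape below are hypothetical, showing only how a sentence from a small controlled subset might be mapped onto an executable Prolog-style clause.

```python
# Toy illustration only: the paper's system uses a Prolog DCG with feature
# structures and DRT. This sketch merely shows the flavour of mapping a
# controlled-English sentence onto an executable clause; the grammar and
# lexicon are hypothetical.

import re

LEXICON = {"customer": "customer", "card": "card", "teller": "teller"}
VERBS = {"owns": "owns", "inserts": "inserts"}

def translate(sentence):
    # Accepts only the pattern "Every <noun> <verb> a <noun>."
    m = re.fullmatch(r"Every (\w+) (\w+) a (\w+)\.", sentence)
    if not m or m.group(1) not in LEXICON or m.group(2) not in VERBS \
            or m.group(3) not in LEXICON:
        raise ValueError("sentence is outside the controlled subset")
    subj, verb, obj = LEXICON[m.group(1)], VERBS[m.group(2)], LEXICON[m.group(3)]
    # Universal quantification becomes a Prolog rule over a fresh variable X.
    return f"{verb}(X, {obj}) :- {subj}(X)."

print(translate("Every customer owns a card."))
# -> owns(X, card) :- customer(X).
```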

    Automating SLA-Driven API Development with SLA4OAI

    The OpenAPI Specification (OAS) is the de facto standard for describing RESTful APIs from a functional perspective. OAS has been a success due to its simple model and the wide ecosystem of tools supporting the API development lifecycle. Unfortunately, the current scope of OAS ignores information that is crucial for an API, such as its Service Level Agreement (SLA). Without a standard for describing and managing this non-functional information, the disadvantages include vendor lock-in and an ecosystem that cannot grow to handle extra-functional aspects. In this paper, we present SLA4OAI, a pioneering extension of OAS that not only allows the specification of SLAs but also supports some stages of the SLA-Driven API lifecycle with an open-source ecosystem. Finally, we validate our proposal by modeling 5,488 limitations in 148 plans of 35 real-world APIs, and we show initial interest from industry with 600 and 1,900 downloads and installs of the SLA Instrumentation Library and the SLA Engine, respectively.
    Funding: Ministerio de Economía y Competitividad TIN2015-70560-R; Ministerio de Ciencia, Innovación y Universidades RTI2018-101204-B-C21; Ministerio de Educación, Cultura y Deporte FPU15/0298
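    To illustrate the kind of information that OAS alone does not capture, here is a hypothetical sketch. The field names (x-sla, plans, requests_per_hour) are not the actual SLA4OAI schema; they only show the idea of attaching per-plan quotas to an OpenAPI description and checking requests against them.

```python
# Hypothetical sketch: the field names below are NOT the actual SLA4OAI
# schema; they only illustrate attaching per-plan rate limits to an
# OpenAPI description and enforcing them at runtime.

openapi_doc = {
    "openapi": "3.0.0",
    "paths": {"/pets": {"get": {"summary": "List pets"}}},
    # Illustrative SLA extension: plans with per-operation quotas.
    "x-sla": {
        "plans": {
            "free": {"/pets": {"get": {"requests_per_hour": 100}}},
            "pro":  {"/pets": {"get": {"requests_per_hour": 10000}}},
        }
    },
}

def allowed(plan, path, method, requests_this_hour):
    """Return True if one more request stays within the plan's quota."""
    limit = openapi_doc["x-sla"]["plans"][plan][path][method]["requests_per_hour"]
    return requests_this_hour < limit

print(allowed("free", "/pets", "get", 99))   # True
print(allowed("free", "/pets", "get", 100))  # False: quota exhausted
```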

    Extensible Technology-Agnostic Runtime Verification

    With numerous specialised technologies available to industry, it has become increasingly common for computer systems to be composed of heterogeneous components built over, and using, different technologies and languages. While this enables developers to use the most appropriate technology for each context, it becomes more challenging to ensure the correctness of the overall system. In this paper we propose a framework that enables extensible, technology-agnostic runtime verification, and we present an extension of polyLarva, a runtime-verification tool able to monitor heterogeneous-component systems. The approach is then applied to a case study of a component-based artefact using different technologies, namely C and Java.
    Comment: In Proceedings FESCA 2013, arXiv:1302.478
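    A minimal sketch of the technology-agnostic monitoring idea follows. It does not use polyLarva's specification language; the Monitor class and the event format are assumptions made for illustration, showing how events reported by components written in different languages can be normalized and checked against a single ordering property.

```python
# Generic sketch, not polyLarva's actual specification language: it only
# illustrates technology-agnostic monitoring, where components written in
# different languages report events in a common format and a single
# monitor checks an ordering property ("no 'use' before 'init'").

class Monitor:
    def __init__(self):
        self.initialised = set()
        self.violations = []

    def handle(self, event):
        # event is a (source, action, resource) triple; the source may be
        # a C process, a Java service, or anything else that can emit it.
        source, action, resource = event
        if action == "init":
            self.initialised.add(resource)
        elif action == "use" and resource not in self.initialised:
            self.violations.append(f"{source} used {resource} before init")

monitor = Monitor()
for ev in [("java-service", "init", "db"),
           ("java-service", "use", "db"),
           ("c-driver", "use", "sensor")]:  # violation: no init for 'sensor'
    monitor.handle(ev)
print(monitor.violations)
# ['c-driver used sensor before init']
```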

    Experiments towards model-based testing using Plan 9: Labelled transition file systems, stacking file systems, on-the-fly coverage measuring

    We report on experiments that we did on Plan 9/Inferno to gain more experience with the file-system-as-tool-interface approach. We reimplemented functionality that we earlier worked on in Unix, trying to use Plan 9 file system interfaces. The application domain for those experiments was model-based testing.

    The idea we wanted to experiment with consists of building small, reusable pieces of functionality which are then composed to achieve the intended functionality. In particular we wanted to experiment with the idea of 'stacking' file servers (fs) on top of each other, where the upper fs acts as a 'filter' on the data and structure provided by the lower fs.

    For this experiment we designed a file system interface (ltsfs) that gives fine-grained access to a labelled transition system, and made two implementations of it. We developed a small fs that, when 'stacked' on top of the ltsfs, extends it with additional files, and an application that uses the resulting file system.

    The hope was that an interface like the one offered by ltsfs could be used as a general interface between (specification-language-specific) programs that give access to state spaces and (specification-language-independent) programs that use (walk) those state spaces, like simulators, model checkers, or test derivation programs.

    Initial results (obtained on a less-than-modern machine) suggest that, although the approach by itself is definitely feasible in principle, in practice the fine-grained access offered by ltsfs may involve many file (9p) transactions, which may seriously affect performance. In Unix we used a more conservative approach where the access was less fine-grained, which likely explains why we did not suffer from this problem there.

    In addition we report on experiments using acid to obtain coverage information that is updated on the fly while the program is running. This worked quite well. The main observation from those experiments is that the basic block notion of this approach, which has a more 'semantical' nature, differs from the more 'syntactical' nature of the basic block notion in Unix coverage measurement tools like tcov or gcov.
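    The 'stacking' idea can be mirrored in a short sketch. This is not the 9p/ltsfs interface itself; the Lts and HideTauFilter classes below are hypothetical, illustrating only how an upper layer can present the same interface as a lower one while filtering what it exposes, analogous to stacking one file server on another.

```python
# Illustrative sketch only: the paper exposes the LTS through a Plan 9 (9p)
# file-system interface. This Python sketch just mirrors the layering idea,
# with a base LTS object and a 'filter' stacked on top of it (here hiding
# transitions labelled "tau"), the way the upper file server filters the
# lower one. All class names are hypothetical.

class Lts:
    """A tiny labelled transition system: state -> list of (label, next)."""
    def __init__(self, transitions, initial):
        self.transitions = transitions
        self.initial = initial

    def outgoing(self, state):
        return self.transitions.get(state, [])

class HideTauFilter:
    """Stacked on top of an Lts; presents the same interface but drops
    internal 'tau' steps, like an upper file server filtering a lower one."""
    def __init__(self, lower):
        self.lower = lower
        self.initial = lower.initial

    def outgoing(self, state):
        return [(label, nxt) for label, nxt in self.lower.outgoing(state)
                if label != "tau"]

lts = Lts({0: [("coin", 1), ("tau", 2)], 1: [("coffee", 0)]}, initial=0)
view = HideTauFilter(lts)
print(lts.outgoing(0))   # [('coin', 1), ('tau', 2)]
print(view.outgoing(0))  # [('coin', 1)]
```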