
    A Logical Verification Methodology for Service-Oriented Computing

    We introduce a logical verification methodology for checking behavioural properties of service-oriented computing systems. Service properties are described by means of SocL, a branching-time temporal logic that we have specifically designed to effectively express distinctive aspects of services, such as acceptance of a request, provision of a response, and correlation among service requests and responses. Our approach allows service properties to be expressed in such a way that they are independent of service domains and specifications. We show an instantiation of our general methodology that uses the formal language COWS to conveniently specify services and the expressly developed software tool CMC to assist the user in the task of verifying SocL formulae over service specifications. We demonstrate the feasibility and effectiveness of our methodology by means of the specification and analysis of a case study in the automotive domain.
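    To give the flavour of such properties, the following is a minimal sketch in CTL-like notation, not verbatim SocL syntax; the operation name charge and the correlation variable %id are hypothetical. It states that the service is responsive: every accepted request is eventually followed by a correlated response,

$$\mathit{AG}\,\big[\mathit{request}(\mathit{charge},\,\%id)\big]\;\mathit{AF}\,\langle\mathit{response}(\mathit{charge},\,\%id)\rangle\,\mathit{true}$$

    where the box and diamond operators are indexed by actions, and %id correlates each response with the request that triggered it.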

    Using contexts to extract models from code

    Behaviour models facilitate the understanding and analysis of software systems by providing an abstract view of their behaviours and also by enabling the use of validation and verification techniques to detect errors. However, depending on the size and complexity of these systems, constructing models may not be a trivial task, even for experienced developers. Model extraction techniques can automatically obtain models from existing code, thus reducing the effort and expertise required of engineers and helping avoid errors often present in manually constructed models. Existing approaches for model extraction often fail to produce faithful models, either because they only consider static information, which may include infeasible behaviours, or because they are based only on dynamic information, thus relying on observed executions, which usually results in incomplete models. This paper describes a model extraction approach based on the concept of contexts, which are abstractions of concrete states of a program, combining static and dynamic information. Contexts combine some of the advantages of either type of information and, through that combination, can overcome some of their individual limitations. The approach is partially implemented by a tool called LTS Extractor, which translates information collected from execution traces produced by instrumented Java code into labelled transition systems (LTSs), which can be analysed in an existing verification tool. Results from case studies are presented and discussed, showing that, for a given level of abstraction and set of execution traces, the produced models are correct descriptions of the programs from which they were extracted. Thus, they can be used for a variety of analyses, such as program understanding, validation, verification, and evolution.
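    As a minimal sketch of the idea (the function and field names are hypothetical; this is not the LTS Extractor implementation), an LTS can be built from instrumented execution traces by abstracting every concrete state to its context and recording the observed transitions between contexts:

```python
# Hypothetical sketch of context-based LTS extraction: states are
# "contexts" (abstractions of concrete program states) and transitions
# are the actions observed between them in execution traces.
from collections import namedtuple

Transition = namedtuple("Transition", "source action target")

def extract_lts(traces, abstract):
    """traces: iterable of runs, each a list of steps
    (concrete_state_before, action, concrete_state_after);
    abstract: maps a concrete state to its context."""
    states, initial, transitions = set(), set(), set()
    for run in traces:
        if run:
            initial.add(abstract(run[0][0]))
        for before, action, after in run:
            src, dst = abstract(before), abstract(after)
            states.update((src, dst))
            transitions.add(Transition(src, action, dst))
    return states, initial, transitions

# Example abstraction: collapse all concrete buffer states into three
# contexts, so many concrete states map onto a small, analysable LTS.
def buffer_context(state, capacity=3):
    n = state["items"]
    return "empty" if n == 0 else "full" if n == capacity else "partial"
```

    Because several concrete states share one context, static information (which fields matter) and dynamic information (which transitions actually occur) are combined in a single model.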

    Selling Australia as "clean and green"

    "Green and clean" has been used as a key marketing tool to promote Australian products overseas. The rationale is that consumers are generally concerned about personal health and the environment and will choose, and pay price premiums, for products that are perceived to be clean (good for them) and green (good for the environment) over alternative products. But is Australia seen as clean and green? Is it really why people buy Australian? This paper attempts to investigate such questionsexport marketing, clean green image, Marketing,

    Extracting Formal Models from Normative Texts

    We are concerned with the analysis of normative texts - documents based on the deontic notions of obligation, permission, and prohibition. Our goal is to make queries about these notions and to verify that a text satisfies certain properties concerning causality of actions and timing constraints. This requires taking the original text and building a representation (model) of it in a formal language, in our case the C-O Diagram formalism. We present an experimental, semi-automatic aid that helps to bridge the gap between a normative text in natural language and its C-O Diagram representation. Our approach consists of using dependency structures obtained from the state-of-the-art Stanford Parser, and applying our own rules and heuristics in order to extract the relevant components. The result is a tabular data structure where each sentence is split into suitable fields, which can then be converted into a C-O Diagram. The process is not fully automatic, however, and some post-editing by the user is generally required. We apply our tool and perform experiments on documents from different domains, and report an initial evaluation of the accuracy and feasibility of our approach.
    Comment: Extended version of conference paper at the 21st International Conference on Applications of Natural Language to Information Systems (NLDB 2016). arXiv admin note: substantial text overlap with arXiv:1607.0148
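    As an illustration of the rule-based step only (the relation labels follow Stanford Dependencies, but the heuristics and field names below are hypothetical, not the paper's actual rules), a sentence's dependency triples can be mapped to the deontic fields of a table row:

```python
# Hypothetical sketch: classify one parsed sentence as an obligation,
# permission, or prohibition from its modal verb and negation, and
# extract agent/action fields for a tabular representation.
MODALS = {"shall": "obligation", "must": "obligation",
          "may": "permission", "can": "permission"}

def classify_clause(deps):
    """deps: dependency triples (governor, relation, dependent)."""
    modality, negated, agent, action = None, False, None, None
    for gov, rel, dep in deps:
        if rel == "aux" and dep in MODALS:
            modality, action = MODALS[dep], gov
        elif rel == "neg":
            negated = True
        elif rel == "nsubj":
            agent = dep
    if modality == "permission" and negated:
        modality = "prohibition"   # "may not act" forbids the action
    return {"agent": agent, "action": action,
            "modality": modality, "negated": negated}

# "The tenant shall pay the rent" (simplified dependency triples):
row = classify_clause([("pay", "nsubj", "tenant"),
                       ("pay", "aux", "shall"),
                       ("pay", "dobj", "rent")])
# -> {'agent': 'tenant', 'action': 'pay',
#     'modality': 'obligation', 'negated': False}
```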

    Test Sequences for Web Service Composition using CPN model

    Web service composition is the most mature and effective way to realize the rapidly changing requirements of business in service-oriented solutions. Testing compositions of web services is complex, due to their distributed nature and asynchronous behaviour. Colored Petri Nets (CPNs) provide a framework for the design, specification, validation and verification of systems. In this paper, the CPN model used for composition design verification is reused for test design. We propose an on-the-fly algorithm that generates a test suite covering all possible paths without redundancy. The prioritization of test sequences, test suite size, and redundancy reduction are also addressed. The proposed technique was applied to an airline reservation system, and the generated test sequences were evaluated against three coverage criteria: Decision Coverage, Input Output Coverage and Transition Coverage.
    Keywords: CPN, MBT, web service composition testing, test case generation
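    A minimal sketch of such on-the-fly generation (hypothetical interfaces, not the paper's algorithm): explore the CPN's reachability graph depth-first and yield each complete transition path exactly once, so the resulting suite covers all paths without redundant members.

```python
# Hypothetical sketch: enumerate non-redundant transition paths from
# the initial marking to final markings of a reachability graph.
def all_path_tests(initial, successors, is_final, max_depth=50):
    """successors(state) -> iterable of (transition_label, next_state);
    is_final(state) -> True at terminal/accepting markings."""
    def dfs(state, path, on_path):
        if is_final(state):
            yield list(path)          # one complete test sequence
            return
        if len(path) >= max_depth:
            return                    # bound exploration of cycles
        for label, nxt in successors(state):
            if nxt in on_path:
                continue              # avoid looping on this path
            path.append(label)
            yield from dfs(nxt, path, on_path | {nxt})
            path.pop()
    yield from dfs(initial, [], {initial})
```

    Yielding paths lazily keeps generation on-the-fly: prioritization can then rank sequences (e.g. by length or transitions covered) without materialising the whole suite first.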

    Verification of a lazy cache coherence protocol against a weak memory model

    In this paper we verify a modern lazy cache coherence protocol, TSO-CC, against the memory consistency model it was designed for, TSO. We achieve this by first showing a weak simulation relation between TSO-CC (with a fixed number of processors) and a novel finite-state operational model which exhibits the laziness of TSO-CC and satisfies TSO. We then extend this by an existing parameterisation technique, allowing verification for an unlimited number of processors. The approach is executed entirely within a model checker; no external tool is needed, and very little in-depth knowledge of formal verification methods is required of the verifier.
    Comment: 10 pages
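    The paper establishes a weak simulation, which additionally abstracts internal steps; as a flavour of the underlying check, here is a sketch of the simpler strong variant over finite LTSs (hypothetical data structures, not the paper's mechanisation inside the model checker):

```python
# Hypothetical sketch: compute the greatest (strong) simulation between
# two finite LTSs by deleting pairs that violate the simulation game.
def simulates(impl, spec, impl_init, spec_init):
    """impl, spec: dicts mapping a state to a set of
    (label, successor) pairs. True iff spec simulates impl."""
    rel = {(p, q) for p in impl for q in spec}
    changed = True
    while changed:
        changed = False
        for p, q in list(rel):
            # every move of p must be matched by an equally labelled
            # move of q that leads to a pair still in the relation
            ok = all(any(a == b and (p2, q2) in rel
                         for b, q2 in spec[q])
                     for a, p2 in impl[p])
            if not ok:
                rel.discard((p, q))
                changed = True
    return (impl_init, spec_init) in rel
```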

    The post office experience: designing a large asynchronous chip

    The Post Office is an asynchronous, 300,000-transistor, full-custom CMOS chip designed as the communication component for the Mayfly scalable parallel processor. Performance requirements led to the development of a design style which permits the design of sequential circuits operating under a restricted form of multiple-input-change signalling called burst-mode. The Post Office's complexity forced us to develop a set of design tools capable of correctly synthesizing transistor circuits from state machine and equation specifications, and capable of verifying the correctness of the resultant circuitry using implementation-specific timing assumptions. The paper provides a case study of this design experience.
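    For illustration, a burst-mode specification can be viewed as a state machine whose transitions fire on complete sets of input changes (input bursts) and emit output bursts; a standard well-formedness condition is that no input burst leaving a state may be a subset of another, so the circuit can always tell which burst has completed. A small hypothetical sketch of that check (the signal names are invented):

```python
# Hypothetical sketch: validate the no-subset condition on the input
# bursts of a burst-mode state machine specification.
from itertools import combinations

def well_formed(spec):
    """spec: state -> list of (input_burst, output_burst, next_state),
    bursts as frozensets of signal changes such as 'req+'."""
    for transitions in spec.values():
        bursts = [t[0] for t in transitions]
        for a, b in combinations(bursts, 2):
            if a <= b or b <= a:
                return False   # ambiguous: one burst contains another
    return True

spec = {"idle": [(frozenset({"req+"}), frozenset({"ack+"}), "busy")],
        "busy": [(frozenset({"req-"}), frozenset({"ack-"}), "idle")]}
assert well_formed(spec)
```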

    Taming Numbers and Durations in the Model Checking Integrated Planning System

    The Model Checking Integrated Planning System (MIPS) is a temporal least-commitment heuristic search planner based on a flexible object-oriented workbench architecture. Its design clearly separates explicit and symbolic directed exploration algorithms from the set of on-line and off-line computed estimates and associated data structures. MIPS has shown distinguished performance in the last two international planning competitions. In the last event, the description language was extended from pure propositional planning to include numerical state variables, action durations, and plan quality objective functions. Plans were no longer sequences of actions but time-stamped schedules. As a participant in the fully automated track of the competition, MIPS has proven to be a general system; in each track and every benchmark domain it efficiently computed plans of remarkable quality. This article introduces and analyzes the most important algorithmic novelties that were necessary to tackle the new layers of expressiveness in the benchmark problems and to achieve a high level of performance. The extensions include critical path analysis of sequentially generated plans to produce corresponding optimal parallel plans. The linear-time algorithm to compute the parallel plan bypasses known NP-hardness results for partial ordering by scheduling plans with respect to the set of actions and the imposed precedence relations. The efficiency of this algorithm also allows us to improve the exploration guidance: for each encountered planning state, the corresponding approximate sequential plan is scheduled. One major strength of MIPS is its static analysis phase, which grounds and simplifies parameterized predicates, functions and operators, infers knowledge to minimize the state description length, and detects domain object symmetries. The latter aspect is analyzed in detail. MIPS has been developed to serve as a complete and optimal state space planner, with admissible estimates, exploration engines and branching cuts. In the competition version, however, certain performance compromises had to be made, including floating point arithmetic, weighted heuristic search exploration according to an inadmissible estimate, and parameterized optimization.
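    As a sketch of that scheduling step (assuming known action durations and a given precedence relation; this is not MIPS's actual code), each action can be assigned the earliest start time compatible with its predecessors in a single left-to-right pass over the sequential plan, i.e. a linear-time longest-path computation over the precedence DAG:

```python
# Hypothetical sketch: turn a sequential plan into a parallel schedule
# by critical-path analysis over the imposed precedence relations.
def schedule(plan, duration, preds):
    """plan: actions in sequential order; duration[a]: action length;
    preds[a]: actions that must finish before a starts (all of which
    occur earlier in 'plan', so one pass suffices)."""
    start = {}
    for a in plan:
        start[a] = max((start[p] + duration[p] for p in preds[a]),
                       default=0.0)
    return start   # makespan = max over a of start[a] + duration[a]

plan = ["load", "drive", "unload"]
duration = {"load": 2.0, "drive": 5.0, "unload": 2.0}
preds = {"load": [], "drive": ["load"], "unload": ["drive"]}
print(schedule(plan, duration, preds))
# {'load': 0.0, 'drive': 2.0, 'unload': 7.0}
```

    Actions with no precedence between them receive overlapping start intervals and thus run in parallel in the resulting schedule.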

    Selling Australia as 'Clean and Green'

    'Green and clean' has been used as a key marketing tool to promote Australian products overseas. The rationale is that consumers are generally concerned about personal health and the environment and will choose, and pay price premiums for, products that are, or are perceived to be, clean (good for them) and green (good for the environment) over alternative products. But is Australia seen as clean and green? Is it really why people buy Australian products? And how effective is it as a marketing tool? This paper attempts to answer some of these questions. The study found that Australia may have a clean green image at present in some of its overseas markets, but to maintain such an image over time, concrete proof of environmental and quality credentials needs to be provided to satisfy increasingly educated and better-informed consumers. Wide adoption of integrated EMS and QA systems by Australian producers and food companies appears to be a means to establish such credentials and substantiate any 'clean and green' claim. Therefore, government policies should focus more on developing a range of tools to encourage good environmental and quality management practices, rather than on promoting the 'clean and green' image. Such campaigns may be counter-productive in the long run, as they lead to complacency rather than raising environmental and quality awareness.
    Keywords: export marketing, clean and green, EMS, QA, Environmental Economics and Policy, International Relations/Trade