    Constraint-based reachability

    Iterative imperative programs can be viewed as infinite-state systems computing over possibly unbounded domains. Studying reachability in these systems is challenging, as standard backward or forward exploration strategies must deal with an infinite number of states. An approach, which we call Constraint-based reachability, is proposed to address reachability problems by exploring program states using a constraint model of the whole program. The key point of the approach is to interpret imperative constructs such as conditionals, loops, and array and memory manipulations through the fundamental notion of a constraint over a computational domain. By combining constraint filtering and abstraction techniques, Constraint-based reachability is able to solve reachability problems that are usually outside the scope of backward or forward exploration strategies. This paper proposes an interpretation of the classical filtering consistencies used in Constraint Programming as abstract domain computations, and shows how this approach can be used to produce a constraint solver that efficiently generates solutions for reachability problems that are unsolvable by other approaches.
    Comment: In Proceedings Infinity 2012, arXiv:1302.310
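    To make the central idea concrete, here is a minimal Haskell sketch of how a guard and an assignment can be interpreted as constraints that filter an interval abstract domain; the types and names (Interval, meet, filterGT, filterEqPlus) are invented for this illustration and are not the solver described in the paper.

        -- Integer intervals [lo, hi]; Nothing models an empty (inconsistent) domain.
        data Interval = Interval { lo :: Integer, hi :: Integer } deriving Show

        -- Intersection of two intervals.
        meet :: Interval -> Interval -> Maybe Interval
        meet (Interval a b) (Interval c d)
          | l <= h    = Just (Interval l h)
          | otherwise = Nothing                  -- empty: the constraints are inconsistent
          where l = max a c
                h = min b d

        -- Filtering for the constraint x > y: prune bounds that cannot satisfy it.
        filterGT :: Interval -> Interval -> Maybe (Interval, Interval)
        filterGT x y = do
          x' <- meet x (Interval (lo y + 1) (hi x))
          y' <- meet y (Interval (lo y) (hi x' - 1))
          return (x', y')

        -- Filtering for x = y + c, e.g. an assignment interpreted as a constraint.
        filterEqPlus :: Integer -> Interval -> Interval -> Maybe (Interval, Interval)
        filterEqPlus c x y = do
          x' <- meet x (Interval (lo y + c) (hi y + c))
          y' <- meet y (Interval (lo x' - c) (hi x' - c))
          return (x', y')

        -- The program establishes x = y + 1 and the target state requires x > 5;
        -- propagating both constraints narrows the domains of x and y.
        main :: IO ()
        main = print $ do
          (x1, y1) <- filterEqPlus 1 (Interval 0 10) (Interval 0 10)
          (x2, _ ) <- filterGT x1 (Interval 5 5)     -- x > 5
          (x3, y3) <- filterEqPlus 1 x2 y1           -- re-propagate x = y + 1
          return (x3, y3)

    Running the sketch narrows x to [6,10] and y to [5,9]: constraints drawn from the whole program prune the domains without enumerating individual states.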

    Correct Reasoning: Essays on Logic-Based AI in Honour of Vladimir Lifschitz

    Co-edited by Yuliya Lierler, UNO faculty member. The essay "Parsing Combinatory Categorial Grammar via Planning in Answer Set Programming" is co-authored by Yuliya Lierler, UNO faculty member. This Festschrift, published in honor of Vladimir Lifschitz on the occasion of his 65th birthday, presents 39 articles by colleagues from all over the world with whom Vladimir Lifschitz has collaborated in various respects. The 39 contributions reflect the breadth and the depth of the work of Vladimir Lifschitz in logic programming, circumscription, default logic, action theory, causal reasoning and answer set programming.

    Automatic Test Generation for Space

    The European Space Agency (ESA) uses an engine to perform tests on its Ground Segment infrastructure, in particular the Operational Simulator. This engine relies on many different tools to build a regression-testing infrastructure, and these tests perform black-box testing of the C++ simulator implementation. VST (VisionSpace Technologies) is one of the companies that provides these services to ESA, and it needs a tool that automatically infers tests from the existing C++ code, instead of manually writing test scripts. With this motivation in mind, this paper explores automatic testing approaches and tools in order to propose a system that satisfies VST's needs.

    Parallel programming using functional languages

    It has been argued for many years that functional programs are well suited to parallel evaluation. This thesis investigates this claim from a programming perspective; that is, it investigates parallel programming using functional languages. The approach taken has been to determine the minimum programming which is necessary in order to write efficient parallel programs. This has been attempted without the aid of clever compile-time analyses. It is argued that parallel evaluation should be explicitly expressed, by the programmer, in programs. To achieve this a lazy functional language is extended with parallel and sequential combinators. The mathematical nature of functional languages means that programs can be formally derived by program transformation. To date, most work on program derivation has concerned sequential programs. In this thesis Squigol has been used to derive three parallel algorithms. Squigol is a functional calculus for program derivation, which is becoming increasingly popular. It is shown that some aspects of Squigol are suitable for parallel program derivation, while other aspects are specifically orientated towards sequential algorithm derivation. In order to write efficient parallel programs, parallelism must be controlled, so as to limit storage usage, the number of tasks and the minimum size of tasks. In particular, over-eager evaluation or generating excessive numbers of tasks can consume too much storage, and tasks can be too small to be worth evaluating in parallel. Several programming techniques for parallelism control were tried. These were compared with a run-time system heuristic for parallelism control. It was discovered that the best control was effected by a combination of run-time system and programmer control of parallelism. One of the problems with parallel programming using functional languages is that non-deterministic algorithms cannot be expressed. A bag (multiset) data type is proposed to allow a limited form of non-determinism to be expressed. Bags can be given a non-deterministic parallel implementation. However, provided the operations used to combine bag elements are associative and commutative, the result of bag operations will be deterministic. The onus is on the programmer to prove this, but usually this is not difficult. Also, bags' insensitivity to ordering means that more transformations are directly applicable than if, say, lists were used instead. It is necessary to be able to reason about and measure the performance of parallel programs. For example, sometimes algorithms which seem intuitively to be good parallel ones are not. For some higher-order functions it is possible to devise parameterised formulae describing their performance. This is done for divide-and-conquer functions, which enables constraints to be formulated which guarantee that they have a good performance. Pipelined parallelism is difficult to analyse, so a formal semantics for calculating the performance of pipelined programs is devised. This is used to analyse the performance of a pipelined Quicksort. By treating the performance semantics as a set of transformation rules, the simulation of parallel programs may be achieved by transforming programs. Some parallel programs perform poorly due to programming errors. A pragmatic method of debugging such programming errors is illustrated by some examples.
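    As a concrete illustration of explicit, programmer-controlled parallelism with combinators, the sketch below uses GHC's par and pseq from the parallel package together with a granularity threshold; it is a generic example in this style, not code from the thesis, and the threshold value is arbitrary.

        import Control.Parallel (par, pseq)

        -- Divide-and-conquer summation with a granularity threshold: below the
        -- threshold the work is too small to be worth creating a parallel task.
        psum :: Int -> [Int] -> Int
        psum threshold xs
          | length xs <= threshold = sum xs                 -- sequential base case
          | otherwise =
              let (ls, rs) = splitAt (length xs `div` 2) xs
                  l = psum threshold ls
                  r = psum threshold rs
              in l `par` (r `pseq` (l + r))                 -- evaluate the halves in parallel

        main :: IO ()
        main = print (psum 1000 [1 .. 100000])

    Compiled with -threaded and run with +RTS -N, the two halves of each split may be evaluated on separate cores; raising the threshold trades parallelism for fewer, larger tasks, which is the kind of control the thesis argues must partly rest with the programmer.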

    An Embedded Domain Specific Language to Model, Transform and Quality Assure Business Processes in Business-Driven Development

    Business process models are produced by business analysts to graphically communicate business requirements to IT specialists. As business processes are updated to meet new demands in a competitive market, the underlying IT solution is adapted to reflect precisely the current goals of the organisation. The models should then act as an abstract representation of the solution. It is essential to adopt Business-Driven Development (BDD), whereby models are refined into the IT solution and implemented in a Service-Oriented Architecture. This means that models must be free from data and control-flow errors, such as deadlocks. If models are not quality assured at the modelling phase, errors would be discovered later and the entire BDD lifecycle would have to be repeated. Combining model transformations with quality assurance helps modellers preserve the correctness of models and rapidly carry out modifications. Although various modelling languages have been developed to assist modellers in the production of high-quality business process models, none of them adopts a functional approach based on higher-order logic. As BDD is being adopted by most organisations, the need for such a language is becoming more evident. Since specialised functionality is required, a general-purpose language is not really necessary. Instead, a domain-specific language which provides the right abstraction and captures precisely the semantics of the business process modelling domain should be developed. The definitions of the models would then be easy to comprehend and reason about by anyone who is not necessarily an IT specialist. However, since languages are made up of domain-independent and domain-dependent linguistic components, it is more cost-effective and feasible to embed the new language in a general-purpose language. In this project we present a domain-specific language, embedded in the functional language Haskell, to model, transform and quality assure business processes in Business-Driven Development. By adopting a functional approach, we developed a language that: 1) allows various models to be produced rapidly in a concise and abstract manner, 2) lets users focus on the required behaviour rather than its implementation, 3) ensures that all the details required to generate the executable code are specified, 4) allows the abstract representation to be interpreted, analysed and transformed in various ways, and 5) quality assures models by carrying out three types of checks: by Haskell’s type checker, at construction time through our embedded type system, and by specialised functions that analyse the components in the model. By embedding our language in Haskell, the models, quality assurance checks and transformations are essentially functions which can easily be composed and defined. Connection patterns, defined in the language, play an important role in ensuring that definitions are concise, readable and easy to comprehend. Unlike previous modelling tools, users are able to define their own parameterised models and transformations. By generating a directed graph for the models, various types of analysis can be carried out with greater ease. Moreover, quality assurance can be combined with model transformations by declaratively defining pre- and post-conditions for each transformation; these conditions, as well as the transformations themselves, can easily be composed from other previously defined checks or transformations. With this language, we aim to capture the domain semantics of IBM’s WebSphere Business Modeler Advanced v6.0.2.
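    A minimal sketch of this style of embedding is shown below: process models as a Haskell data type, connection patterns as ordinary functions, and a transformation guarded by declaratively stated pre- and post-conditions. The names (Process, wellFormed, flattenPar, checkedTransform) and the checks are invented for the illustration and do not reflect the project's actual API or the WebSphere Business Modeler semantics.

        data Process
          = Task String
          | Seq Process Process            -- sequential connection pattern
          | ParSplitJoin [Process]         -- parallel split followed by a join
          deriving Show

        -- A simple structural check: every parallel split has at least one branch.
        wellFormed :: Process -> Bool
        wellFormed (Task _)          = True
        wellFormed (Seq p q)         = wellFormed p && wellFormed q
        wellFormed (ParSplitJoin ps) = not (null ps) && all wellFormed ps

        -- A transformation: flatten nested parallel blocks.
        flattenPar :: Process -> Process
        flattenPar (Seq p q)         = Seq (flattenPar p) (flattenPar q)
        flattenPar (ParSplitJoin ps) = ParSplitJoin (concatMap branches ps)
          where branches (ParSplitJoin qs) = map flattenPar qs
                branches q                 = [flattenPar q]
        flattenPar p                 = p

        -- Quality assurance combined with transformation: the transformation is
        -- applied only if the declared pre- and post-conditions hold.
        checkedTransform :: (Process -> Bool) -> (Process -> Bool)
                         -> (Process -> Process) -> Process -> Maybe Process
        checkedTransform pre post t p
          | pre p && post p' = Just p'
          | otherwise        = Nothing
          where p' = t p

        main :: IO ()
        main = print $ checkedTransform wellFormed wellFormed flattenPar $
          Seq (Task "Receive order")
              (ParSplitJoin [Task "Check stock", ParSplitJoin [Task "Bill", Task "Ship"]])

    Because checks and transformations are plain functions, richer conditions can be built by composing previously defined checks, mirroring the compositionality described above.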

    Symbolic execution: a semantic approach

    This paper discusses symbolic execution from a semantic point of view, covering both programs and specifications. It defines the denotational semantics of symbolic execution of specifications and programs, and thus introduces a notion of correctness of symbolic execution which applies not just to an individual language but to a wide class of languages, namely those whose semantics can be described in terms of states and state transformations. Also described are the operational semantics of a language as used for symbolic execution. This work also provided the basis of the prototype symbolic execution system SYMBEX, which was developed at the University of Manchester as part of the mural project. However, this paper covers only the theoretical foundations used by SYMBEX, not the system itself.
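    The view of symbolic execution as a state transformation can be sketched for a toy imperative language: a symbolic state maps variables to symbolic expressions, and each branch extends a path condition. The language and names below are invented for the illustration and are not the semantics of SYMBEX or of the languages treated in the paper.

        import qualified Data.Map as M

        -- Expressions and statements of a toy imperative language.
        data Expr = Var String | Lit Integer | Add Expr Expr | Le Expr Expr | Not Expr
          deriving Show

        data Stmt = Assign String Expr | If Expr Stmt Stmt | Block [Stmt]
          deriving Show

        -- A symbolic state maps program variables to symbolic expressions.
        type SymState = M.Map String Expr

        -- Substitute the current symbolic values of variables into an expression.
        symEval :: SymState -> Expr -> Expr
        symEval s (Var x)   = M.findWithDefault (Var x) x s
        symEval _ (Lit n)   = Lit n
        symEval s (Add a b) = Add (symEval s a) (symEval s b)
        symEval s (Le a b)  = Le (symEval s a) (symEval s b)
        symEval s (Not a)   = Not (symEval s a)

        -- Symbolic execution as a state transformation: each statement maps a
        -- (state, path condition) pair to the list of pairs for all feasible paths.
        symExec :: Stmt -> (SymState, [Expr]) -> [(SymState, [Expr])]
        symExec (Assign x e) (s, pc) = [(M.insert x (symEval s e) s, pc)]
        symExec (If c t f)   (s, pc) =
          symExec t (s, symEval s c : pc) ++ symExec f (s, Not (symEval s c) : pc)
        symExec (Block ss)   st      =
          foldl (\states stmt -> concatMap (symExec stmt) states) [st] ss

        main :: IO ()
        main = mapM_ print (symExec prog (M.empty, []))
          where prog = Block [ Assign "y" (Add (Var "x") (Lit 1))
                             , If (Le (Var "y") (Lit 0))
                                  (Assign "z" (Lit 0))
                                  (Assign "z" (Var "y")) ]

    Running the sketch on the small program yields one (symbolic state, path condition) pair per feasible path, the kind of object the semantic treatment described above assigns a meaning to.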

    Efficient implementations of expressive modelling languages

    This thesis is concerned with modelling languages aimed at assisting with modelling and simulation of systems described in terms of differential equations. These languages can be split into two classes: causal languages, where models are expressed using directed equations; and non-causal languages, where models are expressed using undirected equations. This thesis focuses on two related paradigms: FRP and FHM. FRP is an approach to programming causal time-aware applications that has successfully been used in causal modelling applications, while FHM is an approach to programming non-causal modelling applications. However, both are built on similar principles, namely, the treatment of models as first-class entities, allowing for models to be parametrised by other models or computed at runtime; and support for structurally dynamic models, whose behaviour can change during the simulation. This makes FRP and FHM particularly flexible and expressive approaches to modelling, especially compared to other mainstream languages. Because of their highly expressive and flexible nature, providing efficient implementations of these languages is a challenge. This thesis explores novel implementation techniques aimed at improving the performance of existing implementations of FRP and FHM, and of other expressive modelling languages built on similar ideas. In the setting of FRP, this thesis proposes a novel embedded FRP library that uses the implementation approach of synchronous dataflow languages. This allows for a significant performance improvement through better handling of the reactive network's topology, the management of which accounts for a large portion of the runtime in current implementations, especially for applications that make heavy use of continuously varying values, such as modelling applications. In the setting of FHM, this thesis presents the modular compilation of a language based on FHM. Due to inherent difficulties with the simulation of systems of undirected equations, previous implementations of FHM and similarly expressive languages were either interpreted or generated code on the fly using just-in-time compilation, two techniques that carry runtime overhead compared with ahead-of-time compilation. This thesis presents a new method for generating code for equation systems which allows for the separate compilation of FHM models. Compared with current approaches to FRP and FHM implementation, there is greater commonality between the implementation approaches described here, suggesting a possible way forward towards a future non-causal modelling language supporting FRP-like features, resulting in an even more expressive modelling language.
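    The "signal function as a step function" view that underlies synchronous-dataflow-style implementations can be sketched in a few lines of Haskell; the representation below follows common arrowized-FRP conventions (a transition function returning an output sample and a continuation) and is not the library developed in the thesis.

        -- A signal function consumes a time delta and an input sample and produces
        -- an output sample together with its own continuation.
        newtype SF a b = SF { step :: Double -> a -> (b, SF a b) }

        -- Sequential composition of signal functions.
        (>>>>) :: SF a b -> SF b c -> SF a c
        SF f >>>> SF g = SF $ \dt a ->
          let (b, f') = f dt a
              (c, g') = g dt b
          in (c, f' >>>> g')

        -- A constant signal, and Euler integration of a continuously varying value.
        constSF :: b -> SF a b
        constSF x = SF $ \_ _ -> (x, constSF x)

        integral :: Double -> SF Double Double
        integral acc = SF $ \dt x -> let acc' = acc + x * dt
                                     in (acc', integral acc')

        -- Drive a signal function with a list of (time delta, input) samples.
        runSF :: SF a b -> [(Double, a)] -> [b]
        runSF _      []               = []
        runSF (SF f) ((dt, a) : rest) = let (b, sf') = f dt a
                                        in b : runSF sf' rest

        -- A falling body: constant acceleration integrated into velocity, then position.
        main :: IO ()
        main = mapM_ print $
          runSF (constSF (-9.81) >>>> integral 0 >>>> integral 100)
                (replicate 10 (0.1, ()))

    Each sample step threads a time delta through the composed network; this per-step loop is what a modelling application repeats for the whole simulation, and it is where implementation overhead accumulates.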

    Validating Architectural Feature Descriptions using LOTOS

    The phases of the ANISE project (Architectural Notions In Service Engineering) are briefly explained with reference to the work reported here. An outline strategy is given for translating ANISE descriptions to LOTOS (Language of Temporal Ordering Specification), thus providing a formal basis. It is shown how modular ANISE descriptions of features can be defined and then merged. Potential feature interactions can be identified statically through structural overlaps. A scenario language is introduced to express validation tests for features in a modular fashion, and a number of examples are given. Scenarios are automatically translated to LOTOS and analysed through LOTOS simulation. This allows features to be validated in isolation, and dynamically in combination with other features. The design of the translation and validation tools is discussed, showing typical results when investigating feature descriptions. The paper concludes with a guide to extending the approach for new features
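    The static overlap check mentioned above can be illustrated with a small Haskell sketch: if each feature declares which parts of the service description it touches, a potential interaction is flagged whenever two features share an element. The representation and example feature names are invented for the illustration and are not the ANISE or LOTOS tooling.

        import Data.List (intersect, tails)

        -- A feature together with the parts of the service description it touches.
        data Feature = Feature { featureName :: String, touches :: [String] }

        -- Pairs of features that structurally overlap, i.e. touch a common element.
        potentialInteractions :: [Feature] -> [(String, String, [String])]
        potentialInteractions fs =
          [ (featureName f, featureName g, shared)
          | (f : rest) <- tails fs
          , g          <- rest
          , let shared = touches f `intersect` touches g
          , not (null shared) ]

        main :: IO ()
        main = mapM_ print $ potentialInteractions
          [ Feature "call-forwarding" ["dial", "route"]
          , Feature "call-screening"  ["route", "answer"]
          , Feature "voicemail"       ["answer"] ]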

    JTorX: Exploring Model-Based Testing

    The overall goal of the work described in this thesis is: ``To design a flexible tool for state-of-the-art model-based derivation and automatic application of black-box tests for reactive systems, usable both for education and outside an academic context.'' From this goal, we derive functional and non-functional design requirements. The core of the thesis is a discussion of the design, in which we show how the functional requirements are fulfilled. In addition, we provide evidence to validate the non-functional requirements, in the form of case studies and responses to a tool user questionnaire. We describe the overall architecture of our tool, and discuss three usage scenarios which are necessary to fulfill the functional requirements: random on-line testing, guided on-line testing, and off-line test derivation and execution. With on-line testing, test derivation and test execution take place in an integrated manner: the next test step is derived only when it is needed for execution. With random testing, test derivation performs a random walk through the model. With guided testing, test derivation uses additional (guidance) information to guide the derivation through specific paths in the model. With off-line testing, test derivation and test execution take place as separate activities. In our architecture we identify two major components: a test derivation engine, which synthesizes test primitives from a given model and from optional test guidance information, and a test execution engine, which contains the functionality to connect the test tool to the system under test. We refer to this latter functionality as the ``adapter''. In the description of the test derivation engine, we look at the same three usage scenarios, and we discuss support for visualization and for dealing with divergence in the model. In the description of the test execution engine, we discuss three example adapter instances, and then generalise these to a general adapter design. We conclude with a description of extensions to deal with symbolic treatment of data and time.
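    The on-line testing loop can be illustrated with a small Haskell sketch in which the model is a toy labelled transition system and a pure function stands in for the adapter and the system under test. The simplifications (string-labelled stimuli, quiescence as an empty observation, taking the first enabled input instead of a random or guided choice) are for illustration only and are not JTorX's actual interfaces.

        -- Inputs (stimuli) and outputs (observations) of the model.
        data Label = In String | Out String deriving (Eq, Show)

        -- A model: for each state, the transitions it offers.
        type Model s = s -> [(Label, s)]

        data Verdict = Pass | Fail String deriving Show

        -- One bounded on-line test run: the next stimulus is derived from the model
        -- only when it is needed, applied to the system under test through 'sut'
        -- (the adapter's role), and the observed response is checked against the model.
        onlineTest :: Model s -> s -> (String -> String) -> Int -> Verdict
        onlineTest _     _  _   0 = Pass
        onlineTest model st sut n =
          case [ (i, s') | (In i, s') <- model st ] of
            []            -> Pass                          -- no stimulus applicable
            ((i, s') : _) ->                               -- stand-in for a random/guided choice
              let obs      = sut i
                  expected = [ (o, s'') | (Out o, s'') <- model s' ]
                  next     = if null expected && obs == ""
                               then Just s'                -- quiescence expected and observed
                               else lookup obs expected
              in case next of
                   Just s'' -> onlineTest model s'' sut (n - 1)
                   Nothing  -> Fail ("unexpected output " ++ show obs ++ " after input " ++ i)

        -- Toy coffee-machine model and a conforming implementation.
        machine :: Model Int
        machine 0 = [(In "coin", 1)]
        machine 1 = [(In "button", 2)]
        machine 2 = [(Out "coffee", 0)]
        machine _ = []

        impl :: String -> String
        impl "button" = "coffee"
        impl _        = ""               -- no output expected after "coin"

        main :: IO ()
        main = print (onlineTest machine 0 impl 20)

    A real adapter would perform I/O against the implementation, and the choice of the next stimulus would be random or driven by guidance information, but the derive-one-step-then-execute shape of the loop is the same.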