869 research outputs found

    Monitoring-Oriented Programming: A Tool-Supported Methodology for Higher Quality Object-Oriented Software

    This paper presents a tool-supported methodological paradigm for object-oriented software development, called monitoring-oriented programming and abbreviated MOP, in which runtime monitoring is a basic software design principle. The general idea underlying MOP is that software developers insert specifications in their code via annotations. Actual monitoring code is automatically synthesized from these annotations before compilation and integrated at appropriate places in the program, according to user-defined configuration attributes. This way, the specification is checked at runtime against the implementation. Moreover, violations and/or validations of specifications can trigger user-defined code at any point in the program, in particular recovery code, code that outputs or sends messages, or code that raises exceptions. The MOP paradigm does not promote or enforce any specific formalism for specifying requirements: it allows users to plug in their favorite or domain-specific specification formalisms via logic plug-in modules. There are two major technical challenges that MOP-supporting tools unavoidably face: monitor synthesis and monitor integration. The former is heavily dependent on the specification formalism and comes as part of the corresponding logic plug-in, while the latter is uniform across specification formalisms and depends only on the target programming language. An experimental prototype tool, called Java-MOP, is also discussed; it currently supports most but not all of the desired MOP features. MOP aims at reducing the gap between formal specification and implementation by integrating the two and allowing them together to form a system.
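
    As a concrete illustration of the idea, the minimal Java sketch below hand-writes the kind of monitor that a MOP tool would synthesize automatically from an annotation. It checks a simple property ("a resource must be opened before it is read or closed") and routes violations to user-defined handler code. The class and event names are hypothetical, and this is not JavaMOP's actual annotation syntax.

```java
// Illustrative sketch only: a hand-written runtime monitor for the property
// "a resource must be opened before it is read or closed". A MOP tool would
// synthesize comparable checking code from an annotation; the names used here
// (ResourceMonitor, onOpen, onRead, onClose) are hypothetical.
public class ResourceMonitor {

    private enum State { CLOSED, OPEN }

    private State state = State.CLOSED;

    // Events reported by the instrumented program.
    public void onOpen() {
        state = State.OPEN;
    }

    public void onRead() {
        if (state != State.OPEN) {
            violationHandler("read before open");
        }
    }

    public void onClose() {
        if (state != State.OPEN) {
            violationHandler("close before open");
        }
        state = State.CLOSED;
    }

    // User-defined handler: here it only logs, but it could equally run
    // recovery code, send a message, or raise an exception.
    private void violationHandler(String cause) {
        System.err.println("Specification violated: " + cause);
    }

    public static void main(String[] args) {
        ResourceMonitor monitor = new ResourceMonitor();
        monitor.onOpen();
        monitor.onRead();   // fine: resource is open
        monitor.onClose();
        monitor.onRead();   // triggers the violation handler
    }
}
```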

    Incremental Dead State Detection in Logarithmic Time

    Identifying live and dead states in an abstract transition system is a recurring problem in formal verification; for example, it arises in our recent work on efficiently deciding regex constraints in SMT. However, state-of-the-art graph algorithms for maintaining reachability information incrementally (that is, as states are visited and before the entire state space is explored) assume that new edges can be added from any state at any time, whereas in many applications, outgoing edges are added from each state as it is explored. To formalize the latter situation, we propose guided incremental digraphs (GIDs), incremental graphs which support labeling closed states (states which will not receive further outgoing edges). Our main result is that dead state detection in GIDs is solvable in O(log m) amortized time per edge for m edges, improving upon the O(√m) per-edge bound due to Bender, Fineman, Gilbert, and Tarjan (BFGT) for general incremental directed graphs. We introduce two algorithms for GIDs: one establishing the logarithmic time bound, and a second algorithm to explore a lazy heuristics-based approach. To enable an apples-to-apples experimental comparison, we implemented both algorithms, two simpler baselines, and the state-of-the-art BFGT baseline using a common directed graph interface in Rust. Our evaluation shows 110-530x speedups over BFGT for the largest input graphs over a range of graph classes, random graphs, and graphs arising from regex benchmarks. Comment: 22 pages + reference
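
    To make the problem statement concrete, here is a naive Java baseline under one plausible reading of the abstract: a state counts as dead once it is closed and every state it can reach is also closed, so no open state (which could still gain outgoing edges) remains reachable. The interface names (addEdge, close, isDead) are illustrative rather than the paper's API, and the per-query DFS below is linear time, nowhere near the paper's O(log m) amortized bound.

```java
import java.util.*;

// Illustrative baseline only (not the paper's algorithm): a guided incremental
// digraph where edges are added from states as they are explored and states are
// eventually "closed" (no further outgoing edges). Under the reading assumed
// here, a state is dead once it is closed and can only reach closed states.
public class GuidedDigraphBaseline {

    private final Map<Integer, List<Integer>> successors = new HashMap<>();
    private final Set<Integer> closed = new HashSet<>();

    public void addEdge(int from, int to) {
        if (closed.contains(from)) {
            throw new IllegalStateException("state " + from + " is closed");
        }
        successors.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
        successors.computeIfAbsent(to, k -> new ArrayList<>());
    }

    public void close(int state) {
        successors.computeIfAbsent(state, k -> new ArrayList<>());
        closed.add(state);
    }

    // Dead = closed, and every reachable state is closed as well.
    // Plain DFS per query: correct but far from the incremental O(log m) bound.
    public boolean isDead(int start) {
        Deque<Integer> stack = new ArrayDeque<>();
        Set<Integer> seen = new HashSet<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            int s = stack.pop();
            if (!seen.add(s)) continue;
            if (!closed.contains(s)) return false; // an open state is reachable
            for (int t : successors.getOrDefault(s, List.of())) stack.push(t);
        }
        return true;
    }

    public static void main(String[] args) {
        GuidedDigraphBaseline g = new GuidedDigraphBaseline();
        g.addEdge(0, 1);
        g.addEdge(1, 2);
        g.close(1);
        g.close(2);
        System.out.println(g.isDead(1)); // true: only closed states reachable
        System.out.println(g.isDead(0)); // false: state 0 is still open
    }
}
```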

    A Trichotomy for Regular Simple Path Queries on Graphs

    Regular path queries (RPQs) select nodes connected by some path in a graph. The edge labels of such a path have to form a word that matches a given regular expression. We investigate the evaluation of RPQs with an additional constraint that prevents multiple traversals of the same nodes. These regular simple path queries (RSPQs) find several applications in practice, yet they quickly become intractable, even for basic languages such as (aa)* or a*ba*. In this paper, we establish a comprehensive classification of regular languages with respect to the complexity of the corresponding regular simple path query problem. More precisely, we identify the fragment that is maximal in the following sense: regular simple path queries can be evaluated in polynomial time for every regular language L that belongs to this fragment, and evaluation is NP-complete for languages outside this fragment. We thus fully characterize the frontier between tractability and intractability for RSPQs, and we refine our results to show the following trichotomy: evaluation of RSPQs is either in AC0, NL-complete, or NP-complete in data complexity, depending on the regular language L. The fragment identified also admits a simple characterization in terms of regular expressions. Finally, we also discuss the complexity of the following decision problem: decide, given a language L, whether finding a regular simple path for L is tractable. We consider several alternative representations of L: DFAs, NFAs, or regular expressions, and prove that this problem is NL-complete for the first representation and PSPACE-complete for the other two. As a conclusion, we extend our results from edge-labeled graphs to vertex-labeled graphs and vertex-edge labeled graphs. Comment: 15 pages, conference submissio
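
    For intuition, the following brute-force Java sketch (not one of the paper's algorithms) evaluates an RSPQ by backtracking search: it enumerates simple paths from a source node to a target node while building the label word, and accepts when the word matches the query's regular expression. The visited set enforces the simple-path restriction, and the search is exponential in the worst case, consistent with NP-completeness outside the tractable fragment.

```java
import java.util.*;
import java.util.regex.Pattern;

// Brute-force illustration of RSPQ evaluation (not the paper's algorithm):
// search for a simple path from source to target whose edge labels, read as a
// word, match a regular expression.
public class SimplePathQuery {

    // Edge-labeled directed graph: node -> list of (label, successor) pairs.
    private final Map<Integer, List<Map.Entry<Character, Integer>>> edges = new HashMap<>();

    public void addEdge(int from, char label, int to) {
        edges.computeIfAbsent(from, k -> new ArrayList<>())
             .add(Map.entry(label, to));
    }

    public boolean hasSimplePath(int source, int target, Pattern language) {
        Set<Integer> visited = new HashSet<>();
        visited.add(source);
        return dfs(source, target, new StringBuilder(), visited, language);
    }

    private boolean dfs(int node, int target, StringBuilder word,
                        Set<Integer> visited, Pattern language) {
        if (node == target && language.matcher(word).matches()) {
            return true;
        }
        for (Map.Entry<Character, Integer> e : edges.getOrDefault(node, List.of())) {
            int next = e.getValue();
            if (visited.add(next)) {                  // only extend simple paths
                word.append(e.getKey());
                if (dfs(next, target, word, visited, language)) return true;
                word.deleteCharAt(word.length() - 1); // backtrack
                visited.remove(next);
            }
        }
        return false;
    }

    public static void main(String[] args) {
        SimplePathQuery g = new SimplePathQuery();
        g.addEdge(0, 'a', 1);
        g.addEdge(1, 'a', 2);
        g.addEdge(2, 'b', 3);
        // Is there a simple path from 0 to 3 labeled by a word in a*ba*?
        System.out.println(g.hasSimplePath(0, 3, Pattern.compile("a*ba*"))); // true
    }
}
```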

    Leveraging service-oriented business applications to a rigorous rule-centric dynamic behavioural architecture.

    Today’s market competitiveness and globalisation are putting pressure on organisations to join their efforts, to focus more on cooperation and interaction, and to add value to their businesses. That is, most information systems supporting these cross-organisational collaborations are characterised as service-oriented business applications, where all the emphasis is put on inter-service interactions rather than intra-service computations. Unfortunately, for the development of such inter-organisational service-oriented business systems, current service technology offers only ad-hoc, manual and static standard web-service languages such as WSDL, BPEL and WS-CDL [3, 7]. The main objective of the work reported in this thesis is thus to leverage the development of service-oriented business applications towards more reliability and dynamic adaptability, placing emphasis on the use of business rules to govern activities while composing services. The best available software-engineering techniques for adaptability, mainly aspect-oriented mechanisms, are also to be integrated with advanced formal techniques. More specifically, the proposed approach consists of the following incremental steps. First, it models the behaviour of any business activity governing a service-oriented business process as Event-Condition-Action (ECA) rules, as sketched below. Second, such informal rules are made more interaction-centric, using adapted architectural connectors. Third, still at the conceptual level, with the aim of adapting such ECA-driven connectors, the approach borrows aspect-oriented ideas and mechanisms and proposes to intercept events, select the properties required for interacting entities, explicitly and separately execute such ECA-driven behavioural interactions, and finally dynamically weave the results into the entities involved. To ensure compliance and to preserve the implementation of this architectural conceptualisation, the work adopts the Maude language as an executable operational formalisation. For that purpose, Maude is first endowed with the notions of components and interfaces. Further, ECA-driven behavioural interactions are specified and implemented as aspects. Finally, capitalising on Maude reflection, the thesis demonstrates how to weave such interaction executions into the associated services.
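
    The sketch below illustrates, in plain Java, the Event-Condition-Action pattern on which the approach is built: an intercepted service event is dispatched to every rule whose condition holds, and that rule's action runs. The class and event names are invented for illustration; this is not the thesis's architectural connectors, aspect-weaving machinery, or Maude formalisation.

```java
import java.util.*;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Minimal illustration of the Event-Condition-Action (ECA) pattern used to
// govern service interactions; a plain-Java sketch, not the thesis's
// Maude-based formalisation or its aspect-weaving machinery.
public class EcaDemo {

    // A business event carrying a name and a payload (here, an order amount).
    record Event(String name, double amount) {}

    // ECA rule: when the event matches and the condition holds, run the action.
    record EcaRule(String onEvent, Predicate<Event> condition, Consumer<Event> action) {
        void fire(Event e) {
            if (e.name().equals(onEvent) && condition.test(e)) {
                action.accept(e);
            }
        }
    }

    public static void main(String[] args) {
        List<EcaRule> rules = List.of(
            new EcaRule("orderPlaced",
                        e -> e.amount() > 10_000,
                        e -> System.out.println("Route order for manual approval")),
            new EcaRule("orderPlaced",
                        e -> e.amount() <= 10_000,
                        e -> System.out.println("Invoke billing service automatically"))
        );

        // An intercepted service event is dispatched to every applicable rule,
        // the way a connector/aspect would weave the behaviour in at runtime.
        Event placed = new Event("orderPlaced", 2_500);
        rules.forEach(rule -> rule.fire(placed));
    }
}
```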

    A Type Analysis of Rewrite Strategies

    Rewrite strategies provide an algorithmic rewriting of terms using strategic compositions of rewrite rules. Because rewrites are programmable, errors are often made through incorrect compositions of rewrites or incorrect application of rewrites to a term within a strategic rewriting program. In practical applications of strategic rewriting, testing and debugging become substantially time-intensive for large programs applied to large inputs derived from large term grammars. In essence, determining which rewrite in what position in a term did or did not fire comes down to logging, tracing and/or diff-like comparison of inputs to outputs. In this thesis, we explore type-enabled analysis of strategic rewriting programs to detect errors statically. In particular, we introduce high-precision types to closely approximate the dynamic behavior of rewriting. We also use union types to track the sets of types arising from strategic compositions. In this framework of high-precision strategic typing, we develop and implement an expressive type system for a representative strategic rewriting language, TL. The results of this research are sufficiently broad to be adapted to other strategic rewriting languages. In particular, the type-inferencing algorithm does not require explicit type annotations, for minimal impact on an existing language. Based on our experience with the implementation, the type system significantly reduces the time and effort needed to program correct rewrite strategies while performing the analysis on the order of thousands of source lines of code per second.
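
    To illustrate the kind of programs being typed, the following small Java sketch models rewrite strategies as partial functions on a toy term language and composes them with sequence and left-biased choice. A high-precision analysis in the spirit of the thesis would approximate, for each such composition, the union of term shapes it can produce; the sketch itself is not TL and carries no type analysis.

```java
import java.util.Optional;
import java.util.function.Function;

// Toy sketch of strategic rewriting (not TL): a strategy either rewrites a
// term or fails, and strategies compose via sequence and left-biased choice.
public class StrategyDemo {

    // Minimal term language: integer literals and binary additions.
    interface Term {}
    record Lit(int value) implements Term {}
    record Add(Term left, Term right) implements Term {}

    // A strategy rewrites a term or fails (empty result).
    interface Strategy extends Function<Term, Optional<Term>> {}

    // Rule: fold an addition of two literals into one literal.
    static final Strategy FOLD_ADD = t -> {
        if (t instanceof Add a && a.left() instanceof Lit l && a.right() instanceof Lit r) {
            return Optional.of(new Lit(l.value() + r.value()));
        }
        return Optional.empty();
    };

    // Rule: x + 0 -> x.
    static final Strategy DROP_ZERO = t -> {
        if (t instanceof Add a && a.right() instanceof Lit r && r.value() == 0) {
            return Optional.of(a.left());
        }
        return Optional.empty();
    };

    // Sequential composition: apply s1, then s2 to its result.
    static Strategy seq(Strategy s1, Strategy s2) {
        return t -> s1.apply(t).flatMap(s2);
    }

    // Left-biased choice: try s1, fall back to s2 on failure.
    static Strategy choice(Strategy s1, Strategy s2) {
        return t -> s1.apply(t).or(() -> s2.apply(t));
    }

    public static void main(String[] args) {
        Strategy simplify = choice(DROP_ZERO, FOLD_ADD);
        System.out.println(simplify.apply(new Add(new Lit(2), new Lit(0)))); // Optional[Lit[value=2]]
        System.out.println(simplify.apply(new Add(new Lit(2), new Lit(3)))); // Optional[Lit[value=5]]
        System.out.println(simplify.apply(new Lit(7)));                      // Optional.empty
    }
}
```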