
    A Rewriting Framework for Interacting Cyber-Physical Agents

    The analysis of cyber-physical systems (CPS) is challenging due to the large state space and the continuous changes occurring in their constituent parts. Design practices favor modularity to help reduce this complexity. In previous work, we proposed a discrete semantic model for CPS that captures both cyber and physical aspects as streams of discrete observations, which ultimately form the behavior of a component. This semantic model is denotational and compositional, where each composition operator algebraically models an interaction between a pair of components. In this paper, we propose a specification of components as rewrite systems. The specification is operational and executable, and we study conditions under which its semantics as components is compositional. We demonstrate our framework by modeling the coordination of robots moving on a shared field. We show that our system of robots can be coordinated by a protocol so as to exhibit a desired emergent behavior. We use an implementation of our framework in Maude to give practical results.
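
    A minimal sketch of the "streams of discrete observations" idea, assuming a toy Python rendering: a component is a stream of timed observations, and a pairwise composition operator merges two streams, joining observations that share a timestamp. The names (Component, compose) and the synchronization rule are illustrative assumptions, not the paper's Maude specification or its algebraic operators.

```python
# Illustrative toy only: components as streams of (timestamp, observation)
# pairs, composed pairwise by merging streams and joining observations that
# share a timestamp. Not the paper's Maude specification.
from dataclasses import dataclass
from typing import Dict, Iterator, Tuple

Observation = Dict[str, float]                 # e.g. {"r1.x": 1.0}
TimedObservation = Tuple[float, Observation]

@dataclass
class Component:
    """A component's behavior: a stream of timed observations."""
    stream: Iterator[TimedObservation]

def compose(a: Component, b: Component) -> Component:
    """Pairwise composition: synchronize on equal timestamps, interleave
    otherwise (a crude stand-in for one algebraic interaction operator)."""
    def merged() -> Iterator[TimedObservation]:
        sa, sb = iter(a.stream), iter(b.stream)
        xa, xb = next(sa, None), next(sb, None)
        while xa is not None or xb is not None:
            if xb is None or (xa is not None and xa[0] < xb[0]):
                yield xa
                xa = next(sa, None)
            elif xa is None or xb[0] < xa[0]:
                yield xb
                xb = next(sb, None)
            else:                              # equal timestamps: interact
                yield xa[0], {**xa[1], **xb[1]}
                xa, xb = next(sa, None), next(sb, None)
    return Component(stream=merged())

# Two robots reporting positions on a shared field.
robot1 = Component(iter([(0.0, {"r1.x": 0.0}), (1.0, {"r1.x": 1.0})]))
robot2 = Component(iter([(0.0, {"r2.x": 5.0}), (1.0, {"r2.x": 4.0})]))
for t, obs in compose(robot1, robot2).stream:
    print(t, obs)   # 0.0 {'r1.x': 0.0, 'r2.x': 5.0}, then 1.0 {...}
```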

    A Model-based transformation process to validate and implement high-integrity systems

    Despite numerous advances, building high-integrity embedded systems remains a complex task. They come with strong requirements to ensure safety, schedulability, or security properties; one needs to combine multiple analyses to validate each of them. Model-Based Engineering is an accepted solution to address such complexity: analytical models are derived from an abstraction of the system to be built. Yet ensuring that all abstractions are semantically consistent remains an issue, e.g. when performing model checking to assess safety, then schedulability analysis using timed automata, and then code generation. Complexity stems from the high-level view of the model compared to the low-level mechanisms used. In this paper, we present our approach, based on AADL and its behavioral annex, to iteratively refine an architecture description. Both application and runtime components are transformed into basic AADL constructs that have a strict counterpart in classical programming languages or in patterns for verification. We detail the benefits of this process for enhancing analysis and code generation. This work has been integrated into the AADL tool support OSATE2.
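
    As a rough illustration of the iterative refinement step, here is a toy Python sketch that repeatedly expands high-level component kinds into lower-level ones until only basic constructs remain. The component kinds and rewrite rules are invented for illustration; they are not the AADL/OSATE2 transformation rules described in the paper.

```python
# Illustrative toy only: iterative refinement of high-level components into
# "basic constructs". The kinds and rules below are hypothetical.
from typing import Dict, List

# Hypothetical rewrite rules: each high-level kind expands into lower-level ones.
REFINEMENT_RULES: Dict[str, List[str]] = {
    "periodic_task": ["thread", "timer"],
    "shared_data":   ["data", "mutex"],
    "thread":        [],   # already a basic construct
    "timer":         [],
    "data":          [],
    "mutex":         [],
}

def refine(components: List[str]) -> List[str]:
    """Apply refinement rules repeatedly until only basic constructs remain."""
    current = list(components)
    while True:
        expanded: List[str] = []
        changed = False
        for kind in current:
            replacement = REFINEMENT_RULES.get(kind, [])
            if replacement:
                expanded.extend(replacement)
                changed = True
            else:
                expanded.append(kind)
        if not changed:
            return expanded
        current = expanded

print(refine(["periodic_task", "shared_data"]))
# ['thread', 'timer', 'data', 'mutex'] -- basic constructs with strict
# counterparts for code generation or verification back-ends.
```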

    A design method for modular energy-aware software

    Achieving green software by reducing its overall energy consumption is becoming more and more important. A well-known solution is to make the software energy-aware by extending its functionality with energy optimizers, which monitor the energy consumption of the software and adapt it accordingly. Modular design of energy-aware software is necessary to make the extensions manageable and to cope with the complexity of the software. To this end, we require suitable methods that guide designers through the necessary design activities and the models that must be prepared during each activity. Despite its importance, such a method has not been investigated in the literature. This paper proposes a dedicated design method for energy-aware software, discusses a concrete realization of this method, and illustrates, by means of a concrete example, its suitability for achieving modularity.
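
    A minimal sketch, assuming a Python-like setting, of how an energy optimizer extension might monitor a component's energy consumption and adapt it while staying modular (kept outside the base functionality). The interfaces, names, and thresholds below are illustrative assumptions, not the design method proposed in the paper.

```python
# Illustrative toy only: an "energy optimizer" kept separate from the base
# component so the energy-awareness extension stays modular. The interface
# and budget policy are hypothetical.
from typing import Protocol

class EnergyAwareComponent(Protocol):
    def energy_per_call_mj(self) -> float: ...
    def set_mode(self, mode: str) -> None: ...

class EnergyOptimizer:
    """Monitors consumption and adapts the component when a budget is exceeded."""
    def __init__(self, component: EnergyAwareComponent, budget_mj: float) -> None:
        self.component = component
        self.budget_mj = budget_mj
        self.consumed_mj = 0.0

    def record_call(self) -> None:
        self.consumed_mj += self.component.energy_per_call_mj()
        # Adapt: fall back to a low-power mode once the budget is exceeded.
        if self.consumed_mj > self.budget_mj:
            self.component.set_mode("low_power")

class VideoEncoder:
    """A toy component whose energy cost depends on its mode."""
    def __init__(self) -> None:
        self.mode = "high_quality"
    def energy_per_call_mj(self) -> float:
        return 5.0 if self.mode == "high_quality" else 1.0
    def set_mode(self, mode: str) -> None:
        self.mode = mode

encoder = VideoEncoder()
optimizer = EnergyOptimizer(encoder, budget_mj=20.0)
for _ in range(6):
    optimizer.record_call()
print(encoder.mode)  # "low_power" after the budget is exhausted
```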

    Inferring Concise Specifications of APIs

    Modern software relies on libraries and uses them via application programming interfaces (APIs). Correct API usage, as well as many software engineering tasks, is enabled when APIs have formal specifications. In this work, we analyze the implementation of each method in an API to infer a formal postcondition. Conventional wisdom is that, if one has preconditions, then one can use the strongest postcondition predicate transformer (SP) to infer postconditions. However, SP yields postconditions that are exponentially large, which makes them difficult to use, either by humans or by tools. Our key idea is an algorithm that converts such exponentially large specifications into a form that is more concise and thus more usable. This is done by leveraging the structure of the specifications that result from the use of SP. We applied our technique to infer postconditions for over 2,300 methods in seven popular Java libraries. Our technique was able to infer specifications for 75.7% of these methods, each of which was verified using an Extended Static Checker. We also found that 84.6% of the resulting specifications were less than 1/4 page (20 lines) in length. Our technique reduced the length of the SMT proofs needed for verifying implementations by 76.7% and reduced prover execution time by 26.7%.
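
    To see why SP-style postconditions grow exponentially and how factoring shared structure helps, here is a toy Python sketch: one disjunct per execution path, followed by a simplification that pulls out conjuncts common to every path. This is a deliberately simplified illustration under assumed data representations; it is not the paper's inference algorithm or its interaction with the Extended Static Checker.

```python
# Illustrative toy only: SP over a branching method yields one disjunct per
# path (so k sequential branches give up to 2**k disjuncts); a simple
# simplification factors out conjuncts shared by all paths.
from typing import FrozenSet, List

Conjunct = str                    # e.g. "ret >= 0"
Disjunct = FrozenSet[Conjunct]    # conjunction of facts along one path

def sp_postcondition(paths: List[List[Conjunct]]) -> List[Disjunct]:
    """SP over a method with branches: one disjunct per execution path."""
    return [frozenset(p) for p in paths]

def simplify(post: List[Disjunct]) -> str:
    """Pull out conjuncts common to every path; keep the rest as a residue."""
    common = set.intersection(*(set(d) for d in post))
    residue = [d - common for d in post]
    parts = sorted(common)
    if any(residue):
        parts.append("(" + " || ".join(
            " && ".join(sorted(r)) or "true" for r in residue) + ")")
    return " && ".join(parts)

# Two paths through an abs(x)-like method: both establish ret >= 0, and each
# adds a path-specific fact.
post = sp_postcondition([
    ["ret >= 0", "x >= 0", "ret == x"],
    ["ret >= 0", "x < 0",  "ret == -x"],
])
print(simplify(post))
# ret >= 0 && (ret == x && x >= 0 || ret == -x && x < 0)
```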

    Optimizing for confidence - Costs and opportunities at the frontier between abstraction and reality

    Is there a relationship between computing costs and the confidence people place in the behavior of computing systems? What are the tuning knobs one can use to optimize systems for human confidence instead of correctness in purely abstract models? This report explores these questions by reviewing the mechanisms by which people build confidence in the match between the physical-world behavior of machines and their abstract intuition of this behavior according to models or programming language semantics. We highlight in particular that a bottom-up approach relies on arbitrary trust in the accuracy of I/O devices, and that there exist clear cost trade-offs in the use of I/O devices in computing systems. We also show various methods that alleviate the need to trust I/O devices arbitrarily and instead build confidence incrementally "from the outside" by considering systems as black-box entities. We highlight cases where these approaches can reach a given confidence level at a lower cost than bottom-up approaches.

    Fast and Space-Efficient Queues via Relaxation

    Efficient message-passing implementations of shared data types are a vital component of practical distributed systems, enabling them to work on shared data in predictable ways, but there is a long history of results showing that many of the most useful types of access to shared data are necessarily slow. A variety of approaches attempt to circumvent these bounds, notably weakening consistency guarantees and relaxing the sequential specification of the provided data type. These trade behavioral guarantees for performance. We focus on relaxing the sequential specification of a first-in, first-out queue type, which has been shown to allow faster linearizable implementations than are possible for traditional FIFO queues without relaxation. The algorithms that showed these improvements in operation time tracked a complete execution history, storing complete object state at all n processes in the system, leading to n copies of every stored data element. In this paper, we consider the question of reducing the space complexity of linearizable implementations of shared data types, which provide intuitive behavior through strong consistency guarantees. We improve the existing algorithm for a relaxed queue, showing that it is possible to store only one copy of each element in a shared queue while still having a low amortized time cost. This is one of several important steps towards making these data types practical in real-world systems.
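
    To make the relaxation concrete, here is a sketch of the sequential specification of an out-of-order k-relaxed queue in Python, where dequeue may return any of the k oldest elements (strict FIFO is the special case k = 1). This toy models only an assumed relaxed data type; it says nothing about the paper's distributed, linearizable message-passing implementation or its space bounds.

```python
# Illustrative toy only: the sequential specification of an out-of-order
# k-relaxed FIFO queue. Any of the k oldest elements may be dequeued.
import random
from collections import deque
from typing import Deque, Generic, Optional, TypeVar

T = TypeVar("T")

class KRelaxedQueue(Generic[T]):
    def __init__(self, k: int) -> None:
        self.k = k
        self.items: Deque[T] = deque()

    def enqueue(self, item: T) -> None:
        self.items.append(item)

    def dequeue(self) -> Optional[T]:
        """Return one of the k oldest elements; any such choice is legal
        under the relaxed specification (k = 1 recovers strict FIFO)."""
        if not self.items:
            return None
        i = random.randrange(min(self.k, len(self.items)))
        item = self.items[i]
        del self.items[i]
        return item

q: KRelaxedQueue[int] = KRelaxedQueue(k=3)
for x in range(5):
    q.enqueue(x)
print([q.dequeue() for _ in range(5)])  # e.g. [2, 0, 1, 3, 4]
```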

    Model-Based Testing of Safety Critical Real-Time Control Logic Software

    The paper presents the authors' experience with model-based testing of safety-critical real-time control logic software. It describes the specifics of the corresponding industrial settings and discusses technical details of applying the UniTESK model-based testing technology in these settings. Finally, we discuss possible future directions for safety-critical software development processes and the place of model-based testing techniques in them.