
    Optimizing Feature Interaction Detection

    © 2017, Springer International Publishing AG. The feature interaction problem has been recognized as a general problem of software engineering. The problem appears when a combination of features interacts, generating a conflict and exhibiting behaviour that is unexpected for the features considered in isolation, possibly resulting in a critical safety violation. Verification of the absence of critical feature interactions has been the subject of several studies. In this paper, we focus on functional interactions and address the problem of 3-way feature interactions, i.e. interactions that occur only when three features are all included in the system, but not when only two of them are. In this setting, we introduce a widely applicable definition framework within which we show that a 3 (or greater)-way interaction is always caused by a 2-way interaction, i.e. that pairwise sampling is complete, hence reducing the complexity of automatic detection of incorrect interactions to quadratic.
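
    The practical consequence of this completeness result is that a detector only has to enumerate pairs of features rather than triples. The sketch below is a minimal illustration of that quadratic enumeration, assuming a hypothetical `violates_spec` oracle that stands in for whatever verification back end decides whether a given feature combination misbehaves; it is not the paper's actual framework.

        from itertools import combinations

        def detect_pairwise_interactions(features, violates_spec):
            # `violates_spec(config)` is a hypothetical oracle: it exercises the
            # system with exactly the features in `config` enabled and reports
            # whether the combined behaviour deviates from the features taken
            # in isolation.
            faulty_pairs = []
            # Pairwise sampling: O(n^2) configurations instead of the O(n^3)
            # needed to exercise every triple of features directly.
            for pair in combinations(features, 2):
                if violates_spec(frozenset(pair)):
                    faulty_pairs.append(frozenset(pair))
            return faulty_pairs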

    Design and Validation of Cyber-Physical Systems Through Co-Simulation: The Voronoi Tessellation Use Case

    This paper reports on the use of co-simulation techniques to build prototypes of cooperative autonomous robotic cyber-physical systems. Designing such systems involves a mission-specific planner algorithm, a control algorithm to drive each agent performing its task, and a plant model to simulate the agent dynamics. An application aimed at positioning a swarm of unmanned aerial vehicles (drones) in a bounded area, exploiting a Voronoi tessellation algorithm developed in this work, is taken as a case study. The paper shows how co-simulation allows testing the complex system in the design phase using models created with different languages and tools. The paper then reports on how the adopted co-simulation platform enables control parameter calibration by exploiting design space exploration technology. The INTO-CPS co-simulation platform, compliant with the Functional Mock-up Interface standard for exchanging dynamic simulation models written in various languages, was used in this work. The different software modules were written in Modelica, C, and Python. In particular, the latter was used to implement an original variant of the Voronoi algorithm that tessellates a convex polygonal region by means of dummy points added at appropriate positions outside the bounding polygon. A key contribution of this case study is that it demonstrates how an accurate simulation of a cooperative drone swarm requires modeling the physical plant together with the high-level coordination algorithm. The coupling of co-simulation and design space exploration has been demonstrated to support the calibration of control parameters that optimize the energy consumption and the convergence time of the drone swarm to its target positions. From a practical point of view, this makes it possible to test the ability of the swarm to self-deploy in space in order to achieve optimal detection coverage and to allow unmanned aerial vehicles in a swarm to coordinate with each other.
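
    The dummy-point idea can be illustrated with a short SciPy sketch: mirroring each real point across every edge of the bounding polygon forces the Voronoi cells of the real points to be finite and to stop exactly at the polygon boundary (assuming the polygon is convex and given as a vertex array). This is only an assumed simplification of the general technique, not necessarily the paper's exact variant or its code.

        import numpy as np
        from scipy.spatial import Voronoi

        def reflect(point, a, b):
            # Reflect `point` across the (infinite) line through polygon edge a-b.
            d = (b - a) / np.linalg.norm(b - a)
            proj = a + np.dot(point - a, d) * d
            return 2 * proj - point

        def bounded_voronoi(points, polygon):
            # Dummy points: every real point mirrored across every polygon edge.
            # The perpendicular bisector of a point and its mirror is the edge
            # line, so each real point's cell is clipped to the convex polygon.
            m = len(polygon)
            dummies = [reflect(p, polygon[i], polygon[(i + 1) % m])
                       for p in points for i in range(m)]
            vor = Voronoi(np.vstack([points, dummies]))
            # Return the (finite) cell vertices of the real points only.
            return [vor.vertices[vor.regions[vor.point_region[i]]]
                    for i in range(len(points))]

        drones = np.array([[0.3, 0.4], [0.7, 0.6], [0.5, 0.2]])
        square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
        for cell in bounded_voronoi(drones, square):
            print(np.round(cell, 3))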

    Formal Analysis of a Fault-Tolerant Routing Algorithm for a Network-on-Chip

    A fault-tolerant routing algorithm in Network-on-Chip architectures provides adaptivity for on-chip communications. Adding fault-tolerance adaptivity to a routing algorithm increases its design complexity and makes it prone to deadlock and other problems if improperly implemented. Formal verification techniques are needed to check the correctness of the design. This paper performs formal analysis on an extension of the link-fault-tolerant Network-on-Chip architecture introduced by Wu et al. that supports multiflit wormhole routing. This paper describes several lessons learned during the process of constructing a formal model of this routing architecture. Finally, this paper presents how deadlock freedom and tolerance to a single-link fault are verified for a two-by-two mesh version of this routing architecture.
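
    The paper establishes deadlock freedom by model checking a formal model of the routing architecture. As a much lighter, purely structural illustration of why a routing function can be deadlock-free at all, the sketch below applies the classic channel-dependency-graph test (a routing function is deadlock-free if the graph of "holds channel c1 while requesting channel c2" dependencies is acyclic) to plain XY routing on a two-by-two mesh. This is a well-known sufficient condition, not the paper's link-fault-tolerant algorithm or its verification method; all names are illustrative.

        def xy_route(src, dst):
            # Channels (directed links) visited by XY routing from src to dst.
            x, y = src
            path = []
            while x != dst[0]:                       # route along X first ...
                nxt = (x + (1 if dst[0] > x else -1), y)
                path.append(((x, y), nxt))
                x, y = nxt
            while y != dst[1]:                       # ... then along Y
                nxt = (x, y + (1 if dst[1] > y else -1))
                path.append(((x, y), nxt))
                x, y = nxt
            return path

        def channel_dependency_graph(nodes):
            # Edge c1 -> c2: a packet may hold channel c1 while waiting for c2.
            deps = {}
            for s in nodes:
                for d in nodes:
                    route = xy_route(s, d)
                    for c1, c2 in zip(route, route[1:]):
                        deps.setdefault(c1, set()).add(c2)
                        deps.setdefault(c2, set())
            return deps

        def has_cycle(graph):
            # Depth-first search cycle detection on {node: set(successors)}.
            WHITE, GREY, BLACK = 0, 1, 2
            colour = {v: WHITE for v in graph}
            def visit(v):
                colour[v] = GREY
                for w in graph[v]:
                    if colour[w] == GREY or (colour[w] == WHITE and visit(w)):
                        return True
                colour[v] = BLACK
                return False
            return any(colour[v] == WHITE and visit(v) for v in graph)

        mesh = [(x, y) for x in range(2) for y in range(2)]      # two-by-two mesh
        print(has_cycle(channel_dependency_graph(mesh)))          # False: deadlock-free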

    Formal methods for functional verification of cache-coherent systems-on-chip

    State-of-the-art System-on-Chip (SoC) architectures integrate many different components, such as processors, accelerators, memories, and I/O blocks. Some of those components, but not all, may have caches. Because the effort of validation with simulation-based techniques, currently used in industry, grows exponentially with the complexity of the SoC, this thesis investigates the use of formal verification techniques in this context. More precisely, we use the CADP toolbox to develop and validate a generic formal model of a heterogeneous cache-coherent SoC compliant with the recent AMBA 4 ACE specification proposed by ARM. We use a constraint-oriented specification style to model the general requirements of the specification. We verify system properties on both the constrained and the unconstrained model to detect cache coherency corner cases. We take advantage of the parametrization of the proposed model to produce a comprehensive set of counterexamples for the properties that are not satisfied in the unconstrained model. The results of formal verification are then used to improve industrial simulation-based verification techniques in two respects. On the one hand, we suggest using the formal model to assess the sanity of an interface verification unit. On the other hand, in order to generate clever semi-directed test cases from temporal logic properties, we propose a two-step approach: the first step generates system-level abstract test cases using the model-based testing tools of the CADP toolbox; the second step refines those tests into interface-level concrete test cases that can be executed at RTL level with a commercial Coverage-Directed Test Generation tool. We found that our approach helps in the transition between interface-level verification, classically practiced in the hardware industry, and system-level verification, facilitates the validation of system-level properties, and enables early detection of bugs in both the SoC and the commercial test bench.
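
    The verification idea behind the formal model can be made concrete with a toy example: explicit-state exploration of every reachable protocol state against a single-writer coherence invariant. The sketch below uses a deliberately simplified MSI-like protocol with a hypothetical transition relation; it is a minimal stand-in for illustration only, not the thesis' LNT/CADP model of AMBA 4 ACE.

        from collections import deque

        # Deliberately tiny stand-in for a coherence protocol: each cache holds
        # the line as Invalid, Shared, or Modified.  This is NOT the AMBA 4 ACE
        # model of the thesis, only an illustration of explicit-state checking.
        N_CACHES = 3
        INITIAL = ('I',) * N_CACHES

        def successors(state):
            for i, s in enumerate(state):
                if s == 'I' and 'M' not in state:          # read miss (blocked while
                    yield state[:i] + ('S',) + state[i+1:] # a Modified copy exists)
                if s in ('I', 'S'):                        # write: invalidate others
                    others = tuple('I' for _ in state)
                    yield others[:i] + ('M',) + others[i+1:]
                if s in ('S', 'M'):                        # eviction / write-back
                    yield state[:i] + ('I',) + state[i+1:]

        def coherent(state):
            # Single-writer invariant: a Modified copy excludes every other valid copy.
            return state.count('M') <= 1 and not ('M' in state and 'S' in state)

        def check():
            seen, queue = {INITIAL}, deque([INITIAL])
            while queue:
                state = queue.popleft()
                if not coherent(state):
                    return state                           # counterexample
                for nxt in successors(state):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return None                                    # invariant holds everywhere

        print(check())   # None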

    Doctor of Philosophy

    Over the last decade, cyber-physical systems (CPSs) have seen significant applications in many safety-critical areas, such as autonomous automotive systems, automatic pilot avionics, wireless sensor networks, etc. A CPS uses networked embedded computers to monitor and control physical processes. The motivating example for this dissertation is the use of a fault-tolerant routing protocol for a Network-on-Chip (NoC) architecture that connects electronic control units (ECUs) to regulate sensors and actuators in a vehicle. With a network allowing ECUs to communicate with each other, it is possible for them to share processing power to improve performance. In addition, networked ECUs enable flexible mapping to physical processes (e.g., sensors, actuators), which increases resilience to ECU failures by reassigning physical processes to spare ECUs. For the on-chip routing protocol, the ability to tolerate network faults is important for hardware reconfiguration to maintain the normal operation of a system. Adding a fault-tolerance feature to a routing protocol, however, increases its design complexity, making it prone to many functional problems. Formal verification techniques are therefore needed to verify its correctness. This dissertation proposes a link-fault-tolerant, multiflit wormhole routing algorithm, and its formal modeling and verification using two different methodologies. As an improvement upon previously published fault-tolerant routing algorithms, the proposed link-fault routing algorithm relaxes their unrealistic node-fault assumptions while conservatively avoiding deadlock by appropriately dropping network packets. This routing algorithm, together with its routing architecture, is then modeled in the process-algebra language LNT, and compositional verification techniques are used to verify its key functional properties. As a comparison, it is also modeled in channel-level VHDL, which is compiled to labeled Petri nets (LPNs). Algorithms for a partial order reduction method on LPNs are given. An optimal result is obtained from heuristics that trace back on LPNs to find causally related enabled predecessor transitions. Key observations are made from the comparison between these two verification methodologies.
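
    One intuition from the abstract, that dropping a packet removes the blocking dependency a deadlock cycle would need, can be sketched in a few lines. The toy routing step below drops a flit whenever the link it wants to take has failed; this is only an assumed illustration of the general drop-to-avoid-deadlock idea, far simpler than the adaptive link-fault-tolerant algorithm the dissertation actually proposes.

        def route_step(pos, dst, faulty_links):
            # Next hop of a flit under plain XY routing, or None when the flit is
            # conservatively dropped because the required link has failed.
            x, y = pos
            if (x, y) == dst:
                return dst                               # delivered locally
            if x != dst[0]:
                nxt = (x + (1 if dst[0] > x else -1), y)
            else:
                nxt = (x, y + (1 if dst[1] > y else -1))
            if (pos, nxt) in faulty_links:
                return None                              # drop: no blocking dependency
            return nxt

        # A flit at (0, 0) headed to (1, 1) with the (0, 0)->(1, 0) link broken:
        print(route_step((0, 0), (1, 1), {((0, 0), (1, 0))}))    # None -> dropped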

    Model-based quality assurance of instrumented context-free systems

    The ever-growing complexity of today’s software and hardware systems makes quality assurance (QA) a challenging task. Abstraction is a key technique for dealing with this complexity because it allows one to skip non-essential properties of a system and focus on the important ones. Crucial for the success of this approach is the availability of adequate abstraction models that strike a fine balance between simplicity and expressiveness. This thesis presents the formalisms of systems of procedural automata (SPAs), systems of behavioral automata (SBAs), and systems of procedural Mealy machines (SPMMs). The three model types describe systems that consist of multiple procedures which can mutually call each other, including recursion. While the individual procedures are described by regular automata and are therefore easy to understand, the aggregation of procedures into systems captures the semantics of context-free systems, offering the expressiveness necessary for representing procedural systems. A central concept of the proposed model types is an instrumentation that exposes the internal structure of systems by making calls to and returns from procedures observable. This instrumentation allows for a notion of rigorous (de-)composition which enables a translation between local (procedural) views and global (holistic) views on a system. On the basis of this translation, this thesis presents algorithms for the verification, testing, and learning of (instrumented) context-free systems, covering a broad spectrum of practical QA tasks. Starting with SPAs as a “base” formalism for context-free systems, the flexibility of this concept is shown by including features such as prefix-closure (SBAs) and dialog-based transductions (SPMMs). In a comparison with related formalisms, this thesis shows that the simplicity of the proposed model types not only increases the understandability of models but can also improve the performance of QA tasks. This makes SPAs, SBAs, and SPMMs a powerful tool for tackling the practical challenges of assuring the quality of today’s software and hardware systems.
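
    The translation that the instrumentation enables can be illustrated with a small sketch: a single global run, in which every call and return is observable, is decomposed into one local trace per procedure invocation, with nested calls abstracted to a single call symbol in the caller’s view. The encoding below (upper-case symbols as calls, 'ret' as the return symbol) is a hypothetical simplification for illustration, not the thesis’ actual formalization or library.

        def project(global_word):
            # Split an instrumented global run into local traces, one per
            # procedure invocation.
            traces, stack = [], []
            for symbol in global_word:
                if symbol == 'ret':
                    traces.append(stack.pop())       # this invocation is complete
                elif symbol.isupper():               # call: visible in the caller's
                    if stack:                        # local trace, then a new frame
                        stack[-1].append(symbol)
                    stack.append([symbol])
                else:
                    stack[-1].append(symbol)         # internal action
            return traces

        # Example: procedure F does 'a', calls G (which does 'b'), then does 'c'.
        print(project(['F', 'a', 'G', 'b', 'ret', 'c', 'ret']))
        # [['G', 'b'], ['F', 'a', 'G', 'c']]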

    Modeling and Analysis of Automotive Cyber-physical Systems: Formal Approaches to Latency Analysis in Practice

    Based on advances in scheduling analysis in the 1970s, a whole area of research has evolved: formal end-to-end latency analysis in real-time systems. Although multiple approaches from the scientific community have successfully been applied in industrial practice, a gap is emerging between the means provided by formally backed approaches and the needs of the automotive industry, where cyber-physical systems have taken over from classic embedded systems. They are accompanied by a shift to heterogeneous platforms built upon multicore architectures. Scientific techniques are often still based on overly simple system models, and estimates of important end-to-end latencies have only recently been tightened. To this end, we present an expressive system model and formally describe the problem of end-to-end latency analysis in modern automotive cyber-physical systems. Based on this, we examine approaches to formally estimate tight end-to-end latencies in Chapter 4 and Chapter 5. The developed approaches cover a wide range of relevant systems. We show that our approach for the estimation of latencies of task chains dominates existing approaches in terms of the tightness of the results. In the last chapter we make a brief digression into measurement analysis, since measurement and simulation are an important part of verification in current industrial practice.
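
    To make the analysis problem concrete, the short sketch below computes the classic conservative end-to-end latency bound for a cause-effect chain of periodic tasks with register communication: the latency is at most the sum, over the chain, of each task's period plus its worst-case response time. The task parameters are hypothetical, and the thesis is precisely about deriving bounds tighter than this one.

        def chain_latency_bound(chain):
            # `chain` is a list of (period, worst_case_response_time) pairs, e.g. in ms.
            # Conservative bound: every sample may wait almost one full period at
            # each task and then experience that task's worst-case response time.
            return sum(period + wcrt for period, wcrt in chain)

        # Hypothetical sensor-to-actuator chain: three tasks with periods 10, 20,
        # 5 ms and worst-case response times 2, 7, 1 ms.
        print(chain_latency_bound([(10, 2), (20, 7), (5, 1)]))   # 45 ms upper bound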