
    Rampo: A CEGAR-based Integration of Binary Code Analysis and System Falsification for Cyber-Kinetic Vulnerability Detection

    This paper presents a novel tool, named Rampo, that performs binary code analysis to identify cyber-kinetic vulnerabilities in cyber-physical systems (CPS). The tool takes as input a Signal Temporal Logic (STL) formula that describes the kinetic effect, i.e., the behavior of the physical system, that one wants to avoid. The tool then searches over the possible cyber trajectories in the binary code that may lead to such physical behavior. This search integrates binary code analysis tools and hybrid systems falsification tools using a Counter-Example Guided Abstraction Refinement (CEGAR) approach. Rampo starts by analyzing the binary code to extract symbolic constraints that represent the different paths in the code. These symbolic constraints are then passed to a Satisfiability Modulo Theories (SMT) solver to extract the range of control signals that can be produced by each path in the code. The next step is to search over possible physical trajectories using a hybrid systems falsification tool that adheres to the behavior of the cyber paths and yet leads to violations of the STL formula. Since the number of cyber paths that need to be explored increases exponentially with the length of physical trajectories, we iteratively refine the cyber path constraints based on previous falsification results and traverse the abstract path tree obtained from the control program to explore the search space of the system. To illustrate the practical utility of binary code analysis in identifying cyber-kinetic vulnerabilities, we present case studies from diverse CPS domains, showcasing how such vulnerabilities can be discovered in control programs. Our tool computes the same number of vulnerabilities while achieving a speedup that ranges from 3x to 98x.
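
    As a minimal sketch of the CEGAR loop described above, the following Python skeleton shows how path-constraint extraction, SMT-derived control ranges, and falsification could be wired together. All helper names and the toy logic inside them are invented for illustration; this is not Rampo's actual API.

        # Hypothetical sketch of a CEGAR loop coupling binary path analysis
        # with hybrid-systems falsification; every helper below is a stub.

        def extract_path_constraints(binary):
            """Stub: symbolic execution would return one constraint set per
            program path; here we fake two paths with control-signal bounds."""
            return [{"u_min": 0.0, "u_max": 1.0}, {"u_min": -1.0, "u_max": 0.0}]

        def control_signal_range(path):
            """Stub: an SMT solver would compute the reachable control-signal
            range for a path; here the range is carried in the constraint."""
            return path["u_min"], path["u_max"]

        def falsify(stl_formula, signal_range):
            """Stub: a falsification tool searches for a physical trajectory,
            restricted to this control range, that violates the STL formula.
            Returns a counterexample trace or None."""
            _, u_max = signal_range
            return {"trace": [u_max]} if u_max > 0.5 else None  # toy criterion

        def cegar_search(binary, stl_formula):
            """Iterate over abstract cyber paths, falsifying each; in the real
            tool, falsification results would refine the remaining path tree."""
            vulnerabilities = []
            for path in extract_path_constraints(binary):
                cex = falsify(stl_formula, control_signal_range(path))
                if cex is not None:
                    vulnerabilities.append((path, cex))
            return vulnerabilities

        print(cegar_search("controller.bin", "G !(speed > 100)"))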

    Wiring Circuits Is Easy as {0,1,ω}, or Is It...

    Quantitative Type-Systems support fine-grained reasoning about term usage in our programming languages. Hardware Design Languages are another style of language in which quantitative typing would be beneficial. When wiring components together, we must ensure that there are no unused ports, dangling wires, or accidental fan-ins and fan-outs. Although many wire usage checks are detectable using static analysis tools, such as Verilator, quantitative typing makes these extrinsic checks an intrinsic aspect of the type-system. With quantitative typing of bound terms, we can provide design-time checks that all wires and ports have been used, and ensure that all wiring decisions are made explicitly, neither implicitly nor accidentally. We showcase the use of quantitative types in hardware design languages by detailing how we can retrofit quantitative types onto SystemVerilog netlists, and the impact that such a quantitative type-system has when creating designs. Netlists are gate-level descriptions of hardware that are produced as the result of synthesis, and it is from these netlists that hardware is generated (fabless or fabbed). First, we present a simple structural type-system for a featherweight version of SystemVerilog netlists that demonstrates how we can type netlists using standard structural techniques, and what it means for netlists to be type-safe but still lead to ill-wired designs. We then detail how to retrofit the language with quantitative types, making the type-system sub-structural, and show how our new type-safety result ensures that wires and ports are used exactly once. Our ideas have been proven both practically and formally by realising our work in Idris2, through which we construct a verified language implementation that can type-check existing designs. From this work, we look to promote quantitative typing back up the synthesis chain to more comprehensive hardware description languages, and to help develop new and better hardware description languages with quantitative typing.
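
    As a toy illustration of the guarantee a quantitative (sub-structural) typing discipline provides, the following Python sketch checks that every wire in a small, invented netlist encoding is driven exactly once and used exactly once. The paper makes this check intrinsic to the type-system of its Idris2 formalization; here it is merely a post-hoc pass over data.

        # Toy linearity check over an invented netlist encoding: each gate
        # lists the wires it reads and the single wire it drives.
        from collections import Counter

        netlist = [
            {"gate": "AND", "reads": ["a", "b"], "drives": "t1"},
            {"gate": "NOT", "reads": ["t1"], "drives": "out"},
        ]
        inputs, outputs = {"a", "b"}, {"out"}

        drives = Counter(g["drives"] for g in netlist)
        reads = Counter(w for g in netlist for w in g["reads"])

        for wire in set(drives) | set(reads) | inputs | outputs:
            driven = drives[wire] + (wire in inputs)   # primary inputs are sources
            used = reads[wire] + (wire in outputs)     # primary outputs are sinks
            assert driven == 1, f"wire {wire}: fan-in of {driven}, expected 1"
            assert used == 1, f"wire {wire}: used {used} times (dangling or fan-out)"
        print("netlist is well-wired")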

    Debugging and Verification Tools for LINGUA FRANCA in GEMOC Studio

    LINGUA FRANCA (LF) is a polyglot coordination language designed for the composition of concurrent, time-sensitive, and potentially distributed reactive components called reactors. The LF coordination layer facilitates the use of target languages (e.g., C, C++, Python, TypeScript) to realize the program logic, where each target language requires a separate runtime implementation that must correctly implement the reactor semantics. Verifying the correctness of runtime implementations is not a trivial task and is currently done on the basis of regression testing. To provide a more formal verification tool for existing and future target runtimes, as well as to help verify properties of LF programs, we employ GEMOC Studio, an Eclipse-based workbench for the development, integration, and use of heterogeneous executable modeling languages. We present an operational model for LF, realized in GEMOC Studio, that is primed to interact with a rich set of analysis and verification tools. Our instrumentation provides the ability to navigate the execution of LF programs using an omniscient debugger with graphical model animation; to check assertions in particular execution runs, or exhaustively, using a model checker; and to validate or debug traces obtained from arbitrary LF runtime environments.
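
    To make the trace-validation idea concrete, here is a small Python sketch, independent of GEMOC Studio's actual tooling, that replays a recorded trace of reactor events against one simple semantic rule: logical time must never decrease. The trace encoding is invented for this example.

        # Illustrative trace check, not GEMOC Studio's API: validate that a
        # trace obtained from an LF runtime respects nondecreasing logical time.
        trace = [
            {"time": 0, "reactor": "Source", "reaction": "startup"},
            {"time": 5, "reactor": "Sink", "reaction": "on_input"},
            {"time": 5, "reactor": "Sink", "reaction": "on_input"},
        ]

        def validate(trace):
            last_time = float("-inf")
            for i, event in enumerate(trace):
                if event["time"] < last_time:
                    return f"event {i}: logical time went backwards"
                last_time = event["time"]
            return "trace respects logical-time order"

        print(validate(trace))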

    FPGA-SPICE: A Simulation-Based Architecture Evaluation Framework for FPGAs

    In this paper, we developed a simulation-based architecture evaluation framework for field-programmable gate arrays (FPGAs), called FPGA-SPICE, which enables automatic layout-level estimation and electrical simulations of FPGA architectures. FPGA-SPICE can automatically generate Verilog and SPICE netlists based on realistic FPGA configurations and a high-level eXtensible Markup Language (XML)-based FPGA architectural description language. The generated Verilog netlists can be used to produce layouts of full FPGA fabrics through a semicustom design flow. SPICE simulation decks can be generated at three levels of complexity, namely full-chip level, grid level, and component level, providing different tradeoffs between accuracy and simulation time. To enable this level of analysis, we presented two SPICE netlist partitioning techniques: loads extraction and parasitic net activity estimation. Electrical simulations showed that, averaged over the selected benchmarks, the grid-/component-level approaches achieve 6.1x/7.5x execution speed-up with 9.9%/8.3% accuracy loss, respectively, compared to full-chip-level simulation. FPGA-SPICE was showcased through three case studies: 1) an area breakdown analysis for static random access memory-based FPGAs, showing that configuration memories are a dominant factor; 2) a power breakdown comparison to analytical models, analyzing the source of accuracy loss; and 3) a robustness evaluation against process corners, studying their impact on the energy consumption of full FPGA fabrics.
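
    As a rough illustration of the accuracy/runtime tradeoff between the three deck granularities, the snippet below encodes the average figures reported above and picks the fastest level within an accuracy budget. The helper and its encoding are invented; FPGA-SPICE offers no such API.

        # Invented helper built on the reported average tradeoffs between
        # FPGA-SPICE's three simulation granularities (not the tool's API).
        LEVELS = {
            # level: (speedup vs. full-chip, accuracy loss vs. full-chip)
            "full-chip": (1.0, 0.000),
            "grid":      (6.1, 0.099),
            "component": (7.5, 0.083),
        }

        def pick_level(max_accuracy_loss):
            """Choose the fastest granularity whose average accuracy loss
            stays within the given budget."""
            feasible = [(speedup, name) for name, (speedup, loss) in LEVELS.items()
                        if loss <= max_accuracy_loss]
            return max(feasible)[1]

        print(pick_level(0.05))  # full-chip: the only level within a 5% loss budget
        print(pick_level(0.09))  # component: 8.3% loss for a 7.5x speed-up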

    Modeling at a High Level of Abstraction for Embedded Systems

    Modern embedded systems have reached a level of complexity such that it is no longer possible to wait for the first physical prototypes to validate choices on the integration of hardware and software components. It is necessary to use models early in the design flow. The work presented in this document contributes to the state of the art in several domains. First, we present verification techniques based on abstract interpretation and SMT solving for programs written in general-purpose languages like C, C++, or Java. Then, we use verification tools on models written in SystemC at the transaction level (TLM). Several approaches are presented, most of them using compilation techniques specific to SystemC to turn the models into a format usable by existing tools. The second part of the document deals with non-functional properties of models: timing performance, power consumption, and temperature. In the context of TLM, we show how functional models can be enriched with non-functional information. Finally, we present contributions to the modular performance analysis (MPA) with real-time calculus (RTC) framework. We describe several ways to connect RTC to more expressive formalisms such as timed automata and the synchronous language Lustre. These connections raise the problem of causality, which is defined formally and solved with a new causality-closure algorithm.
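
    A minimal Python sketch of the idea of enriching a functional transaction-level model with non-functional annotations; the classes and fields are illustrative, not the document's actual SystemC/TLM code.

        # Illustrative only: a functional TLM-style transaction carrying
        # non-functional annotations (latency, energy) that the platform
        # model accumulates alongside its functional behavior.
        from dataclasses import dataclass

        @dataclass
        class Transaction:
            addr: int
            data: int
            latency_ns: float = 0.0   # non-functional: timing annotation
            energy_nj: float = 0.0    # non-functional: energy annotation

        class Bus:
            def __init__(self):
                self.time_ns, self.energy_nj = 0.0, 0.0

            def transport(self, txn: Transaction):
                # Functional behavior (routing, memory access) would go here;
                # non-functional bookkeeping is accumulated alongside it.
                self.time_ns += txn.latency_ns
                self.energy_nj += txn.energy_nj

        bus = Bus()
        bus.transport(Transaction(addr=0x1000, data=42, latency_ns=12.5, energy_nj=0.8))
        print(f"elapsed {bus.time_ns} ns, consumed {bus.energy_nj} nJ")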

    Automata-Based Software Model Checking of Hyperproperties

    We develop model checking algorithms for Temporal Stream Logic (TSL) and Hyper Temporal Stream Logic (HyperTSL) modulo theories. TSL extends Linear Temporal Logic (LTL) with memory cells, functions, and predicates, making it a convenient and expressive logic for reasoning over software and other systems with infinite data domains. HyperTSL further extends TSL to the specification of hyperproperties: properties that relate multiple system executions. As such, HyperTSL can express information-flow policies like noninterference in software systems. We augment HyperTSL with theories, resulting in HyperTSL(T), and build on methods from LTL software verification to obtain model checking algorithms for TSL and HyperTSL(T). This results in a sound but necessarily incomplete algorithm for specifications contained in the forall*exists* fragment of HyperTSL(T). Our approach constitutes the first software model checking algorithm for temporal hyperproperties with quantifier alternations that does not rely on a finite-state abstraction.
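
    To ground the notion of a hyperproperty, here is a toy Python check of noninterference over a finite set of recorded executions; it is unrelated to the paper's automata-based algorithm, but it shows why hyperproperties quantify over pairs of traces rather than single traces. The trace encoding is invented.

        # Toy hyperproperty check: noninterference relates pairs of
        # executions, so the object being checked is a set of traces.
        traces = [
            {"low_in": 1, "high_in": 7, "low_out": 2},
            {"low_in": 1, "high_in": 9, "low_out": 2},
            {"low_in": 3, "high_in": 0, "low_out": 6},
        ]

        def noninterference(traces):
            """For all pairs: equal low (public) inputs must yield equal low
            outputs, regardless of the secret (high) inputs."""
            return all(t1["low_out"] == t2["low_out"]
                       for t1 in traces for t2 in traces
                       if t1["low_in"] == t2["low_in"])

        print(noninterference(traces))  # True: the high input never leaks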

    A Survey of Challenges for Runtime Verification from Advanced Application Domains (Beyond Software)

    Runtime verification is an area of formal methods that studies the dynamic analysis of execution traces against formal specifications. Typically, the two main activities in runtime verification efforts are the process of creating monitors from specifications and the algorithms for the evaluation of traces against the generated monitors. Other activities involve the instrumentation of the system to generate the trace and the communication between the system under analysis and the monitor. Most applications of runtime verification have focused on the dynamic analysis of software, even though there are many more potential applications to other computational devices and target systems. In this paper, we present a collection of challenges for runtime verification extracted from concrete application domains, focusing on the difficulties that must be overcome to tackle these specific challenges. The computational models that characterize these domains require devising new techniques beyond the current state of the art in runtime verification.
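
    A minimal Python sketch of the monitor-centric workflow the survey describes, for the property "every req is eventually acknowledged" evaluated over a finite trace; the specification and event encoding are invented for illustration.

        # Illustrative monitor for "every req is eventually acknowledged",
        # evaluated stepwise over a finite execution trace.
        def make_monitor():
            pending = 0
            def step(event):
                nonlocal pending
                if event == "req":
                    pending += 1
                elif event == "ack" and pending > 0:
                    pending -= 1
                return pending == 0   # verdict so far: no outstanding requests
            return step

        monitor = make_monitor()
        for event in ["req", "tick", "ack", "req"]:
            ok = monitor(event)
        print("property holds at end of trace" if ok else "outstanding req at end of trace")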