19 research outputs found

    A new dialect of SOFL-Syntax formal semantics and tool support

    Structured Object-Oriented Formal Language (SOFL) is a formal design methodology that combines data flow diagrams and predicates to describe processes that can be refined. The result is a versatile way of describing a system whose properties can be proven rigorously. Data flows are grouped by ports, which define from which data flows a process may consume data and on which data flows it may generate data. Predicates are written in the Logic of Partial Functions (LPF), whose undefined element is also used to indicate that a data flow carries no data. Over time SOFL "evolved organically" and a number of features were added, with usability the main consideration for adding a feature. For a formal language to be useful there must be no uncertainty about a specific design's meaning, yet in SOFL the use of LPF creates a possible contradiction between the requirement that a process's precondition be true when the process fires and the firing rules. Furthermore, the semantics (the meaning) of SOFL was not always updated to keep pace with changes made to the language, leaving the semantics outdated and incomplete. This incompleteness is a significant motivation for the work in this dissertation.

    In this dissertation a dialect of SOFL is created in order to define a semantics. Not all elements of SOFL are included, so that a simpler semantics can be defined; the elements removed include LPF, classes, and non-deterministic broadcast nodes. The semantics of the dialect is defined in a two-step process: first, an intuitive understanding of the dialect is established; second, both static and dynamic semantics are defined by means of translations. A translation is a mapping from the dialect to a formal language that describes a certain aspect of the dialect. The static semantics defines the meaning of elements that are "fixed" in their state, with SMT-LIB as the target language; the dynamic semantics describes how the elements of a design change over time, with the process algebra mCRL2 as the target language.

    The SMT solver Z3 and the tools included with mCRL2 are used to analyse the translations, making it possible to prove properties that a design must have in order to possess a well-defined meaning: for example, that a process can fire, that a process can fire an infinite number of times, or that a given predicate holds. An Eclipse plug-in was created so that the translation need not be done by hand: after a design is translated, Z3 and the mCRL2 tools are run via script files and the results of the analysis are displayed on screen. The desired properties could be proven for moderately sized designs, but as designs grew the analysis could no longer be completed due to its computational cost. The tool's usability could be further improved by complementing the textual representation of a design with visual representations, as in SOFL.

    Dissertation (MSc)--University of Pretoria, 2018. Computer Science.
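    As an illustrative sketch of the kind of check such a translation enables, the snippet below uses Z3's Python bindings to ask whether a hypothetical process's precondition is satisfiable, i.e. whether the process can ever fire. The "Withdraw" process, its variables, and its precondition are invented for illustration; the dissertation itself translates designs into SMT-LIB rather than Python.

```python
# Minimal sketch, not the dissertation's tool: check with the Z3 SMT solver
# whether a hypothetical SOFL-style process precondition is satisfiable,
# i.e. whether the process can ever fire.
from z3 import Int, Solver, And, sat

balance = Int("balance")   # datum arriving on an input data flow
amount = Int("amount")     # requested withdrawal (hypothetical)

# Hypothetical precondition of a "Withdraw" process.
precondition = And(amount > 0, amount <= balance)

solver = Solver()
solver.add(precondition)
if solver.check() == sat:
    print("precondition satisfiable; the process can fire, e.g.", solver.model())
else:
    print("precondition unsatisfiable; the process can never fire")
```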

    An overview of fault tree analysis and its application in model based dependability analysis

    Fault Tree Analysis (FTA) is a well-established and well-understood technique, widely used for dependability evaluation of a wide range of systems. Although many extensions of fault trees have been proposed, they suffer from a variety of shortcomings. In particular, even where software tool support exists, these analyses require a lot of manual effort. Over the past two decades, research has focused on simplifying dependability analysis by looking at how we can synthesise dependability information from system models automatically. This has led to the field of model-based dependability analysis (MBDA). Different tools and techniques have been developed as part of MBDA to automate the generation of dependability analysis artefacts such as fault trees. Firstly, this paper reviews the standard fault tree and its limitations. Secondly, different extensions of standard fault trees are reviewed. Thirdly, the paper reviews a number of prominent MBDA techniques in which fault trees are used as a means for system dependability analysis, and provides insight into their working mechanisms, applicability, strengths, and challenges. Finally, the future outlook for MBDA is outlined, which includes the prospect of developing expert and intelligent systems for dependability analysis of complex open systems under conditions of uncertainty.
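    To make the standard (static) fault tree concrete, the sketch below evaluates the top-event probability of a small hypothetical tree under the usual assumption of independent basic events: an AND gate multiplies its children's probabilities, and an OR gate combines them as 1 - prod(1 - p). The tree structure and probabilities are invented for illustration.

```python
# Minimal sketch of quantitative fault tree evaluation under the standard
# independence assumption; the tree below is hypothetical.
from math import prod

def evaluate(node):
    """Return the failure probability of a node: a float (basic event)
    or a (gate, children) pair."""
    if isinstance(node, float):          # basic event with given probability
        return node
    gate, children = node
    probs = [evaluate(child) for child in children]
    if gate == "AND":                    # all children must fail
        return prod(probs)
    if gate == "OR":                     # at least one child fails
        return 1.0 - prod(1.0 - p for p in probs)
    raise ValueError(f"unknown gate {gate!r}")

# Top event: pump fails, OR both the main and backup valves fail.
tree = ("OR", [0.01, ("AND", [0.05, 0.05])])
print(f"top-event probability: {evaluate(tree):.6f}")   # 0.012475
```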

    Practical Use of High-level Petri Nets


    Large-Scale Integration of Heterogeneous Simulations


    SdrLift: A Domain-Specific Intermediate Hardware Synthesis Framework for Prototyping Software-Defined Radios

    Modern design of Software-Defined Radio (SDR) applications is based on Field Programmable Gate Arrays (FPGAs) due to their ability to be configured into solution architectures that are well suited to domain-specific problems while achieving the best trade-off between performance, power, area, and flexibility. FPGAs are well known for rich computational resources, which traditionally include logic, register, and routing resources. Technological advances have seen FPGAs incorporate more complex components, comprising sophisticated memory blocks, Digital Signal Processing (DSP) blocks, and high-speed interfaces to Gigabit Ethernet (GbE) and the Peripheral Component Interconnect Express (PCIe) bus. Gateware for programming FPGAs is described at a low level of design abstraction using a Register Transfer Language (RTL), typically either VHSIC-HDL (VHDL) or Verilog. In practice, these low-level description languages have a very steep learning curve, yield low productivity for hardware designers, and lack readily available open-source library support for fundamental designs, consequently limiting design to hardware experts. These limitations have led to the adoption of High-Level Synthesis (HLS) tools that raise design abstraction using syntax, semantics, and software development notations well known to most software developers. However, while HLS tools have made programming of FPGAs more accessible and can increase design productivity, they are still not widely adopted in the design community because low-level skills are still required to produce efficient designs. Additionally, the RTL code produced by HLS tools is often difficult to decipher, modify, and optimize, because functionality and micro-architecture are coupled together in a single High-Level Language (HLL). To alleviate these problems, Domain-Specific Languages (DSLs) have been introduced to capture algorithms at a high level of abstraction with more expressive power, providing domain-specific optimizations that factor in new transformations and the trade-off between resource utilization and system performance. The problem with existing DSLs is that they are designed around imperative languages whose instruction sequences do not match the hardware structure and intrinsics, leading to hardware designs whose system properties do not conform to the high-level specifications and constraints.

    The aim of this thesis is therefore to design and implement an intermediate-level framework, named SdrLift, for high-level rapid prototyping of FPGA-based SDR applications. The SdrLift input is an HLL developed using functional language constructs and design patterns that specify the structural behaviour of the application design. The SdrLift language serves two purposes: first, it can be used directly by a designer to develop SDR applications; second, it can serve as the Intermediate Representation (IR) generated by a higher-level language or DSL. The SdrLift compiler uses a dataflow graph (DFG) as an IR to structurally represent the accelerator micro-architecture, in which the components correspond to fine-grained and coarse-grained hardware blocks (HW Blocks) that are either auto-synthesized or integrated from existing reusable Intellectual Property (IP) core libraries. Another IR takes the form of a dataflow model, which is used for composition and global interconnection of the HW Blocks while making efficient interfacing decisions to satisfy speed and resource-usage objectives. Moreover, the dataflow model provides rules and properties that form a theoretical framework for formally analysing the characteristics of SDR applications (throughput, sample rate, latency, and buffer size, among other factors). Using both the DFG and the dataflow model in the SdrLift compiler provides two benefits: abstraction of the micro-architecture from the high-level algorithm specifications, and decoupling of the micro-architecture from the low-level RTL implementation. IR creation and model analysis are followed by VHDL code generation, which employs low-level optimizations to ensure optimal hardware design results; the code generation process performs analysis to ensure the resultant hardware system conforms to the high-level design specifications and constraints. SdrLift is evaluated through representative SDR case studies in which VHDL code for eight different SDR applications is generated. The experimental results show that SdrLift achieves the desired performance and flexibility while conserving the hardware resources utilized.
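    The dataflow-model analysis mentioned above can be illustrated with the classic synchronous-dataflow balance equations: for every edge, the producer's repetition count times its production rate must equal the consumer's repetition count times its consumption rate. The sketch below solves these equations for a small hypothetical three-stage SDR chain; the actor names and rates are invented, and SdrLift's actual model may differ.

```python
# Minimal sketch of synchronous-dataflow balance-equation analysis for a
# hypothetical ADC -> FIR -> sink chain; not SdrLift's actual compiler pass.
from fractions import Fraction
from math import lcm

# edges: (producer, tokens produced per firing, consumer, tokens consumed)
edges = [("adc", 1, "fir", 4), ("fir", 1, "sink", 2)]

rates = {"adc": Fraction(1)}          # fix one actor, propagate along edges
changed = True
while changed:
    changed = False
    for u, p, v, c in edges:
        if u in rates and v not in rates:
            rates[v] = rates[u] * p / c    # balance: r_u * p = r_v * c
            changed = True
        elif v in rates and u not in rates:
            rates[u] = rates[v] * c / p
            changed = True

# Scale to the smallest positive integer repetition vector.
scale = lcm(*(f.denominator for f in rates.values()))
repetitions = {actor: int(f * scale) for actor, f in rates.items()}
print(repetitions)   # {'adc': 8, 'fir': 2, 'sink': 1}
```

    The repetition vector fixes how often each block fires per schedule iteration, from which buffer sizes and relative sample rates between HW Blocks can be derived.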

    Intensional Cyberforensics

    This work focuses on the application of intensional logic to cyberforensic analysis, comparing its benefits and difficulties with the finite-state-automata approach. It extends the intensional programming paradigm to the modeling and implementation of a cyberforensics investigation process with backtracing of event reconstruction, in which evidence is modeled by multidimensional hierarchical contexts, and proofs or disproofs of claims are undertaken in an eductive manner of evaluation. This approach is a practical, context-aware improvement over the finite-state automata (FSA) approach seen in previous work. As a base implementation language model we use a new dialect of the Lucid programming language, called Forensic Lucid, and we focus on defining hierarchical contexts based on intensional logic for the distributed evaluation of cyberforensic expressions. We also augment the work with credibility factors surrounding digital evidence and witness accounts, which have not been previously modeled. The Forensic Lucid programming language used for this intensional cyberforensic analysis is formally presented through its syntax and operational semantics. In large part, the language is based on its predecessor and codecessor Lucid dialects, such as GIPL, Indexical Lucid, Lucx, Objective Lucid, and JOOIP, bound by the underlying intensional programming paradigm.

    Comment: 412 pages, 94 figures, 18 tables, 19 algorithms and listings; PhD thesis; v2 corrects some typos and refs; also available on Spectrum at http://spectrum.library.concordia.ca/977460
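    As a rough illustration of the intensional idea, evaluation in Lucid-style languages is indexed by a multidimensional context, and an expression can be re-evaluated at a shifted context. The toy sketch below indexes hypothetical evidence by (host, time) dimensions; it is an invented illustration of context-indexed evaluation, not Forensic Lucid itself.

```python
# Toy sketch of intensional (context-indexed) evaluation, invented for
# illustration; Forensic Lucid's actual semantics is far richer.
evidence = {  # observations indexed by a (host, time) context
    ("serverA", 0): "login",
    ("serverA", 1): "privilege_escalation",
    ("serverB", 0): "idle",
}

def query(ctx):
    """Value of the 'event' stream at the given context."""
    return evidence.get((ctx["host"], ctx["time"]))

def at(ctx, **shift):
    """Lucid-style '@': re-evaluate with some dimensions overridden."""
    return query({**ctx, **shift})

ctx = {"host": "serverA", "time": 0}
print(query(ctx))               # login
print(at(ctx, time=1))          # privilege_escalation
print(at(ctx, host="serverB"))  # idle
```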

    Modeling and checking Real-Time system designs

    Real-time systems are found in an increasing variety of application fields. Usually they are embedded systems controlling devices that may risk lives or damage property: they are safety-critical systems. Hard real-time requirements (late means wrong) make the development of such systems a formidable and daunting task. The need to predict the temporal behavior of critical real-time systems has encouraged the development of a useful collection of models, results, and tools for analyzing the schedulability of applications (e.g., [log]). However, there is no general analytical support for verifying other kinds of high-level timing requirements on complex software architectures. On the other hand, the verification of specifications and designs of real-time systems has been considered an interesting application field for automatic analysis techniques such as model checking. Unfortunately, there is a natural trade-off between the sophistication of the supported features and the practicality of formal analysis. To cope with the challenges of formally analyzing real-time system designs, we focus on three aspects that we believe are fundamental to obtaining practical tools: model generation, model reduction, and model checking. Firstly, we extend our ideas presented in [30] and develop an automatic approach to model and verify designs of real-time systems against complex timing requirements, based on scheduling theory and the theory of timed automata [7] (a well-known and well-studied formalism for modeling and verifying timed systems). That is, to enhance the practicality of formal analysis, we focus our analysis on designs adhering to fixed-priority scheduling. In essence, we exploit known scheduling theory to automatically derive simple and compositional formal models; to the best of our knowledge, this is the first proposal to integrate scheduling theory into the framework of automatic formal verification. To model such systems we present I/O Timed Components, a notion and discipline for building non-blocking live timed systems. I/O Timed Components, which are built on top of timed automata, provide other important methodological advantages such as influence detection and compositional reasoning. Secondly, we provide a battery of automatic and rather generic abstraction techniques that, given a requirement to be analyzed, reduce the model while preserving the behaviors relevant to checking it. Thus, we do not feed the verification tools with the whole model, as previous formal approaches did. To argue the correctness of these abstractions, we present a notion of Continuous Observational Bisimulation that is weaker than strong timed bisimulation yet preserves many well-known logics for timed systems, such as TCTL [3]. Finally, since we choose timed automata as our formal kernel, we adapt and apply their deeply studied analysis theory as well as their practical tools. Moreover, we also describe from scratch an algorithm to model-check duration properties, a feature not addressed by available tools; that algorithm extends the one presented in [28].

    Fil: Braberman, Víctor Adrián. Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales; Argentina.
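    The scheduling theory this approach builds on can be illustrated by the classic response-time analysis for fixed-priority scheduling: the worst-case response time of task i is the least fixed point of R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j. The sketch below iterates this recurrence for a hypothetical task set; the thesis's actual models are timed automata, not this calculation.

```python
# Minimal sketch of classic fixed-priority response-time analysis for a
# hypothetical task set; tasks are listed highest priority first.
from math import ceil

tasks = [  # (worst-case execution time C, period == deadline T)
    (1, 4),
    (2, 6),
    (3, 12),
]

for i, (C, T) in enumerate(tasks):
    R = C
    while True:
        # Interference from all higher-priority tasks during window R.
        interference = sum(ceil(R / Tj) * Cj for Cj, Tj in tasks[:i])
        R_next = C + interference
        if R_next == R or R_next > T:   # fixed point, or deadline missed
            R = R_next
            break
        R = R_next
    print(f"task {i}: response time {R}, {'OK' if R <= T else 'MISS'}")
```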