514,060 research outputs found

    An Adaptive Design Methodology for Reduction of Product Development Risk

    Full text link
    Embedded systems' interaction with their environment inherently complicates the understanding of requirements and their correct implementation. Moreover, product uncertainty is highest during the early stages of development. Design verification is an essential step in the development of any system, especially for embedded systems. This paper introduces a novel adaptive design methodology, which incorporates step-wise prototyping and verification. With each adaptive step, the product-realization level is enhanced while the level of product uncertainty decreases, thereby reducing the overall costs. The backbone of this framework is the development of a Domain Specific Operational (DOP) Model and the associated Verification Instrumentation for Test and Evaluation, developed on the basis of the DOP model. Together they generate functionally valid test sequences for carrying out prototype evaluation. The application of this method is sketched with the help of a case study, a 'Multimode Detection Subsystem'. Design methodologies can be compared by defining and computing a generic performance criterion such as Average design-cycle Risk. For the case study, computing the Average design-cycle Risk shows that the adaptive method reduces the product development risk for a small increase in the total design-cycle time.
    Comment: 21 pages, 9 figures
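
    The abstract does not define how Average design-cycle Risk is computed; as a hedged illustration only, one plausible reading is a cost-weighted average of the residual uncertainty at each design step. A minimal Python sketch under that assumption:

        # Illustrative only: the abstract does not define Average
        # design-cycle Risk. Assumed here: risk at step i is residual
        # uncertainty times the cost at stake, averaged over all steps.
        def average_design_cycle_risk(uncertainty, cost_at_stake):
            steps = list(zip(uncertainty, cost_at_stake))
            return sum(u * c for u, c in steps) / len(steps)

        # Step-wise prototyping drives uncertainty down at every step...
        adaptive = average_design_cycle_risk([0.8, 0.5, 0.3, 0.1],
                                             [1.0, 2.0, 3.0, 4.0])
        # ...whereas without intermediate verification it stays high.
        one_shot = average_design_cycle_risk([0.8, 0.8, 0.8, 0.8],
                                             [1.0, 2.0, 3.0, 4.0])
        print(adaptive, one_shot)  # 0.775 < 2.0 -> lower averaged risk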

    Implementing Multi-Periodic Critical Systems: from Design to Code Generation

    Full text link
    This article presents a complete scheme for the development of critical embedded systems with multiple real-time constraints. The system is programmed in a language that extends the synchronous approach with high-level real-time primitives. It makes it possible to assemble, in a modular and hierarchical manner, several locally mono-periodic synchronous systems into a globally multi-periodic synchronous system, and to specify flow latency constraints. A program is translated into a set of real-time tasks. The generated C code can be executed on a simple real-time platform with a dynamic-priority scheduler (EDF). The compilation process (each algorithm of the process, not the compiler itself) is formally proved correct, meaning that the generated code respects the real-time semantics of the original program (periods, deadlines, release dates and precedences) as well as its functional semantics (variable consumption).
    Comment: 15 pages, published in Workshop on Formal Methods for Aerospace (FMA'09), part of Formal Methods Week 2009
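
    As a rough illustration of the execution model only (not the paper's language or compiler), the sketch below simulates EDF scheduling of two mono-periodic tasks with different periods; the task set is invented for the example:

        # Hypothetical task set (name, period, wcet in ticks) -- the
        # paper's case studies are not given in the abstract. Each task
        # releases a job every `period` ticks with an implicit deadline
        # at the next release; EDF runs the earliest-deadline job first.
        import heapq

        tasks = [("fast", 10, 3), ("slow", 40, 10)]
        ready = []  # min-heap keyed on absolute deadline

        for t in range(120):
            for name, period, wcet in tasks:
                if t % period == 0:                  # job release
                    heapq.heappush(ready, (t + period, name, wcet))
            if ready:
                deadline, name, remaining = heapq.heappop(ready)
                if t >= deadline:
                    raise RuntimeError(f"{name} missed its deadline at t={t}")
                if remaining > 1:                    # one tick of work done
                    heapq.heappush(ready, (deadline, name, remaining - 1))
        print("no deadline misses; utilization 3/10 + 10/40 = 0.55")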

    Valid extensions of introspective systems: a foundation for reflective theorem provers

    Get PDF
    Introspective systems have been proved useful in several applications, especially in the area of automated reasoning. In this paper we propose to use structured algebraic specifications to describe the embedded account of introspective systems. Our main result is that extending such an introspective system in a valid manner can be reduced to the development of correct software. Since sound extension of automated reasoning systems can in turn be reduced to valid extension of introspective systems, our work can be seen as a foundation for extensible introspective reasoning systems, and in particular for reflective provers. We prove the correctness of our mechanism and report on first experiences with its realization in the KIV system (Karlsruhe Interactive Verifier).

    Construction of formal models and verifying property specifications through an example of railway interlocking systems

    Get PDF
    The use of formal modeling has seen increasing interest in the development of safety-critical, embedded, microcomputer-controlled railway interlocking systems, due to its ability to specify the behavior of such systems with mathematically precise rules. The research goal is to prepare a specification-verification environment which supports the developers of railway interlocking systems in creating a formally proven correct design, while at the same time hiding the underlying mathematical and computer-science background knowledge. The case study is presented with the aim of summarizing the process of formalizing a domain specification and of showing further application possibilities (e.g., verification methods).
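
    As a toy illustration of what a formally proven correct design means here (the paper's own models and tools are not shown in the abstract), one can exhaustively check a safety invariant of a miniature interlocking, e.g. that a signal never shows 'proceed' unless its route is locked and the track section is clear:

        # Toy interlocking model, invented for illustration: one signal,
        # one route lock, one track section. We enumerate the full state
        # space and check the safety invariant in every state.
        from itertools import product

        def controller(route_locked, track_clear):
            """Hypothetical control rule: proceed only when safe."""
            return "proceed" if (route_locked and track_clear) else "stop"

        for route_locked, track_clear in product([False, True], repeat=2):
            aspect = controller(route_locked, track_clear)
            # Invariant: 'proceed' implies locked route and clear track.
            assert aspect != "proceed" or (route_locked and track_clear)
        print("invariant holds on all 4 states")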

    Exploring and Characterizing Large Language Models For Embedded System Development and Debugging

    Full text link
    Large language models (LLMs) have shown remarkable abilities to generate code; however, their ability to develop software for embedded systems, which requires cross-domain knowledge of hardware and software, has not been studied. In this paper we systematically evaluate leading LLMs (GPT-3.5, GPT-4, PaLM 2) to assess their performance for embedded system development, study how human programmers interact with these tools, and develop an AI-based software engineering workflow for building embedded systems. We develop an end-to-end hardware-in-the-loop evaluation platform for verifying LLM-generated programs using sensor-actuator pairs. We compare all three models over N=450 experiments and find, surprisingly, that GPT-4 in particular shows an exceptional level of cross-domain understanding and reasoning, in some cases generating fully correct programs from a single prompt. In N=50 trials, GPT-4 produces functional I2C interfaces 66% of the time. GPT-4 also produces register-level drivers, code for LoRa communication, and context-specific power optimizations for an nRF52 program, resulting in an over 740x current reduction to 12.2 uA. We also characterize the models' limitations in order to develop a generalizable workflow for using LLMs in embedded system development. We evaluate the workflow with 15 users, including novice and expert programmers. We find that our workflow improves productivity for all users and increases the success rate for building a LoRa environmental sensor from 25% to 100%, including for users with zero hardware or C/C++ experience.
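
    The paper's platform is not described beyond the abstract; a hedged sketch of the hardware-in-the-loop idea, with entirely hypothetical port names and line protocol, is to command an actuator on the device under test and check the physical effect via a paired reference sensor:

        # Entirely hypothetical harness: ports and protocol are invented,
        # since the abstract does not describe the platform. Idea: the
        # device under test (DUT) runs the LLM-generated program and
        # drives an actuator; a paired reference sensor observes the
        # physical effect, and the harness compares the two.
        import serial  # pyserial

        with serial.Serial("/dev/ttyACM0", 115200, timeout=2) as dut, \
             serial.Serial("/dev/ttyACM1", 115200, timeout=2) as ref:
            dut.write(b"SET_BRIGHTNESS 128\n")      # command the DUT's LED
            ref.write(b"READ\n")                    # query the light sensor
            observed = float(ref.readline().decode())
            expected = 128 / 255                    # normalized target
            assert abs(observed - expected) < 0.1, "HIL check failed"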

    Algorithmic analysis and hardware implementation of a two-wire-interface communication analyser

    Get PDF
    This paper discusses the development of an algorithm for data analysis to monitor Two-Wire-Interface (TWI) operation, in order to improve the reliability of communication. The algorithm is designed for code efficiency with regard to hardware modelling. An algorithm for the protocol used in Standard-Mode, Fast-Mode, Fast-Mode Plus and High-Speed-Mode was developed. The proposed algorithm was derived from the bus protocol specification and implemented in hardware via a hardware description language. The correct operation of the algorithm was verified by applying the hardware system to a sample communication. The paper also describes the development process of embedded systems and provides information on aspects of hardware modelling, including a mathematical description of the TWI protocol.
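
    For orientation (this is standard TWI/I2C bus behaviour, not the paper's HDL implementation): a START condition is SDA falling while SCL is high, and a STOP condition is SDA rising while SCL is high. A minimal Python sketch of that part of a bus analyser:

        # Detect START/STOP conditions in sampled (scl, sda) line pairs.
        # Standard TWI/I2C rule: SDA may only change while SCL is low,
        # except for START (SDA falls, SCL high) and STOP (SDA rises,
        # SCL high). The sample trace below is invented for illustration.
        def find_conditions(samples):
            events = []
            for i in range(1, len(samples)):
                (scl_prev, sda_prev), (scl, sda) = samples[i - 1], samples[i]
                if scl_prev and scl:              # SCL held high
                    if sda_prev and not sda:
                        events.append((i, "START"))
                    elif not sda_prev and sda:
                        events.append((i, "STOP"))
            return events

        # idle (1,1) -> START -> clock pulses -> STOP -> idle
        trace = [(1, 1), (1, 0), (0, 0), (1, 0), (0, 0), (1, 0), (1, 1)]
        print(find_conditions(trace))  # [(1, 'START'), (6, 'STOP')]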

    Fully automatic worst-case execution time analysis for MATLAB/Simulink models

    Get PDF
    In today's technical world (e.g., in the automotive industry), more and more purely mechanical components are being replaced by electro-mechanical ones, so the size and complexity of embedded systems steadily increase. To cope with this development, comfortable software engineering tools are being developed that allow a more functionality-oriented development of applications. The paper demonstrates how worst-case execution time (WCET) analysis is integrated into such a high-level application design and simulation tool, MATLAB/Simulink, thus providing a higher-level interface to WCET analysis. The MATLAB/Simulink extensions compute and display worst-case timing data for all blocks of a MATLAB/Simulink simulation, which gives the developer of an application valuable feedback about the correct timing of the application being developed. The solution facilitates fully automated WCET analysis; i.e., in contrast to existing approaches, the programmer does not have to provide path information.
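
    As a schematic illustration of block-level WCET composition (the tool's actual analysis is far more involved and is not detailed in the abstract): if every block has a known per-block WCET bound, a bound for the diagram follows from the longest path through its data-flow DAG. The blocks and numbers below are invented:

        # Invented example: per-block WCETs (cycles) and data-flow edges.
        # The WCET bound of the diagram is the longest path through the
        # DAG -- a simplification of what a real WCET tool computes.
        from functools import lru_cache

        block_wcet = {"in": 5, "filter": 40, "ctrl": 25, "out": 10}
        edges = {"in": ["filter", "ctrl"], "filter": ["out"],
                 "ctrl": ["out"], "out": []}

        @lru_cache(maxsize=None)
        def path_wcet(block):
            succ = edges[block]
            return block_wcet[block] + (max(map(path_wcet, succ)) if succ else 0)

        print(path_wcet("in"))  # 5 + max(40 + 10, 25 + 10) = 55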

    Verification and Validation of Sensor Networks

    Get PDF
    Sensor networks play an increasingly important role in critical systems infrastructure and should be correct, reliable and robust. In order to achieve these performance goals, it is necessary to verify the correctness of system software and to validate the more broadly defined world and system models. This includes:
        * Physical phenomena (PDE models, statistical models, etc.)
        * Signals (equations of state, physical properties, etc.)
        * Sensors (physics models, noise models, etc.)
        * Hardware (failure models, power consumption models, etc.)
        * RF (antenna models, bandwidth, delay, propagation, etc.)
        * Embedded code (correctness, complexity, context)
        * Distributed algorithms (correctness, concurrency models, etc.)
        * Overall sensor network and environment models (percolation theory, wave theory, information theory, simulation, etc.)
    We outline some of the V&V issues involved in the various aspects of sensor networks, as well as possible approaches to their development and application, both in simulation and in operational deployed systems.
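
    As one concrete flavour of validating a model from the list above (the paper itself only outlines the issues), a sensor noise model can be checked against recorded residuals; the numbers and the Gaussian assumption below are invented for illustration:

        # Invented validation check: does a zero-mean Gaussian noise model
        # with the datasheet's sigma explain recorded sensor residuals?
        # Crude test: ~99.7% of residuals should fall within 3 sigma.
        import random

        SIGMA_SPEC = 0.02                  # hypothetical datasheet value
        random.seed(0)
        # Stand-in for (measured - ground truth) residuals from the field:
        residuals = [random.gauss(0.0, 0.02) for _ in range(10_000)]

        outliers = sum(abs(r) > 3 * SIGMA_SPEC for r in residuals)
        rate = outliers / len(residuals)
        print(f"outlier rate {rate:.4%} (Gaussian model predicts ~0.27%)")
        assert rate < 0.01, "noise model does not fit the recorded data"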
    • …