14 research outputs found

    DELOOP: Automatic Flow Facts Computation using Dynamic Symbolic Execution

    Constructing a complete control-flow graph (CFG) and computing upper bounds on the loops of a computing system are essential to safely estimate the worst-case execution time (WCET) of real-time tasks. WCETs are required for verifying the timing requirements of a real-time computing system. Therefore, we propose an analysis based on dynamic symbolic execution (DSE) that detects loops, computes upper bounds on their iteration counts, and resolves indirect jumps. The proposed analysis constructs and initializes memory models, then uses a satisfiability modulo theories (SMT) solver to symbolically execute the instructions. The analysis showed higher precision in bounding loops of the Mälardalen benchmarks compared to SWEET and oRange. We integrated our analysis with the OTAWA toolbox for performing a WCET analysis. Then, we used the proposed analysis to estimate the WCET of functions in a use case inspired by an aerospace project.
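
    As a flavor of how an SMT solver can bound a loop, the following minimal sketch uses Z3's Python bindings (pip install z3-solver) to maximize the trip count of a simple counted loop over all symbolic inputs. It illustrates the general technique only, not DELOOP's binary-level memory models or its actual implementation.

        from z3 import BitVec, ULT, Optimize, sat

        def loop_trip_bound(start: int, bits: int = 32):
            # Trip count of `for (i = start; i < n; i++)` where n is a
            # symbolic unsigned 32-bit input: trips = n - start when entered.
            n = BitVec("n", bits)
            trips = BitVec("trips", bits)
            opt = Optimize()
            opt.add(ULT(start, n))        # loop body executes at least once
            opt.add(trips == n - start)
            handle = opt.maximize(trips)  # worst case over all inputs
            assert opt.check() == sat
            return opt.upper(handle)

        print(loop_trip_bound(0))  # 4294967295: the bound when n is unconstrained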

    Temporal-Based Intrusion Detection for IoV

    The Internet of Vehicles (IoV) is an extension of Vehicle-to-Vehicle (V2V) communication that can improve vehicles’ fully autonomous driving capabilities. However, these communications are vulnerable to many attacks. Therefore, it is critical to provide run-time mechanisms to detect malware and stop attackers before they manage to gain a foothold in the system. Anomaly-based detection techniques are convenient and capable of detecting off-nominal component behavior caused by zero-day attacks. One critical aspect of using anomaly-based techniques is ensuring a correct definition of the observed component’s normal behavior. In this paper, we propose using a task’s temporal specification as the baseline for its normal behavior and identify temporal thresholds that give the system the ability to predict malicious tasks. Applying our solution to one use case, we obtained temporal thresholds 20–40% lower than those usually used to alert the system to security violations. Using our boundaries ensures the early detection of off-nominal temporal behavior and provides the system with sufficient time to initiate recovery actions.
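
    A minimal sketch of the underlying idea, with invented numbers: derive a detection threshold from a task's observed temporal behavior rather than from a generous safety margin, and flag executions that exceed it. The tightening factor below is a placeholder, not the paper's value.

        def temporal_threshold(samples, factor=1.1):
            # Threshold = worst observed execution time, tightened by `factor`
            # (i.e., well below a traditional WCET-based alarm limit).
            return max(samples) * factor

        def is_anomalous(exec_time, threshold):
            # An execution time above the threshold signals off-nominal
            # behavior early enough to trigger recovery actions.
            return exec_time > threshold

        history = [2.1, 2.3, 2.2, 2.4]        # measured execution times (ms)
        limit = temporal_threshold(history)    # ~2.64 ms
        print(is_anomalous(3.9, limit))        # True -> raise an alert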

    Tasking Modeling Language: A toolset for model-based engineering of data-driven software systems

    The interdisciplinary process of space systems engineering poses challenges for the development of on-board software. The software integrates components from different domains and organizations and has to fulfill requirements such as robustness, reliability, and real-time capability. Model-based methods not only help to give a comprehensive overview, but also improve productivity by allowing artifacts to be generated from the model automatically. However, general-purpose modeling languages such as the Systems Modeling Language (SysML) are not always adequate because of the ambiguity resulting from their generic nature. Furthermore, sensor data handling, analysis, and processing in on-board software require a focus on the system’s data flow and event mechanisms. To achieve this, we developed the Tasking Modeling Language (TML), which allows system engineers to model complex event-driven software systems in a simplified way and to generate software from the model. Type and consistency checks on the formal level help to reduce errors early in the engineering process. TML is focused on data-driven systems, and its models are designed to be extended and customized to specific mission requirements. This paper describes the architecture of TML in detail and explains the base technology, the methodology, and the developed domain-specific languages (DSLs). It evaluates the design approach via a case study and presents advantages as well as challenges faced.
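
    To make the data-flow and event mechanism concrete, here is a toy sketch of the kind of task/channel structure such a model describes. All names are invented for illustration; the real TML DSLs generate far richer artifacts.

        class Channel:
            # Data arrival on a channel triggers the tasks subscribed to it.
            def __init__(self):
                self.subscribers = []
            def push(self, item):
                for task in self.subscribers:
                    task.execute(item)

        class Task:
            # A task consumes an input item and may forward its result.
            def __init__(self, name, action, out=None):
                self.name, self.action, self.out = name, action, out
            def execute(self, item):
                result = self.action(item)
                if self.out is not None:
                    self.out.push(result)

        # Wire up a tiny data flow: sensor -> filter -> log
        log = Channel()
        log.subscribers.append(Task("log", lambda x: print("logged:", x)))
        raw = Channel()
        raw.subscribers.append(Task("filter", lambda x: x * 0.5, out=log))
        raw.push(42.0)   # an incoming sample propagates through the flow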

    Enabling Rapid Development of On-board Applications: Securing a Spacecraft Middleware by Separation and Isolation

    Today’s space missions require increasingly powerful hardware to achieve their mission objectives, such as high-resolution Earth observation or autonomous decision-making in deep space. At the same time, system availability and reliability requirements remain high due to the harsh environment in which the system operates. This leads to an engineering trade-off between reliable and high-performance hardware. To overcome this trade-off, the German Aerospace Center (DLR) is developing a computer architecture that combines reliable computing hardware with high-performance commercial-off-the-shelf (COTS) hardware. This architecture is called Scalable On-Board Computing for Space Avionics (ScOSA) and is currently being prepared for demonstration on a CubeSat, also known as the ScOSA Flight Experiment [1].

    The ScOSA software consists of a middleware that executes distributed applications, performs critical on-board software functionalities, and carries out fault detection and recovery tasks. The software is based on the Distributed Tasking Framework, a derivative of the open-source, data-flow-oriented Tasking Framework [2]; developers therefore organize their applications as a set of tasks and channels. The middleware handles the task distribution among the nodes [3]. ScOSA detects failing compute nodes and reallocates tasks to maintain the availability of the entire system. The middleware can also change the set of allocated tasks to support different mission phases. Thus, ScOSA allows software to be reloaded and executed after startup, so software can be tested quickly and safely on the system. Combined with an upload strategy, ScOSA can be used for in-situ testing of on-board applications.

    Since ScOSA will also perform mission-critical tasks, such as an Attitude and Orbit Control System or a Command and Data Handling System, opening the platform leads to the problem of mixed criticality [4]. This problem is already present in the ScOSA Flight Experiment, since the demonstration will include typical satellite applications developed by different teams at DLR. Thus, not only do the teams apply different quality standards to their software, but the applications themselves also have different Technology Readiness Levels (TRLs). The challenge of mixed criticality is often met by completely separating and isolating the different software components, e.g., by using a hypervisor or a separation kernel [5], [6]. Due to the distributed nature of the ScOSA system and its execution platform, separation using hypervisor techniques is not easily achievable. For this reason, we discuss in this work how we separate the critical services and communication components into their own Linux process to guarantee that best-effort applications cannot impair the critical components of the middleware. We also discuss how further mechanisms of the Linux kernel, namely cgroups and kernel namespaces, can be used to strengthen the separation. However, complete isolation between software components is undesirable because of the necessary interaction between them. Given that applications can be spread over several nodes, application tasks need to communicate, and this is only possible if the critical software components relay messages from other nodes to the separated application processes.

    For this reason, the middleware provides a relay service that handles intra-node inter-process communication. Using a relaying mechanism simplifies development and does not require a complete rewrite of the existing middleware network stack. The proposed techniques were applied in a case study to integrate applications of unknown quality standards into the ScOSA software system in an agile way. We discuss how the presented measures ensure that the resulting software is sufficiently tested and meets the required quality level. Finally, we discuss possible improvements to our existing separation and isolation solution for ScOSA and outline how these techniques can be used on other platforms such as the RTEMS operating system.
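
    A hedged sketch of the relay idea: the critical middleware process forwards node-to-node messages to a separated application process over a local Unix-domain socket, so best-effort applications never touch the middleware's network stack directly. The socket path and framing are invented for illustration and are not ScOSA's actual interface.

        import os
        import socket

        APP_SOCKET = "/tmp/app.sock"   # hypothetical per-application endpoint

        def handle(msg: bytes):
            print("app received:", msg)  # application-defined task logic

        def relay(network_msg: bytes):
            # Called inside the critical middleware process when a message
            # addressed to the separated application arrives from another node.
            with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
                s.sendto(network_msg, APP_SOCKET)

        def app_receive_loop():
            # Runs inside the isolated, best-effort application process.
            if os.path.exists(APP_SOCKET):
                os.unlink(APP_SOCKET)
            with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
                s.bind(APP_SOCKET)
                while True:
                    msg, _ = s.recvfrom(4096)   # relayed by the middleware
                    handle(msg)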

    ScOSA system software: the reliable and scalable middleware for a heterogeneous and distributed on-board computer architecture

    Designing on-board computers (OBCs) for future space missions is determined by the trade-off between reliability and performance. Space applications with higher computational demands are not supported by currently available, state-of-the-art, space-qualified computing hardware, since their requirements exceed the capabilities of these components. Such space applications include Earth observation with high-resolution cameras, on-orbit real-time servicing, and autonomous spacecraft and rover missions on distant celestial bodies. An alternative to state-of-the-art space-qualified computing hardware is the use of commercial-off-the-shelf (COTS) components for the OBC. Not only are these components cheap and widely available, but they also achieve high performance. Unfortunately, they are also significantly more vulnerable to errors induced by radiation than space-qualified components. The ScOSA (Scalable On-board Computing for Space Avionics) Flight Experiment project aims to develop an OBC architecture that avoids this trade-off by combining space-qualified, radiation-hardened components (the reliable computing nodes, RCNs) with COTS components (the high-performance nodes, HPNs) into a single distributed system. To abstract this heterogeneous architecture for application developers, we are developing a middleware for the aforementioned OBC architecture. Besides providing a monolithic abstraction of the distributed system, the middleware shall also enhance the architecture by providing additional reliability and fault tolerance. In this paper, we present the individual components comprising the middleware, alongside the features the middleware offers. Since the ScOSA Flight Experiment project is a successor of the OBC-NG and ScOSA projects, its middleware is also a further development of the existing middleware. Therefore, we present and discuss our contributions and plans for enhancing the middleware in the course of the current project. Finally, we present first results on the scalability of the middleware, obtained by conducting software-in-the-loop experiments with scenarios of different sizes.
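
    A simplified illustration of the fault-tolerance mechanism described above: nodes that miss heartbeats are declared failed and their tasks are reallocated to the remaining nodes. Timing values and data structures are invented, not taken from the ScOSA middleware.

        import time

        HEARTBEAT_TIMEOUT = 0.5   # seconds; placeholder value

        def detect_failures(last_seen: dict) -> set:
            # A node that has not sent a heartbeat within the timeout is failed.
            now = time.monotonic()
            return {node for node, t in last_seen.items()
                    if now - t > HEARTBEAT_TIMEOUT}

        def reallocate(allocation: dict, failed: set, healthy: list) -> dict:
            # Move every task off a failed node onto the least-loaded healthy node.
            for task, node in allocation.items():
                if node in failed:
                    load = lambda n: sum(1 for v in allocation.values() if v == n)
                    allocation[task] = min(healthy, key=load)
            return allocation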

    RDMA-Based Deterministic Communication Architecture for Autonomous Driving

    Autonomous driving is a big challenge for next-generation vehicles and requires multiple computationally intensive deep neural networks (DNNs) to be implemented on distributed automotive platforms. The distributed software enabling autonomous functionalities has strict timing requirements, e.g., low and deterministic end-to-end latency. Such timings rely on the communication technologies used in the automotive platform as much as on the computation performance of CPUs, GPUs, TPUs, and FPGAs. Hence, we advocate the use of Remote Direct Memory Access (RDMA) technology, typically used in data centers, in automotive platforms. As shown by our experiments with real hardware, Soft-RoCE (a software implementation of RDMA) offers low-latency communication because of minimal CPU involvement and reduced memory copies. At the same time, we show that the native implementation of RDMA does not support determinism, i.e., there is high variation in communication delays in the presence of interfering data packets. To mitigate this issue, we propose a multi-layer communication stack comprising a deterministic scheduler on top of the Soft-RoCE layer. Further, we have developed a C++ library that offers easy-to-use communication interfaces for distributed applications while implementing the proposed architecture. Experiments show that our library (i) reduces the end-to-end latency of distributed object detection by nearly 9% while having an implementation overhead of less than 1.5%, and (ii) minimizes the effects of other data traffic on the delay of high-priority communication.
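
    A minimal sketch of the multi-layer idea, assuming a priority queue in front of the transport so interfering traffic cannot delay high-priority messages. The transport_send callback stands in for the Soft-RoCE/RDMA layer; it is not the paper's C++ library or a real RDMA API.

        import heapq
        import itertools

        _queue, _tie = [], itertools.count()

        def submit(priority: int, payload: bytes):
            # Lower number = higher priority; the counter keeps FIFO order
            # among messages of the same priority.
            heapq.heappush(_queue, (priority, next(_tie), payload))

        def drain(transport_send):
            # High-priority messages are always sent first, bounding their
            # queueing delay regardless of interfering best-effort traffic.
            while _queue:
                _, _, payload = heapq.heappop(_queue)
                transport_send(payload)

        submit(1, b"object-detection result")   # high priority
        submit(9, b"bulk log transfer")         # best effort
        drain(lambda p: print("sent:", p))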

    Parallelizing On-Board Data Analysis Applications for a Distributed Processing Architecture

    Satellite-based applications produce ever-increasing quantities of data, challenging the capabilities of existing telemetry and on-board processing systems, especially when results must be transmitted quickly to ground. The Scalable On-Board Computing for Space Avionics (ScOSA) platform contributes the processing capability necessary to perform such computationally intensive analysis on-board. This platform offers a high-performance on-board computer by combining multiple commercial off-the-shelf processors and space-grade processors into a distributed computer. Middleware ensures reliability by detecting and mitigating faults, while allowing applications to effectively use multiple, distributed processors. The current work demonstrates the use and advantages of the data-flow programming paradigm supported by the ScOSA platform to provide high-throughput on-board analysis. This enables rapid analysis even for applications requiring high frame rates, high resolutions, multi-spectral imaging, or in-depth processing. The On-Board Data Analysis and Real-Time Information System (ODARIS) is used to demonstrate this method. ODARIS is a system for providing low-latency access to satellite-based observations, even when large quantities of sensor data are involved. By performing on-board processing of the data from the satellite-borne instruments, the amount of data which must be sent to ground is drastically reduced. This allows the use of low-latency telecommunication-satellite constellations for communicating with ground to achieve query-response times of only a few minutes. The current application combines an Earth-observation camera with AI-based image processing to provide real-time object detection. In the data-flow driven implementation of ODARIS on the ScOSA platform, images are captured by a camera and sent to any of several processors for the computationally intensive image processing. This allows multiple images to be processed in parallel by as many processors as are available, while avoiding the need to divide each image across several processors. The results are transferred to an on-board database from which queries can be served asynchronously. The system is tested in configurations with one, two, and three processors, and the resulting image throughput is presented. Testing is performed on a ground-based prototype system using pre-recorded images. This paper presents the necessary details of the underlying ScOSA and ODARIS systems as well as the implementation of the object-detection algorithm using a parallelized, data-flow model. The results of executing the system using a variable number of processors are presented to demonstrate the improvement in image throughput and its potential application to other computationally intensive tasks.
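
    A minimal sketch of the parallelization pattern described above: whole images are dispatched to whichever worker is free, so frames are processed in parallel without splitting a single image across processors. detect_objects is a stand-in for the AI-based inference, not ODARIS code.

        from multiprocessing import Pool

        def detect_objects(image):
            # Stand-in for the computationally intensive inference step.
            return {"image": image, "objects": ["ship"]}

        if __name__ == "__main__":
            frames = [f"frame_{i}.png" for i in range(12)]   # pre-recorded images
            # Each whole frame goes to whichever of the 3 workers ("nodes") is free.
            with Pool(processes=3) as pool:
                for result in pool.imap_unordered(detect_objects, frames):
                    print(result)   # stand-in for the on-board result database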

    Deadline Miss Models for Temporarily Overloaded Systems

    A wide range of embedded systems falls into the category of safety-critical systems. Such systems impose different levels of safety requirements, depending on how critical the functions assigned to the system are and on how humans interact with the system. Safety requirements involve timing constraints, the violation of which may lead to a system failure. Timing constraints are graded from soft to hard real-time constraints. While satisfying soft real-time constraints requires only best-effort guarantees, hard real-time constraints are best treated with worst-case analysis methods that verify all timing constraints. Weakly-hard real-time systems place extra demands on the timing verification, as they tolerate a bounded number of deadline misses in certain distributions. Applying worst-case analysis methods, in which a task is schedulable only when it can meet its deadline in the worst case, to weakly-hard real-time systems calls the expressiveness of the computed guarantees into question. Considering tolerable deadline misses raises the need for weakly-hard schedulability analyses that verify weakly-hard real-time constraints and provide more expressive guarantees.

    This thesis addresses the schedulability analysis problem of weakly-hard real-time systems. It presents an efficient analysis to compute weakly-hard real-time guarantees in the form of a deadline miss model for various system models. The first contribution is a deadline miss model for a temporarily overloaded uniprocessor system with independent tasks under the Fixed-Priority Preemptive and Non-Preemptive scheduling policies (FPP and FPNP), using Typical Worst-Case Analysis. In our application context, the transient overload is due to sporadic tasks, for example interrupt service routines. We adapt the proposed analysis to compute deadline miss models for independent tasks under the Earliest Deadline First (EDF) and Weighted Round-Robin (WRR) scheduling policies. In the second contribution, we extend the analysis to compute deadline miss models for task chains. The extension is motivated by an industrial case study. The third contribution of this thesis targets system extensibility, i.e., budgeting under-specified tasks in a weakly-hard real-time system. Adding recovery or reconfiguration tasks such that the system still meets its weakly-hard timing constraints is of interest for an industrial case study (satellite on-board software) considered in this thesis.

    We show formally, and in experiments with synthetic as well as industrial test cases, that the analysis presented in this thesis can consider various scheduling policies (FPP, FPNP, EDF, WRR) and can be extended to cover both independent and dependent tasks. The thesis provides two practical solutions for the two industrial case studies, which stem exclusively from a collaboration project between Thales Research & Technology and iTUBS, a technology transfer company associated with Technische Universität Braunschweig. The results are thus of real practical value for the design process of weakly-hard real-time systems.
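
    The thesis computes deadline miss models analytically; the sketch below merely illustrates the weakly-hard notion such models verify, commonly written as an (m, k) constraint: at most m deadline misses in any window of k consecutive jobs.

        def satisfies_m_k(misses, m, k):
            # `misses` is a per-job trace: True means the deadline was missed.
            return all(sum(misses[i:i + k]) <= m
                       for i in range(len(misses) - k + 1))

        trace = [False, True, False, False, True, False, False, False]
        print(satisfies_m_k(trace, m=1, k=4))   # False: one window has 2 misses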
