
    Formal Verification of Probabilistic SystemC Models with Statistical Model Checking

    Transaction-level modeling with SystemC has been very successful in describing the behavior of embedded systems by providing high-level executable models, many of which have inherent probabilistic behaviors, e.g., random data and unreliable components. It is thus crucial to have both quantitative and qualitative analysis of the probabilities of system properties. Such analysis can be conducted by constructing a formal model of the system under verification and using Probabilistic Model Checking (PMC). However, this method is infeasible for large systems due to the state space explosion. In this article, we demonstrate the successful use of Statistical Model Checking (SMC) to carry out such analysis directly from large SystemC models while allowing designers to express a wide range of useful properties. The first contribution of this work is a framework to verify properties expressed in Bounded Linear Temporal Logic (BLTL) for SystemC models with both timed and probabilistic characteristics. Second, the framework allows users to expose a rich set of user-code primitives as atomic propositions in BLTL. Moreover, users can define their own fine-grained time resolution rather than the boundary of clock cycles in the SystemC simulation. The third contribution is an implementation of a statistical model checker. It contains an automatic monitor generator for producing execution traces of the model-under-verification (MUV), a mechanism for automatically instrumenting the MUV, and the interaction with statistical model checking algorithms.
    Comment: Journal of Software: Evolution and Process, Wiley, 2017. arXiv admin note: substantial text overlap with arXiv:1507.0818
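
    As a rough illustration of the sampling loop such a framework automates, the C++ sketch below draws bounded execution traces from a toy model and monitors a BLTL-style bounded reachability property (does an error occur within the time horizon?) on each trace. The fault rate, horizon, and run count are made-up parameters; the real framework instruments actual SystemC models rather than a hand-written simulator.

```cpp
// Minimal sketch of the statistical model checking loop: sample many
// bounded traces, monitor each against a BLTL property, and report the
// fraction of satisfying runs as a probability estimate.
#include <cstddef>
#include <iostream>
#include <random>
#include <vector>

struct State { bool error; };  // one observed atomic proposition

// Toy stand-in for an instrumented model: at each step an error
// occurs with an assumed probability of 1%.
std::vector<State> simulate(std::mt19937 &rng, std::size_t horizon) {
    std::bernoulli_distribution fault(0.01);
    std::vector<State> trace;
    for (std::size_t t = 0; t < horizon; ++t)
        trace.push_back({fault(rng)});
    return trace;
}

// Monitor for the bounded property F<=horizon "error".
bool holds(const std::vector<State> &trace) {
    for (const State &s : trace)
        if (s.error) return true;
    return false;
}

int main() {
    std::mt19937 rng(42);
    const std::size_t runs = 10000, horizon = 100;
    std::size_t satisfied = 0;
    for (std::size_t i = 0; i < runs; ++i)
        satisfied += holds(simulate(rng, horizon)) ? 1 : 0;
    std::cout << "estimated probability: "
              << static_cast<double>(satisfied) / runs << "\n";
}
```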

    Dependability Analysis of Control Systems using SystemC and Statistical Model Checking

    Stochastic Petri nets are commonly used for modeling distributed systems in order to study their performance and dependability. This paper proposes a realization of stochastic Petri nets in SystemC for modeling large embedded control systems. Statistical model checking is then used to analyze the dependability of the constructed model. Our verification framework allows users to express a wide range of useful properties to be verified, which is illustrated through a case study.
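
    To make the realization concrete, here is a minimal C++ sketch of one firing step of a stochastic Petri net under race semantics: every enabled transition draws an exponentially distributed delay and the fastest one fires. The net structure and rates are invented, and the paper's actual encoding uses SystemC processes rather than this stand-alone loop.

```cpp
// One race-semantics step of a stochastic Petri net: sample a delay
// per enabled transition, fire the minimum, and update the marking.
#include <iostream>
#include <limits>
#include <random>
#include <vector>

struct Transition {
    std::vector<int> in, out;  // indices of input/output places
    double rate;               // exponential firing rate (assumed)
};

bool enabled(const Transition &t, const std::vector<int> &marking) {
    for (int p : t.in)
        if (marking[p] == 0) return false;
    return true;
}

// Returns the elapsed time of the step, or -1 if the net is dead.
double step(std::vector<int> &marking, const std::vector<Transition> &ts,
            std::mt19937 &rng) {
    double best = std::numeric_limits<double>::infinity();
    const Transition *winner = nullptr;
    for (const Transition &t : ts) {
        if (!enabled(t, marking)) continue;
        double d = std::exponential_distribution<double>(t.rate)(rng);
        if (d < best) { best = d; winner = &t; }
    }
    if (!winner) return -1.0;
    for (int p : winner->in)  --marking[p];
    for (int p : winner->out) ++marking[p];
    return best;
}

int main() {
    // Two places, one transition moving a token from place 0 to 1.
    std::vector<int> marking = {1, 0};
    std::vector<Transition> ts = {{{0}, {1}, 2.0}};
    std::mt19937 rng(7);
    std::cout << "elapsed: " << step(marking, ts, rng)
              << ", marking: " << marking[0] << "," << marking[1] << "\n";
}
```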

    Methodology for the Fault Analysis and Evaluation of True Random Number Generators

    15 pages (double columns)
    True Random Number Generators have many uses; in particular, they play a key role in security applications and cryptographic algorithms. Our interest lies in the quality of their generated random numbers. More specifically, for such uses, a slight deviation of the numbers from a "perfect" behavior can have disastrous consequences. It is then necessary to devise schemes for testing these generators in order to detect non-random properties of their numbers. Moreover, one should consider them from an attacker's point of view and use any means available to try to perturb their correct functioning. In this article we describe such experiments along with several standard statistical tools for testing the generators. We also present experimental results obtained through the study of a generator embedded in a processor in order to illustrate our methodology. We show that perturbing it leads to the appearance of dangerous deviations in the distribution of its numbers.
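
    One of the standard statistical tools for this kind of testing is the NIST SP 800-22 frequency (monobit) test; the sketch below applies it to a deliberately biased bit stream to show how bias surfaces as a near-zero p-value. The bias level and sample length are illustrative, not taken from the paper's experiments.

```cpp
// NIST SP 800-22 frequency (monobit) test: map bits to +-1, sum, and
// convert the normalized statistic into a p-value via erfc. A p-value
// below ~0.01 flags a suspicious deviation from uniformity.
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

double monobit_p_value(const std::vector<int> &bits) {
    long sum = 0;
    for (int b : bits) sum += (b ? 1 : -1);
    double s_obs = std::fabs(static_cast<double>(sum)) /
                   std::sqrt(static_cast<double>(bits.size()));
    return std::erfc(s_obs / std::sqrt(2.0));
}

int main() {
    std::mt19937 rng(1);
    std::bernoulli_distribution biased(0.6);  // deliberately biased source
    std::vector<int> bits(20000);
    for (int &b : bits) b = biased(rng) ? 1 : 0;
    std::cout << "p-value: " << monobit_p_value(bits) << "\n";  // ~0: fails
}
```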

    TrustShadow: Secure Execution of Unmodified Applications with ARM TrustZone

    The rapid evolution of Internet-of-Things (IoT) technologies has led to an emerging need to make these devices smarter. A variety of applications now run simultaneously on an ARM-based processor. For example, devices on the edge of the Internet are provided with higher horsepower so they can be entrusted with storing, processing, and analyzing data collected from IoT devices. This significantly improves efficiency and reduces the amount of data that needs to be transported to the cloud for data processing, analysis, and storage. However, commodity OSes are prone to compromise, and once they are exploited, attackers can access the data on these devices. Since the data stored and processed on these devices can be sensitive, this is particularly disconcerting if left unaddressed. In this paper, we propose a new system, TrustShadow, that shields legacy applications from untrusted OSes. TrustShadow takes advantage of ARM TrustZone technology and partitions resources into the secure and normal worlds. In the secure world, TrustShadow constructs a trusted execution environment for security-critical applications. This trusted environment is maintained by a lightweight runtime system that coordinates the communication between applications and the ordinary OS running in the normal world. The runtime system does not provide system services itself; rather, it forwards requests for system services to the ordinary OS and verifies the correctness of the responses. To demonstrate the efficiency of this design, we prototyped TrustShadow on a real chip board with ARM TrustZone support and evaluated its performance using both microbenchmarks and real-world applications. We showed that TrustShadow introduces only negligible overhead to real-world applications.
    Comment: MobiSys 201
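
    The forward-and-verify design can be pictured with the toy C++ model below: a trusted runtime forwards a memory-mapping request to an untrusted handler and checks invariants on the reply before exposing it to the application. All names and checks here are hypothetical sketches of the pattern; a real TrustZone implementation crosses worlds via SMC calls rather than ordinary function calls.

```cpp
// Toy model of forward-and-verify: the trusted runtime implements no
// service itself; it forwards the request and validates the reply.
#include <cstdint>
#include <iostream>
#include <stdexcept>

struct PageReply { std::uint64_t addr; std::uint64_t len; };

// Stand-in for the ordinary OS in the normal world (untrusted).
PageReply untrusted_mmap(std::uint64_t len) {
    return {0x7f0000000000ull, len};  // could be malicious in reality
}

// Trusted runtime: forward the request, then verify before use.
PageReply runtime_mmap(std::uint64_t len) {
    PageReply r = untrusted_mmap(len);
    // Illustrative invariants: page alignment and the requested length;
    // a real system would also check against secure memory maps.
    if (r.addr % 4096 != 0 || r.len != len)
        throw std::runtime_error("untrusted OS returned a bad mapping");
    return r;
}

int main() {
    PageReply r = runtime_mmap(8192);
    std::cout << "mapped at 0x" << std::hex << r.addr << "\n";
}
```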

    Transparent code authentication at the processor level

    The authors present a lightweight authentication mechanism that verifies the authenticity of code and thereby addresses the virus and malicious-code problems at the hardware level, eliminating the need for trusted extensions in the operating system. The proposed technique tightly integrates the authentication mechanism into the processor core. The authentication latency is hidden behind the memory access latency, thereby allowing seamless on-the-fly authentication of instructions. In addition, the proposed authentication method supports seamless encryption of code (and static data). Consequently, while providing software users with assurance of the authenticity of programs executing on their hardware, the proposed technique also protects the software manufacturers’ intellectual property through encryption. The performance analysis shows that, under mild assumptions, the presented technique introduces negligible overhead even for moderate cache sizes.
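
    The idea of hiding authentication behind the fetch path can be sketched as a per-cache-line tag check, as in the toy C++ example below: a tag is computed over each line at install time and recomputed when the line is filled, before it may execute. The mixing function stands in for a real MAC and the line layout is invented, so this only illustrates the install-time/fetch-time split, not the paper's construction.

```cpp
// Toy per-cache-line code authentication: compare a stored tag with
// one recomputed on cache fill. The "MAC" here is a keyed mixing
// function for illustration only, not a secure primitive.
#include <array>
#include <cstddef>
#include <cstdint>
#include <iostream>

constexpr std::size_t kLineWords = 8;
using Line = std::array<std::uint32_t, kLineWords>;

std::uint64_t toy_mac(const Line &line, std::uint64_t key) {
    std::uint64_t h = key;
    for (std::uint32_t w : line) {
        h ^= w;
        h *= 0x100000001b3ull;  // FNV-style mixing, illustrative only
    }
    return h;
}

int main() {
    const std::uint64_t key = 0xdeadbeefcafef00dull;
    Line code = {0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8};
    std::uint64_t tag = toy_mac(code, key);   // computed at install time

    code[3] ^= 0xff;                          // simulated tampering
    bool ok = (toy_mac(code, key) == tag);    // recheck on cache fill
    std::cout << (ok ? "authentic" : "authentication failed") << "\n";
}
```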

    Translating expert system rules into Ada code with validation and verification

    The purpose of this ongoing research and development program is to develop software tools which enable the rapid development, upgrading, and maintenance of embedded real-time artificial intelligence systems. The goals of this phase of the research were to investigate the feasibility of developing software tools which automatically translate expert system rules into Ada code, and to develop methods for performing validation and verification testing of the resultant expert system. A prototype system was demonstrated which automatically translated rules from an Air Force expert system into Ada code and detected errors in the execution of the resultant system. The method and prototype tools convert AI representations into Ada code by turning the rules into Ada code modules and then linking them with an Activation Framework based run-time environment to form an executable load module; this method is based upon the use of Evidence Flow Graphs, a data flow representation for intelligent systems. The development of prototype test generation and evaluation software, which was used to test the resultant code, is also discussed. This testing was performed automatically using Monte Carlo techniques based upon a constraint-based description of the required performance for the system.
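
    A single rule-to-module translation might look like the following sketch (written in C++ for consistency with the other examples here; the paper itself generates Ada). The rule, evidence record, and activation loop are all hypothetical: each rule becomes a module with an evaluate() entry point that a run-time framework activates when its input evidence changes.

```cpp
// Sketch of a rule compiled into a code module: condition and action
// are separated so a framework can activate modules data-flow style.
#include <functional>
#include <iostream>
#include <vector>

struct Evidence { double pressure; double limit; };

struct RuleModule {
    const char *name;
    std::function<bool(const Evidence &)> condition;
    std::function<void()> action;
    void evaluate(const Evidence &e) const {
        if (condition(e)) {
            std::cout << name << " fired: ";
            action();
        }
    }
};

int main() {
    // "IF pressure > limit THEN open_valve" as a generated module.
    std::vector<RuleModule> rules = {{
        "overpressure",
        [](const Evidence &e) { return e.pressure > e.limit; },
        [] { std::cout << "open_valve\n"; },
    }};
    Evidence e{12.5, 10.0};
    for (const RuleModule &r : rules) r.evaluate(e);  // activation loop
}
```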

    Dynamic Verification of SystemC with Statistical Model Checking

    Many embedded and real-time systems have an inherent probabilistic behaviour (sensor data, unreliable hardware, ...). In that context, it is crucial to evaluate system properties such as "the probability that a particular piece of hardware fails". Such properties can be evaluated by using probabilistic model checking. However, this technique fails on models representing realistic embedded and real-time systems because of the state space explosion. To overcome this problem, we propose a verification framework based on Statistical Model Checking. Our framework is able to evaluate probabilistic and temporal properties of large systems modelled in SystemC, a standard system-level modelling language. It is fully implemented as an extension of the Plasma-lab statistical model checker. We illustrate our approach on a multi-lift system case study.
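
    For a sense of the simulation budgets involved, the snippet below applies the Chernoff-Hoeffding bound N >= ln(2/delta) / (2*eps^2), a standard way for fixed-sample-size SMC engines to pick the number of runs needed to estimate a probability within +-eps at confidence 1-delta. The precision and confidence values are arbitrary, and sequential methods (which often need fewer runs) exist as well.

```cpp
// Chernoff-Hoeffding sample-size calculation for fixed-budget SMC.
#include <cmath>
#include <cstddef>
#include <iostream>

std::size_t runs_needed(double eps, double delta) {
    return static_cast<std::size_t>(
        std::ceil(std::log(2.0 / delta) / (2.0 * eps * eps)));
}

int main() {
    // Estimate within +-0.01 with 95% confidence: 18445 runs.
    std::cout << runs_needed(0.01, 0.05)
              << " runs for eps=0.01, delta=0.05\n";
}
```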

    Optimizing Scrubbing by Netlist Analysis for FPGA Configuration Bit Classification and Floorplanning

    Existing scrubbing techniques for SEU mitigation on FPGAs do not guarantee error-free operation after SEU recovery if the affected configuration bits belong to feedback loops of the implemented circuits. In this paper, we a) provide a netlist-based circuit analysis technique to distinguish so-called critical configuration bits from essential bits, in order to identify which configuration bits will also need state-restoring actions after a recovered SEU and which will not. Furthermore, b) an alternative classification approach using fault injection is developed in order to compare both classification techniques. Moreover, c) we propose a floorplanning approach for reducing the effective number of scrubbed frames, and d) experimental results give evidence that our optimization methodology not only allows errors to be detected earlier but also reduces the Mean-Time-To-Repair (MTTR) of a circuit considerably. In particular, we show that by using our approach, the MTTR for datapath-intensive circuits can be reduced by up to 48.5% in comparison to standard approaches.
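
    The intuition behind the floorplanning step reduces to simple arithmetic: if repair latency scales with the number of configuration frames the scrubber must read and rewrite, then packing the relevant bits into fewer frames shrinks that latency roughly proportionally. The toy C++ calculation below uses made-up frame counts and per-frame timings, not the paper's measurements.

```cpp
// Toy back-of-the-envelope for scrubbing latency vs. frame count.
#include <iostream>

int main() {
    const double frame_time_us = 5.0;   // assumed time to scrub one frame
    const int all_frames = 1200;        // assumed device total
    const int compacted_frames = 600;   // after floorplanning (assumed)

    double latency_full = all_frames * frame_time_us;
    double latency_opt  = compacted_frames * frame_time_us;
    std::cout << "full scrub:  " << latency_full << " us\n"
              << "compacted:   " << latency_opt  << " us\n"
              << "reduction:   "
              << 100.0 * (1.0 - latency_opt / latency_full) << " %\n";
}
```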