
    Simulation-based Fault Injection with QEMU for Speeding-up Dependability Analysis of Embedded Software

    Simulation-based fault injection (SFI) represents a valuable solution for early analysis of software dependability and fault tolerance properties before the physical prototype of the target platform is available. Some SFI approaches base the fault injection strategy on cycle-accurate models implemented by means of Hardware Description Languages (HDLs). However, cycle-accurate simulation has proven too time-consuming when the objective is to emulate the effect of soft errors on complex microprocessors. To overcome this issue, SFI solutions based on virtual prototypes of the target platform have started to be proposed. However, current approaches still present drawbacks: for example, they work only for specific CPU architectures, they require code instrumentation, or they have a different target (i.e., design errors instead of dependability analysis). To address these disadvantages, this paper presents an efficient fault injection approach based on QEMU, one of the most efficient and popular instruction-accurate emulators for several microprocessor architectures. The proposed approach represents a non-intrusive technique for simulating hardware faults affecting CPU behaviour. Permanent and transient/intermittent hardware fault models have been abstracted without losing quality for software dependability analysis. The approach minimizes the impact of the fault injection procedure on emulator performance by preserving the original dynamic binary translation mechanism of QEMU. Experimental results for both x86 and ARM processors are presented, proving the efficiency and effectiveness of the proposed approach.
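
    As an illustration of the abstracted fault models mentioned above, the following minimal Python sketch shows how permanent (stuck-at), transient, and intermittent faults can be applied to a register value; the function names and bit-level encoding are illustrative assumptions, not QEMU internals or the paper's actual implementation.

```python
import random

def transient_bitflip(value: int, bit: int) -> int:
    """Transient fault (single-event upset): flip one bit once."""
    return value ^ (1 << bit)

def stuck_at(value: int, bit: int, stuck_to: int) -> int:
    """Permanent stuck-at fault: force one bit to 0 or 1 on every access."""
    return value | (1 << bit) if stuck_to else value & ~(1 << bit)

def intermittent_bitflip(value: int, bit: int, p: float) -> int:
    """Intermittent fault: flip the bit with probability p per access."""
    return value ^ (1 << bit) if random.random() < p else value

# e.g. a transient flip of bit 3 in a register holding 0b1010
assert transient_bitflip(0b1010, 3) == 0b0010
```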

    B-HAR: an open-source baseline framework for in depth study of human activity recognition datasets and workflows

    Human Activity Recognition (HAR), based on machine and deep learning algorithms, is considered one of the most promising technologies for monitoring professional and daily-life activities of different categories of people (e.g., athletes, elderly, kids, employers) in order to provide a variety of services related, for example, to well-being, enhancement of technical performance, prevention of risky situations, and educational purposes. However, the analysis of the effectiveness and efficiency of HAR methodologies suffers from the lack of a standard workflow that might serve as the baseline for estimating the quality of the developed pattern recognition models. This makes the comparison among different approaches a challenging task. In addition, researchers can make mistakes that, when not detected, inevitably affect the achieved results. To mitigate such issues, this paper proposes an open-source, automatic, and highly configurable framework, named B-HAR, for the definition, standardization, and development of a baseline workflow for evaluating and comparing HAR methodologies. It implements the most popular data processing methods for data preparation and the most commonly used machine and deep learning pattern recognition models. (9 pages, 3 figures, 3 tables; B-HAR library: https://github.com/B-HAR-HumanActivityRecognition/B-HA)
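
    As a hint of what such a baseline workflow looks like in practice, here is a minimal sketch of the classic windowing-plus-features step of a HAR pipeline; it is a generic illustration and does not use B-HAR's actual API (the window size, step, and random-forest choice are assumptions).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def make_windows(signal, labels, size=128, step=64):
    """Slice a (n_samples, n_axes) signal into windows of simple statistics."""
    X, y = [], []
    for start in range(0, len(signal) - size + 1, step):
        seg = signal[start:start + size]
        X.append(np.concatenate([seg.mean(0), seg.std(0), seg.min(0), seg.max(0)]))
        y.append(np.bincount(labels[start:start + size]).argmax())  # majority label
    return np.array(X), np.array(y)

# toy data: 3-axis accelerometer stream with two activity labels
rng = np.random.default_rng(0)
signal = rng.normal(size=(4096, 3))
labels = rng.integers(0, 2, size=4096)

X, y = make_windows(signal, labels)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```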

    Efficient Control-Flow Subgraph Matching for Detecting Hardware Trojans in RTL Models

    Only a few solutions for Hardware Trojan (HT) detection work at Register-Transfer Level (RTL), thus postponing the identification of possible security issues to lower abstraction levels of the design process. In addition, most existing approaches work only for specific kinds of HTs. To overcome these limitations, we present a verification approach that detects different types of HTs in RTL models by exploiting an efficient control-flow subgraph matching algorithm. The prototypes of the HTs that can be detected are modelled in a library by using Control-Flow Graphs (CFGs) that can be parametrised and extended to cover several variants of Trojan patterns. Experimental results show that our approach is effective and efficient in comparison with other state-of-the-art solutions.
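
    A control-flow subgraph matching step of this kind can be prototyped with an off-the-shelf graph library; the sketch below uses networkx to test whether a small Trojan-pattern CFG occurs as a subgraph of a design CFG, matching nodes by statement kind. The graphs and the "kind" attribute are invented for illustration; the paper's own matching algorithm is more efficient than generic subgraph isomorphism.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# design CFG: nodes tagged with the kind of RTL statement they represent
design = nx.DiGraph()
design.add_nodes_from([(0, {"kind": "assign"}), (1, {"kind": "branch"}),
                       (2, {"kind": "counter"}), (3, {"kind": "branch"}),
                       (4, {"kind": "payload"})])
design.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 4), (3, 0)])

# Trojan pattern: a counter feeding a branch that triggers a payload
pattern = nx.DiGraph()
pattern.add_nodes_from([("c", {"kind": "counter"}), ("b", {"kind": "branch"}),
                        ("p", {"kind": "payload"})])
pattern.add_edges_from([("c", "b"), ("b", "p")])

matcher = isomorphism.DiGraphMatcher(
    design, pattern, node_match=isomorphism.categorical_node_match("kind", None))
print(matcher.subgraph_is_isomorphic())  # True: the pattern occurs in the design
```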

    SHPIA 2.0: An Easily Scalable, Low-Cost, Multi-purpose Smart Home Platform for Intelligent Applications

    Sensors, electronic devices, and smart systems have invaded the market and our daily lives. As a result, their utility in smart-home contexts to improve the quality of life, especially for the elderly and people with special needs, is growing steadily. Therefore, many systems based on smart applications and intelligent devices have been developed, for example, to monitor people's environmental contexts, help in daily-life activities, and analyze their health status. However, most existing solutions have drawbacks related to accessibility and usability: they tend to be expensive, lack generality and interoperability, are not easily scalable, and are typically designed for specific constrained scenarios. This paper tackles such drawbacks by presenting SHPIA 2.0, an easily scalable, low-cost, multi-purpose smart home platform for intelligent applications. It leverages low-cost Bluetooth Low Energy (BLE) devices, featuring both BLE connected and BLE broadcast modes, to transform common objects of daily life into smart objects. Moreover, SHPIA 2.0 allows the collection and automatic labeling of different data types to provide indoor monitoring and assistance. Specifically, SHPIA 2.0 is designed to be adaptable to different home-based application scenarios, including human activity recognition, coaching systems, and occupancy detection and counting. The SHPIA platform is open source and freely available to the scientific community, fostering collaboration and innovation.
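
    To give a flavour of the BLE broadcast mode that SHPIA 2.0 relies on, the sketch below decodes a hypothetical advertisement payload carrying sensor readings; the byte layout, field names, and scaling are assumptions for illustration, not SHPIA's actual frame format.

```python
import struct

def parse_advertisement(payload: bytes) -> dict:
    """Decode a broadcast frame: uint16 sensor id, int16 temperature*100,
    uint16 humidity*100, uint8 battery percentage (little-endian)."""
    sensor_id, t_raw, h_raw, battery = struct.unpack("<HhHB", payload[:7])
    return {"sensor": sensor_id, "temp_c": t_raw / 100,
            "humidity": h_raw / 100, "battery_pct": battery}

frame = struct.pack("<HhHB", 42, 2215, 4830, 87)
print(parse_advertisement(frame))
# {'sensor': 42, 'temp_c': 22.15, 'humidity': 48.3, 'battery_pct': 87}
```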

    Testbench qualification of SystemC TLM protocols through Mutation Analysis

    Transaction-level modeling (TLM) has become the de-facto reference modeling style for system-level design and verification of embedded systems. It allows designers to implement high-level communication protocols for simulations up to 1000x faster than at register-transfer level (RTL). To guarantee interoperability between TLM IP suppliers and users, designers implement TLM communication protocols by relying on a reference standard, such as the OSCI standard for SystemC TLM. Functional correctness of such protocols, as well as their compliance with the reference TLM standard, is usually verified through user-defined testbenches, whose quality and completeness play a key role in an efficient TLM design and verification flow. This article presents a methodology that applies mutation analysis, a technique used in the literature for SW testing, to measure the quality of testbenches that verify TLM protocols. In particular, the methodology aims at (i) qualifying the testbenches by considering both the TLM protocol correctness and compliance with a defined standard (i.e., OSCI TLM), (ii) optimizing the simulation time during mutation analysis by avoiding mutation redundancies, and (iii) guiding designers in improving the testbench. Experimental results on benchmarks of different complexity and architectural characteristics are reported to analyze the applicability of the methodology.
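
    The core idea of mutation analysis can be condensed into a few lines: apply small syntactic mutations to the protocol implementation and count how many of them the testbench detects (kills). The toy below is a language-agnostic illustration in Python, not the article's SystemC TLM flow; the mutation operators and the two-check testbench are assumptions.

```python
import operator

def handshake_ok(req, grant, cmp=operator.eq):
    """Original protocol check: the grant must mirror the request."""
    return cmp(req, grant)

# mutants: replace the == operator with != and <= (classic operator mutations)
MUTANTS = [operator.ne, operator.le]

def testbench(impl) -> bool:
    """Returns True if the implementation passes all tests."""
    return impl(1, 1) and not impl(1, 0)

killed = sum(
    not testbench(lambda r, g, m=m: handshake_ok(r, g, m)) for m in MUTANTS)
print(f"mutation score: {killed}/{len(MUTANTS)}")
# 1/2: the <= mutant survives, revealing a missing (0, 1) test case
```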

    RTL property abstraction for TLM assertion-based verification

    Several state-of-the-art techniques and commercial tools reuse existing RTL IP implementations to generate more abstract (i.e., TLM) IP models for system-level design. In contrast, reusing, at TLM, an assertion-based verification (ABV) environment originally developed for an RTL IP is still an open problem. The lack of an effective and efficient solution forces verification engineers to shoulder a time-consuming and error-prone manual re-definition, at TLM, of existing assertion libraries. This paper is intended to fill this gap by presenting a technique to automatically abstract properties defined for RTL IPs, with the aim of creating dynamic ABV environments for the corresponding TLM models.
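
    To make the abstraction gap concrete, the toy Python fragment below contrasts a cycle-level RTL property with its transaction-level counterpart; the signal names, the 4-cycle bound, and the transaction encoding are invented for illustration and are unrelated to the paper's automatic abstraction technique.

```python
def rtl_property(trace, bound=4):
    """RTL view: every 'req' must be followed by an 'ack' within `bound` cycles."""
    return all("ack" in trace[i + 1:i + 1 + bound]
               for i, sig in enumerate(trace) if sig == "req")

def tlm_property(transactions):
    """TLM view: the same intent, abstracted from clock cycles to
    transactions -- every issued transaction completes with an OK response."""
    return all(t["resp"] == "OK" for t in transactions)

assert rtl_property(["req", "idle", "ack", "req", "ack"])
assert tlm_property([{"resp": "OK"}, {"resp": "OK"}])
```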

    Test Generation Based on CLP

    Functional ATPGs based on simulation are fast but, in general, unable to cover corner cases, and they cannot prove untestability. On the contrary, functional ATPGs exploiting formal methods, being exhaustive, cover corner cases, but they tend to suffer from the state explosion problem when adopted for verifying large designs. In this context, we have defined a functional ATPG that relies on the joint use of pseudo-deterministic simulation and Constraint Logic Programming (CLP) to generate high-quality test sequences for solving complex problems. Thus, the advantages of both simulation-based and static verification techniques are preserved, while their respective drawbacks are limited. In particular, CLP, a form of constraint programming in which logic programming is extended to include concepts from constraint satisfaction, is well suited to be used jointly with simulation. In fact, information learned during design exploration by simulation can be effectively exploited to guide the search of a CLP solver towards DUV areas not yet covered.

    CLP techniques are employed in different phases of the test generation procedure. The ATPG framework is composed of three functional ATPG engines working on three different models of the same DUV: the hardware description language (HDL) model of the DUV, a set of concurrent EFSMs extracted from the HDL description, and a set of logic constraints modeling the EFSMs. The EFSM paradigm has been selected since it allows a compact representation of the DUV state space that limits the state explosion problem typical of more traditional FSMs. The first engine is random-based, the second is transition-oriented, and the last is fault-oriented. Test generation is guided by transition coverage and fault coverage. In particular, 100% transition coverage is desired as a necessary condition for fault detection, while the bit-coverage functional fault model is used to evaluate the effectiveness of the generated test patterns by measuring the related fault coverage.

    A random engine is first used to explore the DUV state space by performing a simulation-based random walk. This allows us to quickly fire easy-to-traverse (ETT) transitions and, consequently, to quickly cover easy-to-detect (ETD) faults. However, the majority of hard-to-traverse (HTT) transitions generally remain uncovered. Thus, a transition-oriented engine is applied to cover the remaining HTT transitions by exploiting a learning/backjumping-based strategy. The ATPG works on a special kind of EFSM, called SSEFSM, whose transitions present the most uniformly distributed probability of being activated, and which can be effectively integrated with CLP, since it allows the ATPG to invoke the constraint solver when moving between EFSM states. A CLP-based strategy is adopted to deterministically generate test vectors that satisfy the guards of the EFSM transitions selected to be traversed. Given a transition of the SSEFSM, the solver is required to generate suitable values for the PIs that enable the SSEFSM to move across that transition. Moreover, backjumping, also known as non-chronological backtracking, is a special kind of backtracking strategy that rolls back from an unsuccessful situation directly to the cause of the failure. Thus, the transition-oriented engine deterministically backjumps to the source of the failure when a transition, whose guard depends on previously set registers, cannot be traversed. It then modifies the EFSM configuration to satisfy the condition on the registers and comes back to the target state to activate the transition.

    The transition-oriented engine generally allows us to achieve 100% transition coverage. However, 100% transition coverage does not guarantee that all DUV corner cases are explored, thus some hard-to-detect (HTD) faults can escape detection, preventing the achievement of 100% fault coverage. Therefore, the CLP-based fault-oriented engine is finally applied to focus on the remaining HTD faults. The CLP solver is used to deterministically search for sequences that propagate the HTD faults observed, but not detected, by the random and transition-oriented engines. The fault-oriented engine needs a CLP-based representation of the DUV and some search functions to generate test sequences. The CLP-based representation is automatically derived from the SSEFSM models according to defined rules that follow the syntax of the ECLiPSe CLP solver. This is not a trivial task, since modeling the evolution in time of an EFSM by using logic constraints is quite different from modeling the same behavior by means of a traditional HW description language. First, the concept of time steps is introduced, which is required to model the SSEFSM evolution over time via CLP. Then, the study deals with the modeling of logical variables and constraints to represent the enabling functions and update functions of the SSEFSM. Formal tools that exhaustively search for a solution frequently run out of resources when the state space to be analyzed is too large. The same happens for the CLP solver when it is asked to find a propagation sequence on large sequential designs. Therefore, we have defined a set of strategies that allow us to prune the search space and to manage the complexity problem for the solver.
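
    The deterministic guard-solving step performed by the CLP solver can be approximated, for illustration, by a brute-force search over the primary-input domains; the sketch below is a Python stand-in for the ECLiPSe solver call, with an invented SSEFSM transition guard.

```python
from itertools import product

def solve_guard(guard, pi_domains):
    """Return the first primary-input assignment satisfying a transition guard,
    or None if the transition cannot be enabled (a stand-in for the CLP solver)."""
    names = list(pi_domains)
    for values in product(*pi_domains.values()):
        env = dict(zip(names, values))
        if guard(env):
            return env
    return None

# guard of a hypothetical SSEFSM transition: a > 3 and a + b == 10
env = solve_guard(lambda e: e["a"] > 3 and e["a"] + e["b"] == 10,
                  {"a": range(16), "b": range(16)})
print(env)  # {'a': 4, 'b': 6}
```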

    Towards a wearable system for predicting the freezing of gait in people affected by Parkinson's disease

    Several wearable solutions exploiting on-body acceleration sensors have been proposed to recognize Freezing of Gait (FoG) in people affected by Parkinson's Disease (PD). Once a FoG event is detected, these systems generate a sequence of rhythmic stimuli to help the patient resume walking. While these solutions are effective in detecting FoG events, they are unable to predict FoG in order to prevent its occurrence. This paper fills this gap by presenting a machine learning-based approach that classifies accelerometer data from PD patients, recognizing a pre-FoG phase so as to anticipate FoG occurrence. Gait was monitored by three tri-axial accelerometers worn on the back, hip, and ankle. Gait features were then extracted from the accelerometers' raw data through data windowing and non-linear dimensionality reduction. A k-nearest neighbor (k-NN) algorithm was used to classify gait into three classes of events: pre-FoG, no-FoG, and FoG. The accuracy of the proposed solution was compared to state-of-the-art approaches. Our study showed that: (i) we achieved performance exceeding state-of-the-art approaches in terms of FoG detection, (ii) we were able, for the first time in the literature, to predict FoG by identifying pre-FoG events with an average sensitivity and specificity of 94.1% and 97.1%, respectively, and (iii) our algorithm can be executed on resource-constrained devices. Future applications include implementation on a mobile device and the administration of rhythmic stimuli by a wearable device to help the patient overcome FoG.
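
    The three-class classification step can be outlined in a few lines with scikit-learn; the feature matrix below is random placeholder data standing in for the windowed, dimensionality-reduced gait features, and k=5 is an assumption, not necessarily the value used in the study.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))     # placeholder gait features, one row per window
y = rng.integers(0, 3, size=300)   # 0 = no-FoG, 1 = pre-FoG, 2 = FoG

clf = KNeighborsClassifier(n_neighbors=5).fit(X[:240], y[:240])
predictions = clf.predict(X[240:])  # classify unseen windows into the 3 classes
```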

    Mangrove: an Inference-based Dynamic Invariant Mining for GPU Architectures

    Likely invariants model properties that hold in the operating conditions of a computing system. Dynamic invariant mining aims at extracting logic formulas representing such properties from the execution traces of the system, and it is widely used for the verification of intellectual property (IP) blocks. Although the extracted formulas represent likely invariants that hold in the considered traces, there is no guarantee that they are true in general for the system under verification. As a consequence, to increase the probability that the mined invariants are true in general, dynamic mining has to be performed on large sets of representative execution traces. This makes the execution-based mining process for actual IP blocks very time-consuming, due to the trace lengths and the large sets of monitored signals. This article presents Mangrove, an efficient implementation of a dynamic invariant mining algorithm for GPU architectures. Mangrove exploits inference rules, which are applied at run time to filter invariants from the execution traces and thus considerably reduce the problem complexity. Mangrove allows users to define invariant templates and, from these templates, automatically generates kernels for parallel and efficient mining on GPU architectures. The article presents the tool, an analysis of its performance, and a comparison with the best sequential and parallel implementations at the state of the art.
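
    The run-time inference rules can be illustrated with a toy transitivity example: once a < b and b < c are confirmed on the trace, a < c follows logically and never needs to be evaluated. The trace and the single rule below are invented for illustration; Mangrove applies a richer rule set inside generated GPU kernels.

```python
trace = [{"a": 1, "b": 2, "c": 5},
         {"a": 0, "b": 3, "c": 4}]

def lt_holds(x, y):
    """Check the candidate invariant x < y on every trace entry."""
    return all(t[x] < t[y] for t in trace)

invariants = {(x, y) for x, y in [("a", "b"), ("b", "c")] if lt_holds(x, y)}
if ("a", "b") in invariants and ("b", "c") in invariants:
    invariants.add(("a", "c"))  # inferred by transitivity: no trace scan needed
print(invariants)  # all three invariants; ('a', 'c') was obtained for free
```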

    Exploiting GPU Architectures for Dynamic Invariant Mining

    Dynamic invariant mining is a class of approaches for extracting logic formulas from the execution traces of a system under verification (SUV), with the purpose of expressing stable conditions in the behaviour of the SUV. The mined formulas represent likely invariants for the SUV, which certainly hold on the considered traces, but there is no guarantee that they are true in general. A large set of representative execution traces must be analysed to increase the probability that the mined invariants are generally true. However, this becomes extremely time-consuming for current sequential approaches when long execution traces and large sets of SUV variables are considered. To overcome this limitation, this paper presents a parallel approach for invariant mining that exploits GPU architectures to process an execution trace composed of millions of clock cycles in a few seconds.
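
    For a flavour of the data-parallel evaluation, the sketch below uses NumPy vectorisation as a CPU stand-in for the GPU kernels: a candidate invariant is checked over a million-cycle trace in a single pass instead of a per-cycle Python loop. The trace and the template instance are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
cycles = 1_000_000
sig_a = rng.integers(0, 100, size=cycles)        # sampled signal values,
sig_b = sig_a + rng.integers(1, 5, size=cycles)  # one entry per clock cycle

# candidate invariant from the template "x < y", checked in one vectorised pass
holds = bool(np.all(sig_a < sig_b))
print(holds)  # True: 'sig_a < sig_b' is a likely invariant on this trace
```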