
    IEEE Standard 1500 Compliance Verification for Embedded Cores

    Core-based design and reuse are the two key elements of efficient system-on-chip (SoC) development. Unfortunately, they also introduce new challenges in SoC testing, such as core test reuse and the need for a common test infrastructure that works with cores originating from different vendors. The IEEE 1500 Standard for Embedded Core Testing addresses these issues by proposing a flexible hardware test wrapper architecture for embedded cores, together with a core test language (CTL) used to describe the implemented wrapper functionalities. Several intellectual property providers have already announced IEEE Standard 1500 compliance in both existing and future design blocks. In this paper, we address the problem of guaranteeing the compliance of a wrapper architecture and its CTL description with the IEEE Standard 1500. This step is mandatory in order to fully trust the wrapper functionalities when applying test sequences to the core. We present a systematic methodology for building a verification framework for IEEE Standard 1500 compliant cores, allowing core providers and/or integrators to verify the compliance of their products (sold or purchased) with the standard.

    Test Generation Based on CLP

    Functional ATPGs based on simulation are fast but generally unable to cover corner cases, and they cannot prove untestability. Conversely, functional ATPGs exploiting formal methods, being exhaustive, do cover corner cases, but they tend to suffer from the state explosion problem when adopted for verifying large designs. In this context, we have defined a functional ATPG that relies on the joint use of pseudo-deterministic simulation and Constraint Logic Programming (CLP) to generate high-quality test sequences for complex designs. Thus, the advantages of both simulation-based and static verification techniques are preserved, while their respective drawbacks are limited. In particular, CLP, a form of constraint programming in which logic programming is extended with concepts from constraint satisfaction, is well suited to being used jointly with simulation: information learned during design exploration by simulation can be exploited to guide the search of a CLP solver towards DUV areas not yet covered.

    CLP techniques are employed in different phases of the test generation procedure. The ATPG framework is composed of three functional ATPG engines working on three different models of the same DUV: the hardware description language (HDL) model of the DUV, a set of concurrent EFSMs extracted from the HDL description, and a set of logic constraints modeling the EFSMs. The EFSM paradigm has been selected since it allows a compact representation of the DUV state space that limits the state explosion problem typical of more traditional FSMs. The first engine is random-based, the second is transition-oriented, and the last is fault-oriented. Test generation is guided by transition coverage and fault coverage: 100% transition coverage is desired as a necessary condition for fault detection, while the bit-coverage functional fault model is used to evaluate the effectiveness of the generated test patterns by measuring the related fault coverage.

    A random engine is first used to explore the DUV state space by performing a simulation-based random walk. This allows us to quickly fire easy-to-traverse (ETT) transitions and, consequently, to quickly cover easy-to-detect (ETD) faults. However, the majority of hard-to-traverse (HTT) transitions generally remain uncovered. Thus, a transition-oriented engine is applied to cover the remaining HTT transitions by exploiting a learning/backjumping-based strategy. The ATPG works on a special kind of EFSM, called SSEFSM, whose transitions present the most uniformly distributed probability of being activated and which integrates effectively with CLP, since it allows the ATPG to invoke the constraint solver when moving between EFSM states. A CLP-based strategy is adopted to deterministically generate test vectors that satisfy the guards of the EFSM transitions selected for traversal: given a transition of the SSEFSM, the solver is required to generate suitable values for the PIs that enable the SSEFSM to move across that transition. Moreover, backjumping, also known as non-chronological backtracking, is a backtracking strategy that rolls back from an unsuccessful situation directly to the cause of the failure. Thus, the transition-oriented engine deterministically backjumps to the source of failure when a transition whose guard depends on previously set registers cannot be traversed; it then modifies the EFSM configuration to satisfy the condition on the registers and returns to the target state to activate the transition. The transition-oriented engine generally allows us to achieve 100% transition coverage.

    However, 100% transition coverage does not guarantee that all DUV corner cases are explored, so some hard-to-detect (HTD) faults can escape detection, preventing 100% fault coverage. Therefore, the CLP-based fault-oriented engine is finally applied to focus on the remaining HTD faults. The CLP solver is used to deterministically search for sequences that propagate the HTD faults observed, but not detected, by the random and transition-oriented engines. The fault-oriented engine needs a CLP-based representation of the DUV and a set of search functions to generate test sequences. The CLP-based representation is automatically derived from the SSEFSM models according to defined rules that follow the syntax of the ECLiPSe CLP solver. This is not a trivial task, since modeling the evolution in time of an EFSM with logic constraints differs considerably from modeling the same behavior in a traditional hardware description language. First, the concept of time steps is introduced, which is required to model the SSEFSM evolution through time via CLP. Then, logical variables and constraints are defined to represent the enabling functions and update functions of the SSEFSM. Formal tools that exhaustively search for a solution frequently run out of resources when the state space to be analyzed is too large; the same happens to the CLP solver when it is asked to find a propagation sequence on large sequential designs. Therefore, we have defined a set of strategies that prune the search space and manage the complexity problem for the solver.
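
    The abstract above describes the interplay between guard solving and backjumping only in prose. The following small Python sketch illustrates that idea under heavy simplification: a brute-force search over a tiny primary-input domain stands in for the ECLiPSe CLP solver, and the two-transition machine, the Transition class and the solve_guard/traverse helpers are names invented here for illustration, not part of the paper's tool.

        # A toy EFSM and a brute-force stand-in for the CLP guard solver.
        # All names below are illustrative, not taken from the paper's tool
        # or from the ECLiPSe solver API.

        PI_DOMAIN = range(8)  # tiny primary-input domain so exhaustive search suffices

        class Transition:
            def __init__(self, src, dst, guard, update):
                self.src, self.dst = src, dst
                self.guard = guard    # enabling function: guard(pi, regs) -> bool
                self.update = update  # update function:   update(pi, regs) -> new regs

        def solve_guard(t, regs):
            """Stand-in for the CLP solver: find a PI value enabling transition t."""
            for pi in PI_DOMAIN:
                if t.guard(pi, regs):
                    return pi
            return None  # guard unsatisfiable under the current register configuration

        # t1 loads register r from the PIs; t2 is hard to traverse because its
        # guard depends on the value previously stored in r.
        t1 = Transition("S0", "S1", lambda pi, r: True,        lambda pi, r: {"r": pi})
        t2 = Transition("S1", "S2", lambda pi, r: r["r"] == 5, lambda pi, r: r)

        def traverse(target, setter, regs):
            """Transition-oriented step with a backjump to the register-setting transition."""
            if solve_guard(target, regs) is None:
                # Backjump: re-solve the setter so the register satisfies the target's guard.
                for cand in PI_DOMAIN:
                    new_regs = setter.update(cand, regs)
                    if solve_guard(target, new_regs) is not None:
                        print(f"backjump: re-fire {setter.src}->{setter.dst} with PI={cand}")
                        regs = new_regs
                        break
            pi = solve_guard(target, regs)
            print(f"fire {target.src}->{target.dst} with PI={pi}, regs={regs}")
            return target.update(pi, regs)

        regs = t1.update(0, {})        # the random walk happened to load r = 0
        regs = traverse(t2, t1, regs)  # guard r == 5 fails, a backjump fixes r, then t2 fires

    In the real flow, the solver deterministically returns PI assignments satisfying the guard constraints, and the backjump targets the transition that last assigned the offending register.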

    Desynchronization: Synthesis of asynchronous circuits from synchronous specifications

    Asynchronous implementation techniques, which measure logic delays at run time and activate registers accordingly, are inherently more robust than their synchronous counterparts, which estimate worst-case delays at design time and constrain the clock cycle accordingly. Desynchronization is a new paradigm for automating the design of asynchronous circuits from synchronous specifications, thus permitting widespread adoption of asynchronicity without requiring special design skills or tools. In this paper, we first study different protocols for desynchronization and formally prove their correctness, using techniques originally developed for the distributed deployment of synchronous language specifications. We also provide a taxonomy of existing protocols for asynchronous latch controllers, covering in particular the four-phase handshake protocols devised in the literature for micropipelines. We then propose a new controller that exhibits provably maximal concurrency and analyze the performance of desynchronized circuits with respect to the original synchronous optimized implementation. Finally, we prove the feasibility and effectiveness of our approach by showing its application to a set of real designs, including a complete implementation of the DLX microprocessor architecture.
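
    As a minimal illustration of the four-phase (return-to-zero) handshake mentioned above, the Python sketch below walks through the req/ack phases of a single data transfer into a latch. It depicts the protocol family discussed in the paper, not the authors' maximally concurrent controller, and the names (Latch, four_phase_transfer) are assumptions made for the example.

        # Event-level walk-through of a four-phase (return-to-zero) handshake
        # between a sender and a receiving latch controller. Illustrative only;
        # names and structure are assumptions, not the paper's controller.

        class Latch:
            def __init__(self, name):
                self.name = name
                self.data = None

            def capture(self, value):
                self.data = value
                print(f"{self.name} latches {value}")

        def four_phase_transfer(value, latch):
            """One data transfer: req+ / ack+ / req- / ack- (return to zero)."""
            req, ack = 1, 0               # phase 1: sender raises req, data is valid
            latch.capture(value)          # receiver latches the data
            ack = 1                       # phase 2: receiver raises ack
            req = 0                       # phase 3: sender lowers req
            ack = 0                       # phase 4: receiver lowers ack, channel idle again
            assert (req, ack) == (0, 0)   # both wires back to zero before the next token

        stage = Latch("stage1")
        for token in (10, 11, 12):
            four_phase_transfer(token, stage)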

    Timing verification of dynamically reconfigurable logic for Xilinx Virtex FPGA series

    This paper reports on a method for extending existing VHDL design and verification software available for the Xilinx Virtex series of FPGAs. It allows the designer to apply standard hardware design and verification tools to the design of dynamically reconfigurable logic (DRL). The technique involves the conversion of a dynamic design into multiple static designs suitable for input to standard synthesis and automatic place-and-route (APR) tools. For timing and functional verification after APR, the sections of the design can then be recombined into a single dynamic system. The technique has been automated by extending an existing DRL design tool named DCSTech, which is part of the Dynamic Circuit Switching (DCS) CAD framework. The principles behind the tools are generic and should be readily extensible to other architectures and CAD toolsets. Implementation of the dynamic system involves the production of partial configuration bitstreams to load sections of circuitry. The process of creating such bitstreams, the final stage of our design flow, is summarized.
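
    The core of the flow, converting a dynamic design into multiple static designs that can later be recombined, can be pictured with the following hedged Python sketch; the region and variant names are invented for illustration and do not come from DCSTech or the DCS framework.

        # The dynamic design is expressed as reconfigurable regions, each with a
        # set of mutually exclusive circuit variants; all names are illustrative.

        from itertools import product

        dynamic_design = {
            "static_core":   ["static_core"],               # fixed logic, one variant
            "filter_region": ["fir_filter", "iir_filter"],  # dynamically swapped logic
        }

        def to_static_designs(design):
            """Expand every combination of variants into one fully static design."""
            regions, variants = zip(*design.items())
            return [dict(zip(regions, combo)) for combo in product(*variants)]

        # Each static design can go through standard synthesis/APR and timing analysis;
        # the per-variant results are then recombined under the region they implement.
        for static in to_static_designs(dynamic_design):
            print("synthesise and verify:", static)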

    High-Level Synthesis for Embedded Systems


    Clafer: Lightweight Modeling of Structure, Behaviour, and Variability

    Embedded software is growing fast in size and complexity, leading to an intimate mixture of complex architectures and complex control. Consequently, software specification requires modeling both the structure and the behaviour of systems. Unfortunately, existing languages do not integrate these aspects well, usually prioritizing one of them; it is common to develop a separate language for each of these facets. In this paper, we contribute Clafer: a small language that attempts to tackle this challenge. It combines rich structural modeling with state-of-the-art behavioural formalisms. We are not aware of any other modeling language that seamlessly combines these facets common to system and software modeling. We show how Clafer, in a single unified syntax and semantics, allows capturing feature models (variability), component models, discrete control models (automata), and variability encompassing all these aspects. The language is built on top of first-order logic with quantifiers over basic entities (for modeling structure), combined with linear temporal logic (for modeling behaviour). On top of this semantic foundation we build a simple but expressive syntax, enriched with carefully selected syntactic expansions that cover hierarchical modeling, associations, automata, scenarios, and Dwyer's property patterns. We evaluate Clafer using a power window case study and by comparing it against other notations that substantially overlap with its scope (SysML, AADL, Temporal OCL and Live Sequence Charts), discussing the benefits and perils of using a single notation for this purpose.
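
    To make the "structure plus behaviour" combination concrete, the following Python sketch enumerates configurations of a toy power-window feature model (structural, first-order constraints) and checks one trace against a property in the spirit of LTL's G(obstacle -> F stopped). It is only an analogy for what Clafer expresses natively; the feature names, constraint, and trace are assumptions, not taken from the paper's case study.

        # Structural variability: optional features of a power window (assumed names).
        from itertools import product

        features = ["express_up", "pinch_protection"]

        def structurally_valid(cfg):
            # Structural constraint: express-up requires pinch protection.
            return not (cfg["express_up"] and not cfg["pinch_protection"])

        def eventually_stops(trace):
            # Behavioural check in the spirit of LTL's G(obstacle -> F stopped):
            # every 'obstacle' event is followed by a later 'stopped' event.
            return all("stopped" in trace[i + 1:]
                       for i, e in enumerate(trace) if e == "obstacle")

        # One behaviour trace that the automaton of a configuration might produce.
        trace = ["moving_up", "obstacle", "reversing", "stopped"]

        for values in product([True, False], repeat=len(features)):
            cfg = dict(zip(features, values))
            if structurally_valid(cfg):
                print(cfg, "trace satisfies property:", eventually_stops(trace))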