
    Evaluation of SystemVerilog and Constrained Random Verification for Digital Designs

    The Field Programmable Gate Array is a device that consists of configurable logic, on-chip memory and often functionality that is fixed into silicon, such as hardware multipliers or transceivers for peripheral communication. The Field Programmable Gate Array is a configurable and reprogrammable digital circuit that can be designed to implement functionality that is deterministic and parallel. Due to these qualities, the Field Programmable Gate Array is also an energy-efficient choice of hardware for many digital designs. Technological advancements in the semiconductor industry have resulted in a continuous decrease in transistor sizes, which in turn has allowed for an increased transistor density in semiconductor devices. For Field Programmable Gate Arrays this development has led to increasingly complex designs, as more programmable resources are available. Consequently, the time and effort spent on verification of digital designs has increased. Discovering design flaws before end products are released is not only economically crucial; a failure to do so might also damage the reputation of a company. In this thesis, current verification trends are evaluated in three case studies made for Danfoss Drives. The objective of the thesis is to determine whether the quality of testing of Field Programmable Gate Array designs at the company can be improved by using Constrained Random Verification and the SystemVerilog language. Test benches for behavioral simulations are built using SystemVerilog in conjunction with the Universal Verification Methodology. Constrained Random Verification based on the SystemVerilog language and the Universal Verification Methodology was evaluated for unit-level testing in two case studies for one Intellectual Property core. A third case study was made for a system-level design of multiple Intellectual Property cores as a SystemVerilog test bench utilizing neither constrained randomization nor the Universal Verification Methodology. Improved test coverage and an increased degree of automation were achieved for the unit-level testing, although at the cost of increased verification effort. In the system-level testing the capabilities of the SystemVerilog language proved beneficial, especially for creating transaction-level stimulus, writing self-checking mechanisms for the test bench and modularizing its structure.
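
    As an illustration of the constrained-random style the abstract refers to, the sketch below shows a minimal SystemVerilog/UVM sequence item with declarative constraints. The transaction fields, constraint values and class name are hypothetical and are not taken from the Danfoss case studies.

```systemverilog
// Minimal illustrative sketch of a constrained-random UVM sequence item.
// All names and constraint values are hypothetical.
`include "uvm_macros.svh"
import uvm_pkg::*;

class bus_txn extends uvm_sequence_item;
  rand bit [7:0]  addr;
  rand bit [31:0] data;
  rand bit        write;

  // Keep stimulus inside a legal address window and bias towards writes.
  constraint c_addr  { addr inside {[8'h10 : 8'h7F]}; }
  constraint c_write { write dist { 1 := 3, 0 := 1 }; }

  `uvm_object_utils(bus_txn)

  function new(string name = "bus_txn");
    super.new(name);
  endfunction
endclass

// In a sequence body each transaction would then be randomized, e.g.:
//   bus_txn t = bus_txn::type_id::create("t");
//   void'(t.randomize());
```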

    Reining in the Functional Verification of Complex Processor Designs with Automation, Prioritization, and Approximation

    Our quest for faster and more efficient computing devices has led us to processor designs of enormous complexity. As a result, functional verification, which is the process of ascertaining the correctness of a processor design, takes up a lion's share of the time and cost spent on making processors. Unfortunately, functional verification is only a best-effort process that cannot completely guarantee the correctness of a design, often resulting in defective products that may have devastating consequences. Functional verification, as practiced today, is unable to cope with the complexity of current and future processor designs. In this dissertation, we identify extensive automation as the essential step towards scalable functional verification of complex processor designs. Moreover, recognizing that a complete guarantee of design correctness is impossible, we argue for systematic prioritization and prudent approximation to realize fast and far-reaching functional verification solutions. We partition the functional verification effort into three major activities: planning and test generation, test execution and bug detection, and bug diagnosis. Employing a perspective we refer to as the automation, prioritization, and approximation (APA) approach, we develop solutions that tackle challenges across these three major activities. In pursuit of efficient planning and test generation for modern systems-on-chip, we develop an automated process for identifying high-priority design aspects for verification. In addition, we enable the creation of compact test programs, which, in our experiments, were up to 11 times smaller than what would otherwise be available at the beginning of the verification effort. To tackle challenges in test execution and bug detection, we develop a group of solutions that enable the deployment of automatic and robust mechanisms for catching design flaws during high-speed functional verification. By trading accuracy for speed, these solutions allow us to unleash functional verification platforms that are over three orders of magnitude faster than traditional platforms, unearthing design flaws that are otherwise impossible to reach. Finally, we address challenges in bug diagnosis through a solution that fully automates the process of pinpointing flawed design components after detecting an error. Our solution, which identifies flawed design units with over 70% accuracy, eliminates weeks of diagnosis effort for every detected error.

    Validation and verification of the interconnection of hardware intellectual property blocks for FPGA-based packet processing systems

    As networks become more versatile, the computational requirements for supporting additional functionality increase. The increasing demands of these networks can be met by Field Programmable Gate Arrays (FPGAs), an increasingly popular technology for implementing packet processing systems. The fine-grained parallelism and density of these devices can be exploited to meet the computational requirements and implement complex systems on a single chip. However, the increasing complexity of FPGA-based systems makes them susceptible to errors and difficult to test and debug. To tackle the complexity of modern designs, system-level languages have been developed to provide abstractions suited to the domain of the target system. Unfortunately, the lack of formality in these languages can give rise to errors that are not caught until late in the design cycle. This thesis presents three techniques for verifying and validating FPGA-based packet processing systems described in a system-level description language. First, a type system is applied to the system description language to detect errors before implementation. Second, system-level transaction monitoring is used to observe high-level events on-chip following implementation. Third, the high-level information embodied in the system description language is exploited to allow the system to be automatically instrumented for on-chip monitoring. This thesis demonstrates that these techniques catch errors which are undetected by traditional verification and validation tools. The locations of faults are pinpointed and errors are caught earlier in the design flow, which saves time by reducing synthesis iterations.

    Towards Automated Security Validation for Hardware Designs

    Hardware provides the foundation of trust for computer systems. Defects in hardware designs routinely cause vulnerabilities that are exploitable by malicious software and compromise the security of the entire system. While mature hardware validation tools exist, they were primarily designed for checking functional correctness; how to systematically detect security-critical defects remains an open and challenging question. In this dissertation, I develop formal methods and practical tools for automated hardware security validation. To identify and develop security-critical properties for hardware designs, I developed SCIFinder, a methodology that leverages known vulnerabilities to mine and learn security invariants. I show that security vulnerabilities together with machine learning techniques can give us a set of security properties to detect both known and unknown security bugs in the OR1200 processor. I also propose a method to develop security-critical properties by leveraging existing ones, and I built a tool, Transys, to translate security properties across similar or different versions of hardware designs. I demonstrate that translating security properties across AES hardware, RSA hardware and RISC processors is feasible and lightweight. Given the security properties, I developed Coppelia to validate the security of hardware designs. I propose a hardware-oriented backward symbolic execution strategy to find violations and generate exploit programs. I successfully generate exploits for known security bugs on the OR1200 processor, and discovered and generated exploit programs for 4 unknown bugs across two different processors and architectures.
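
    To make the flavour of such security-critical properties concrete, the sketch below shows one written as a SystemVerilog Assertion. The module and signal names are hypothetical; they do not correspond to actual OR1200 identifiers or to the properties mined by SCIFinder.

```systemverilog
// Illustrative only: a hypothetical privilege-escalation check in SVA.
module priv_check (
  input logic clk,
  input logic rst_n,
  input logic priv_mode,   // 1 = supervisor mode, 0 = user mode (hypothetical)
  input logic exception    // asserted while an exception is being taken (hypothetical)
);
  // Privilege may rise from user to supervisor only while an exception is taken.
  a_no_priv_escalation : assert property (
    @(posedge clk) disable iff (!rst_n)
    $rose(priv_mode) |-> exception
  );
endmodule
```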

    Behaviour analysis in binary SoC data


    Digital Centric Multi-Gigabit SerDes Design and Verification

    Advances in semiconductor manufacturing still lead to ever-decreasing feature sizes and constantly allow higher degrees of integration in application-specific integrated circuits (ASICs). Therefore the bandwidth requirements on the external interfaces of such systems-on-chip (SoCs) are steadily growing. Yet, as the number of pins on these ASICs is not increasing at the same pace - known as pin limitation - the bandwidth per pin has to be increased. SerDes (Serializer/Deserializer) technology, which allows data to be transferred serially at very high data rates of 25 Gbps and more, is a key technology to overcome pin limitation and exploit the computing power that can be achieved in today's SoCs. As SerDes blocks, together with the digital logic interfacing them, form complex mixed-signal systems, verification of performance and functional correctness is very challenging. In this thesis a novel mixed-signal design methodology is proposed, which tightly couples model and implementation in order to ensure consistency throughout the design cycles and thereby accelerate the overall implementation flow. A tool flow that has been developed is presented, which integrates well into state-of-the-art electronic design automation (EDA) environments and enables the usage of this methodology in practice. Further, the design space of today's high-speed serial links is analyzed and an architecture is proposed which pushes complexity into the digital domain in order to achieve robustness, portability between manufacturing processes, and scaling with advanced node technologies. The all-digital phase-locked loop (PLL) and clock data recovery (CDR) that have been developed are described in detail. The developed design flow was used for the implementation of the SerDes architecture in a 28nm silicon process and proved to be indispensable for future projects.

    Coverage of Compositional Property Sets for Hardware and Hardware-dependent Software in Formal System-on-Chip Verification

    Divide-and-conquer is a common strategy to manage the complexity of system design and verification. In the context of System-on-Chip (SoC) design verification, an SoC system is decomposed into several modules and every module is verified separately. Usually an SoC module is reactive: it interacts with its environmental modules. This interaction is normally modeled by environment constraints, which are applied when verifying the SoC module. Environment constraints are assumed to be always true when verifying the individual modules of a system; therefore the correctness of environment constraints is very important for module verification. Environment constraints are also very important for coverage analysis. Coverage analysis in formal verification measures whether or not the property set fully describes the functional behavior of the design under verification (DuV). If a set of properties describes every functional behavior of a DuV, the set of properties is called complete. To verify the correctness of environment constraints, assume-guarantee reasoning rules can be employed. However, state-of-the-art assume-guarantee reasoning rules cannot be applied to environment constraints specified using an industrial standard property language such as SystemVerilog Assertions (SVA). This thesis proposes a new assume-guarantee reasoning rule that can be applied to environment constraints specified using a property language such as SVA. In addition, this thesis proposes two efficient plausibility checks for constraints that can be conducted without a concrete implementation of the considered environment. Furthermore, this thesis provides a compositional reasoning framework determining that a system is completely verified if all modules are verified with Complete Interval Property Checking (C-IPC) under environment constraints. At present, there is a trend that more of the functionality in SoCs is shifted from the hardware to the hardware-dependent software (HWDS), which is a crucial component in an SoC, since other software layers, such as the operating system, are built on it. Therefore there is an increasing need to apply formal verification to HWDS, especially for safety-critical systems. The interactions between HW and HWDS are often reactive and happen in a temporal order. This requires new property languages to specify the reactive behavior at the HW and SW interfaces. This thesis introduces a new property language, called Reactive Software Property Language (RSPL), to specify the reactive interactions between the HW and the HWDS. Furthermore, a method for checking the completeness of software properties, which are specified using RSPL, is presented in this thesis. This method is motivated by the approach of checking the completeness of hardware properties.
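
    For readers unfamiliar with environment constraints, the sketch below illustrates, in SystemVerilog Assertions, the distinction between a constraint assumed of the environment and a property proven of the module under that assumption. The FIFO interface and signal names are hypothetical and not taken from the thesis.

```systemverilog
// Illustrative sketch: environment constraint (assume) vs. module property (assert)
// for formal module-level verification. Typically bound to the DuV via `bind`.
module fifo_props (
  input logic clk,
  input logic rst_n,
  input logic push,       // hypothetical: write request from the neighbouring module
  input logic full,       // hypothetical: FIFO full flag
  input logic overflow    // hypothetical: FIFO overflow flag
);
  // Environment constraint, assumed true of the surrounding module:
  // it never pushes into a full FIFO.
  env_no_push_when_full : assume property (
    @(posedge clk) disable iff (!rst_n) full |-> !push
  );

  // Module property checked under that constraint:
  // with a well-behaved environment the FIFO never overflows.
  a_no_overflow : assert property (
    @(posedge clk) disable iff (!rst_n) !overflow
  );
endmodule
```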

    Synthèse automatique de circuits numériques à partir de spécifications temporelles (Automatic synthesis of digital circuits from temporal specifications)

    The work presented in this thesis aims at automatically prototyping communication and control designs from declarative temporal specifications. From a set of PSL properties, we automatically produce a synthesizable RTL design. The proposed method is modular, in contrast to previously published methods that were based on automata theory. From each property we produce a component, the reactant, that observes some operands and generates waveforms for the other operands. First, a library of primitive reactants is provided for the FL and SERE operators. To this end, a dependency relation is defined for each operator that expresses the dependency among its operands using the operator's semantics. The dependency relation of each operator is then interpreted as a hardware component that implements the operator: the operator's primitive reactant. Using this formalization, a method is proposed to automatically decide which signals of a property are observed and which are generated. In cases where the signal direction cannot be determined, a solver is implemented to identify the signal value. The solver is also used to determine the value of a signal that is generated by several properties. The final circuit is the interconnection of the properties' reactants and solvers. A prototype tool, SyntHorus2, which is an extension of HORUS, has been developed. It takes PSL properties as its inputs and generates the synthesizable VHDL code of the circuit. In addition, it generates complementary properties to verify whether the set of specifications is coherent and complete. The method is efficient and synthesizes control circuits in a few seconds. Results obtained on classical benchmarks show that our technique compiles properties more efficiently than previous prototype tools.

    Guarded atomic actions and refinement in a system-on-chip development flow: bridging the specification gap with Event-B

    Modern System-on-Chip (SoC) hardware design puts considerable pressure on existing design and verification flows, languages and tools. The Register Transfer Level (RTL) description, which forms the input for synchronous, logic-synthesis-driven design, is at too low a level of abstraction for efficient architectural exploration and re-use. The existing methods for taking a high-level paper specification and refining it to an implementation that meets its performance criteria are largely manual and error-prone, and as RTL descriptions get larger, a systematic design method is necessary to address explicitly the timing issues that arise when applying logic synthesis to such large blocks. Guarded Atomic Actions have been shown to offer a convenient notation for describing microarchitectures that is amenable to formal reasoning and high-level synthesis. Event-B is a language and method that supports the development of specifications with automatic proof and refinement, based on guarded atomic actions. Latency-insensitive design ensures that a design composed of functionally correct components will be independent of communication latency. A method has been developed which uses Event-B for latency-insensitive SoC component and sub-system design and which can be combined with high-level component synthesis to enable architectural exploration and re-use at the specification level and to close the specification gap in the SoC hardware flow.