Combining high performance simulation, data acquisition, and graphics display computers
Issues involved in the continuing development of an advanced simulation complex are discussed. This approach provides the capability to perform the majority of tests on advanced systems non-destructively. The controlled test environments can be replicated to examine the response of the systems under test to alternative treatments of the system control design, or to test the function and qualification of specific hardware. Field tests verify that the elements simulated in the laboratories are sufficient. The digital computer is hosted by a Digital Equipment Corp. MicroVAX computer, with an Aptec Computer Systems Model 24 I/O computer performing the communication function. An Applied Dynamics International AD100 performs the high-speed simulation computing, and an Evans and Sutherland PS350 performs on-line graphics display. A Scientific Computer Systems SCS40 acts as a high-performance FORTRAN program processor to support the complex, generating from FORTRAN programs the numerous large files required for the real-time processing. Four programming languages are involved in the process: FORTRAN, ADSIM, ADRIO, and STAPLE. FORTRAN is employed on the MicroVAX host to initialize and terminate the simulation runs on the system; the generation of the data files on the SCS40 is also performed with FORTRAN programs. ADSIM and ADRIO are used to program the processing elements of the AD100 and its IOCP processor. STAPLE is used to program the Aptec DIP and DIA processors.
HDL slicing for verification and test
The semiconductor industry has been increasingly relying on computer-aided design (CAD) tools in order to meet its demand for high performance and stringent time-to-market requirements. However, practical application of state-of-the-art CAD tools is severely limited by the sheer size of modern designs. Therefore,
an appropriate methodology that exploits the inherent modular structure within
complex designs is desired. This dissertation proposes such a methodology that
is useful with a variety of CAD tools in design verification and manufacturing test
generation.
Functional test generation using sequential automatic test pattern generation
(ATPG) tools is extremely computation intensive and produces acceptable results
only on relatively small designs. Therefore, hierarchical approaches are necessary
to reduce the ATPG complexity. A promising approach was previously proposed
in which individual modules in a design are targeted one at a time, using an ad-hoc
abstraction for the remainder of the design derived from its register-transfer level (RTL) model. Building on this approach, an elegant and systematic technique based
on "program slicing" is developed that scales to large designs.
The theoretical basis for applying program slicing on hardware description languages (HDLs) is established, and a tool called FACTOR has been implemented to
automate the approach for test generation and testability analysis.
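To make the slicing idea concrete, here is a minimal backward-slicing sketch in Python. It is a toy over straight-line code, not FACTOR's actual HDL dependence-graph algorithm; the function name and the example program are invented for illustration.

```python
# Minimal backward-slicing sketch (illustrative only; FACTOR's actual
# algorithm operates on HDL dependence graphs and handles control flow).
# Each statement is a (defs, uses) pair of variable sets, in program order.

def backward_slice(stmts, criterion):
    """Return indices of statements that may affect `criterion`."""
    relevant = {criterion}          # variables whose values still matter
    in_slice = set()
    # Walk statements in reverse: a statement enters the slice if it
    # defines a currently-relevant variable; its uses then become relevant.
    for i in range(len(stmts) - 1, -1, -1):
        defs, uses = stmts[i]
        if defs & relevant:
            in_slice.add(i)
            relevant |= uses
    return in_slice

# Tiny straight-line example:
#   0: a = input()     2: c = a + 1     4: out = c
#   1: b = input()     3: d = b * 2
program = [({"a"}, set()), ({"b"}, set()),
           ({"c"}, {"a"}), ({"d"}, {"b"}),
           ({"out"}, {"c"})]
print(sorted(backward_slice(program, "out")))  # -> [0, 2, 4]
```

Statements 1 and 3 contribute nothing to `out`, so a tool that feeds only the slice {0, 2, 4} to an ATPG or verification engine works on a strictly smaller problem.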
Design verification requires exploring the complete design space to ensure
the correctness of the design. A proof-by-contradiction approach called bounded
model checking (BMC) has been proposed, which utilizes satisfiability (SAT) capabilities to find counterexamples for temporal properties within a specified number of
time steps. The proposed scheme harnesses sequential-ATPG tools to exploit the
structural information of a hardware design, performing BMC more efficiently.
This approach has been further augmented by the HDL slicing methodology for test
generation, to accelerate the verification methodology.
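The core BMC loop described above can be sketched as follows. This toy enumerates states explicitly instead of encoding the unrolled transition relation into SAT, and the counter example is invented; it only illustrates the bounded search for a counterexample.

```python
# Toy bounded model checking sketch (real BMC encodes the k-step unrolling
# as a SAT instance; this version simply enumerates reachable states).
# Example system: a mod-4 counter that is supposed never to reach state 3.

def bmc(init, trans, bad, k):
    """Search for a path of length <= k from `init` to a `bad` state."""
    frontier = [(s, [s]) for s in init]
    for _ in range(k):
        next_frontier = []
        for state, path in frontier:
            if bad(state):
                return path                 # counterexample trace found
            for nxt in trans(state):
                next_frontier.append((nxt, path + [nxt]))
        frontier = next_frontier
    for state, path in frontier:            # check the final frontier too
        if bad(state):
            return path
    return None                             # no violation within the bound

trace = bmc(init={0},
            trans=lambda s: {(s + 1) % 4},  # counter increments mod 4
            bad=lambda s: s == 3,           # property: never reach state 3
            k=5)
print(trace)  # -> [0, 1, 2, 3]: the property fails within the bound
```

A SAT-based implementation would instead conjoin k copies of the transition relation with the negated property and hand the formula to a solver, which is where structural ATPG information can prune the search.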
Symbolic simulation uses symbols rather than actual values for simulating
a hardware design, so that the responses to a class of values can be computed and
checked for correctness in a single run. The effectiveness of this approach has been
incorporated into a powerful verification methodology, called symbolic trajectory
evaluation (STE), to verify properties of bounded state sequences, intermixed with
properties of invariant behavior. Assertions are described in a limited form of temporal logic and are symbolically validated against the design under verification. The
HDL slicing tool, FACTOR, has been applied to speed up the verification of the floating-point adder-subtractor unit of the Pentium 4 design in Intel's
Forte verification framework.
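The "one run covers a class of values" idea behind symbolic simulation can be illustrated with a toy in Python. Gates here build symbolic expression trees over named inputs, which are then checked against the arithmetic spec of a full adder; real STE additionally uses a ternary lattice and trajectory assertions, which this sketch does not model.

```python
# Symbolic-simulation sketch (illustrative toy; STE proper works over a
# ternary lattice with trajectory assertions, not modeled here).
# Gates operate on symbolic expressions instead of concrete bits, so a
# single "simulation" builds a result valid for every input assignment.
from itertools import product

def xor(a, b):  return ("xor", a, b)
def and_(a, b): return ("and", a, b)
def or_(a, b):  return ("or", a, b)

def evaluate(expr, env):
    """Evaluate a symbolic expression under a concrete assignment."""
    if isinstance(expr, str):               # a symbolic input variable
        return env[expr]
    op, a, b = expr
    x, y = evaluate(a, env), evaluate(b, env)
    return {"xor": x ^ y, "and": x & y, "or": x | y}[op]

# One symbolic run of a full adder on symbols a, b, cin:
a, b, cin = "a", "b", "cin"
sum_expr  = xor(xor(a, b), cin)
cout_expr = or_(and_(a, b), and_(cin, xor(a, b)))

# Check the symbolic result against the arithmetic spec for all 8 cases.
for va, vb, vc in product((0, 1), repeat=3):
    env = {"a": va, "b": vb, "cin": vc}
    s, c = evaluate(sum_expr, env), evaluate(cout_expr, env)
    assert 2 * c + s == va + vb + vc
print("full adder verified for all input assignments")
```

The expressions `sum_expr` and `cout_expr` were built once; the final loop merely spells out the class of concrete values that the single symbolic run already covers.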
TriCheck: Memory Model Verification at the Trisection of Software, Hardware, and ISA
Memory consistency models (MCMs), which govern inter-module interactions in a
shared-memory system, are a significant, yet often under-appreciated, aspect of
system design. MCMs are defined at the various layers of the hardware-software
stack, requiring thoroughly verified specifications, compilers, and
implementations at the interfaces between layers. Current verification
techniques evaluate segments of the system stack in isolation, such as proving
compiler mappings from a high-level language (HLL) to an ISA or proving
validity of a microarchitectural implementation of an ISA.
This paper makes a case for full-stack MCM verification and provides a
toolflow, TriCheck, capable of verifying that the HLL, compiler, ISA, and
implementation collectively uphold MCM requirements. The work showcases
TriCheck's ability to evaluate a proposed ISA MCM in order to ensure that each
layer and each mapping is correct and complete. Specifically, we apply TriCheck
to the open source RISC-V ISA, seeking to verify accurate, efficient, and legal
compilations from C11. We uncover under-specifications and potential
inefficiencies in the current RISC-V ISA documentation and identify possible
solutions for each. As an example, we find that a RISC-V-compliant
microarchitecture allows 144 outcomes forbidden by C11 to be observed out of
1,701 litmus tests examined. Overall, this paper demonstrates the necessity of
full-stack verification for detecting MCM-related bugs in the hardware-software
stack.

Comment: Proceedings of the Twenty-Second International Conference on
Architectural Support for Programming Languages and Operating Systems
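The litmus tests the paper counts outcomes over can be illustrated with the classic message-passing pattern. The sketch below exhaustively enumerates interleavings under sequential consistency (an assumed toy model; TriCheck itself composes axiomatic models across the HLL, ISA, and microarchitecture, which is far beyond this snippet).

```python
# Message-passing litmus test under sequential consistency (toy model).
# Thread 0 publishes data then a flag; Thread 1 reads the flag, then the
# data. Under SC the outcome r1=1, r2=0 is forbidden; weaker models (and
# buggy ISA/compiler mappings) may expose it.
from itertools import count

# Thread 0: x = 1; flag = 1          Thread 1: r1 = flag; r2 = x
T0 = [("store", "x", 1), ("store", "flag", 1)]
T1 = [("load", "flag", "r1"), ("load", "x", "r2")]

def interleavings(a, b):
    """Yield every interleaving of a and b preserving per-thread order."""
    if not a:
        yield list(b); return
    if not b:
        yield list(a); return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

outcomes = set()
for order in interleavings(T0, T1):
    mem, regs = {"x": 0, "flag": 0}, {}
    for op, loc, val in order:
        if op == "store":
            mem[loc] = val
        else:
            regs[val] = mem[loc]
    outcomes.add((regs["r1"], regs["r2"]))

print(sorted(outcomes))  # (1, 0) never appears under SC
```

A full-stack checker compares the outcome set an implementation can actually produce against the set the language-level model (e.g. C11) permits for the compiled test; any extra observable outcome, like (1, 0) here, is a bug in some layer or mapping.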
Applying Formal Methods to Networking: Theory, Techniques and Applications
Despite its great importance, modern network infrastructure is remarkable for
the lack of rigor in its engineering. The Internet, which began as a research
experiment, was never designed to handle the users and applications it hosts
today. The lack of formalization of the Internet architecture meant limited
abstractions and modularity, especially for the control and management planes,
requiring a new protocol to be built from scratch for every new need. This led
to an unwieldy ossified Internet architecture resistant to any attempts at
formal verification, and an Internet culture where expediency and pragmatism
are favored over formal correctness. Fortunately, recent work in the space of
clean slate Internet design---especially, the software defined networking (SDN)
paradigm---offers the Internet community another chance to develop the right
kind of architecture and abstractions. This has also led to a great resurgence
in interest of applying formal methods to specification, verification, and
synthesis of networking protocols and applications. In this paper, we present a
self-contained tutorial of the formidable body of work that has been done in
formal methods, and present a survey of its applications to networking.

Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials
Digital signal processing: the impact of convergence on education, society and design flow
Design and development of real-time, memory- and processor-hungry digital signal processing systems has for decades been accomplished on general-purpose microprocessors. Increasing needs for high-performance DSP systems made these microprocessors unattractive for such implementations. Various attempts to improve the performance of these systems resulted in the use of dedicated digital signal processing devices like DSP processors and the former heavyweight champion of electronics design, Application-Specific Integrated Circuits.
The advent of RAM-based Field Programmable Gate Arrays has changed the DSP design flow. Software algorithmic designers can now take their DSP algorithms right from inception to hardware implementation, thanks to the increasing availability of software/hardware design flows and hardware/software co-design. This has led to a demand in the industry for graduates with good skills in both Electrical Engineering and Computer Science. This paper evaluates the impact of technology on DSP-based designs and hardware design languages, and how graduate and undergraduate courses have changed to suit this transition.
Trojans in Early Design Steps: An Emerging Threat
Hardware Trojans inserted by malicious foundries
during integrated circuit manufacturing have received substantial
attention in recent years. In this paper, we focus on a different
type of hardware Trojan threat: attacks in the early steps of the
design process. We show that third-party intellectual property
cores and CAD tools constitute realistic attack surfaces and that
even system specification can be targeted by adversaries. We
discuss the devastating damage potential of such attacks, the
applicable countermeasures against them and their deficiencies
…