Testability considerations for implementing an embedded memory subsystem
There are a number of testability considerations in VLSI design, but test coverage, test time, accuracy of test patterns, and correctness of design information for DfD (design for debug) are the most important ones in designs with embedded memories. The goal of DFT (design for test) is to achieve zero defects. For the memory subsystem in SoCs (systems on chips), many flavors of memory BIST (built-in self-test) can achieve high test coverage within a memory, but often no proper attention is given to the memory interface logic (shadow logic). Functional testing and BIST are the most prevalent tests for this logic, but functional testing is impractical for complicated SoC designs. As a result, industry has widely adopted at-speed scan testing to detect delay-induced defects. Compared with functional testing, scan-based testing for delay faults reduces overall pattern generation complexity and cost by enhancing both the controllability and observability of flip-flops. However, without proper modeling of memory, unknown values (Xs) are generated from memories. Moreover, when the design has on-chip test compression logic, the number of ATPG patterns increases significantly due to Xs from memories. In this dissertation, a register-based testing method and X-prevention logic are presented to tackle these problems.
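As a rough illustration of why memory Xs are so damaging under test compression (a hypothetical three-valued sketch, not the dissertation's X-prevention circuit), the snippet below shows a single X from an unmodeled memory corrupting an XOR compactor's output, while a forced known value keeps the response observable:

```python
# Three-valued logic values: '0', '1', 'X' (unknown).
def xor3(a, b):
    """XOR in three-valued logic: any X input makes the result X."""
    if a == 'X' or b == 'X':
        return 'X'
    return '1' if a != b else '0'

def compact(chain_outputs):
    """XOR-tree compactor: folds all scan-chain outputs into one bit."""
    sig = '0'
    for bit in chain_outputs:
        sig = xor3(sig, bit)
    return sig

# Four internal scan chains; chain 2 captures from memory-interface logic.
mem_output_unmodeled = 'X'   # memory not modeled for ATPG -> unknown value
mem_output_bypassed  = '0'   # X-prevention logic forces a known value in test mode

print(compact(['1', '0', mem_output_unmodeled, '1']))  # -> 'X' (response lost)
print(compact(['1', '0', mem_output_bypassed,  '1']))  # -> '0' (observable)
```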
An important design stage for scan-based testing with memory subsystems is the step that creates a gate-level model and verifies the design against it. The flow needs to provide a robust ATPG netlist model, since most industry-standard CAD tools used to analyze fault coverage and generate test vectors require gate-level models. However, because custom embedded memories are typically designed using a transistor-level flow, an abstraction step is needed to generate the gate-level models, which must be equivalent to the actual (transistor-level) design. The contribution of this research is a framework to verify that the gate-level representation of a custom design is equivalent to its transistor-level design.
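A minimal sketch of the underlying idea (with assumed black-box interfaces; the dissertation's framework is not described at this level of detail): treat both models as functions and compare their outputs over the input space, which is only feasible exhaustively for small blocks but conveys what "equivalent" means here:

```python
from itertools import product

def equivalent(gate_model, transistor_model, n_inputs):
    """Exhaustively compare two combinational models over all input vectors.

    gate_model / transistor_model: callables mapping a tuple of bits to a
    tuple of output bits (hypothetical stand-ins for simulator interfaces).
    """
    for vec in product((0, 1), repeat=n_inputs):
        if gate_model(vec) != transistor_model(vec):
            return False, vec   # counterexample input vector
    return True, None

# Toy example: a 2-input AND modeled two ways.
gate = lambda v: (v[0] & v[1],)
transistor = lambda v: (int(v[0] == 1 and v[1] == 1),)
print(equivalent(gate, transistor, 2))   # -> (True, None)
```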
Compared to basic stuck-at fault testing, at-speed testing requires a much larger number of patterns, so reducing test time and data volume is important. In this dissertation, a new scan reordering method is introduced to reduce test data while providing an optimal routing solution. Drawing on the in-depth understanding of embedded memories and flows developed during the study of custom memory DFT, a custom embedded memory bit-mapping method using a symbolic simulator is presented in the last chapter to achieve high yield for memories.
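As an illustration of the kind of optimization scan reordering performs (a hypothetical nearest-neighbor heuristic, not the dissertation's algorithm), the sketch below orders scan cells by physical proximity so that stitching the chain adds little routing:

```python
import math

def reorder_scan_chain(cells):
    """Greedy nearest-neighbor ordering of scan cells by placement.

    cells: dict mapping cell name -> (x, y) placement coordinates.
    Returns a stitching order that keeps consecutive cells close,
    reducing scan-routing wirelength (a common proxy objective).
    """
    remaining = dict(cells)
    name, pos = min(remaining.items())          # deterministic start cell
    order = [name]
    del remaining[name]
    while remaining:
        name, pos = min(remaining.items(),
                        key=lambda kv: math.dist(pos, kv[1]))
        order.append(name)
        del remaining[name]
    return order

cells = {'ff_a': (0, 0), 'ff_b': (9, 9), 'ff_c': (1, 0), 'ff_d': (8, 9)}
print(reorder_scan_chain(cells))   # -> ['ff_a', 'ff_c', 'ff_d', 'ff_b']
```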
The Design of a Custom 32-bit RISC CPU and LLVM Compiler Backend
Compiler infrastructures are often an area of high interest for research. As the demand for digital information and technology increases, so does the need for higher-performance digital hardware. The main component in most complex digital systems is the central processing unit (CPU). Compilers are responsible for translating code written in a high-level programming language into a sequence of instructions that is then executed by the CPU. Most compiler research focuses on the design and optimization of the code written by the programmer; however, at some point in this process the code must be converted into instructions specific to the CPU. This paper presents the design of a simplified CPU architecture as well as the less understood side of compilers: the backend, which is responsible for CPU instruction generation. The CPU design is a 32-bit reduced instruction set computer (RISC) and is written in Verilog. Unlike most embedded-style RISC architectures, which have a compiler port for GCC (the GNU Compiler Collection), this compiler backend was written for the LLVM compiler infrastructure project. Code generated by the LLVM backend is successfully simulated on the custom CPU with Cadence Incisive, and the CPU is synthesized using Synopsys Design Compiler.
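To give a flavor of what a backend's instruction selection does (a toy Python sketch with invented mnemonics and IR form, not the paper's LLVM TableGen-based backend), the fragment below lowers a tiny three-address IR into RISC-style assembly:

```python
# Toy lowering of three-address IR tuples into RISC-style assembly.
# Opcodes, register names, and IR form are all invented for illustration.
OPCODE_MAP = {'add': 'ADD', 'sub': 'SUB', 'mul': 'MUL'}

def select_instructions(ir):
    """Map (op, dest, src1, src2) IR tuples to assembly strings."""
    asm = []
    for op, dest, a, b in ir:
        if op in OPCODE_MAP:                      # register-register ALU op
            asm.append(f"{OPCODE_MAP[op]} {dest}, {a}, {b}")
        elif op == 'loadi':                       # load immediate
            asm.append(f"LI {dest}, {a}")
        else:
            raise ValueError(f"no pattern for IR op {op!r}")
    return asm

ir = [('loadi', 'r1', 5, None), ('loadi', 'r2', 7, None),
      ('add', 'r3', 'r1', 'r2')]
print('\n'.join(select_instructions(ir)))
```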
Focal Spot, Fall/Winter 2002/2003
Behavioral synthesis from VHDL using structured modeling
This dissertation describes work in behavioral synthesis involving the development of a VHDL Synthesis System (VSS), which accepts a VHDL behavioral input specification and performs technology-independent synthesis to generate a circuit netlist of generic components. The VHDL language is used for both input and output descriptions. An intermediate representation that incorporates signal typing and component attributes simplifies compilation and facilitates design optimization. A Structured Modeling methodology has been developed to suggest standard VHDL modeling practices for synthesis. Structured Modeling provides recommendations for the use of available VHDL description styles so that optimal designs will be synthesized. A design composed of generic components is synthesized from the input description through a process of graph compilation, graph criticism, and design compilation. Experiments were performed to demonstrate the effects of different modeling styles on the quality of the design produced by VSS. Several alternative VHDL models were examined for each benchmark, illustrating the improvements in design quality achieved when Structured Modeling guidelines were followed.
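A rough sketch of the graph-compilation idea (hypothetical data structures; VSS internals are not specified in the abstract): parse a behavioral arithmetic assignment into a dataflow graph whose operator nodes become generic components in the netlist:

```python
import ast

# Map Python AST operator types to hypothetical generic component names.
COMPONENT = {ast.Add: 'ADDER', ast.Sub: 'SUBTRACTOR', ast.Mult: 'MULTIPLIER'}

def compile_to_netlist(expr):
    """Compile an arithmetic expression into a list of generic components.

    Returns (netlist, output_net): each component is (kind, in1, in2, out).
    Nets are numbered n0, n1, ... in creation order.
    """
    netlist, counter = [], [0]

    def walk(node):
        if isinstance(node, ast.Name):            # primary input signal
            return node.id
        if isinstance(node, ast.BinOp):
            a, b = walk(node.left), walk(node.right)
            out = f"n{counter[0]}"
            counter[0] += 1
            netlist.append((COMPONENT[type(node.op)], a, b, out))
            return out
        raise ValueError("unsupported construct")

    out = walk(ast.parse(expr, mode='eval').body)
    return netlist, out

print(compile_to_netlist("a + b * c"))
# -> ([('MULTIPLIER', 'b', 'c', 'n0'), ('ADDER', 'a', 'n0', 'n1')], 'n1')
```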
High-level synthesis design of scalable ultrafast ultrasound beamformer with single FPGA
Ultrafast ultrasound imaging is essential for advanced ultrasound imaging techniques such as ultrasound localization microscopy (ULM) and functional ultrasound (fUS). Current ultrafast ultrasound imaging is challenged by the ultrahigh data bandwidth associated with the radio frequency (RF) signal and by the latency of the computationally expensive beamforming process. As such, continuous ultrafast data acquisition and beamforming remain elusive with existing software beamformers based on CPUs or GPUs. To address these challenges, the proposed work introduces a novel method of implementing an ultrafast ultrasound beamformer specifically for ultrafast plane wave imaging (PWI) on a field-programmable gate array (FPGA) by using high-level synthesis. A parallelized implementation of the beamformer on a single FPGA is proposed, achieved by 1) utilizing a delay compression technique to reduce the delay profile size, which enables both run-time loading of pre-calculated delay profiles from external memory and delay reuse, 2) vectorizing channel data fetching, which is enabled by delay reuse, and 3) using fixed summing networks to reduce consumption of logic resources. The proposed method presents two unique advantages over current FPGA beamformers: 1) high scalability, allowing fast adaptation to different FPGA resources and beamforming speed demands by using Xilinx High-Level Synthesis as the development tool, and 2) a compact form-factor design, achieved by using a single FPGA to complete the beamforming instead of multiple FPGAs. With the proposed method, a sustained average beamforming rate of 4.83 Gsamples/second of input raw RF samples was achieved. The resulting image quality of the proposed beamformer was compared with that of the software beamformer on the Verasonics Vantage system for both phantom imaging and in vivo imaging of a mouse brain.
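For context on what the beamformer computes, here is a minimal NumPy delay-and-sum sketch for a 0-degree plane-wave transmit (not the FPGA implementation described above; the array geometry and constants are illustrative):

```python
import numpy as np

def das_beamform(rf, fs, c, elem_x, px, pz):
    """Minimal delay-and-sum for one pixel, 0-degree plane-wave transmit.

    rf:      (n_channels, n_samples) raw RF data
    fs:      sampling rate [Hz];  c: speed of sound [m/s]
    elem_x:  (n_channels,) element x positions [m]
    px, pz:  pixel coordinates [m]
    """
    # Transmit path: the plane wave reaches depth pz after pz / c seconds.
    # Receive path: echo travels from (px, pz) back to each element.
    rx_dist = np.sqrt((elem_x - px) ** 2 + pz ** 2)
    delays = (pz + rx_dist) / c                     # seconds, per channel
    idx = np.clip((delays * fs).astype(int), 0, rf.shape[1] - 1)
    return rf[np.arange(rf.shape[0]), idx].sum()    # coherent sum

# Toy usage with random data and a 64-element, 0.3 mm pitch array.
rng = np.random.default_rng(0)
rf = rng.standard_normal((64, 2048))
elem_x = (np.arange(64) - 31.5) * 0.3e-3
print(das_beamform(rf, fs=25e6, c=1540.0, elem_x=elem_x, px=0.0, pz=0.02))
```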
VirtualScan: a new compressed scan technology for test cost reduction
This work describes the VirtualScan technology for scan test cost reduction. Scan chains in a VirtualScan circuit are split into shorter ones, and the gap between external scan ports and internal scan chains is bridged with a broadcaster and a compactor. Test patterns for a VirtualScan circuit are generated directly by one-pass VirtualScan ATPG, in which multi-capture clocking and maximum test compaction are supported. In addition, VirtualScan ATPG avoids unknown-value and aliasing effects algorithmically, without adding any additional circuitry. The VirtualScan technology has achieved successful tape-outs of industrial chips and has been proven to be an efficient and easy-to-implement solution for scan test cost reduction.
2004 International Conference on Test, 26-28 October 2004, Charlotte, NC, USA
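A schematic sketch of the broadcast-and-compact idea in Python (a generic illustration of compressed scan architectures, not the actual VirtualScan circuitry): a few external scan inputs fan out to many short internal chains, and an XOR compactor folds the chain outputs back down to a few pins:

```python
def broadcast(external_bits, n_chains):
    """Fan a few external scan-in bits out to many internal chains.

    Hypothetical mapping: chain i is driven by external pin i % n_ext.
    """
    n_ext = len(external_bits)
    return [external_bits[i % n_ext] for i in range(n_chains)]

def compact(chain_out_bits, n_ext):
    """XOR-compact many internal chain outputs down to a few pins."""
    outs = [0] * n_ext
    for i, bit in enumerate(chain_out_bits):
        outs[i % n_ext] ^= bit
    return outs

# 2 external scan ports driving 8 short internal chains.
print(broadcast([1, 0], n_chains=8))                 # [1, 0, 1, 0, 1, 0, 1, 0]
print(compact([1, 1, 0, 0, 1, 0, 1, 1], n_ext=2))    # [1, 0]
```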
Statistical methods for rapid system evaluation under transient and permanent faults
Traditional solutions for test and reliability do not scale well for modern designs, whose size and complexity increase with every technology generation. Therefore, in order to meet time-to-market requirements as well as acceptable product quality, it is imperative that new methodologies be developed for quickly evaluating a system in the presence of faults. In this research, statistical methods have been employed and implemented to 1) estimate the stuck-at fault coverage of a test sequence and evaluate the given test vector set without the need for complete fault simulation, and 2) analyze design vulnerabilities in the presence of radiation-based (soft) errors. Experimental results show that these statistical techniques can evaluate a system under test orders of magnitude faster than state-of-the-art methods, with a small margin of error. In this dissertation, I have introduced novel methodologies that utilize information from fault-free simulation and partial fault simulation to predict the fault coverage of a long sequence of test vectors for a design under test. These methodologies are practical for functional testing of complex designs under long test sequences, a challenging problem for which industry is currently seeking efficient solutions. The last part of this dissertation discusses a statistical methodology for a detailed vulnerability analysis of systems under soft errors. This methodology works orders of magnitude faster than traditional fault injection. In addition, it is shown that the vulnerability factors calculated by this method are closer to those from complete fault injection (the ideal way of performing soft error vulnerability analysis) than those from statistical fault injection. Performing such a fast soft error vulnerability analysis is crucial for companies that design and build safety-critical systems.
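To make the statistical-estimation idea concrete (a textbook fault-sampling sketch, not the dissertation's specific methodology), the snippet below estimates stuck-at fault coverage by fault-simulating only a random sample of the fault list and reports a normal-approximation confidence interval:

```python
import math
import random

def estimate_coverage(fault_list, detects, sample_size, z=1.96):
    """Estimate fault coverage from a random sample of faults.

    fault_list:  all faults in the design
    detects:     callable fault -> bool, True if the pattern set detects it
                 (stand-in for fault-simulating a single fault)
    Returns (estimate, half_width) for a ~95% confidence interval.
    """
    sample = random.sample(fault_list, sample_size)
    hits = sum(1 for f in sample if detects(f))
    p = hits / sample_size
    half = z * math.sqrt(p * (1 - p) / sample_size)
    return p, half

# Toy population: 100,000 faults, 92% of which the patterns detect.
faults = list(range(100_000))
detects = lambda f: (f % 100) < 92
est, hw = estimate_coverage(faults, detects, sample_size=2_000)
print(f"coverage ~ {est:.3f} +/- {hw:.3f}")
```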
Automated Exploration of the ASIC Design Space for Minimum Power-Delay-Area Product at the Register Transfer Level
Exploring the integrated circuit design space for the minimum power-delay-area (PDA) product can be time-consuming and tedious, especially when the target standard-cell library has hundreds of options. In this dissertation, heuristic algorithms that automate this process have been developed, implemented, and validated at the register transfer level. In some cases, the PDA product was 1.9 times better than the initial baseline solution. The parallel search algorithm exhibited a 9x speedup when executed on 10 machines simultaneously. These two new methods also characterize the design space for the given RTL code by generating power-delay-area points in addition to the minimum-PDA point, in case the designer wishes to select a different solution that trades off among these metrics. As a final step, these two search algorithms are integrated into a fully automated ASIC design flow.
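As a sketch of what such a heuristic search might look like (an invented greedy coordinate-descent loop over synthesis knobs with a toy cost model; the dissertation's algorithms are not specified in the abstract):

```python
def minimize_pda(knobs, evaluate):
    """Greedy coordinate descent over discrete synthesis knobs.

    knobs:    dict name -> list of candidate settings
              (e.g. target clock period, Vt mix)
    evaluate: callable settings-dict -> (power, delay, area),
              a stand-in for running synthesis and reading reports.
    """
    pda = lambda r: r[0] * r[1] * r[2]
    best = {k: v[0] for k, v in knobs.items()}      # baseline settings
    score = pda(evaluate(best))
    improved = True
    while improved:                                  # sweep until no gain
        improved = False
        for name, options in knobs.items():
            for opt in options:
                trial = dict(best, **{name: opt})
                s = pda(evaluate(trial))
                if s < score:
                    best, score, improved = trial, s, True
    return best, score

knobs = {'clock_ns': [2.0, 1.5, 1.0], 'vt_mix': ['hvt', 'svt', 'lvt']}
# Toy cost model standing in for synthesis runs.
cost = lambda s: (1 / s['clock_ns']
                  + {'hvt': 0.5, 'svt': 1.0, 'lvt': 2.0}[s['vt_mix']],
                  s['clock_ns'],
                  3.0 - {'hvt': 0.1, 'svt': 0.3, 'lvt': 0.5}[s['vt_mix']])
print(minimize_pda(knobs, cost))
```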
Design load in product development and its reduction: a study on streamlining and improving the design process
Degree system: new; Ministry of Education report number: Ko 2267; Degree type: Doctor of Philosophy (Academic); Date conferred: 2006/9/15; Waseda University diploma number: Shin 429