    Test Slice Difference Technique for Low-Transition Test Data Compression


    An innovative two-stage data compression scheme using adaptive block merging technique

    Test data volume has increased enormously owing to the rising on-chip complexity of integrated circuits, which in turn increases test data transportation time and tester memory requirements. Non-correlated test bits also aggravate test power. This paper presents a two-stage block-merging-based test data minimization scheme which reduces test bits, test time and test power. The test data is partitioned into blocks of fixed size which are compressed using a two-stage encoding technique. In stage one, successive compatible blocks are merged to retain a representative block. In stage two, the retained pattern block is further encoded based on the existence of ten different subcases between the sub-blocks formed by splitting the retained pattern block into two halves. Non-compatible blocks are also split into two sub-blocks and, where possible, encoded using fewer bits. A decompression architecture to retrieve the original test data is presented. Simulation results obtained for different ISCAS'89 benchmark circuits reflect the scheme's effectiveness in achieving better compression
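
    As an illustration of the stage-one merging idea, here is a minimal Python sketch; it is an assumption-laden reading of the abstract, not the paper's algorithm. Blocks are assumed to be strings over '0', '1' and the don't-care symbol 'x', two blocks are compatible when no position holds opposing specified bits, and runs of compatible successive blocks collapse to one representative plus a run length. The ten stage-two subcases are omitted.

        # Minimal sketch of stage-one block merging for test data compression.
        # Assumptions (not from the paper): test cubes use '0', '1', 'x' (don't-care);
        # two blocks are compatible if no position holds opposing specified bits.

        def compatible(a: str, b: str) -> bool:
            """True if blocks a and b conflict on no specified bit."""
            return all(p == q or p == 'x' or q == 'x' for p, q in zip(a, b))

        def merge(a: str, b: str) -> str:
            """Representative block: specified bits win over don't-cares."""
            return ''.join(q if p == 'x' else p for p, q in zip(a, b))

        def merge_successive(blocks):
            """Greedily merge runs of compatible successive blocks.

            Returns (representative, run_length) pairs; a decompressor
            replays each representative run_length times.
            """
            out = []
            rep, run = blocks[0], 1
            for blk in blocks[1:]:
                if compatible(rep, blk):
                    rep, run = merge(rep, blk), run + 1
                else:
                    out.append((rep, run))
                    rep, run = blk, 1
            out.append((rep, run))
            return out

        # Example: four 8-bit blocks collapse to two representatives.
        data = ["1x0xxx01", "1x0x1x01", "0x11xxx0", "xx11x1x0"]
        print(merge_successive(data))
        # [('1x0x1x01', 2), ('0x11x1x0', 2)]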

    Test Stimuli Segmentation and Coding Method

    Test vector coding and data transmission are key technologies in the design-for-test of digital integrated circuits (ICs). Existing parallel input methods for test stimuli can reduce test application time; however, they occupy multiple input ports. Thus, a novel method of test stimuli coding and data transmission was proposed to reduce both the test application time of the test vectors and the number of input ports required for the parallel input of test stimuli. The method is based on the segmentation of test stimuli. First, the test stimuli were evenly divided into eight-bit segments. Second, the eight-bit data of each segment were encoded into five-bit data according to the compatibility between the test data of the segments. An eight-bit test stimulus input can then be completed in one or two clock cycles of the automatic test equipment (ATE) using five input ports of the chip. A corresponding decoding circuit was added inside the netlist of the circuit to realize rapid input of the test stimuli. Lastly, experiments were conducted on the ISCAS'89 benchmark circuits, and the results of this coding method were compared with those of the serial input method. Results show that the proposed encoding method saves an average of 37% of the parallel input data width and 81.7% of the test stimuli input time. The proposed method can thus reduce the test application time and the cost of IC testing, and the findings can guide improvements to scan testing of digital ICs
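
    The abstract does not spell out the 8-to-5-bit code assignment, so the following Python sketch is only one plausible reading consistent with the one-or-two-cycle claim: the leading bit of each five-bit word selects a mode, '0' plus a four-bit index references one of up to 16 stored segments compatible with the current one (one ATE cycle), while '1' opens a two-cycle raw transfer of the eight original bits plus a pad bit.

        # Hypothetical sketch of the 8-to-5-bit segment coding idea; the mode
        # bit, 16-entry reference store and pad-bit layout are assumptions,
        # not the paper's actual code assignment.

        def encode_segment(seg: str, references: list) -> list:
            """Return the 5-bit ATE words (bit strings) for one 8-bit segment."""
            assert len(seg) == 8 and len(references) <= 16
            for idx, ref in enumerate(references):
                # Compatible: every specified bit agrees ('x' marks a don't-care).
                if all(a == b or 'x' in (a, b) for a, b in zip(seg, ref)):
                    return [f"0{idx:04b}"]                 # one ATE clock cycle
            padded = seg.replace('x', '0') + '0'           # 8 data bits + 1 pad bit
            return ["1" + padded[:4], padded[4:]]          # two ATE clock cycles

        refs = ["00000000", "11111111", "0101xxxx"]
        print(encode_segment("01011x0x", refs))   # ['00010'] (index 2, one cycle)
        print(encode_segment("10110010", refs))   # ['11011', '00100'] (two cycles)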

    Methods for FPGA pre-processing of data for the ALOFT readout system

    In 2017 the Airborne Lightning Observatory for FEGS & TGFs (ALOFT) campaign was completed with the goal of studying thundercloud-related high-energy phenomena, namely Terrestrial Gamma-ray Flashes (TGFs) and gamma-ray glows, and the connections between them. Observations were made from a high-altitude airplane flying over thunderclouds. The campaign observed two gamma-ray glows but no TGFs. A new ALOFT campaign has been confirmed for 2023 and will include several upgrades and improvements, among them two new detectors that will be added to the instrument and a new readout system based around a ZYNQ-7000 series System on Chip (SoC). My contribution to this development has been twofold: 1. Verify that the integrated ZYNQ Serial Peripheral Interface (SPI) controller can be used to interface and configure the analog-to-digital converters (ADCs) in the new detectors. This involves a) making a converter between the four-wire SPI used by the ZYNQ and the three-wire SPI used by the ADCs, b) modelling the behaviour of the ADCs' control-register communication, and c) verifying that the ZYNQ SPI controller can transmit the protocols required to configure the ADCs and that it can access the FPGA part of the ZYNQ SoC. 2. During TGFs the data output of the new detectors far exceeds the system's ability to send all the raw data to storage, requiring the data to be temporarily buffered. The buffer capacity needed to guarantee that no data is lost would consume 73% of the total shared buffer capacity available to the FPGA part of the system, leaving very little capacity for the remaining detectors and any other FPGA modules. This thesis therefore explores different approaches to reducing the buffer requirement. My work shows that: 1. The ZYNQ SPI controller can be used to configure the ADCs, by a) showing that the required converter can easily be made in the FPGA part of the ZYNQ, b) creating a model of how the ADCs are configured, and c) testing that the ZYNQ SPI controller is compatible with the protocols used to interact with and configure the ADCs. 2. After exploring both real-time analysis of the data on the SoC and compression of the raw data, the safest method is compression; with the compression schemes explored and considered, the buffer requirement can be reduced from 73% to less than 30% of the total shared capacity.
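
    The buffer-sizing trade-off can be illustrated with back-of-the-envelope Python arithmetic; all rates and capacities below are hypothetical stand-ins chosen only to reproduce the 73%-to-under-30% figures, not values from the thesis.

        # Illustrative buffer-sizing arithmetic (all numbers are hypothetical,
        # not from the thesis): during a TGF burst the detectors produce data
        # faster than the readout link drains it, so the excess accumulates in
        # the shared FPGA buffer.

        def buffer_needed(burst_rate, drain_rate, burst_seconds):
            """Bytes that accumulate while the burst outpaces the readout."""
            return max(0.0, (burst_rate - drain_rate) * burst_seconds)

        BRAM_TOTAL = 4.9e6            # assumed shared buffer capacity, bytes

        raw = buffer_needed(burst_rate=40e6, drain_rate=4e6, burst_seconds=0.1)
        print(f"raw buffering:          {raw / BRAM_TOTAL:.0%} of shared capacity")

        # A lossless compressor in front of the buffer scales the fill rate by
        # the achieved ratio; about 2.5:1 already moves ~73% usage below 30%.
        ratio = 2.5
        compressed = buffer_needed(40e6 / ratio, 4e6, 0.1)
        print(f"with 2.5:1 compression: {compressed / BRAM_TOTAL:.0%}")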

    Gbit/second lossless data compression hardware

    This thesis investigates how to improve the performance of lossless data compression hardware as a tool to reduce the cost per bit stored in a computer system or transmitted over a communication network. Lossless data compression allows the exact reconstruction of the original data after decompression. Its deployment in some high-bandwidth applications has been hampered by performance limitations in the compression hardware, which must match the throughput of the host system to avoid becoming a bottleneck. Advancing the area of lossless data compression hardware therefore offers a valid motivation, with the potential of doubling the performance of the system that incorporates it with minimum investment. This work starts by presenting an analysis of current compression methods with the objective of identifying the factors that limit performance and also the factors that increase it. [Continues.]

    Hardware Software Synthesis of a H.264 / AVC Baseline Profile Decoder

    The latest video compression standard is a joint effort between the ITU and MPEG known as H.264/AVC. As with any video compression standard, H.264/AVC uses computationally intensive algorithms to maximize performance. During decompression these algorithms must be applied in real time, processing 30 frames a second. This can be done in software, in specialized hardware, or in a combination of the two. Software solutions allow maximum portability and ease of design, but general-purpose processors (GPPs) cannot take full advantage of the parallelizable algorithms that the H.264 decoder is built upon. Specialized hardware solutions, on the other hand, allow concurrent data and instruction paths, but do not offer a high level of abstraction for cross-platform development. Recent work by Xilinx has resulted in the MicroBlaze soft processor, a stand-alone microcontroller built from FPGA logic. The MicroBlaze provides a specialized hardware medium to run software on-chip alongside VHDL entities. The goal of this thesis was to model and simulate a hardware/software hybrid H.264/AVC Baseline Profile decoder using VHDL and a soft processor. It was proposed to assign all highly sequential calculations (run-length and CAVLC decoding) and control/data flow to software and to perform the remaining calculations (prediction, inverse transform, inverse quantization, etc.) in hardware modules. The software runs on Xilinx's MicroBlaze soft processor and the hardware was designed in VHDL. A major advantage of soft processors over GPPs is that their hardware instantiations reside on-chip with the processor. The software and MicroBlaze soft processor were simulated in a test bench, and the results showed that the MicroBlaze could not handle the encoded bit-stream in real time. For this reason the hardware interface and hardware decoder were never fully implemented. The scope of the thesis covers the H.264 Baseline Profile standard, the MicroBlaze processor, the implemented software solution, and the proposed hardware counterpart
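
    The proposed partitioning follows from the data dependencies, which the following illustrative Python fragment makes concrete (it is not the thesis code): run-length decoding must walk its input serially, whereas inverse quantization touches each coefficient independently and so maps naturally onto parallel hardware datapaths.

        # Sketch of the partitioning rationale (illustrative, not the thesis
        # code): the serial step suits the MicroBlaze, the element-wise step
        # suits a hardware module.

        def run_length_decode(pairs):
            """Sequential: each output position depends on all prior runs."""
            out = []
            for value, run in pairs:          # must walk the pairs in order
                out.extend([value] * run)
            return out

        def inverse_quantize(coeffs, qstep):
            """Parallel-friendly: every coefficient is independent, so a
            hardware module can process all of them in the same cycle."""
            return [c * qstep for c in coeffs]

        levels = run_length_decode([(0, 3), (5, 1), (0, 2), (-2, 1)])
        print(inverse_quantize(levels, qstep=8))
        # [0, 0, 0, 40, 0, 0, -16]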

    Design of a digital compression technique for shuttle television

    The performance and hardware complexity of data compression algorithms applicable to color television signals were studied to assess the feasibility of digital compression techniques for shuttle communications applications. For return-link communications, it is shown that a nonadaptive two-dimensional DPCM technique compresses the bandwidth of field-sequential color TV to about 13 Mbps and requires less than 60 watts of secondary power. For forward-link communications, a facsimile coding technique is recommended which provides high-resolution slow-scan television on a 144 kbps channel. The onboard decoder requires about 19 watts of secondary power
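
    For illustration, a minimal Python sketch of nonadaptive two-dimensional DPCM; the average predictor, quantizer step and boundary handling here are assumptions, since the report's abstract does not specify them.

        # Minimal sketch of nonadaptive 2-D DPCM (illustrative): each pixel is
        # predicted from its left and upper neighbours, and only the quantized
        # prediction error is transmitted.

        def dpcm2d_encode(image, step=4):
            """image: 2-D list of ints. Returns quantized prediction errors."""
            h, w = len(image), len(image[0])
            recon = [[0] * w for _ in range(h)]    # decoder-side reconstruction
            errors = [[0] * w for _ in range(h)]
            for y in range(h):
                for x in range(w):
                    left = recon[y][x - 1] if x else 128
                    up = recon[y - 1][x] if y else 128
                    pred = (left + up) // 2        # simple 2-D average predictor
                    q = round((image[y][x] - pred) / step)
                    errors[y][x] = q               # small ints: cheap to entropy-code
                    recon[y][x] = pred + q * step  # track what the decoder sees
            return errors

        img = [[120, 124, 130], [118, 125, 133]]
        print(dpcm2d_encode(img))   # [[-2, 0, 1], [-2, 1, 2]]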

    A Hybrid voice/text electronic mail system: an application of the integrated services digital network

    The objective of this thesis is to present a useful application for the Integrated Services Digital Network (ISDN), which is expected one day to replace the analog phone system in use today. ISDN itself and its continuing evolution are detailed. The system developed as part of this thesis involved the creation of an inexpensive phone terminal that can serve as an ISDN terminal and also as a bridge to a Local Area Network (LAN). The phone terminal provides a hybrid electronic mail system that allows the attachment of speech to text within a message. Messages created with this phone terminal could theoretically be sent locally over the LAN interface and globally over ISDN to other users with either phone terminals or multimedia personal computers. For this project, the two phone terminals created were interconnected via Ethernet, with an 80486 PC acting as a Central Office System. This Central Office System provides speech and message storage for the phone terminals, using speech compression techniques to minimize storage requirements. The speech compression techniques used, as well as the field of speech coding in general, are discussed

    Capsule endoscopy system with novel imaging algorithms

    Wireless capsule endoscopy (WCE) is a state-of-the-art technology for acquiring images of the human intestine for medical diagnostics. In WCE, the patient ingests a specially designed electronic capsule with imaging and wireless transmission capabilities. As the capsule travels through the gastrointestinal (GI) tract, it captures images and sends them wirelessly to an outside data logger unit. The data logger stores the image data, which are then transferred to a personal computer (PC) where the images are reconstructed and displayed for diagnosis. The key design challenge in WCE is to reduce the area and power consumption of the capsule while maintaining acceptable image reconstruction. In this research, the unique properties of WCE images are identified by analyzing hundreds of endoscopic images and video frames, and these properties are then used to develop novel, low-complexity compression algorithms tailored for capsule endoscopy. The proposed image compressor consists of a new YEF color space converter, a lossless prediction coder, a customizable chrominance sub-sampler and an efficient Golomb-Rice encoder. The scheme has both lossy and lossless modes and is further customized to work with two lighting modes – conventional white light imaging (WLI) and emerging narrow band imaging (NBI). The average compression ratio achieved using the proposed lossy compression algorithm is 80.4% for WLI and 79.2% for NBI, with a high reconstruction quality index for both bands. Two surveys have been conducted which show that the reconstructed images have high acceptability among medical imaging doctors and gastroenterologists. The imaging algorithms have been realized in a hardware description language (HDL) and their functionality verified on a field-programmable gate array (FPGA) board. The design was later implemented in a 0.18 μm complementary metal oxide semiconductor (CMOS) technology and the chip was fabricated. Owing to the low complexity of the core compressor, it consumes only 43 µW of power and 0.032 mm² of area. The compressor is designed to work with a commercial low-power image sensor that outputs image pixels in raster-scan fashion, eliminating the need for significant input buffer memory. To demonstrate the advantage, a prototype of the complete WCE system has been developed, including an FPGA-based electronic capsule, a microcontroller-based data logger unit and Windows-based image reconstruction software. The capsule contains the proposed low-complexity image compressor and can generate both lossy and lossless compressed bit-streams. The capsule prototype also supports both WLI and NBI imaging modes and communicates with the data logger in full-duplex fashion, which enables configuring the image size and imaging mode in real time during the examination. The data logger is portable and offers high-data-rate wireless connectivity including Bluetooth, plus a graphical display for real-time image viewing with state-of-the-art touch screen technology. The data are logged on micro SD cards and can be transferred to a PC or smartphone using a card reader, USB interface, or Bluetooth wireless link. The workstation software can decompress and display the reconstructed images, which can be navigated, marked, zoomed and played as video. Finally, ex-vivo testing of the WCE system has been performed in pig intestine to validate its performance
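
    As an illustration of the entropy coder named above, here is a minimal Golomb-Rice sketch in Python; the sign mapping, parameter k and bit-packing are illustrative assumptions rather than the chip's exact design. Rice codes suit prediction residuals because small magnitudes dominate.

        # Minimal Golomb-Rice coding sketch (illustrative, not the chip's
        # exact design): signed residuals are zigzag-mapped to unsigned
        # values, then coded as a unary quotient plus k remainder bits.

        def zigzag(e: int) -> int:
            """Map a signed residual to unsigned: 0,-1,1,-2,2 -> 0,1,2,3,4."""
            return (e << 1) if e >= 0 else (-e << 1) - 1

        def rice_encode(value: int, k: int) -> str:
            """Unary quotient, then k fixed bits of remainder."""
            q, r = value >> k, value & ((1 << k) - 1)
            return "1" * q + "0" + format(r, f"0{k}b")

        residuals = [0, -1, 3, 2, -5]        # typical small prediction errors
        bits = "".join(rice_encode(zigzag(e), k=2) for e in residuals)
        print(bits)   # 19 bits total vs 40 for five raw 8-bit samples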