
    An innovative two-stage data compression scheme using adaptive block merging technique

    Test data volume has grown enormously with the rising on-chip complexity of integrated circuits, increasing both test data transportation time and the tester memory required. Non-correlated test bits also aggravate test power. This paper presents a two-stage, block-merging-based test data minimization scheme that reduces test bits, test time, and test power. The test data is partitioned into fixed-size blocks, which are compressed with a two-stage encoding technique. In stage one, compatible successive blocks are merged and a single representative block is retained. In stage two, the retained pattern block is further encoded according to which of ten subcases holds between the two sub-blocks formed by splitting it into halves. Non-compatible blocks are likewise split into two sub-blocks and, where possible, encoded using fewer bits. A decompression architecture to retrieve the original test data is presented. Simulation results for different ISCAS'89 benchmark circuits demonstrate its effectiveness in achieving better compression.
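
    As a rough illustration of the stage-one idea only (not the paper's exact scheme), the sketch below assumes test cubes are strings over '0', '1', and 'X' (don't-care), a fixed block size, and that two blocks are compatible when no bit position holds conflicting specified values; the function names, the merge rule, and the (representative block, run length) output format are illustrative assumptions.

```python
# Minimal sketch of stage-one block merging for test-data compression.
# Assumptions (not from the paper): test cubes use '0'/'1'/'X' (don't-care),
# blocks have a fixed size, and two blocks are compatible when no position
# holds conflicting specified bits.

def compatible(a: str, b: str) -> bool:
    """Blocks are compatible if no position is '0' in one and '1' in the other."""
    return all(x == 'X' or y == 'X' or x == y for x, y in zip(a, b))

def merge(a: str, b: str) -> str:
    """Representative block: keep the specified bit wherever either block has one."""
    return ''.join(y if x == 'X' else x for x, y in zip(a, b))

def stage_one(test_data: str, block_size: int):
    """Partition into fixed-size blocks, merge runs of compatible successive
    blocks, and emit (representative_block, run_length) pairs."""
    blocks = [test_data[i:i + block_size] for i in range(0, len(test_data), block_size)]
    encoded = []
    rep, run = blocks[0], 1
    for blk in blocks[1:]:
        if compatible(rep, blk):
            rep, run = merge(rep, blk), run + 1
        else:
            encoded.append((rep, run))
            rep, run = blk, 1
    encoded.append((rep, run))
    return encoded

if __name__ == "__main__":
    data = "1X0X" "10XX" "1001" "0X1X" "0110"
    print(stage_one(data, 4))   # [('1001', 3), ('0110', 2)]: three compatible blocks collapse into one
```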

    Network-on-Chip

    Addresses the challenges associated with system-on-chip integration. Network-on-Chip: The Next Generation of System-on-Chip Integration examines the current issues restricting chip-on-chip communication efficiency, and explores Network-on-Chip (NoC), a promising alternative that equips designers with the capability to produce a scalable, reusable, and high-performance communication backbone by allowing for the integration of a large number of cores on a single system-on-chip (SoC). This book provides a basic overview of topics associated with NoC-based design: communication infrastructure design, communication methodology, evaluation framework, and mapping of applications onto NoC. It details the design and evaluation of different proposed NoC structures, low-power techniques, signal integrity and reliability issues, application mapping, testing, and future trends. Utilizing examples of chips that have been implemented in industry and academia, this text presents the full architectural design of components verified through implementation in industrial CAD tools. It describes NoC research and developments, incorporates theoretical proofs strengthening the analysis procedures, and includes algorithms used in NoC design and synthesis. In addition, it considers other upcoming NoC issues, such as low-power NoC design, signal integrity issues, NoC testing, reconfiguration, synthesis, and 3-D NoC design. This text comprises 12 chapters and covers:
    - The evolution of NoC from SoC, with its research and developmental challenges
    - NoC protocols, elaborating flow control, available network topologies, routing mechanisms, fault tolerance, quality-of-service support, and the design of network interfaces
    - The router design strategies followed in NoCs
    - The evaluation mechanism of NoC architectures
    - The application mapping strategies followed in NoCs
    - Low-power design techniques specifically followed in NoCs
    - The signal integrity and reliability issues of NoC
    - The details of NoC testing strategies reported so far
    - The problem of synthesizing application-specific NoCs
    - Reconfigurable NoC design issues
    - Direction of future research and development in the field of NoC
    Network-on-Chip: The Next Generation of System-on-Chip Integration covers the basic topics, technology, and future trends relevant to NoC-based design, and can be used by engineers, students, researchers, and other industry professionals interested in computer architecture, embedded systems, and parallel/distributed systems.

    The 1993 Space and Earth Science Data Compression Workshop

    The Earth Observing System Data and Information System (EOSDIS) is described in terms of its data volume, data rate, and data distribution requirements. Opportunities for data compression in EOSDIS are discussed.

    Digital encoding of black and white facsimile signals

    As the costs of digital signal processing and memory hardware decrease each year relative to those of transmission, it becomes increasingly economical to apply sophisticated source encoding techniques to reduce the transmission time for facsimile documents. With this intent, information-lossy encoding schemes have been investigated in which the encoder is divided into two stages: first, preprocessing, which removes redundant information from the original documents, and second, the actual encoding of the preprocessed documents. [Continues.]
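
    As a generic illustration of the redundancy such facsimile source coders exploit (this is not the thesis's two-stage scheme), the sketch below run-length encodes a bilevel scan line into alternating white/black run lengths, in the spirit of classical facsimile codes; the function name and conventions are assumptions.

```python
# Generic illustration only: run-length encoding of a bilevel scan line
# (0 = white, 1 = black). Real facsimile standards then map these run lengths
# to variable-length codewords; that step is omitted here.

def run_lengths(scanline):
    """Return alternating run lengths, starting with the initial white run
    (length zero if the line begins with black)."""
    runs = []
    current, length = 0, 0
    for pixel in scanline:
        if pixel == current:
            length += 1
        else:
            runs.append(length)
            current, length = pixel, 1
    runs.append(length)
    return runs

if __name__ == "__main__":
    line = [0] * 12 + [1] * 3 + [0] * 20 + [1]
    print(run_lengths(line))   # [12, 3, 20, 1]
```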

    The contour tree image encoding technique and file format

    The process of contourization is presented, which converts a raster image into a discrete set of plateaux or contours. These contours can be grouped into a hierarchical structure, defining total spatial inclusion, called a contour tree. A contour coder has been developed which fully describes these contours in a compact and efficient manner and is the basis for an image compression method. Simplification of the contour tree has been undertaken by merging contour tree nodes, thus lowering the contour tree's entropy. This can be exploited by the contour coder to increase the image compression ratio. By applying general and simple rules derived from physiological experiments on the human vision system, lossy image compression can be achieved which minimises noticeable artifacts in the simplified image. The contour merging technique offers a complementary lossy compression system to the QDCT (Quantised Discrete Cosine Transform). The artifacts introduced by the two methods are very different; QDCT produces a general blurring and adds extra highlights in the form of overshoots, whereas contour merging sharpens edges, reduces highlights, and introduces a degree of false contouring. A format based on the contourization technique which caters for most image types is defined, called the contour tree image format. Image operations directly on this compressed format have been studied, which for certain manipulations can offer significant operational speed increases over using a standard raster image format. A couple of examples of operations specific to the contour tree format are presented, showing some of the features of the new format. (Science and Engineering Research Council)
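
    A minimal sketch of the plateau-extraction step with a simple threshold-based merge is given below, assuming 4-connected regions of (near-)equal grey level; the spatial-inclusion hierarchy, the entropy-driven merge rules, and the file format of the actual coder are not reproduced, and the `contourize` function and its tolerance parameter are illustrative assumptions.

```python
# Sketch: group pixels into plateau regions ("contours"). tol = 0 extracts exact
# plateaux; tol > 0 merges near-equal plateaux, reducing the number of contours
# (and hence the tree's entropy) at the cost of some false contouring.
import numpy as np

def contourize(img, tol=0):
    """Return a label map and a (seed_value, pixel_count) list, one per region."""
    labels = np.full(img.shape, -1, dtype=int)
    regions = []
    for start in np.ndindex(*img.shape):
        if labels[start] >= 0:
            continue
        region_id, seed = len(regions), int(img[start])
        stack, count = [start], 0
        labels[start] = region_id
        while stack:
            r, c = stack.pop()
            count += 1
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                        and labels[nr, nc] < 0
                        and abs(int(img[nr, nc]) - seed) <= tol):
                    labels[nr, nc] = region_id
                    stack.append((nr, nc))
        regions.append((seed, count))
    return labels, regions

if __name__ == "__main__":
    img = np.array([[10, 10, 11, 50],
                    [10, 11, 11, 50],
                    [10, 10, 11, 52]], dtype=np.uint8)
    for tol in (0, 2):
        _, regions = contourize(img, tol)
        print(f"tol={tol}: {len(regions)} contours")   # 4 contours, then 2 after merging
```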

    Synthesis for circuit reliability

    Electrical and Computer Engineering

    Self-navigation with compressed sensing for 2D translational motion correction in free-breathing coronary MRI: a feasibility study

    PURPOSE: Respiratory motion correction remains a challenge in coronary magnetic resonance imaging (MRI), and current techniques, such as navigator gating, suffer from sub-optimal scan efficiency and ease-of-use. To overcome these limitations, an image-based self-navigation technique is proposed that uses "sub-images" and compressed sensing (CS) to obtain translational motion correction in 2D. The method was preliminarily implemented as a 2D technique and tested for feasibility for targeted coronary imaging.
    METHODS: During a 2D segmented radial k-space acquisition, heavily undersampled sub-images were reconstructed from the readouts collected during each cardiac cycle. These sub-images may then be used for respiratory self-navigation. Alternatively, a CS reconstruction may be used to create the sub-images, so as to partially compensate for the heavy undersampling. Both approaches were quantitatively assessed using simulations and in vivo studies, and the resulting self-navigation strategies were then compared to conventional navigator gating.
    RESULTS: Sub-images reconstructed using CS showed a lower artifact level than sub-images reconstructed without CS. As a result, the final image quality was significantly better with CS-assisted self-navigation than with the non-CS approach. Moreover, both self-navigation techniques led to a 69% scan time reduction compared to navigator gating, and there was no significant difference in image quality between the CS-assisted self-navigation technique and conventional navigator gating despite the shorter scan time.
    CONCLUSIONS: CS-assisted self-navigation with 2D translational motion correction demonstrated the feasibility of producing coronary MRA data with image quality comparable to that obtained with conventional navigator gating, without additional acquisitions or motion modeling, while allowing 100% scan efficiency and improved ease-of-use. In conclusion, compressed sensing may become a critical adjunct for 2D translational motion correction in free-breathing cardiac imaging with high spatial resolution. An extension to modern 3D approaches is now warranted.
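
    The sketch below illustrates only the underlying correction principle, under the assumption that each cardiac cycle yields a sub-image whose 2D translation relative to a reference can be estimated and then removed from that cycle's k-space data as a linear phase (Fourier shift theorem); the CS reconstruction of the sub-images, the radial sampling, and the actual self-navigation pipeline are not reproduced, and the function names are hypothetical.

```python
# Sketch: estimate an integer 2D translation between two images by FFT-based
# circular cross-correlation, then undo it in k-space with a linear phase ramp.
import numpy as np

def estimate_shift(ref, img):
    """Integer (dy, dx) such that img is ref translated by (dy, dx)."""
    cross = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img))
    dy, dx = np.unravel_index(np.argmax(np.abs(cross)), cross.shape)
    # Map peak indices in the upper half of the range to negative shifts.
    dy = dy - ref.shape[0] if dy > ref.shape[0] // 2 else dy
    dx = dx - ref.shape[1] if dx > ref.shape[1] // 2 else dx
    return int(dy), int(dx)

def correct_kspace(kspace, dy, dx):
    """Multiply k-space by the linear phase that translates the image by
    (-dy, -dx), i.e. removes the estimated displacement."""
    ky = np.fft.fftfreq(kspace.shape[0])[:, None]
    kx = np.fft.fftfreq(kspace.shape[1])[None, :]
    return kspace * np.exp(2j * np.pi * (ky * dy + kx * dx))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.standard_normal((64, 64))
    img = np.roll(ref, shift=(3, -5), axis=(0, 1))      # simulated (3, -5) pixel displacement
    dy, dx = estimate_shift(ref, img)
    corrected = np.fft.ifft2(correct_kspace(np.fft.fft2(img), dy, dx)).real
    print((dy, dx), np.allclose(corrected, ref))        # expected: (3, -5) True
```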

    New regularization technique for MRI SENSE reconstruction in studies of coronary angiography

    Coronary Magnetic Resonance Angiography (coronary MRA) is an imaging modality based on Magnetic Resonance Imaging that extracts information from the coronary vessels. Unlike X-ray angiography, it does not use ionizing radiation, and it can be performed without contrast agents, allowing non-invasive studies free of the contraindications associated with those agents. However, the acquisition time for coronary MRA is much longer than for X-ray angiography, and many approaches have been proposed to reduce it. One of these approaches is Sensitivity Encoding (SENSE) reconstruction, a method that reduces the data to be acquired from the patient by a tunable factor by making use of the sensitivity maps of several surface coils that receive the signal from the patient in parallel (at the same time). It is an effective method for reducing acquisition time, but it also introduces noise into the final image, increasingly so as the data reduction becomes stronger. For that reason, regularization algorithms have been proposed that reduce this noise by introducing prior information from the coil that excites the patient's tissues, known as the body coil. Although the proposed regularization algorithms are quite good at denoising SENSE-reconstructed images, alternative prior information that has not been used so far might reduce the noise in the image even further. This thesis proposes a new algorithm based on regularized SENSE reconstruction that uses a low-pass-filtered image pre-reconstructed with SENSE as alternative prior information. Until now, the only prior information used in regularized SENSE reconstruction has been that provided by the body coil, which is very crude and homogeneous; it was therefore expected that introducing an image with alternative, more detailed prior information into the SENSE reconstruction would reduce noise and increase image quality. The algorithm was implemented in IDL™ and tested with data from a volunteer. The results were compared to state-of-the-art methods that used either no prior information or only body coil information as the prior. These methods were evaluated in terms of noise, Signal-to-Noise Ratio (SNR), Contrast-to-Noise Ratio (CNR), and visual inspection. The comparison showed that, even with the alternative prior information, the images could not be denoised more than with the current method that uses body coil a priori information. Nevertheless, even though the algorithm failed to denoise SENSE-reconstructed images more than current methods do, this thesis can help point to alternative paths for SENSE reconstruction denoising in the future. (Ingeniería Biomédica)
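
    As a hedged sketch of the general technique (not the thesis's IDL implementation), the code below performs Cartesian SENSE unfolding with Tikhonov regularization toward a prior image, solving x = argmin ||Sx - a||^2 + lam^2 ||x - x_prior||^2 for each group of aliased pixels; the thesis's specific prior, a low-pass-filtered SENSE pre-reconstruction, would simply be passed in as `prior`. Coil-map estimation and noise decorrelation are omitted, and the `sense_unfold` signature is an assumption.

```python
# Sketch of regularized Cartesian SENSE unfolding with a prior image as the
# regularization target. Per aliased pixel group:
#   x = x_prior + (S^H S + lam^2 I)^-1 S^H (a - S x_prior)
import numpy as np

def sense_unfold(aliased, sens, prior, lam):
    """
    aliased : (ncoils, ny//R, nx) complex aliased coil images
    sens    : (ncoils, ny, nx)    coil sensitivity maps
    prior   : (ny, nx)            prior image used as regularization target
    lam     : float               regularization weight
    Returns the (ny, nx) unfolded image.
    """
    ncoils, ny_r, nx = aliased.shape
    ny = sens.shape[1]
    R = ny // ny_r                               # reduction (acceleration) factor
    out = np.zeros((ny, nx), dtype=complex)
    for y in range(ny_r):
        rows = [y + k * ny_r for k in range(R)]  # the R image rows folded onto row y
        for x in range(nx):
            S = sens[:, rows, x]                 # (ncoils, R) encoding matrix
            a = aliased[:, y, x]                 # measured aliased pixel across coils
            p = prior[rows, x]                   # prior values for the folded pixels
            A = S.conj().T @ S + (lam ** 2) * np.eye(R)
            out[rows, x] = p + np.linalg.solve(A, S.conj().T @ (a - S @ p))
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ncoils, ny, nx, R = 4, 8, 6, 2
    sens = rng.standard_normal((ncoils, ny, nx)) + 1j * rng.standard_normal((ncoils, ny, nx))
    truth = rng.standard_normal((ny, nx)) + 1j * rng.standard_normal((ny, nx))
    ny_r = ny // R
    aliased = np.zeros((ncoils, ny_r, nx), dtype=complex)
    for k in range(R):                           # simulate R-fold Cartesian aliasing
        aliased += sens[:, k * ny_r:(k + 1) * ny_r, :] * truth[None, k * ny_r:(k + 1) * ny_r, :]
    recon = sense_unfold(aliased, sens, prior=np.zeros((ny, nx), dtype=complex), lam=0.0)
    print(np.allclose(recon, truth))             # expected: True (noiseless data, lam = 0)
```

    Setting lam to zero recovers unregularized SENSE; larger values pull the solution toward the prior, trading amplified noise for bias toward the prior image.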