1,872 research outputs found

    Test Slice Difference Technique for Low-Transition Test Data Compression

    Get PDF

    Reducing Switching Activity by Test Slice Difference Technique for Test Volume Compression

    Get PDF
    This paper presents a test slice difference (TSD) technique to improve test data compression. The method is efficient and requires only one scan cell, so its hardware overhead is much lower than that of cyclical scan registers (CSR). As the complexity of VLSI circuits continues to grow, excessive power supply noise has become a serious problem. We propose a new compression scheme that smooths the switching activity and reduces the test data volume simultaneously. (International conference, Taipei, Taiwan)
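
    A minimal sketch of the general idea behind slice differencing, under the assumption that consecutive test slices are similar: storing each slice as its XOR against the previous slice yields mostly-zero data that compresses well and causes fewer transitions when shifted into the scan chain. The function and example data below are illustrative and do not reproduce the paper's exact TSD encoding.

```python
# Illustrative sketch of slice differencing (not the paper's exact TSD encoding).
# Each test slice is a tuple of bits shifted into the scan chain in one cycle.

def slice_differences(test_slices):
    """Replace each slice (after the first) with its XOR against the previous slice.

    Similar consecutive slices produce mostly-zero difference slices, which are
    cheap to encode and cause fewer transitions when applied.
    """
    diffs = [test_slices[0]]                      # first slice kept as-is
    for prev, cur in zip(test_slices, test_slices[1:]):
        diffs.append(tuple(a ^ b for a, b in zip(prev, cur)))
    return diffs

if __name__ == "__main__":
    slices = [(1, 0, 1, 1), (1, 0, 1, 0), (1, 0, 1, 0)]   # hypothetical test data
    print(slice_differences(slices))   # [(1, 0, 1, 1), (0, 0, 0, 1), (0, 0, 0, 0)]
```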

    Contemporary Affirmation of SPIHT Improvements in Image Coding

    Get PDF
    Set partitioning in hierarchical trees (SPIHT) is a widely used compression algorithm for wavelet-transformed images. Since its introduction in 1996, SPIHT has attracted a great deal of interest among image compression algorithms. It is considerably simpler and more efficient than several existing compression methods: it is a fully embedded codec, provides good image quality and high PSNR, is well suited to progressive image transmission, combines efficiently with error protection, and delivers information on demand. Nevertheless, it has drawbacks that limit its use, and since its development the original algorithm has undergone many modifications. This document presents a survey of improvements to SPIHT in areas such as speed, redundancy, quality, error resilience, complexity, compression ratio, and memory requirement.
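
    For orientation, the improvements surveyed here all build on SPIHT's set-partitioning significance test, which checks whether any coefficient in a set reaches the current bit-plane threshold 2^n. The sketch below shows that test in isolation; the coefficient values and the set representation are illustrative assumptions, not a full codec.

```python
# Sketch of the SPIHT significance test: a set of wavelet coefficients is
# "significant" at bit plane n if any coefficient magnitude reaches 2**n.

def is_significant(coeffs, indices, n):
    """Return 1 if max |c_i| over the set reaches the threshold 2**n, else 0."""
    threshold = 1 << n
    return int(any(abs(coeffs[i]) >= threshold for i in indices))

if __name__ == "__main__":
    coeffs = {0: 37, 1: -5, 2: 12, 3: 0}          # hypothetical wavelet coefficients
    print(is_significant(coeffs, [1, 2, 3], 4))   # 0: largest magnitude 12 < 16
    print(is_significant(coeffs, [0, 1], 5))      # 1: |37| >= 32
```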

    Current video compression algorithms: Comparisons, optimizations, and improvements

    Full text link
    Compression algorithms have evolved significantly in recent years. Audio, still images, and video can be compressed substantially by taking advantage of the natural redundancies that occur within them. Video compression in particular has made significant advances. MPEG-1 and MPEG-2, two of the major video compression standards, allowed video to be compressed at very low bit rates compared to the original video. The compression ratio for video that is perceptually lossless (losses cannot be visually perceived) can be as high as 40 or 50 to 1 for certain videos, and videos with a small degradation in quality can be compressed at 100 to 1 or more. Although the MPEG standards provided low bit rate compression, even higher quality compression is required for efficient transmission over limited-bandwidth networks, wireless networks, and broadcast media. Significant gains have been made over the current MPEG-2 standard in a newly developed standard called Advanced Video Coding, also known as H.264 and MPEG-4 Part 10. (Abstract shortened by UMI.)

    Quantitative DWI as an Early Imaging Biomarker of the Response to Chemoradiation in Esophageal Cancer

    Get PDF
    For patients diagnosed with stages IIa-IIb esophageal cancer, the current standard of care treatment is tri-modality therapy (TMT), where neoadjuvant chemoradiation (nCRT) is followed by surgical resection. Histopathology of resected tumors reveals that pathological complete response (pCR) is achieved in 20-30% of patients through nCRT alone. Because of the high mortality and morbidity associated with esophagectomy, it may be advantageous for patients exhibiting pCR from nCRT alone to be placed under observation rather than completing their TMT. Therefore, a method for predicting response at an early time-point during nCRT is highly desirable. Conventional methods such as endoscopic ultrasound, re-biopsy, and morphologic imaging are insufficient for this purpose. During nCRT, morphologic changes in tumors are often preceded by changes in the tumor biology. Diffusion-Weighted Imaging (DWI) is an MRI modality that is sensitive to the microscopic motion of water molecules in tissue. Quantitative DWI provides a measure of the cellular microenvironment, which is affected by cellularity, extra-cellular volume fraction, the structure of the extracellular matrix, and cellular membranes. This work sought to investigate whether changes in quantitative DWI may be used as an early imaging biomarker for the prediction of response to nCRT in esophageal cancer. DWI scans were performed on a small group of esophageal cancer patients (stages IIa to IIIb) before, at an interim point during, and after completion of their nCRT. Quantitative diffusion parameter maps were estimated for the DWI scans using the following models of diffusion: mono-exponential, intra-voxel incoherent motion (IVIM), and kurtosis. Summary measures of quantitative diffusion parameters were extracted from tumor voxels through volumetric contouring. These summary measures were retrospectively compared between histopathologically confirmed groupings of patients as pCR and non-pCR. The study found that the relative change in mean ADC could completely separate the pCR and non-pCR groups (AUC=1) at a cutoff of 27.7%. Measurement by volume contouring was shown to be highly reproducible between readers. This pilot study demonstrates the promise of using DWI for organ-sparing approaches after nCRT in esophageal cancer.
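
    For reference, the mono-exponential model named above relates the DWI signal to the b-value as S(b) = S0 · exp(-b · ADC). The sketch below fits ADC for a single voxel with a log-linear least-squares fit; the b-values and signal intensities are made-up numbers, and this illustrates only the model, not the study's processing pipeline.

```python
import numpy as np

# Mono-exponential DWI model: S(b) = S0 * exp(-b * ADC).
# Taking logs gives ln S = ln S0 - b * ADC, a straight line in b,
# so ADC is the negative slope of a least-squares line fit.

def fit_adc(b_values, signals):
    """Return (ADC, S0) from a log-linear least-squares fit."""
    b = np.asarray(b_values, dtype=float)
    log_s = np.log(np.asarray(signals, dtype=float))
    slope, intercept = np.polyfit(b, log_s, 1)
    return -slope, np.exp(intercept)

if __name__ == "__main__":
    b_vals = [0, 200, 500, 800]              # b-values in s/mm^2 (hypothetical protocol)
    sig = [1000.0, 670.0, 370.0, 200.0]      # made-up signal intensities for one voxel
    adc, s0 = fit_adc(b_vals, sig)
    print(f"ADC = {adc:.2e} mm^2/s, S0 = {s0:.1f}")
```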

    Need for speed: Achieving fast image processing in acute stroke care

    Get PDF
    This thesis aims to investigate the use of high-performance computing (HPC) techniques in developing imaging biomarkers to support the clinical workflow of acute stroke patients. In the first part of this thesis, we evaluate different HPC technologies and how they can be leveraged by the image analysis applications used in acute stroke care. More specifically, Chapter 2 evaluates how computers with multiple computing devices can be used to accelerate medical imaging applications. The size of CT perfusion (CTP) datasets makes data transfers to computing devices time-consuming and therefore unsuitable in acute situations; Chapter 3 addresses this by proposing a novel data compression technique that allows CTP images to be processed efficiently on GPUs. Chapter 4 further evaluates the usefulness of the algorithm proposed in Chapter 3 with two applications: a double-threshold segmentation and a time-intensity profile similarity (TIPS) bilateral filter to reduce noise in CTP scans. Finally, Chapter 5 presents a cloud platform for deploying high-performance medical applications for acute stroke patients. In the second part of this thesis, Chapter 6 presents a convolutional neural network (CNN) for the detection and volumetric segmentation of subarachnoid hemorrhages (SAH) in non-contrast CT scans. Chapter 7 proposes another CNN-based method to quantify final infarct volumes in follow-up non-contrast CT scans from ischemic stroke patients.
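
    As a rough illustration of one of the applications mentioned above, the sketch below applies a double-threshold segmentation to a CTP-like volume with NumPy; the window values, the random volume, and the function name are placeholders rather than the thesis's GPU implementation.

```python
import numpy as np

# Double-threshold segmentation: keep voxels whose intensity lies inside
# a [low, high] window, e.g. to separate tissue from air and bone in CT.

def double_threshold(volume, low, high):
    """Return a boolean mask of voxels with low <= value <= high."""
    return (volume >= low) & (volume <= high)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ctp_like = rng.normal(40, 30, size=(16, 64, 64))    # placeholder volume (HU-like)
    mask = double_threshold(ctp_like, low=0, high=100)  # placeholder window
    print("selected fraction:", mask.mean())
```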

    High throughput image compression and decompression on GPUs

    Get PDF
    This work investigates possibilities to create a high-throughput, GPU-friendly, intra-only, wavelet-based video compression algorithm optimized for visually lossless applications. Addressing the key observation that JPEG 2000's entropy coder is a bottleneck and might be overly complex for a high bit rate scenario, various algorithmic alterations are proposed. First, JPEG 2000's Selective Arithmetic Coding mode is realized on the GPU, but the gains in throughput are shown to be limited. Instead, two independent alterations that are not compliant with the standard are proposed: they (1) give up the concept of intra-bit-plane truncation points, processing each bit plane in a single pass (single-pass mode), and (2) introduce a true raw-coding mode that is fully parallelizable and does not require any context modeling. Next, an alternative block coder from the literature, the Bitplane Coder with Parallel Coefficient Processing (BPC-PaCo), is evaluated. Since it trades signal adaptiveness for increased parallelism, it is shown how a stationary probability model averaged over a set of test sequences yields competitive compression efficiency. A combination of BPC-PaCo with the single-pass mode is proposed and shown to increase the speedup over the original JPEG 2000 entropy coder from 2.15x (BPC-PaCo with two passes) to 2.6x (proposed BPC-PaCo with single-pass mode), at the marginal cost of increasing the PSNR penalty by 0.3 dB to at most 1 dB. Furthermore, a parallel algorithm is presented that determines the optimal code block bit stream truncation points (given an available bit rate budget) and builds the entire code stream on the GPU, reducing the amount of data that has to be transferred back into host memory to a minimum. A theoretical runtime model is formulated that allows the runtime of a kernel on another GPU to be predicted from benchmarking results on one GPU. Lastly, the first JPEG XS GPU decoder realization is presented. JPEG XS was designed to be a low-complexity codec and, for the first time, explicitly demanded GPU-friendliness in the call for proposals. At bit rates above 1 bpp, the decoder is around 2x faster than the original JPEG 2000 and 1.5x faster than JPEG 2000 with the fastest evaluated entropy coder (BPC-PaCo with single-pass mode). With a GeForce GTX 1080, a decoding throughput of around 200 fps is achieved for a UHD 4:4:4 sequence.
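
    The raw-coding mode mentioned above drops context modeling so that each sample's bit can be produced independently. The NumPy sketch below extracts one bit plane of a block of quantized wavelet coefficients in that sample-parallel fashion; the coefficient block is a made-up example and the function name is illustrative, a CPU-side illustration of the idea rather than the GPU codec itself.

```python
import numpy as np

# Raw (context-free) bit-plane extraction: bit n of every coefficient magnitude
# is produced independently, which is why the mode parallelizes per sample.

def raw_bitplane(coeffs, n):
    """Return bit n of |coeffs| for every sample as a uint8 array."""
    magnitudes = np.abs(coeffs).astype(np.int64)
    return ((magnitudes >> n) & 1).astype(np.uint8)

if __name__ == "__main__":
    block = np.array([[37, -5], [12, 0]])   # made-up quantized wavelet coefficients
    print(raw_bitplane(block, 2))           # bit 2 of 37, 5, 12, 0 -> [[1 1] [1 0]]
```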

    Research and developments of distributed video coding

    Get PDF
    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The recently developed Distributed Video Coding (DVC) is typically suitable for applications such as wireless/wired video sensor networks and mobile cameras, where traditional video coding standards are not feasible because of the constrained computation available at the encoder. With DVC, the computational burden is moved from the encoder to the decoder, and compression efficiency is achieved via joint decoding at the decoder. The practical realization of DVC is referred to as Wyner-Ziv (WZ) video coding, where side information is available at the decoder to perform joint decoding. This joint decoding inevitably results in a very complex decoder. Much current work on WZ video coding emphasises improving coding performance but neglects the huge complexity incurred at the decoder, even though decoder complexity directly influences the system output. The first stage of this research targets optimisation of the decoder in pixel-domain WZ video coding (PDWZ) while still achieving similar compression performance. More specifically, four issues are addressed to optimise the input block size, the side information generation, the side information refinement process and the feedback channel respectively. Transform-domain WZ video coding (TDWZ) performs distinctly better than PDWZ because spatial redundancy is exploited during encoding. However, since there is no motion estimation at the encoder in WZ video coding, temporal correlation is not exploited at the encoder in any current WZ video coding scheme. In the middle stage of this research, the 3D DCT is adopted in TDWZ to remove redundancy in both the spatial and the temporal direction and thus provide even higher coding performance. In the next step of this research, the performance of transform-domain Distributed Multiview Video Coding (DMVC) is also investigated. In particular, three transform-domain DMVC frameworks are investigated: transform-domain DMVC using TDWZ based on the 2D DCT, transform-domain DMVC using TDWZ based on the 3D DCT, and transform-domain residual DMVC using TDWZ based on the 3D DCT. One important application of the WZ coding principle is error resilience, and there have been several attempts to apply WZ error-resilient coding to current video coding standards such as H.264/AVC or MPEG-2. The final stage of this research is the design of a WZ error-resilient scheme for a wavelet-based video codec. To balance the trade-off between error-resilience ability and bandwidth consumption, the proposed scheme emphasises protection of the Region of Interest (ROI) area; efficient bandwidth utilisation is achieved through the combined effect of WZ coding and sacrificing the quality of unimportant areas. In summary, this research contributes several advances in WZ video coding. First, it builds an efficient PDWZ codec with an optimised decoder. Secondly, it builds an advanced TDWZ based on the 3D DCT, which is then applied to multiview video coding to realise advanced transform-domain DMVC. Finally, it designs an efficient error-resilient scheme for a wavelet video codec, with which the trade-off between bandwidth consumption and error resilience can be better balanced.
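
    As a concrete reference for the 3D DCT step, the sketch below applies a separable 3D DCT to an 8x8x8 spatio-temporal block of frames with SciPy; the random block, block size, and function names are placeholders, and none of the quantization or WZ coding built around the transform in the thesis is shown.

```python
import numpy as np
from scipy.fft import dctn, idctn

# 3D DCT over a spatio-temporal block (time x height x width): energy from both
# spatial and temporal redundancy concentrates in a few low-frequency coefficients.

def transform_block(block):
    """Forward 3D DCT-II with orthonormal scaling."""
    return dctn(block, norm="ortho")

def inverse_block(coeffs):
    """Inverse 3D DCT (reconstruction)."""
    return idctn(coeffs, norm="ortho")

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frames = rng.normal(size=(8, 8, 8))     # placeholder block: 8 frames of 8x8 pixels
    coeffs = transform_block(frames)
    recon = inverse_block(coeffs)
    print("max reconstruction error:", np.max(np.abs(recon - frames)))
```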

    Characterization of Porosity Defects in Selectively Laser Melted IN718 and Ti-6Al-4V via Synchrotron X-Ray Computed Tomography

    Get PDF
    Additive manufacturing (AM) is a method of fabrication involving the joining of feedstock material together to form a structure. Additive manufacturing has been developed for use with polymers, ceramics, composites, biomaterials, and metals. Of the metal additive manufacturing techniques, one of the most commonly employed for commercial and government applications is selective laser melting (SLM). SLM operates by using a high-powered laser to melt feedstock metal powder, layer by layer, until the desired near-net shape is completed. Due to the inherent function of AM and particularly SLM, it holds much promise in the ability to design parts without geometrical constraint, cost-effectively manufacture them, and reduce material waste. Because of this, SLM has gained traction in the aerospace, automotive, and medical device industries, which often use uniquely shaped parts for specific functions. These industries also have a tendency to use high performance metallic alloys that can withstand the sometimes-extreme operating conditions that the parts experience. Two alloys that are often used in these parts are Inconel 718 (IN718) and Ti-6Al-4V (Ti64). Both of these materials have been routinely used in SLM processing but have often been marked by porosity defects in the as-built state. Since large amounts of porosity are known to limit material mechanical performance, especially fatigue life, there is a general need to inspect and quantify this material characteristic before part use in these industries. One of the most advanced porosity inspection methods is X-ray computed tomography (CT). CT uses a detector to capture X-rays after they pass through the part. The detector images are then reconstructed to create a tomograph that can be analyzed using image processing techniques to visualize and quantify porosity. In this research, CT was performed on both materials at a 30 μm “low resolution” (LR) for different build orientations and processing conditions. Furthermore, a synchrotron beamline was used to conduct CT on small samples of the SLM IN718 and Ti64 specimens at a 0.65 μm “high resolution” (HR), which to the author’s knowledge is the highest resolution (for SLM IN718) and matches the highest resolution (for SLM Ti64) reported for porosity CT investigations of these materials. Tomographs were reconstructed using TomoPy 1.0.0, processed using ImageJ and Avizo 9.0.2, and quantified in Avizo and Matlab. Results showed a relatively low amount of porosity in the materials overall, but an increase of several orders of magnitude in quantifiable porosity volume fraction from the LR to the HR observations. Furthermore, quantifications and visualizations showed a propensity for more and larger pores to be present near the free surfaces of the specimens. Additionally, a plurality of pores in the HR samples were found to be in close proximity (10 μm or less) to each other.
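
    As a simplified illustration of the kind of porosity quantification described above, the sketch below thresholds a reconstructed volume and computes a pore volume fraction and a count of connected pores with SciPy; the synthetic volume, threshold, and function names are placeholders and do not reproduce the Avizo/Matlab workflow used in the study.

```python
import numpy as np
from scipy import ndimage

# Toy porosity quantification on a reconstructed CT volume: voxels darker than
# a threshold are treated as pore space, then connected pores are labeled.

def quantify_porosity(volume, pore_threshold):
    """Return (volume fraction of pore voxels, number of connected pores)."""
    pores = volume < pore_threshold
    labels, n_pores = ndimage.label(pores)
    return pores.mean(), n_pores

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    solid = rng.normal(1.0, 0.05, size=(64, 64, 64))   # synthetic "dense" material
    solid[20:24, 30:34, 30:34] = 0.1                   # carve out one synthetic pore
    fraction, count = quantify_porosity(solid, pore_threshold=0.5)
    print(f"porosity fraction = {fraction:.4%}, pores = {count}")
```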