
    A VHDL design for hardware assistance of fractal image compression

    Fractal image compression schemes have several unusual and useful attributes, including resolution independence, high compression ratios, good image quality, and rapid decompression. Despite this, one major difficulty has prevented their widespread adoption: the extremely high computational complexity of compression. Fractal image compression algorithms represent an image as a series of contractive transformations, each of which maps a large domain block to a smaller range block. Given only this set of transformations, it is possible to reconstruct an approximation of the original image by iteratively applying the transformations to an arbitrary image. Compression consists of partitioning the image into range blocks and finding a suitable transformation of a domain block to represent each one. This search for transformations must generally be done using a brute-force approach, comparing successive domain blocks until a suitable match is found. Some algorithmic improvements have been found, but none are adequate to reduce the required compression time to something reasonable for many uses. This thesis presents a new ASIC design which performs a large number of the required comparisons in parallel, yielding a substantial speedup over a program on a general-purpose computer system. This ASIC is designed in VHDL, which may be synthesized to many different target architectures. The design has considerable flexibility which makes it applicable to different images and applications. The design is based around a pipeline of units that each compare one range block with a series of domain blocks which are fed through the pipeline. Comparisons are made to minimize the mean square error (MSE) of a transform given a linear mapping of the intensity values. This is, by far, the most common minimization strategy used in the literature. The speedup provided by this design is estimated to be about 1,000 times for 256 × 256 images divided into 8 × 8 blocks over a sequential processor given similar implementation technologies.
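
    As a rough software-level illustration of the block matching described above (not the thesis's VHDL pipeline), the Python/NumPy sketch below computes the least-squares intensity scale and offset, and the resulting MSE, for one range/domain pair, then runs a sequential brute-force search of 16 × 16 domain blocks for an 8 × 8 range block. All function names are illustrative, and rotations and reflections of the domain block are omitted.

        import numpy as np

        def downsample(domain):
            """Average 2x2 pixel groups so a 16x16 domain block matches an 8x8 range block."""
            return (domain[0::2, 0::2] + domain[1::2, 0::2] +
                    domain[0::2, 1::2] + domain[1::2, 1::2]) / 4.0

        def best_linear_map(range_block, domain_block):
            """Least-squares scale s and offset o mapping the domain onto the range, plus the MSE."""
            d = downsample(domain_block).ravel()
            r = range_block.ravel().astype(float)
            n = d.size
            denom = n * np.dot(d, d) - d.sum() ** 2
            s = 0.0 if denom == 0 else (n * np.dot(d, r) - d.sum() * r.sum()) / denom
            o = (r.sum() - s * d.sum()) / n
            mse = np.mean((s * d + o - r) ** 2)
            return s, o, mse

        def search(image, range_block, step=8):
            """Brute-force scan of every 16x16 domain block; the ASIC performs these comparisons in parallel."""
            best_params, best_mse = None, np.inf
            h, w = image.shape
            for y in range(0, h - 15, step):
                for x in range(0, w - 15, step):
                    s, o, mse = best_linear_map(range_block, image[y:y+16, x:x+16])
                    if mse < best_mse:
                        best_params, best_mse = (y, x, s, o), mse
            return best_params, best_mse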

    Hardware Acceleration of the Embedded Zerotree Wavelet Algorithm

    The goal of this project was to gain experience in designing and implementing a microelectronic system to accelerate the execution of a time-consuming software algorithm, the Embedded Zerotree Wavelet (EZW), which is used in multimedia applications. The algorithm was implemented using MATLAB to be certain it was fully understood and to serve as a validation reference. Then, the algorithm was mapped into a hardware description language, VHDL, and the resulting implementation was verified against the golden reference. The hardware description was then targeted to a field-programmable gate array (FPGA). Significant acceleration was achieved, since the hardware implementation in an FPGA (Xilinx Virtex-1000E using an 8.315 MHz clock) ran 10,000 times faster than the MATLAB implementation on a SUN-220 workstation. Additional speedup exploiting the parallel capabilities of the FPGA was not achieved, since the EZW algorithm utilizes only sequential operations.
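
    For orientation only (this is not the project's MATLAB or VHDL code), the Python sketch below shows a heavily simplified EZW-style dominant pass: each coefficient is classified against the current threshold as significant positive/negative, zerotree root, or isolated zero. The parent-child relation is a plain quadtree, the special rules for the coarsest subband and the skipping of coefficients inside an already-coded zerotree are omitted, and all names are illustrative.

        import numpy as np

        def descendants(i, j, shape):
            """All quadtree descendants of coefficient (i, j); children sit at (2i, 2j)..(2i+1, 2j+1)."""
            h, w = shape
            out = []
            for (a, b) in [(2*i, 2*j), (2*i, 2*j+1), (2*i+1, 2*j), (2*i+1, 2*j+1)]:
                if a < h and b < w and (a, b) != (i, j):
                    out.append((a, b))
                    out.extend(descendants(a, b, shape))
            return out

        def dominant_pass(coeffs, threshold):
            """One dominant pass: P/N = significant positive/negative,
            T = zerotree root (itself and all descendants insignificant), Z = isolated zero."""
            symbols = {}
            for i in range(coeffs.shape[0]):
                for j in range(coeffs.shape[1]):
                    c = coeffs[i, j]
                    if abs(c) >= threshold:
                        symbols[(i, j)] = 'P' if c > 0 else 'N'
                    elif all(abs(coeffs[a, b]) < threshold for a, b in descendants(i, j, coeffs.shape)):
                        symbols[(i, j)] = 'T'
                    else:
                        symbols[(i, j)] = 'Z'
            return symbols

        coeffs = np.random.randint(-64, 64, size=(8, 8))
        threshold = 2 ** int(np.floor(np.log2(np.abs(coeffs).max())))
        print(dominant_pass(coeffs, threshold))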

    Identifying the level of knowledge of final-year Bachelor of Engineering students at KUiTTHO in entrepreneurship, from the aspect of capital management

    Malaysia is a developing country. In this development process, the nation's aspiration to produce successful future entrepreneurs cannot be taken lightly. Knowledge of entrepreneurship therefore deserves proper attention, and one of its key aspects is capital: inefficient capital management is a major cause of entrepreneurial failure. Recognising this, a study on capital management was conducted on 100 final-year engineering students at KUiTTHO. This sample was chosen because these students are about to enter the world of work, where they may choose entrepreneurship as a career. Although they are not business students, they have the skills to design products that can be commercialised. The findings show that these students are interested in entrepreneurship but still lack knowledge of capital management, particularly in determining start-up capital, managing working capital, and determining financing using the daily-sales method. A capital management guideline was therefore developed to give them this exposure.

    The Space and Earth Science Data Compression Workshop

    This document is the proceedings from a Space and Earth Science Data Compression Workshop, which was held on March 27, 1992, at the Snowbird Conference Center in Snowbird, Utah. The workshop was held in conjunction with the 1992 Data Compression Conference (DCC '92), held at the same location on March 24-26, 1992. It explored opportunities for data compression to enhance the collection and analysis of space and Earth science data, and consisted of eleven papers presented in four sessions. These papers describe research that is integrated into, or has the potential of being integrated into, a particular space and/or Earth science data information system. Presenters were encouraged to take into account the scientists' data requirements and the constraints imposed by the data collection, transmission, distribution, and archival system.

    Efficient reconfigurable architectures for 3D medical image compression

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Recently, the more widespread use of three-dimensional (3-D) imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and ultrasound (US), has generated a massive amount of volumetric data. This has provided an impetus to the development of other applications, in particular telemedicine and teleradiology. In these fields, medical image compression is important, since both efficient storage and transmission of data through high-bandwidth digital communication lines are of crucial importance. Despite their advantages, most 3-D medical imaging algorithms are computationally intensive, with matrix transformation as the most fundamental operation involved in the transform-based methods. There is therefore a real need for high-performance systems, whilst keeping architectures flexible to allow for quick upgradeability for real-time applications. Moreover, in order to obtain efficient solutions for large volumes of medical data, an efficient implementation of these operations is of significant importance. Reconfigurable hardware, in the form of field programmable gate arrays (FPGAs), has been proposed as a viable system building block in the construction of high-performance systems at an economical price. Consequently, FPGAs seem an ideal candidate, given their inherent advantages such as massive parallelism, multimillion gate counts, and special low-power packages. The key achievements of the work presented in this thesis are summarised as follows. Two architectures for the 3-D Haar wavelet transform (HWT) have been proposed, based on transpose-based computation and partial reconfiguration, suitable for 3-D medical imaging applications. These applications require continuous hardware servicing, and as a result dynamic partial reconfiguration (DPR) has been introduced. A comparative study of the non-partial and partial reconfiguration implementations has shown that DPR offers many advantages and leads to a compelling solution for implementing computationally intensive applications such as 3-D medical image compression. Using DPR, several large systems are mapped to small hardware resources, and the area, power consumption, and maximum frequency are optimised and improved. Moreover, an FPGA-based architecture of the finite Radon transform (FRAT) with three design strategies has been proposed: direct implementation of the pseudo-code with either a sequential or a pipelined description, and a block random access memory (BRAM)-based method. An analysis with various medical imaging modalities has been carried out. Results obtained for an image de-noising implementation using FRAT are promising in reducing Gaussian white noise in medical images. In terms of hardware implementation, promising trade-offs between maximum frequency, throughput, and area are also achieved. Furthermore, a novel hardware implementation of a 3-D medical image compression system with context-based adaptive variable length coding (CAVLC) has been proposed. An evaluation of the 3-D integer transform (IT) and the discrete wavelet transform (DWT) with lifting scheme (LS) for the transform blocks reveals that the 3-D IT has lower computational complexity than the 3-D DWT, whilst the 3-D DWT with LS provides lossless compression, which is particularly useful for medical image compression. Additionally, an architecture of CAVLC that is capable of compressing high-definition (HD) images in real time, without any buffer between the quantiser and the entropy coder, is proposed. Through judicious parallelisation, promising results have been obtained with limited resources. In summary, this research tackles the issue of massive 3-D medical volume data that require compression, as well as hardware implementation to accelerate the slowest operations in the system. The results also reveal a significant achievement in terms of architecture efficiency and application performance. Ministry of Higher Education Malaysia (MOHE), Universiti Tun Hussein Onn Malaysia (UTHM) and the British Council.
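
    The transpose-based 3-D HWT computation mentioned above can be sketched in a few lines of Python/NumPy (a software illustration only; the thesis implements it in fixed point on an FPGA with dynamic partial reconfiguration): the same 1-D Haar unit is reused for all three axes by moving the axis under transformation into the innermost position.

        import numpy as np

        def haar_1d(x):
            """Single-level 1-D Haar step along the last axis (even length assumed)."""
            avg = (x[..., 0::2] + x[..., 1::2]) / np.sqrt(2)
            diff = (x[..., 0::2] - x[..., 1::2]) / np.sqrt(2)
            return np.concatenate([avg, diff], axis=-1)

        def haar_3d_level(volume):
            """One decomposition level of the 3-D Haar wavelet transform, transpose-based:
            rotate each axis to the last position, apply the 1-D unit, rotate back."""
            out = volume.astype(float)
            for axis in range(3):
                out = np.moveaxis(haar_1d(np.moveaxis(out, axis, -1)), -1, axis)
            return out

        # Example on an 8x8x8 sub-volume (random stand-in for a CT/MRI block).
        coeffs = haar_3d_level(np.random.rand(8, 8, 8))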

    Development of Low Power Image Compression Techniques

    A digital camera is the main medium for digital photography. The basic operations performed by a simple digital camera are to convert light energy to electrical energy, convert that energy to digital form, and apply a compression algorithm to reduce the memory required to store the image. This compression algorithm is invoked every time an image is captured and stored. This motivates the development of an efficient compression algorithm that gives the same result as existing algorithms with lower power consumption; a camera implementing the new algorithm can then capture more images than before. 1) Discrete Cosine Transform (DCT) based JPEG is an accepted standard for lossy compression of still images. Quantisation is mainly responsible for the loss of image quality in the process of lossy compression. A new Energy Quantisation (EQ) method is proposed for speeding up the coding and decoding procedure while preserving image quality.
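
    Since the abstract does not detail the proposed Energy Quantisation (EQ) method itself, the Python/NumPy sketch below only illustrates the conventional step it targets: baseline JPEG quantisation of an 8 × 8 block of DCT coefficients, where the rounding is what discards information. The table is the standard JPEG luminance quantisation table; the code is a plain software illustration, not the thesis's design.

        import numpy as np

        # Standard baseline JPEG luminance quantisation table (quality 50).
        Q50 = np.array([
            [16, 11, 10, 16,  24,  40,  51,  61],
            [12, 12, 14, 19,  26,  58,  60,  55],
            [14, 13, 16, 24,  40,  57,  69,  56],
            [14, 17, 22, 29,  51,  87,  80,  62],
            [18, 22, 37, 56,  68, 109, 103,  77],
            [24, 35, 55, 64,  81, 104, 113,  92],
            [49, 64, 78, 87, 103, 121, 120, 101],
            [72, 92, 95, 98, 112, 100, 103,  99],
        ])

        def quantise(dct_block, table=Q50):
            """Divide each DCT coefficient by its table entry and round; the rounding is the lossy step."""
            return np.round(dct_block / table).astype(int)

        def dequantise(q_block, table=Q50):
            """Decoder-side inverse; the rounding error introduced by quantise() is not recoverable."""
            return q_block * table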

    Real-Time Lossless Compression of SoC Trace Data

    Nowadays, with the increasing complexity of System-on-Chip (SoC) designs, traditional debugging approaches are not sufficient for multi-core architectures. Hardware tracing becomes necessary for performance analysis in these systems. The problem is that the trace data collected through hardware-based tracing techniques is usually extremely large, due to the increasing complexity of SoCs. Hence, on-chip trace compression performed in hardware is needed to reduce the amount of transferred or stored data. In this dissertation, the feasibility of implementing different types of lossless data compression algorithms in hardware is investigated and examined. The lossless data compression algorithm LZ77 is selected, analyzed, and optimized for Nexus trace data. In order to meet the hardware cost and compression performance requirements for real-time compression, an optimized LZ77 compression algorithm is proposed based on the characteristics of Nexus trace data. This thesis presents a hardware implementation of the LZ77 encoder described in Very High Speed Integrated Circuit Hardware Description Language (VHDL). Test results demonstrate that the compression speed can achieve 16 bits per clock cycle and the average compression ratio is 1.35 for the minimal hardware cost case, which is an effective trade-off between hardware cost and compression performance.
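
    For reference, a plain (unoptimised) software version of the greedy matching that an LZ77 encoder performs might look like the Python sketch below. The window and match-length limits are arbitrary illustrative values, and none of the Nexus-specific optimisations or the VHDL pipeline from the thesis are reflected here.

        def lz77_encode(data, window=1024, max_len=15):
            """Greedy LZ77: emit (offset, length, next_byte) triples; offset 0 means a literal."""
            out, i, n = [], 0, len(data)
            while i < n:
                best_off, best_len = 0, 0
                for j in range(max(0, i - window), i):
                    length = 0
                    while (length < max_len and i + length < n
                           and data[j + length] == data[i + length]):
                        length += 1
                    if length > best_len:
                        best_off, best_len = i - j, length
                # The byte following the match (0 used as a placeholder at end of input).
                nxt = data[i + best_len] if i + best_len < n else 0
                out.append((best_off, best_len, nxt))
                i += best_len + 1
            return out

        print(lz77_encode(b"abcabcabcabcd"))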

    A flexible hardware architecture for 2-D discrete wavelet transform: design and FPGA implementation

    The Discrete Wavelet Transform (DWT) is a powerful signal processing tool that has recently gained widespread acceptance in the field of digital image processing. The multiresolution analysis provided by the DWT addresses the shortcomings of the Fourier Transform and its derivatives. The DWT has proven useful in the area of image compression, where it replaces the Discrete Cosine Transform (DCT) in the newer JPEG2000 and MPEG-4 image and video compression standards. The Cohen-Daubechies-Feauveau (CDF) 5/3 and CDF 9/7 DWTs are used for reversible (lossless) and irreversible (lossy) compression encoders in the JPEG2000 standard, respectively. The design and implementation of a flexible hardware architecture for the 2-D DWT is presented in this thesis. This architecture can be configured to perform both the forward and inverse DWT for any DWT family, using fixed-point arithmetic and no auxiliary memory. The Lifting Scheme is used to perform the DWT instead of the less efficient convolution-based methods. The DWT core is modeled using MATLAB and highly parameterized VHDL. The VHDL model is synthesized to a Xilinx FPGA to prove hardware functionality. The CDF 5/3 and CDF 9/7 versions of the DWT are both modeled and used as comparisons throughout this thesis. The DWT core is used in conjunction with a very simple image denoising module to demonstrate its potential for image processing tasks. The CDF 5/3 hardware produces results identical to its theoretical MATLAB model. The fixed-point CDF 9/7 deviates very slightly from its floating-point MATLAB model, with a PSNR of ~59 dB relative to the floating-point results for nine levels of DWT decomposition. The execution time for performing both DWTs is nearly identical, at ~14 clock cycles per image pixel for one level of DWT decomposition. The hardware area generated for the CDF 5/3 is ~16,000 gates, using only 5% of the Xilinx FPGA hardware area, with a 2.185 MHz maximum clock speed and 24 mW power consumption. The simple wavelet image denoising techniques resulted in cleaned images of up to ~27 dB PSNR.
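
    As a software reference for the lifting formulation mentioned above, the following Python/NumPy sketch performs one level of the reversible CDF 5/3 transform and its exact inverse using integer lifting with symmetric boundary extension (the JPEG2000 5/3 equations). It mirrors what a MATLAB or VHDL model would compute but is not the thesis's code; an even-length input is assumed.

        import numpy as np

        def cdf53_forward(x):
            """One level of the reversible CDF 5/3 DWT via integer lifting."""
            x = np.asarray(x, dtype=int)
            even, odd = x[0::2].copy(), x[1::2].copy()
            # Predict step: detail d[n] = x[2n+1] - floor((x[2n] + x[2n+2]) / 2).
            right = np.append(even[1:], even[-1])          # symmetric extension at the right edge
            d = odd - ((even + right) >> 1)
            # Update step: approximation s[n] = x[2n] + floor((d[n-1] + d[n] + 2) / 4).
            left = np.insert(d[:-1], 0, d[0])              # symmetric extension at the left edge
            s = even + ((left + d + 2) >> 2)
            return s, d

        def cdf53_inverse(s, d):
            """Exact inverse of cdf53_forward (lossless reconstruction)."""
            left = np.insert(d[:-1], 0, d[0])
            even = s - ((left + d + 2) >> 2)
            right = np.append(even[1:], even[-1])
            odd = d + ((even + right) >> 1)
            x = np.empty(even.size + odd.size, dtype=int)
            x[0::2], x[1::2] = even, odd
            return x

        sig = np.array([3, 7, 1, 8, 2, 8, 1, 8])
        lo, hi = cdf53_forward(sig)
        assert np.array_equal(cdf53_inverse(lo, hi), sig)  # lossless round trip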

    Efficient architectures of heterogeneous FPGA-GPU for 3-D medical image compression

    The advent of three-dimensional (3-D) imaging modalities such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and ultrasound (US) has generated a massive amount of volumetric image data. A survey of existing work reveals a large gap for further research in exploiting reconfigurable computing for 3-D medical image compression. This research proposes an FPGA-based co-processing solution to accelerate such a medical imaging system. The HWT block is implemented on the sbRIO-9632 prototyping board, which carries a Spartan-3 (XC3S2000) FPGA. Analysis and performance evaluation of the 3-D images were conducted. Furthermore, a novel architecture is proposed for the context-based adaptive binary arithmetic coder (CABAC), the advanced entropy coding tool employed by the main and higher profiles of H.264/AVC. This research focuses on a GPU implementation of CABAC and a comparative study of 3-D medical image compression systems with and without the discrete wavelet transform (DWT). Implementation results on MRI and CT images show the GPU significantly outperforming a single-threaded CPU implementation. Overall, CT and MRI modalities with DWT outperform images without the DWT process in terms of compression ratio, peak signal-to-noise ratio (PSNR), and latency. For heterogeneous computing, MRI images of various sizes and formats, such as JPEG and DICOM, were evaluated. Evaluation results show that, for each memory iteration, transfers from GPU to CPU consume more bandwidth or throughput. For a JPEG image of 786,486 bytes, the bandwidth consumed in the two directions tends to balance. Bandwidth depends on the transfer size: larger transfers incur more latency and throughput. Next, an OpenCL implementation for concurrent tasks on a dedicated FPGA is presented. Findings from the implementation reveal that OpenCL in batch-processing mode with AOC techniques offers substantial results, where the amount of logic, area, registers, and memory increases proportionally to the number of batches. This is because the kernel block is replicated according to the batch number, so memory-bank usage grows accordingly with the number of kernel blocks. A comparative study found that the tree-balanced and loop-unrolled architecture provides better results in terms of local memory, latency, and throughput.
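
    The GPU-to-CPU versus CPU-to-GPU transfer behaviour reported above could be probed on a CUDA device with a short measurement such as the Python sketch below. It assumes the CuPy library rather than the thesis's own OpenCL/CUDA framework, and the repeat count and second buffer size are arbitrary illustrative choices.

        import numpy as np
        import cupy as cp  # assumed available, with a CUDA-capable GPU

        def transfer_bandwidth(nbytes, repeats=10):
            """Rough (host-to-device, device-to-host) bandwidth in GB/s for a buffer of nbytes."""
            host = np.zeros(nbytes, dtype=np.uint8)
            dev = cp.zeros(nbytes, dtype=cp.uint8)

            def timed(copy):
                start, end = cp.cuda.Event(), cp.cuda.Event()
                start.record()
                for _ in range(repeats):
                    copy()
                end.record()
                end.synchronize()
                ms = cp.cuda.get_elapsed_time(start, end)
                return nbytes * repeats / (ms * 1e-3) / 1e9

            h2d = timed(lambda: dev.set(host))   # CPU -> GPU copy
            d2h = timed(lambda: dev.get())       # GPU -> CPU copy
            return h2d, d2h

        # Compare the 786,486-byte JPEG size mentioned above with a larger buffer.
        for size in (786486, 8 * 1024 * 1024):
            print(size, transfer_bandwidth(size))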

    Image compression techniques using vector quantization
