10 research outputs found

    Tchebichef Moment Based Hilbert Scan for Image Compression

    Image compression is now essential for applications such as transmission and database storage, where vast amounts of information must be compressed while both the compression ratio and the quality of the compressed image are kept high. For this reason, this paper develops a new algorithm that uses the discrete orthogonal Tchebichef moment together with a Hilbert curve for image compression. The analyzed image is divided into 8×8 sub-blocks, the Tchebichef moment transform is applied to each one, and the transformed coefficients of each 8×8 sub-block are then reordered by a Hilbert scan into a linear array, at which point Huffman coding is applied. Experimental results show that this algorithm improves coding efficiency while the quality of the reconstructed image is not significantly decreased. Keywords: Huffman Coding, Tchebichef Moment Transforms, Orthogonal Moment Functions, Hilbert, zigzag scan
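
    The pipeline described above (8×8 Tchebichef moments, Hilbert reordering of the coefficients, then Huffman coding) can be pictured with a short sketch. This is only an illustration, not the authors' implementation: the orthonormal basis is built here by a generic Gram-Schmidt procedure rather than the closed-form recurrence, the Hilbert index conversion is the standard d2xy routine, function names are invented for the sketch, and the Huffman stage is omitted.

```python
import numpy as np

def tchebichef_basis(N):
    # Discrete orthonormal Tchebichef (Gram) polynomials t_0..t_{N-1} on x = 0..N-1,
    # built degree by degree: multiply by x, then orthonormalise against lower orders.
    x = np.arange(N, dtype=float)
    K = np.zeros((N, N))
    K[0] = 1.0 / np.sqrt(N)
    for n in range(1, N):
        v = x * K[n - 1]                    # raise the polynomial degree by one
        for _ in range(2):                  # Gram-Schmidt, repeated for stability
            v -= K[:n].T @ (K[:n] @ v)
        K[n] = v / np.linalg.norm(v)
    return K                                # K @ K.T == identity

def hilbert_d2xy(n, d):
    # Standard conversion of distance d along an n x n Hilbert curve
    # (n a power of two) into (x, y) grid coordinates.
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x                     # rotate the quadrant
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

def block_to_hilbert_sequence(block):
    # 8x8 Tchebichef moment transform followed by a Hilbert scan of the
    # coefficients into a 1-D array (the input to the entropy coder).
    K = tchebichef_basis(8)
    coeffs = K @ block @ K.T
    return np.array([coeffs[y, x] for x, y in (hilbert_d2xy(8, d) for d in range(64))])

# Example on one random 8x8 sub-block; a real image would be tiled into such blocks.
rng = np.random.default_rng(0)
print(block_to_hilbert_sequence(rng.integers(0, 256, (8, 8)).astype(float)).shape)
```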

    Spectrum Analysis of Speech Recognition via Discrete Tchebichef Transform

    Speech recognition is still a growing field. It carries strong potential in the near future as computing power grows. Spectrum analysis is an elementary operation in speech recognition. The Fast Fourier Transform (FFT) is the traditional technique used to analyze the frequency spectrum of the signal in speech recognition. Speech recognition requires heavy computation due to the large number of samples per window; in addition, the FFT involves complex-valued arithmetic. This paper proposes an approach based on discrete orthonormal Tchebichef polynomials to analyze a vowel and a consonant in the spectral frequency domain for speech recognition. The Discrete Tchebichef Transform (DTT) is used instead of the popular FFT. Preliminary experimental results show that DTT has the potential to be a simpler and faster transformation for speech recognition
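
    A minimal sketch of the idea, assuming a synthetic vowel-like frame (two harmonics, 8 kHz sampling, 256 samples) rather than recorded speech: the 1-D discrete orthonormal Tchebichef transform of a windowed frame yields coefficients whose magnitudes play the role of the FFT magnitude spectrum. The basis construction below is a generic Gram-Schmidt one, not necessarily the recurrence used in the paper.

```python
import numpy as np

def tchebichef_basis(N):
    # Discrete orthonormal Tchebichef polynomials on x = 0..N-1
    # (same construction as in the earlier sketch).
    x = np.arange(N, dtype=float)
    K = np.zeros((N, N))
    K[0] = 1.0 / np.sqrt(N)
    for n in range(1, N):
        v = x * K[n - 1]
        for _ in range(2):
            v -= K[:n].T @ (K[:n] @ v)
        K[n] = v / np.linalg.norm(v)
    return K

def dtt(frame):
    # 1-D discrete Tchebichef transform: c_n = sum_x t_n(x) * s(x).
    return tchebichef_basis(len(frame)) @ frame

# Synthetic "vowel-like" frame: harmonics at 300 Hz and 900 Hz, fs = 8 kHz.
fs, n = 8000, 256
t = np.arange(n) / fs
frame = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 900 * t)
coeffs = dtt(frame)
print(np.argsort(np.abs(coeffs))[-5:])   # orders carrying the most energy
```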

    Discrete Tchebichef transform and its application to image / video compression

    The discrete Tchebichef transform (DTT) is a novel polynomial-based orthogonal transform. It exhibits interesting properties, such as high energy compaction, optimal decorrelation and direct orthogonality, and hence is expected to produce good transform coding results. Advances in the areas of image and video coding have generated a growing interest in discrete transforms. The demand for high quality with a limited use of computational resources and improved cost benefits has led to experimentation with novel transform coding methods. One such experiment is undertaken in this thesis with the DTT. We propose the integer Tchebichef transform (ITT) for 4×4 and 8×8 DTTs. Using the proposed ITT, we also design fast multiplier-free algorithms for 4-point and 8-point DTTs that are superior to the existing algorithms. We perform image compression using the 4×4 and 8×8 DTT. In order to analyze the performance of DTT, we compare the image compression results of DTT, the discrete cosine transform (DCT) and the integer cosine transform (ICT). Image quality measures that span both subjective and objective evaluation techniques are computed for the compressed images, and the results are analyzed taking into account the statistical properties of the images for a better understanding of the behavioral trends. Substantial improvement is observed in the quality of DTT-compressed images. The appealing characteristics of DTT motivate us to take a step further and evaluate the computational benefits of ITT over ICT, which is currently used in the H.264/AVC standard. The merits of DTT as demonstrated in this thesis are its simplicity, good image compression potential and computational efficiency, further enhanced by its low precision requirement
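
    As a rough illustration of the integer-transform idea (not the ITT actually derived in the thesis), one can scale and round the 4-point orthonormal DTT matrix and check how close the result stays to the exact transform; the scale factor 64 below is an arbitrary choice for the sketch, and the basis is again built by Gram-Schmidt.

```python
import numpy as np

def tchebichef_basis(N):
    # Orthonormal discrete Tchebichef polynomials on x = 0..N-1 (Gram-Schmidt construction).
    x = np.arange(N, dtype=float)
    K = np.zeros((N, N))
    K[0] = 1.0 / np.sqrt(N)
    for n in range(1, N):
        v = x * K[n - 1]
        for _ in range(2):
            v -= K[:n].T @ (K[:n] @ v)
        K[n] = v / np.linalg.norm(v)
    return K

T4 = tchebichef_basis(4)                 # exact (floating-point) 4-point DTT matrix
ITT4 = np.round(64 * T4).astype(int)     # naive integer approximation, scale = 64

block = np.arange(16, dtype=float).reshape(4, 4)
exact = T4 @ block @ T4.T                          # exact 2-D DTT of a 4x4 block
approx = (ITT4 @ block @ ITT4.T) / (64.0 * 64.0)   # integer arithmetic plus final rescale
print(np.max(np.abs(exact - approx)))              # error of the rounded-matrix version
```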

    Tchebichef image watermarking along the edge using YCoCg-R color space for copyright protection

    The easy creation and manipulation of digital images present a potential danger of counterfeiting and forgery. Watermarking, which embeds a watermark into an image, can be used to overcome these problems and provide copyright protection. Digital image watermarking should meet several requirements: it should maintain image quality, make the watermark difficult to remove, allow good-quality watermark extraction, and be practical to apply. This research proposes Tchebichef watermarking along the edge based on the YCoCg-R color space. The embedding region is selected by considering the human visual characteristics (HVC) entropy; the blocks with the minimum HVC entropy values are transformed by Tchebichef moments. The moment locations C(0,1), C(1,0), C(0,2) and C(2,0) are randomly used to embed each watermark bit. The proposed watermarking scheme produces good imperceptibility, with an average SSIM value of around 0.98. The watermark recovery is more resistant to several types of attack than that of other schemes. © 2019 Institute of Advanced Engineering and Science. All rights reserved
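
    The YCoCg-R color space mentioned here is the reversible (lifting-based) variant of YCoCg; a short sketch of the forward and inverse conversions on integer RGB values is given below. The Tchebichef moment embedding and the HVC entropy selection are not reproduced, and the function names are illustrative.

```python
def rgb_to_ycocg_r(r, g, b):
    # Reversible YCoCg-R forward transform (integer lifting steps).
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    # Exact inverse: undo the lifting steps in reverse order.
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

# Round-trip check on a sample pixel: the transform is lossless on integers.
assert ycocg_r_to_rgb(*rgb_to_ycocg_r(200, 120, 30)) == (200, 120, 30)
```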

    Spectral Test via Discrete Tchebichef Transform for Randomness

    Random keys play an essential role in cryptography. The NIST statistical test suite for randomness is the most comprehensive set of randomness tests; it is popular and has been used as a benchmark for randomness. One of its tests is the spectral test. There have been serious problems with the spectral test, as pointed out by a few researchers. In this paper, an alternative test is proposed to replace the spectral test. The distribution of the discrete orthonormal Tchebichef transform has been obtained from computational observations made on random noise. A recommendation on the new random test setting for short cryptographic keys is also made
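
    A hedged sketch of how such a test might be structured (the actual statistic, threshold and decision rule in the paper are not reproduced here): transform a ±1 bit sequence with the discrete orthonormal Tchebichef transform and count how many coefficients fall below a threshold, analogous to the way the NIST spectral test counts DFT peaks.

```python
import numpy as np

def tchebichef_basis(N):
    # Discrete orthonormal Tchebichef polynomials (Gram-Schmidt construction).
    x = np.arange(N, dtype=float)
    K = np.zeros((N, N))
    K[0] = 1.0 / np.sqrt(N)
    for n in range(1, N):
        v = x * K[n - 1]
        for _ in range(2):
            v -= K[:n].T @ (K[:n] @ v)
        K[n] = v / np.linalg.norm(v)
    return K

def dtt_coefficient_count(bits, threshold):
    # Map bits {0,1} to {-1,+1}, apply the DTT and count coefficients whose
    # magnitude stays below the threshold (illustrative statistic only).
    s = 2.0 * np.asarray(bits, dtype=float) - 1.0
    c = tchebichef_basis(len(s)) @ s
    return int(np.sum(np.abs(c) < threshold))

rng = np.random.default_rng(1)
key_bits = rng.integers(0, 2, 128)            # a short "cryptographic key"
print(dtt_coefficient_count(key_bits, 2.0))   # would be compared against an expected count
```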

    A Psychovisual Model based on Discrete Orthonormal Transform

    The Discrete Orthonormal Transform has been a basis for digital image processing. The fewer coefficients of a Discrete Orthonormal Transform that are needed to reconstruct an image, the more compact the support the transform provides to the image. The Tchebychev Moment Transform has been shown to provide more compact support to an image than the popular Discrete Cosine Transform. This paper investigates the contribution of each coefficient of the Discrete Orthonormal Transform to image reconstruction. The error threshold in image reconstruction serves as the primitive of a psychovisual model of an image. Experimental results show that the psychovisual model provides a statistically efficient error threshold for image reconstruction
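
    One way to picture the coefficient-contribution experiment (a sketch under assumptions, not the paper's protocol): reconstruct an 8×8 block from progressively fewer low-order Tchebichef coefficients and record the reconstruction error, which is the quantity an error threshold would be placed on. The block here is random and the basis is the generic Gram-Schmidt construction.

```python
import numpy as np

def tchebichef_basis(N):
    # Discrete orthonormal Tchebichef polynomials (Gram-Schmidt construction).
    x = np.arange(N, dtype=float)
    K = np.zeros((N, N))
    K[0] = 1.0 / np.sqrt(N)
    for n in range(1, N):
        v = x * K[n - 1]
        for _ in range(2):
            v -= K[:n].T @ (K[:n] @ v)
        K[n] = v / np.linalg.norm(v)
    return K

def reconstruction_error(block, kept):
    # Keep only the `kept` lowest-order coefficients in each direction,
    # invert the transform and measure the mean absolute error.
    K = tchebichef_basis(block.shape[0])
    coeffs = K @ block @ K.T
    coeffs[kept:, :] = 0.0
    coeffs[:, kept:] = 0.0
    return np.mean(np.abs(block - K.T @ coeffs @ K))

rng = np.random.default_rng(2)
block = rng.integers(0, 256, (8, 8)).astype(float)
for kept in (8, 6, 4, 2):
    print(kept, reconstruction_error(block, kept))
```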

    Development of Novel Image Compression Algorithms for Portable Multimedia Applications

    Portable multimedia devices such as digital cameras, mobile devices, personal digital assistants (PDAs), etc. have limited memory, battery life and processing power. Real-time processing and transmission using these devices require image compression algorithms that can compress efficiently with reduced complexity. Due to limited resources, it is not always possible to implement the best algorithms inside these devices. In uncompressed form, both raw and image data occupy an unreasonably large space. However, both raw and image data have a significant amount of statistical and visual redundancy. Consequently, the storage space used can be efficiently reduced by compression. In this thesis, some novel low-complexity and embedded image compression algorithms are developed that are especially suitable for low bit rate image compression on these devices. Despite the rapid progress in Internet and multimedia technology, the demand for data storage and data transmission bandwidth continues to outstrip the capabilities of available technology. Browsing images over the Internet from image data sets using these devices requires fast encoding and decoding speed with better rate-distortion performance. With the progressive picture build-up of wavelet-based coded images, recent multimedia applications demand good quality images at the earlier stages of transmission. This is particularly important if the image is browsed over wireless links where limited channel capacity, storage and computation are the deciding parameters. Unfortunately, the performance of the JPEG codec degrades at low bit rates because of the underlying block-based DCT transform. Although wavelet-based codecs provide substantial improvements in progressive picture quality at lower bit rates, these coders do not fully exploit the coding performance at lower bit rates. It is evident from the statistics of transformed images that the number of significant coefficients having magnitude higher than the early thresholds is very small. These wavelet-based codecs code a zero for each insignificant subband as they move from the coarsest to the finest subbands. It is also demonstrated that there could be six to seven bit-plane passes in which wavelet coders encode many zeros, as many subbands are likely to be insignificant with respect to the early thresholds. Bits indicating the insignificance of a coefficient or subband are required, but they do not code information that reduces the distortion of the reconstructed image. This leads to zero reduction in distortion for a non-zero increase in bit rate. Another problem associated with wavelet-based coders such as Set partitioning in hierarchical trees (SPIHT), Set partitioning embedded block (SPECK) and Wavelet block-tree coding (WBTC) is the use of auxiliary lists. The size of the list data structures increases exponentially as more and more elements are added, removed or moved in each bit-plane pass. This increases the dynamic memory requirement of the codec, which is an inefficient feature for hardware implementations. Later, listless variants of SPIHT and SPECK, e.g. No-list SPIHT (NLS) and Listless SPECK (LSK) respectively, were developed. However, these algorithms have rate-distortion performances similar to the list-based coders. An improved LSK (ILSK) algorithm is proposed in this dissertation that improves the low bit rate performance of LSK by encoding a much smaller number of symbols (i.e. zeros) for the many insignificant subbands.
    Further, the ILSK is combined with a block-based transform known as the discrete Tchebichef transform (DTT). The proposed new coder is named Hierarchical listless DTT (HLDTT). DTT is chosen over DCT because it has an energy compaction property similar to that of the discrete cosine transform (DCT). It is demonstrated that images decoded using HLDTT have better visual quality (i.e., Mean Structural Similarity) than images decoded using DCT-based embedded coders at most bit rates. The ILSK algorithm is also combined with a lifting-based wavelet transform to show its superiority over JPEG2000 at lower rates in terms of peak signal-to-noise ratio (PSNR). A fully scalable and random-access-decodable listless algorithm based on the lifting-based ILSK is also developed. The proposed algorithm, named Scalable listless embedded block partitioning (S-LEBP), generates a bit stream that offers increasing signal-to-noise ratio and spatial resolution. These are very useful features for the transmission of images in a heterogeneous network that optimally services each user according to the available bandwidth and computing needs. Random access decoding is a very useful feature for extracting/manipulating certain areas of an image with minimal decoding work. The idea used in ILSK is also extended to encode and decode color images. The proposed algorithm for coding color images is named the Color listless embedded block partitioning (CLEBP) algorithm. The coding efficiency of CLEBP is compared with Color SPIHT (CSPIHT) and a color variant of the WBTC algorithm. The simulation results show that CLEBP exhibits a significant PSNR improvement over the latter two algorithms on various types of images. Although many modifications to NLS and LSK have been made, a listless modification of the WBTC algorithm has not been reported in the literature. Therefore, a listless variant of WBTC (named LBTC) is proposed. LBTC not only reduces the memory requirement by 88-89% but also increases the encoding and decoding speed, while preserving the rate-distortion performance. Further, the combinations of DCT with LBTC (named DCT-LBT) and DTT with LBTC (named Hierarchical listless block-tree DTT, HLBT-DTT) are compared with some state-of-the-art DCT-based embedded coders. It is shown that the proposed DCT-LBT and HLBT-DTT achieve significant PSNR improvements over almost all the embedded coders at most bit rates. In some multimedia applications, e.g., digital cameras, camcorders, etc., the images always need to have a fixed, pre-determined high quality, and the extra effort required for quality scalability is wasted. Therefore, non-embedded algorithms are best suited for these applications. The proposed algorithms can be made non-embedded by encoding a fixed set of bit planes at a time. Instead, a sparse orthogonal transform matrix is proposed, which can be integrated into a JPEG baseline coder. The proposed matrix promises a substantial reduction in hardware complexity with a marginal loss of image quality over a considerable range of bit rates compared with block-based DCT or integer DCT
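
    The bit-plane significance testing that the listless coders above revolve around can be sketched briefly. This toy example assumes nothing about the actual SPIHT/SPECK/LSK data structures; it only shows how, at the early (large) thresholds, most coefficients are insignificant and would each cost a "zero" symbol unless whole blocks or trees can be skipped with a single symbol.

```python
import numpy as np

# Toy transform coefficients; in the thesis these come from a wavelet or DTT stage.
rng = np.random.default_rng(3)
coeffs = rng.laplace(scale=8.0, size=(16, 16))       # heavy-tailed, mostly small values

max_bitplane = int(np.floor(np.log2(np.max(np.abs(coeffs)))))
for n in range(max_bitplane, max_bitplane - 4, -1):   # the first few bit-plane passes
    threshold = 2 ** n
    significant = np.abs(coeffs) >= threshold
    # Every insignificant coefficient coded individually costs one 'zero' symbol;
    # block/tree partitioning (SPIHT, SPECK, WBTC and their listless variants)
    # exists precisely to replace long runs of these zeros with a single symbol.
    print(f"threshold 2^{n}: {significant.sum():3d} significant, "
          f"{(~significant).sum():3d} zero symbols if coded one by one")
```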

    Three layer authentications with a spiral block mapping to prove authenticity in medical images

    Digital medical images have the potential to be manipulated by unauthorized persons due to advances in communication technology. Verifying the integrity and authenticity of medical images has therefore become an important issue. This paper proposes a self-embedding watermark using a spiral block mapping for tamper detection and restoration. Block-based coding with a block size of 3x3 is applied to perform the self-embedding watermark with two authentication bits and seven recovery bits. The authentication bits are obtained from a set of conditions between the sub-block and the block image and from the parity bits of each sub-block. The authentication bits and the recovery bits are embedded in the least significant bits using the proposed spiral block mapping, with the recovery bits embedded into different sub-blocks based on the spiral block mapping. The watermarked images were tested under various tampering attacks such as blurring, unsharp masking, copy-move, mosaic, noise, removal, and sharpening. The experimental results show that the scheme achieves a PSNR value of about 51.29 dB and an SSIM value of about 0.994 on the watermarked image, and tamper localization with an accuracy of 93.8%. In addition, the proposed scheme does not require external information to perform recovery. The proposed scheme was able to recover the tampered image with a PSNR value of 40.45 dB and an SSIM value of 0.994
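
    A sketch of the least-significant-bit embedding step on a single 3×3 block (nine pixels, so nine bits: two authentication bits followed by seven recovery bits). How the actual scheme derives the authentication/recovery bits and the spiral mapping between blocks is not reproduced here, and the helper names are invented for the sketch.

```python
import numpy as np

def embed_bits_lsb(block3x3, bits):
    # Write nine watermark bits into the least significant bits of a 3x3 block.
    assert block3x3.shape == (3, 3) and len(bits) == 9
    flat = block3x3.flatten()
    flat = (flat & 0xFE) | np.asarray(bits, dtype=flat.dtype)   # clear the LSB, then set it
    return flat.reshape(3, 3)

def extract_bits_lsb(block3x3):
    # Read the nine embedded bits back from the least significant bits.
    return (block3x3.flatten() & 1).tolist()

block = np.array([[120, 121, 119],
                  [118, 122, 120],
                  [121, 119, 117]], dtype=np.uint8)
bits = [1, 0,  1, 1, 0, 0, 1, 0, 1]        # 2 authentication bits + 7 recovery bits
watermarked = embed_bits_lsb(block, bits)
assert extract_bits_lsb(watermarked) == bits
```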

    Approximate and timing-speculative hardware design for high-performance and energy-efficient video processing

    Since the end of 2-D transistor scaling appeared on the horizon, innovative circuit design paradigms have been on the rise to go beyond well-established, ultraconservative exact computing. Many compute-intensive applications, such as video processing, exhibit an intrinsic error resilience and do not necessarily require perfect accuracy in their numerical operations. Approximate computing (AxC) is emerging as a design alternative that improves the performance and energy efficiency of many applications by trading their intrinsic error tolerance for algorithm and circuit efficiency. Exact computing also imposes worst-case timing on the conventional design of hardware accelerators to ensure reliability, leading to an efficiency loss. Conversely, the timing-speculative (TS) hardware design paradigm allows increasing the frequency or decreasing the voltage beyond the limits determined by static timing analysis (STA), thereby narrowing the pessimistic safety margins that conventional design methods implement to prevent hardware timing errors. Timing errors should be evaluated by accurate gate-level simulation, but a significant gap remains: how do these timing errors propagate from the underlying hardware all the way up to the overall algorithm behavior, where they may degrade the performance and quality of service of the application at stake? This thesis tackles this issue by developing and demonstrating a cross-layer framework capable of investigating both AxC (i.e., approximate arithmetic operators, approximate synthesis, gate-level pruning) and TS hardware design (i.e., voltage over-scaling, frequency over-clocking, temperature rise, and device aging). The cross-layer framework can simulate both timing errors and logic errors at the gate level by crossing them dynamically, linking the hardware results with the algorithm level, and vice versa, during the evolution of the application's runtime. Existing frameworks investigate AxC and TS techniques at the circuit level (i.e., at the output of the accelerator), agnostic to the ultimate impact at the application level (i.e., where the impact is truly manifested), leading to less optimization. Unlike the state of the art, the proposed framework offers a holistic approach to assessing the trade-off of AxC and TS techniques at the application level. This framework maximizes energy efficiency and performance by identifying the maximum approximation levels at the application level that still fulfill the required good-enough quality. This thesis evaluates the framework with an 8-way SAD (Sum of Absolute Differences) hardware accelerator operating within an HEVC encoder as a case study. Application-level results show that the SAD based on approximate adders achieves savings of up to 45% in energy/operation with an increase of only 1.9% in BD-BR. On the other hand, VOS (Voltage Over-Scaling) applied to the SAD generates savings of up to 16.5% in energy/operation with around a 6% increase in BD-BR. The framework also reveals that the boost of about 6.96% (at 50°) to 17.41% (at 75° with 10-year aging) in the maximum clock frequency achieved with TS hardware design is totally lost by the processing overhead, from 8.06% to 46.96%, when choosing an unreliable algorithm for the block matching algorithm (BMA). We also show that the overhead can be avoided by adopting a reliable BMA.
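
    As an illustration of the kind of approximate arithmetic involved (the specific approximate adders evaluated in the thesis are not identified here), the sketch below uses a lower-part-OR style adder, in which the k low bits are combined with a bitwise OR instead of a carry chain, inside a small sum-of-absolute-differences computation. The sample blocks and the parameter k are arbitrary.

```python
def approx_add(a, b, k=4):
    # Lower-part-OR approximate adder: exact addition on the high bits,
    # bitwise OR (no carry propagation) on the k low bits.
    mask = (1 << k) - 1
    high = ((a >> k) + (b >> k)) << k
    return high | (a & mask) | (b & mask)

def sad_approx(block_a, block_b, k=4):
    # Sum of absolute differences accumulated with the approximate adder,
    # as a block-matching cost in a video encoder would compute it.
    total = 0
    for x, y in zip(block_a, block_b):
        total = approx_add(total, abs(x - y), k)
    return total

a = [12, 200, 34, 90, 15, 77, 143, 60]
b = [10, 190, 40, 88, 20, 70, 150, 66]
exact = sum(abs(x - y) for x, y in zip(a, b))
print(exact, sad_approx(a, b))    # the approximate SAD deviates slightly from the exact one
```
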
    This thesis also presents approximate DTT (Discrete Tchebichef Transform) hardware designs obtained by exploring transform matrix approximation, truncation and pruning. The results show that the approximate DTT hardware increases the maximum frequency by up to 64%, reduces the circuit area by up to 43.6%, and saves up to 65.4% in power dissipation. The DTT proposal mapped to an FPGA shows an increase of up to 58.9% in the maximum frequency and savings of about 28.7% and 32.2% in slices and dynamic power, respectively, compared with the state of the art
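
    A rough sketch of the pruning idea for the transform stage (the truncation levels and whether the hardware operates on a pruned matrix exactly like this are assumptions): dropping the high-order rows of the 8-point DTT matrix means fewer coefficients are computed per block, which is what removes multipliers and area in a hardware design.

```python
import numpy as np

def tchebichef_basis(N):
    # Discrete orthonormal Tchebichef polynomials (Gram-Schmidt construction).
    x = np.arange(N, dtype=float)
    K = np.zeros((N, N))
    K[0] = 1.0 / np.sqrt(N)
    for n in range(1, N):
        v = x * K[n - 1]
        for _ in range(2):
            v -= K[:n].T @ (K[:n] @ v)
        K[n] = v / np.linalg.norm(v)
    return K

def pruned_dtt(block, keep):
    # Compute only the first `keep` rows/columns of coefficients (pruned transform).
    K = tchebichef_basis(block.shape[0])[:keep]     # keep x N partial transform matrix
    return K @ block @ K.T                          # keep x keep coefficient block

rng = np.random.default_rng(4)
block = rng.integers(0, 256, (8, 8)).astype(float)
full = pruned_dtt(block, 8)
low = pruned_dtt(block, 4)                          # 4x4 low-order coefficients only
print(np.allclose(full[:4, :4], low))               # pruning == dropping high orders
```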

    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging from the interdisciplinary field between technologies of effective visual features and the human-brain cognition process. Effective visual features are made possible by rapid developments in appropriate sensor equipment, novel filter designs, and viable information processing architectures, while the understanding of the human-brain cognition process broadens the way in which computers can perform pattern recognition tasks. The present book is intended to collect representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology and applications of pattern recognition