
    Image Coding by Multi-Step, Adaptive Flux Interpolation

    This paper describes and discusses a new technique, multi-step adaptive flux interpolation (MAFI), and its application to coding image data. The output of MAFI, when applied to an image, is still in image form but has a more uniform feature density, because the original image has been warped by removing the rows and columns that contain mostly redundant pixels. The output is also greatly reduced in size, and the side information is minimal. The MAFI output can be further compressed with conventional coders, raising the compression ratio even higher. Because of its warped nature, the MAFI output's statistics are also more consistent with the properties assumed by block-based discrete cosine transform (DCT) models.
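
    A minimal sketch of the warping idea described above, assuming a simple gradient-energy measure as a stand-in for the paper's adaptive flux criterion (which the abstract does not specify); the kept row/column indices serve as the side information.

```python
# Illustrative sketch only: the actual MAFI flux criterion is not
# reproduced here; low-activity rows/columns are found with a simple
# gradient-energy proxy as a stand-in.
import numpy as np

def warp_drop_redundant(img: np.ndarray, keep_ratio: float = 0.75):
    """Remove the least-active rows and columns, returning the warped
    image plus the kept indices as side information."""
    # Per-row / per-column activity: mean absolute finite difference.
    row_act = np.abs(np.diff(img, axis=1)).mean(axis=1)
    col_act = np.abs(np.diff(img, axis=0)).mean(axis=0)
    n_rows = max(1, int(keep_ratio * img.shape[0]))
    n_cols = max(1, int(keep_ratio * img.shape[1]))
    rows = np.sort(np.argsort(row_act)[-n_rows:])   # most active rows
    cols = np.sort(np.argsort(col_act)[-n_cols:])   # most active columns
    return img[np.ix_(rows, cols)], (rows, cols)

img = np.random.rand(64, 64)
warped, side_info = warp_drop_redundant(img)
print(warped.shape)  # e.g. (48, 48)
```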

    SIM-DSP: A DSP-Enhanced CAD Platform for Signal Integrity Macromodeling and Simulation

    The macromodeling-simulation process for signal-integrity (SI) verification has become necessary in high-speed circuit and system design. This paper introduces a “VLSI Signal Integrity Macromodeling and Simulation via Digital Signal Processing Techniques” framework (the SIM-DSP framework), which applies digital signal processing techniques to facilitate SI verification in the pre-layout design phase. Core identification modules and peripheral pre-/post-processing modules have been developed and assembled into a verification flow. In particular, a single-step discrete cosine transform truncation (DCTT) module has been developed for the modeling-simulation process. In DCTT, the response-modeling problem is cast as a signal-compression problem, wherein the system response is represented by a truncated set of non-pole-based DCT bases and the error can be analyzed through Parseval's theorem. Practical examples are given to show the applicability of the proposed framework.
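
    A minimal sketch of the DCTT idea: the response is represented by a truncated set of orthonormal DCT coefficients, and the truncation error energy follows from Parseval's theorem. The toy response and the number of retained coefficients K are illustrative assumptions.

```python
# DCT truncation: keep the K largest orthonormal DCT coefficients of a
# response and bound the error energy via Parseval's theorem.
import numpy as np
from scipy.fft import dct, idct

t = np.linspace(0, 1, 512)
resp = np.exp(-5 * t) * np.sin(40 * np.pi * t)   # toy system response

c = dct(resp, norm='ortho')                      # orthonormal DCT-II
K = 64
idx = np.argsort(np.abs(c))[::-1][:K]            # indices of K largest
c_trunc = np.zeros_like(c)
c_trunc[idx] = c[idx]
approx = idct(c_trunc, norm='ortho')

# Parseval: the time-domain error energy equals the energy of the
# discarded coefficients (up to floating-point round-off).
err_time = np.sum((resp - approx) ** 2)
err_coef = np.sum(c ** 2) - np.sum(c[idx] ** 2)
print(err_time, err_coef)
```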

    Graph Spectral Image Processing

    The recent advent of graph signal processing (GSP) has spurred intensive study of signals that live naturally on irregular data kernels described by graphs (e.g., social networks, wireless sensor networks). Though a digital image contains pixels that reside on a regularly sampled 2D grid, if one can design an appropriate underlying graph connecting pixels with weights that reflect the image structure, then one can interpret the image (or an image patch) as a signal on a graph and apply GSP tools to process and analyze the signal in the graph spectral domain. In this article, we overview recent graph spectral techniques in GSP specifically for image and video processing. The topics covered include image compression, image restoration, image filtering, and image segmentation.
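
    As an illustration of the graph interpretation described above, the sketch below builds a 4-connected pixel graph for a small patch, with edge weights reflecting intensity similarity, and applies a crude low-pass filter in the graph spectral domain; the weight parameter sigma and the cutoff are arbitrary choices, not values from the article.

```python
# Graph spectral filtering of an image patch: pixels are nodes on a
# 4-connected grid, edge weights encode intensity similarity, and a
# low-pass filter is applied to the graph Fourier coefficients.
import numpy as np

def grid_laplacian(patch, sigma=0.1):
    h, w = patch.shape
    n = h * w
    W = np.zeros((n, n))
    for i in range(h):
        for j in range(w):
            p = i * w + j
            for di, dj in ((0, 1), (1, 0)):       # right and down neighbors
                ii, jj = i + di, j + dj
                if ii < h and jj < w:
                    q = ii * w + jj
                    wgt = np.exp(-((patch[i, j] - patch[ii, jj]) ** 2) / sigma ** 2)
                    W[p, q] = W[q, p] = wgt
    return np.diag(W.sum(axis=1)) - W             # combinatorial Laplacian L = D - W

patch = np.random.rand(8, 8)
L = grid_laplacian(patch)
lam, U = np.linalg.eigh(L)                        # graph Fourier basis
coeffs = U.T @ patch.ravel()                      # GFT of the patch
coeffs[lam > 2.0] = 0                             # crude low-pass filter
smoothed = (U @ coeffs).reshape(patch.shape)
```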

    Perceptual Zero-Tree Coding with Efficient Optimization for Embedded Platforms

    This study proposes a block-edge-based perceptual zero-tree coding (PZTC) method, implemented with efficient optimization on an embedded platform. PZTC combines two novel compression concepts for coding efficiency and quality: block-edge detection (BED) and a low-complexity, low-memory entropy coder (LLEC). The proposed PZTC was implemented as a fixed-point version and optimized on a DSP-based platform using both the presented platform-independent and platform-dependent optimization techniques. For platform-dependent optimization, this study examines the fixed-point PZTC and analyzes its complexity to optimize PZTC toward optimal coding efficiency. Furthermore, hardware-based platform-dependent optimizations are presented to reduce the memory size. The performance, including compression quality and efficiency, is validated by experimental results.
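
    The abstract does not give the BED criterion, so the following sketch uses a plain gradient-energy threshold as a hypothetical stand-in to show how blocks might be classified as edge or smooth before zero-tree coding.

```python
# Illustrative block-edge detection stand-in: classify 8x8 blocks as
# "edge" or "smooth" from their mean gradient energy. The actual PZTC
# criterion and threshold are not specified in the abstract.
import numpy as np

def classify_blocks(img, block=8, thresh=0.05):
    gy, gx = np.gradient(img.astype(float))       # vertical / horizontal gradients
    energy = np.sqrt(gx ** 2 + gy ** 2)
    h, w = img.shape
    flags = np.zeros((h // block, w // block), dtype=bool)
    for bi in range(h // block):
        for bj in range(w // block):
            blk = energy[bi*block:(bi+1)*block, bj*block:(bj+1)*block]
            flags[bi, bj] = blk.mean() > thresh   # True -> edge block
    return flags

flags = classify_blocks(np.random.rand(64, 64))
print(flags.sum(), "edge blocks of", flags.size)
```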

    RGB Medical Video Compression Using Geometric Wavelet

    Video compression is used in a wide range of applications in the medical domain, especially telemedicine. Compared with classical transforms, the wavelet transform performs significantly better in the horizontal, vertical, and diagonal directions, but it therefore introduces strong discontinuities in regions of complex geometry, and detecting such complex geometry is a key challenge for highly efficient compression. In order to capture anisotropic regularity along various curves, this paper proposes a new, efficient, and precise transform, termed the bandelet basis, based on the DWT, quadtree decomposition, and optical flow. To encode the significant coefficients we use the efficient SPIHT coder. The experimental results show that at a low bit rate (0.3 Mbps) the proposed DBT-SPIHT algorithm is able to reduce the complex-geometry detection by up to 37.19% and 28.20% compared with the DWT-SPIHT and DCuT-SPIHT algorithms.
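
    A sketch of the wavelet front end of such a pipeline using PyWavelets; the geometric (bandelet) reordering along the optical-flow direction and the SPIHT coder itself are omitted, and the wavelet, level, and threshold are illustrative.

```python
# Wavelet front end only: a 3-level 2D DWT of a stand-in frame, with a
# toy significance threshold in place of the quadtree/flow-driven
# bandelet stage and the SPIHT bit-plane coder.
import numpy as np
import pywt

frame = np.random.rand(128, 128)                  # stand-in video frame
coeffs = pywt.wavedec2(frame, 'db4', level=3)     # 3-level 2D DWT
arr, slices = pywt.coeffs_to_array(coeffs)        # flatten subbands for coding
arr[np.abs(arr) < 0.1] = 0                        # toy significance threshold
rec = pywt.waverec2(
    pywt.array_to_coeffs(arr, slices, output_format='wavedec2'), 'db4')
print(np.abs(frame - rec[:128, :128]).max())      # reconstruction error
```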

    AN INVESTIGATION OF DIFFERENT VIDEO WATERMARKING TECHNIQUES

    Watermarking is a technology developed to address the problem of illegal manipulation and distribution of digital data. It is the art of hiding copyright information in a host such that the embedded data remain imperceptible. The cover takes the form of a digital multimedia object, namely an image, audio, or video. The extensive literature on performance improvements in video watermarking techniques is critically reviewed and presented in this paper. A comprehensive review of the literature on the evolution of various video watermarking techniques for achieving robustness and maintaining the quality of watermarked video sequences is also provided.
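
    To make the embedding idea concrete, here is a generic toy scheme that hides one bit in a mid-frequency DCT coefficient of an 8x8 block; it is not any specific method from the surveyed literature, and the coefficient position and strength alpha are arbitrary.

```python
# Toy illustration of imperceptible embedding: hide a binary watermark
# bit in a mid-frequency DCT coefficient of an 8x8 pixel block.
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block, bit, alpha=4.0):
    """Embed one bit by forcing a mid-frequency DCT coefficient
    (position (3, 4), chosen arbitrarily) to +/- alpha."""
    c = dctn(block.astype(float), norm='ortho')
    c[3, 4] = alpha if bit else -alpha
    return idctn(c, norm='ortho')

def extract_bit(block):
    c = dctn(block.astype(float), norm='ortho')
    return c[3, 4] > 0

block = np.random.rand(8, 8) * 255
marked = embed_bit(block, bit=1)
print(extract_bit(marked))                        # True
```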

    Application of Bandelet Transform in Image and Video Compression

    The need for large-scale storage and transmission of data is growing exponentially with the widespread use of computers, so efficient ways of storing data have become important. With the advancement of technology, the world finds itself amid a vast amount of information, and efficient methods are needed to handle it. Data compression is a technique that minimizes the size of a file while keeping its quality essentially unchanged, so more data can be stored in the same memory space. There are various image compression standards, such as JPEG, which uses the discrete cosine transform, and JPEG 2000, which uses the discrete wavelet transform. The discrete cosine transform gives excellent compaction for highly correlated data, and its computational complexity is low owing to its good information-packing ability. However, it produces blocking artifacts, graininess, and blurring in the output, which the discrete wavelet transform overcomes. The image size is reduced by discarding coefficients smaller than a prespecified threshold without losing much information, but the wavelet transform also has limitations as the complexity of the image increases. Wavelets are optimal for point singularities, but they are not optimal for line and curve singularities, and they do not consider the image geometry, which is a vital source of redundancy. Here we analyze a new type of basis, the bandelets, which can be constructed from a wavelet basis and which exploit an important source of regularity: geometric redundancy. The image is decomposed along the direction of its geometry. This approach is better than other methods because the geometry is described by a flow vector rather than by edges; the flow indicates the direction in which the image intensity varies smoothly. It gives a better compression measure than wavelet bases. A fast subband coding scheme is used to decompose the image in a bandelet basis, and the method has been extended to video compression. The bandelet-transform-based image and video compression method is compared with the corresponding wavelet scheme. Performance measures such as peak signal-to-noise ratio (PSNR), compression ratio, bits per pixel (bpp), and entropy are evaluated for both image and video compression.
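
    For reference, the evaluation metrics named above can be computed as follows (an 8-bit peak value is assumed; the coded bit count would come from the entropy coder's output).

```python
# The standard metrics used above: PSNR, compression ratio, and bits
# per pixel for an 8-bit image.
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB between reference and reconstruction."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def bits_per_pixel(n_coded_bits, shape):
    return n_coded_bits / (shape[0] * shape[1])

def compression_ratio(n_raw_bits, n_coded_bits):
    return n_raw_bits / n_coded_bits
```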

    A Multiresolution Census Algorithm for Calculating Vortex Statistics in Turbulent Flows

    The fundamental equations that model turbulent flow do not provide much insight into the size and shape of observed turbulent structures. We investigate the efficient and accurate representation of structures in two-dimensional turbulence by applying statistical models directly to the simulated vorticity field. Rather than extract the coherent portion of the image from the background variation, as in the classical signal-plus-noise model, we present a model for individual vortices using the non-decimated discrete wavelet transform. A template image, supplied by the user, provides the features to be extracted from the vorticity field. By transforming the vortex template into the wavelet domain, specific characteristics present in the template, such as size and symmetry, are broken down into components associated with spatial frequencies. Multivariate multiple linear regression is used to fit the vortex template to the vorticity field in the wavelet domain. Since all levels of the template decomposition may be used to model each level in the field decomposition, the resulting model need not be identical to the template. We apply the model to a vortex census algorithm that records quantities of interest (such as size, peak amplitude, and circulation) as the vorticity field evolves. The multiresolution census algorithm extracts coherent structures of all shapes and sizes in simulated vorticity fields and is able to reproduce known physical scaling laws when processing a set of vorticity fields that evolve over time.
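
    A rough sketch of the template-fitting step, assuming the non-decimated transform is computed with PyWavelets' stationary wavelet transform and a scalar least-squares fit stands in for the paper's multivariate multiple regression; the Haar wavelet, patch size, and toy template are illustrative.

```python
# Fit a vortex template to a field patch in the non-decimated wavelet
# domain: transform both, then regress field coefficients on template
# coefficients level by level.
import numpy as np
import pywt

def swt_vec(img, wavelet='haar', level=2):
    """Stack all SWT subband coefficients into one vector per level."""
    coeffs = pywt.swt2(img, wavelet, level=level)
    return [np.concatenate([a.ravel()] + [d.ravel() for d in ds])
            for a, ds in coeffs]

template = np.outer(np.hanning(16), np.hanning(16))     # toy vortex shape
patch = 0.8 * template + 0.05 * np.random.randn(16, 16)

for t_vec, f_vec in zip(swt_vec(template), swt_vec(patch)):
    beta = np.dot(t_vec, f_vec) / np.dot(t_vec, t_vec)  # per-level LS fit
    print(beta)   # fitted template amplitude at this scale, ~0.8
```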

    Nonuniform Fast Fourier Transforms Using Min-Max Interpolation

    The fast Fourier transform (FFT) is widely used in signal processing for efficient computation of the Fourier transform (FT) of finite-length signals over a set of uniformly spaced frequency locations. However, many applications require nonuniform sampling in the frequency domain, i.e., a nonuniform FT. Several papers have described fast approximations of the nonuniform FT based on interpolating an oversampled FFT. This paper presents an interpolation method for the nonuniform FT that is optimal in the min-max sense of minimizing the worst-case approximation error over all signals of unit norm. The proposed method generalizes easily to multidimensional signals. Numerical results show that the min-max approach provides substantially lower approximation errors than conventional interpolation methods. The min-max criterion is also useful for optimizing the parameters of interpolation kernels such as the Kaiser-Bessel function.
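
    A bare-bones 1D sketch of the oversampled-FFT-plus-interpolation scheme with a Kaiser-Bessel kernel; the min-max-optimal scaling factors that are the paper's contribution are omitted, so the accuracy shown is illustrative only, and all parameter values are assumptions.

```python
# Type-2 NUFFT sketch: oversampled FFT on a uniform grid, then local
# Kaiser-Bessel interpolation to nonuniform frequencies; compared
# against exact (slow) direct evaluation.
import numpy as np

N, gamma, J, beta = 64, 2, 6, 12.0        # length, oversampling, taps, KB shape
x = np.random.randn(N)
K = gamma * N
Xg = np.fft.fft(x, K)                     # oversampled FFT on the uniform grid

def kb(u):
    """Kaiser-Bessel window with support |u| <= J/2."""
    t = np.maximum(1 - (2 * u / J) ** 2, 0)
    return np.i0(beta * np.sqrt(t)) / np.i0(beta)

omegas = np.random.uniform(0, 2 * np.pi, 16)          # nonuniform frequencies
n = np.arange(N)
X_exact = np.array([np.sum(x * np.exp(-1j * w * n)) for w in omegas])

X_approx = np.zeros(len(omegas), dtype=complex)
for m, w in enumerate(omegas):
    k0 = w * K / (2 * np.pi)              # fractional index on the oversampled grid
    kf = int(np.floor(k0))
    ks = np.arange(kf - J // 2 + 1, kf + J // 2 + 1)  # J nearest grid points
    wts = kb(k0 - ks)
    X_approx[m] = np.dot(wts / wts.sum(), Xg[ks % K])

print(np.max(np.abs(X_exact - X_approx)) / np.max(np.abs(X_exact)))
```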