
    A statistical reduced-reference method for color image quality assessment

    Although color is a fundamental feature of human visual perception, it has been largely unexplored in reduced-reference (RR) image quality assessment (IQA) schemes. In this paper, we propose a natural scene statistics (NSS) method that uses this information efficiently. It is based on the statistical deviation between the steerable pyramid coefficients of the reference color image and those of the degraded one. We propose and analyze the multivariate generalized Gaussian distribution (MGGD) as a model of the underlying statistics. To quantify the degradation, we develop and evaluate two measures, based respectively on the geodesic distance between two MGGDs and on the closed form of the Kullback-Leibler divergence (KLD). We performed an extensive evaluation of both metrics in various color spaces (RGB, HSV, CIELAB and YCrCb) using the TID 2008 benchmark and the FRTV Phase I validation process. Experimental results demonstrate that the proposed framework achieves good consistency with human visual perception, and that the best configuration combines the CIELAB color space with the KLD deviation measure.
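
    The deviation measure can be illustrated with a minimal, hedged sketch: the snippet below computes the closed-form Kullback-Leibler divergence between two zero-mean multivariate Gaussians, i.e., the special case of the MGGD with shape parameter 1. The steerable pyramid decomposition and the general MGGD fit from the paper are not reproduced; all function names are illustrative.

```python
import numpy as np

def kld_zero_mean_gaussians(cov_ref, cov_deg):
    """Closed-form KL divergence KL(N(0, cov_ref) || N(0, cov_deg)).

    The paper uses a closed-form KLD between two MGGDs; this sketch
    restricts itself to the Gaussian special case (MGGD shape
    parameter = 1), whose closed form is standard.
    """
    d = cov_ref.shape[0]
    trace_term = np.trace(np.linalg.inv(cov_deg) @ cov_ref)
    _, logdet_ref = np.linalg.slogdet(cov_ref)
    _, logdet_deg = np.linalg.slogdet(cov_deg)
    return 0.5 * (trace_term - d + logdet_deg - logdet_ref)

def band_distance(coeffs_ref, coeffs_deg):
    """Distance for one subband; coeffs_* are (n_samples, n_channels)
    arrays of corresponding pyramid coefficients from the color channels."""
    cov_ref = np.cov(coeffs_ref, rowvar=False)
    cov_deg = np.cov(coeffs_deg, rowvar=False)
    return kld_zero_mean_gaussians(cov_ref, cov_deg)

# Illustrative usage: one subband, 3 color channels (e.g., CIELAB)
rng = np.random.default_rng(0)
ref = rng.normal(size=(5000, 3))
deg = ref + rng.normal(scale=0.3, size=(5000, 3))  # simulated degradation
print(band_distance(ref, deg))
```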

    A Review on Block Matching Motion Estimation and Automata Theory based Approaches for Fractal Coding

    Fractal compression is a lossy compression technique for gray/color image and video compression. It gives a high compression ratio and good image quality with fast decoding, but reducing the encoding time remains a challenge. This review presents an analysis of the most significant existing approaches in the field of fractal-based gray/color image and video compression: block matching motion estimation approaches for finding the motion vectors in a frame, based on inter-frame coding and intra-frame (individual frame) coding, and automata theory based approaches for representing an image or a sequence of images. Although other reviews of fractal coding exist, this paper differs in several respects. One can develop a new shape pattern for motion estimation and combine modified block matching motion estimation with automata coding to explore fractal compression, with a specific focus on reducing the encoding time and achieving better image/video reconstruction quality. This paper is useful for beginners in the domain of video compression.
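
    As an illustration of the block matching idea surveyed here, the following sketch implements a brute-force full search over a square window, minimizing the mean absolute difference. It is a generic textbook formulation, not any specific scheme from the reviewed papers, and all names are illustrative.

```python
import numpy as np

def full_search(prev, curr, block=16, radius=7):
    """Exhaustive block-matching motion estimation.

    For every block in `curr`, scan all displacements within +/- radius
    in `prev` and keep the one minimizing the mean absolute difference.
    Returns an array of (dy, dx) motion vectors, one per block.
    """
    h, w = curr.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = curr[by:by + block, bx:bx + block].astype(float)
            best, best_mv = np.inf, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = prev[y:y + block, x:x + block].astype(float)
                    cost = np.mean(np.abs(target - cand))
                    if cost < best:
                        best, best_mv = cost, (dy, dx)
            vectors[by // block, bx // block] = best_mv
    return vectors
```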

    Modified Three-Step Search Block Matching Motion Estimation and Weighted Finite Automata based Fractal Video Compression

    The major challenge with fractal image/video coding is that it requires substantial encoding time, so reducing the encoding time remains the open research problem in fractal coding. Block matching motion estimation algorithms are used to reduce the computation performed during encoding. The objective of the proposed work is to develop an approach to video coding using a modified three-step search (MTSS) block matching algorithm and weighted finite automata (WFA) coding, with a specific focus on reducing the encoding time. The MTSS block matching algorithm computes motion vectors between two frames, i.e., the displacement of pixels, and WFA is used for coding because it behaves like fractal coding (FC). WFA represents an image (a frame or a motion-compensated prediction error) based on the fractal idea that the image is self-similar. The self-similarity is sought in the symmetry of an image, so the encoding algorithm divides an image into multiple levels of quad-tree segmentation and creates an automaton from the sub-images. The proposed MTSS block matching algorithm combines rectangular and hexagonal search patterns and is compared with the existing new three-step search (NTSS), three-step search (TSS), and efficient three-step search (ETSS) block matching algorithms. Its performance is evaluated in terms of the mean absolute difference (MAD), which serves as the block distortion measure (BDM), and the average number of search points required per frame. Finally, the developed approaches, namely MTSS with WFA, MTSS with FC, and plain FC (applied to every frame), are compared with each other. The experiments are carried out on standard uncompressed video sequences, namely akiyo, bus, mobile, suzie, traffic, football, soccer, ice, etc., and the approaches are compared in terms of encoding time, decoding time, compression ratio, and peak signal-to-noise ratio (PSNR). Video compression using MTSS and WFA coding outperforms MTSS with fractal coding and frame-by-frame fractal coding, achieving reduced encoding time and better video quality.
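
    For illustration, the sketch below implements the classical three-step search on a single block, with MAD as the block distortion measure. The MTSS of this paper additionally combines rectangular and hexagonal search patterns, which is not reproduced here; all names are illustrative.

```python
import numpy as np

def mad(a, b):
    """Mean absolute difference, the block distortion measure (BDM)."""
    return np.mean(np.abs(a.astype(float) - b.astype(float)))

def three_step_search(prev, curr, by, bx, block=16, step=4):
    """Classical three-step search for one block of `curr` at (by, bx).

    Evaluates the center and its 8 neighbours at the current step size,
    recenters on the best match, halves the step, and repeats until the
    step falls below 1 (three rounds for step=4). Returns the motion
    vector and its MAD cost.
    """
    h, w = prev.shape
    target = curr[by:by + block, bx:bx + block]
    cy, cx = by, bx
    best = mad(target, prev[cy:cy + block, cx:cx + block])
    while step >= 1:
        base_y, base_x = cy, cx
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = base_y + dy, base_x + dx
                if y < 0 or x < 0 or y + block > h or x + block > w:
                    continue
                cost = mad(target, prev[y:y + block, x:x + block])
                if cost < best:
                    best, cy, cx = cost, y, x
        step //= 2
    return (cy - by, cx - bx), best

# Illustrative usage on synthetic frames with known global motion
rng = np.random.default_rng(1)
prev = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
curr = np.roll(prev, shift=(2, -3), axis=(0, 1))  # simulated motion
mv, cost = three_step_search(prev, curr, 16, 16)
print("motion vector:", mv, "MAD:", cost)
```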

    Digital image compression


    Investigation of the effects of image compression on the geometric quality of digital photogrammetric imagery

    We are living in a decade where the use of digital images is becoming increasingly important. Photographs are now converted into digital form, and direct acquisition of digital images is becoming increasingly important as sensors and associated electronics improve. Unlike images in analogue form, the digital representation of images allows visual information to be easily manipulated in useful ways. One practical problem of digital image representation is that it requires a very large number of bits, so one encounters a fairly large volume of data in a digital production environment if images are stored uncompressed on disk. With the rapid advances in sensor technology and digital electronics, the number of bits grows ever larger in softcopy photogrammetry, remote sensing and multimedia GIS. As a result, it is desirable to find efficient representations for digital images in order to reduce the memory required for storage, improve the data access rate from storage devices, and reduce the time required for transfer across communication channels. The component of digital image processing that deals with this problem is called image compression. Image compression is a necessity for the utilisation of large digital images in softcopy photogrammetry, remote sensing, and multimedia GIS. Numerous image compression standards exist today with the common goal of reducing the number of bits needed to store images and facilitating the interchange of compressed image data between various devices and applications. The JPEG image compression standard is one alternative for carrying out the image compression task. This standard was formed under the auspices of the ISO and CCITT for the purpose of developing an international standard for the compression and decompression of continuous-tone, still-frame, monochrome and colour images. The JPEG standard algorithm falls into three general categories: the baseline sequential process, which provides a simple and efficient algorithm for most image coding applications; the extended DCT-based process, which allows the baseline system to satisfy a broader range of applications; and an independent lossless process for applications demanding that type of compression. This thesis experimentally investigates the geometric degradations resulting from lossy JPEG compression on photogrammetric imagery at various quality factors. The effects and the suitability of JPEG lossy image compression on industrial photogrammetric imagery are investigated, with examples drawn from the extraction of targets in close-range photogrammetric imagery. In the experiments, JPEG was used to compress and decompress a set of test images. The algorithm was tested on digital images containing various levels of entropy (a measure of the information content of an image) captured with different devices. Residual data was obtained by taking the pixel-by-pixel difference between the original data and the reconstructed data, and the root mean square (rms) error of the residual was used as the measure to judge the quality of images produced by JPEG (DCT-based) compression. Two techniques, TIFF (LZW) compression and JPEG (DCT-based) compression, are compared with respect to the compression ratios achieved; JPEG (DCT-based) yields better compression ratios and appears to be a good choice for image compression.
    Further, the investigation found that, for grey-scale images, the best trade-off was obtained with quality factors between 60 and 90 (i.e., at compression ratios of 1:10 to 1:20). At these quality factors the reconstructed data has virtually no degradation in visual or geometric quality for the application at hand. Recently, many fast and efficient image file formats have also been developed to store, organise and display images efficiently, and almost every image file format incorporates some kind of compression method to manage data within commonplace networks and storage devices. The major file formats currently used in softcopy photogrammetry, remote sensing and multimedia GIS were also investigated. It was found that the choice of a particular image file format for a given application generally involves several interdependent considerations, including quality, flexibility, computation, storage and transmission. The suitability of a file format for a given purpose is best determined by knowing its original purpose. Some formats are widely used (e.g., TIFF, JPEG) and serve as exchange formats; others are adapted to the needs of particular applications or particular operating systems.
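
    The evaluation procedure can be approximated with off-the-shelf tools. The hedged sketch below uses Pillow and NumPy to sweep JPEG quality factors over a grey-scale image, reporting the compression ratio and the rms error of the pixel residual; the input file name is hypothetical, and this is not the thesis's own test harness.

```python
import io

import numpy as np
from PIL import Image

def jpeg_quality_sweep(path, qualities=(60, 70, 80, 90)):
    """Compress a grey-scale image at several JPEG quality factors and
    report the compression ratio and rms error of the residual."""
    original = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    raw_bytes = original.size  # 8-bit grey-scale: one byte per pixel
    for q in qualities:
        buf = io.BytesIO()
        Image.fromarray(original.astype(np.uint8)).save(
            buf, format="JPEG", quality=q)
        buf.seek(0)
        decoded = np.asarray(Image.open(buf).convert("L"), dtype=np.float64)
        rms = np.sqrt(np.mean((original - decoded) ** 2))
        ratio = raw_bytes / buf.getbuffer().nbytes
        print(f"quality={q}: compression {ratio:.1f}:1, rms error {rms:.2f}")

jpeg_quality_sweep("test_image.png")  # hypothetical input file
```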

    Image Compression by Wavelet Transform.

    Digital images are widely used in computer applications, but uncompressed digital images require considerable storage capacity and transmission bandwidth. Efficient image compression solutions are becoming more critical with the recent growth of data-intensive, multimedia-based web applications. This thesis studies image compression with wavelet transforms. As necessary background, the basic concepts of graphical image storage and currently used compression algorithms are discussed. The mathematical properties of several types of wavelets, including Haar, Daubechies, and biorthogonal spline wavelets, are covered, and the Embedded Zerotree Wavelet (EZW) coding algorithm is introduced. The last part of the thesis analyzes the compression results to compare the wavelet types.
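
    The core idea studied in the thesis (transform, discard insignificant coefficients, reconstruct) can be sketched with the PyWavelets library. The snippet below is a naive stand-in: it merely keeps the largest coefficients rather than performing true EZW coding, and all parameter values are illustrative.

```python
import numpy as np
import pywt

def wavelet_compress(image, wavelet="haar", level=3, keep=0.05):
    """Naive transform-domain compression: keep only the largest `keep`
    fraction of wavelet coefficients and zero the rest.

    EZW, as studied in the thesis, orders and entropy-codes coefficients
    far more cleverly; this only illustrates the energy compaction that
    makes wavelet compression work.
    """
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    threshold = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < threshold] = 0.0
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

# Illustrative usage on a random "image"
img = np.random.rand(256, 256)
approx = wavelet_compress(img)
print("rms error:", np.sqrt(np.mean((img - approx) ** 2)))
```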