7 research outputs found

    Image Compression and Retrieval Using Prediction and Wavelet-Based Techniques

    The topic of image compression and retrieval has become one of the most researched areas in recent years due to the acute demand for storage and transmission of the large volumes of image data generated on the Internet and in other applications. When compressing an image, two conflicting requirements must be satisfied, namely, the compression ratio (CR) and the image quality, which is usually measured by the peak signal-to-noise ratio (PSNR). In this thesis, several lossless and lossy image compression techniques, as well as an integrated image retrieval system, are proposed using prediction- and wavelet-based techniques. Employing prediction errors instead of the actual image pixels for the compression and retrieval processes ensures data security. A lossless algorithm (LLA) is proposed which uses neural network predictors and entropy encoding. Classification is performed as a pre-processing step to improve the compression ratio. For this purpose, classification algorithm 1 (CL1) and classification algorithm 2 (CL2), which use wavelet-based contourlet transform coefficients and Fourier descriptors as features, are proposed. Two identical artificial neural networks (ANNs) are employed at the compression (sending) and decompression (receiving) sides to carry out the prediction. The prediction error, which is the difference between the original and the predicted pixel values, is used instead of the actual image pixels. The prediction is made lossless by rounding off the predicted values to the nearest integers at both sides.
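    The rounding step is what makes the scheme lossless: both sides round the shared predictor's output to the same integer before the error is formed or added back, so reconstruction is exact. Below is a minimal Python sketch of that round trip, with a fixed causal-neighbour average standing in for the thesis's trained ANN predictor; the function names and the toy predictor are illustrative assumptions, not the thesis's actual model.

```python
import numpy as np

def predict_pixel(neighbours: np.ndarray) -> float:
    # Stand-in for the trained ANN predictor: a fixed average of the
    # causal neighbours (left, top, top-left). Any deterministic model
    # shared by encoder and decoder plays the same role.
    left, top, top_left = neighbours
    return (left + top + top_left) / 3.0

def encode(image: np.ndarray) -> np.ndarray:
    """Return the prediction-error image (the data that gets entropy-coded)."""
    h, w = image.shape
    errors = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            if x == 0 or y == 0:
                errors[y, x] = image[y, x]  # border pixels sent verbatim
            else:
                nb = np.array([image[y, x - 1], image[y - 1, x],
                               image[y - 1, x - 1]], dtype=np.float64)
                pred = round(predict_pixel(nb))  # round-off keeps both sides in sync
                errors[y, x] = int(image[y, x]) - pred
    return errors

def decode(errors: np.ndarray) -> np.ndarray:
    """Reconstruct exactly by re-running the same rounded predictor."""
    h, w = errors.shape
    image = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            if x == 0 or y == 0:
                image[y, x] = errors[y, x]
            else:
                nb = np.array([image[y, x - 1], image[y - 1, x],
                               image[y - 1, x - 1]], dtype=np.float64)
                image[y, x] = errors[y, x] + round(predict_pixel(nb))
    return image

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16), dtype=np.int32)
assert np.array_equal(decode(encode(img)), img)  # lossless round trip
```

    Any deterministic predictor shared by both sides, including a trained ANN, can replace predict_pixel without breaking the round-trip guarantee; only the size of the errors, and hence the compression ratio, changes.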

    Lossy image compression based on prediction error and vector quantisation

    Lossy image compression has been gaining importance in recent years due to the enormous increase in the volume of image data employed for Internet and other applications. In lossy compression, it is essential to ensure that the compression process does not affect the image quality adversely. The performance of a lossy compression algorithm is evaluated based on two conflicting parameters, namely, the compression ratio and the image quality, which is usually measured by PSNR values. In this paper, a new lossy compression method, denoted as the PE-VQ method, is proposed which employs prediction error and vector quantization (VQ) concepts. An optimum codebook is generated using a combination of two algorithms, namely, artificial bee colony and genetic algorithms. The performance of the proposed PE-VQ method is evaluated in terms of compression ratio (CR) and PSNR values using three different databases, namely, CLEF med 2009, Corel 1k and standard images (Lena, Barbara, etc.). Experiments are conducted for different codebook sizes and CR values. The results show that for a given CR, the proposed PE-VQ technique yields higher PSNR values compared to existing algorithms. It is also shown that higher PSNR values can be obtained by applying VQ to the prediction errors rather than to the original image pixels.
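    As a rough illustration of the VQ stage, the sketch below quantises flattened prediction-error blocks by nearest-codeword search. The codebook here is random rather than ABC-GA optimised, and all names and sizes are illustrative assumptions.

```python
import numpy as np

def quantize(vectors: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each vector to the index of its nearest codeword (Euclidean)."""
    dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

def dequantize(indices: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Replace each index by its codeword to approximate the original vector."""
    return codebook[indices]

# Toy data: 4x4 prediction-error blocks flattened to 16-d vectors.
# Prediction errors cluster tightly around zero, which is what makes
# VQ on errors pay off compared to VQ on raw pixels.
rng = np.random.default_rng(1)
errors = rng.normal(0.0, 5.0, size=(500, 16))
codebook = rng.normal(0.0, 5.0, size=(64, 16))  # 64 codewords = 6 bits/block

idx = quantize(errors, codebook)
recon = dequantize(idx, codebook)
mse = np.mean((errors - recon) ** 2)
print(f"6 bits per 16-pixel block, reconstruction MSE: {mse:.2f}")
```

    Each 16-pixel block is represented by a 6-bit index, so the reconstruction quality (and hence the PSNR) is governed entirely by how well the codebook covers the error distribution, which is where the ABC-GA optimisation enters.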

    Lossy Image Compression Based on Vector Quantization Using Artificial Bee Colony and Genetic Algorithms

    In recent years, the volume of image data employed for Internet and other applications has been increasing at an enormous rate. To cope with the existing limitations on storage space and network bandwidth, it has become necessary to develop more efficient compression techniques. Lossy compression is more popular than lossless compression, as it is used in a wider variety of applications. In lossy compression, it is necessary to maintain the quality of the reconstructed image when the compression scheme is applied. Thus, the compression ratio and the reconstructed image quality are the two important parameters by which the performance of a lossy compression scheme is judged. In this paper, a new lossy compression scheme is proposed which employs the codebook concept. For the generation of the codebook, a new technique, denoted as the ABC-GA technique, which combines artificial bee colony and genetic algorithms, is employed. The performance of the proposed compression scheme is evaluated using two different databases, namely, CLEF med 2009 and standard images (Lena, Barbara, etc.). The experimental results show that the proposed technique performs better than existing algorithms, yielding average PSNR values of 43.05, 41.58, 40.06, 37.41 and 35.24 for compression ratios of 10, 20, 40, 60 and 80, respectively, in the case of standard images.
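    As a minimal sketch of the genetic-algorithm half of such a codebook search, the snippet below evolves a population of codebooks using quantisation distortion on a training set as the fitness. The artificial bee colony phase and the paper's actual operators and parameters are omitted; everything shown is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
train = rng.normal(0.0, 5.0, size=(400, 16))  # toy training vectors

def distortion(codebook: np.ndarray) -> float:
    """Mean squared quantisation error over the training set (lower = fitter)."""
    d = np.linalg.norm(train[:, None, :] - codebook[None, :, :], axis=2)
    return float(np.mean(np.min(d, axis=1) ** 2))

def evolve(population: list, n_generations: int = 20) -> np.ndarray:
    shape = population[0].shape
    for _ in range(n_generations):
        population.sort(key=distortion)                    # rank by fitness
        parents = population[: len(population) // 2]       # truncation selection
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = int(rng.integers(1, shape[0]))           # one-point crossover
            child = np.vstack([a[:cut], b[cut:]])
            child += rng.normal(0.0, 0.1, size=shape)      # small mutation
            children.append(child)
        fresh = [rng.normal(0.0, 5.0, size=shape)          # random immigrants
                 for _ in range(len(population) - len(parents) - len(children))]
        population = parents + children + fresh
    return min(population, key=distortion)

pop = [rng.normal(0.0, 5.0, size=(32, 16)) for _ in range(10)]
best = evolve(pop)
print(f"best codebook distortion: {distortion(best):.2f}")
```

    In a hybrid ABC-GA scheme, the bee colony's exploration would plausibly seed or alternate with these GA generations; the division of labour shown here is only one possible reading, not the paper's specification.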

    CBIR System Based On Prediction Errors

    Content-Based Image Retrieval (CBIR) systems are widely used for local as well as remote applications such as telemedicine, satellite image transmission and image search engines. Existing CBIR systems suffer from limitations of storage space, data security and bandwidth requirements. To overcome these problems, a new method termed CBIR-PE, which makes use of prediction errors instead of actual images for storage, transmission and retrieval, is presented. Identical artificial neural networks (ANNs) are employed at both the server and client sides to carry out the prediction. At the server side, only the error database, comprising the differences between the original and the predicted pixel values, is used instead of the actual image database. The prediction errors of the query image are matched with those in the server database to retrieve similar prediction error patterns. These errors are then combined with the predicted values available at the client ANN to reconstruct the actual images. Since only the prediction errors are employed, the proposed method is able to solve the problems of storage space, data security and bandwidth requirements. The proposed method is implemented in combination with a clustering technique called WBCT-FCM, which makes use of the wavelet-based contourlet transform (WBCT) and the fuzzy c-means (FCM) clustering algorithm. The performances of the proposed WBCT-FCM and CBIR-PE methods are evaluated using the COREL-1k database. The experimental results show that the proposed methods achieve better clustering and retrieval accuracies compared to existing methods.
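    A toy version of the server-side matching step might look as follows. The histogram signature is a deliberately simple stand-in for the paper's WBCT features and FCM clustering, used only to show that retrieval can operate on error data without ever exposing the actual images.

```python
import numpy as np

def error_signature(errors: np.ndarray, bins: int = 32) -> np.ndarray:
    """Illustrative feature: a normalised histogram of prediction errors."""
    hist, _ = np.histogram(errors, bins=bins, range=(-128, 128), density=True)
    return hist

def retrieve(query_errors: np.ndarray, error_db: list, k: int = 3) -> np.ndarray:
    """Rank stored error images by signature distance to the query."""
    q = error_signature(query_errors)
    scores = [np.linalg.norm(q - error_signature(e)) for e in error_db]
    return np.argsort(scores)[:k]  # indices of the k closest entries

# Toy database: error images with different spreads; a query whose error
# statistics resemble entries 1 and 2 should rank them first.
rng = np.random.default_rng(3)
error_db = [rng.normal(0, s, size=(64, 64)) for s in (2, 5, 5.2, 10, 20)]
query = rng.normal(0, 5, size=(64, 64))
print(retrieve(query, error_db))
```

    The retrieved error patterns, combined with the client ANN's predicted values, are what reconstruct the displayable images at the client side.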

    Feed-Forward Neural Network-Based Predictive Image Coding for Medical Image Compression

    The generation of high volumes of medical images in recent years has increased the demand for more efficient compression methods to cope with the storage and transmission problems. In the case of medical images, it is important to ensure that the compression process does not affect the image quality adversely. In this paper, a predictive image coding method is proposed which preserves the quality of the medical image in the diagnostically important region (DIR) even after compression. In this method, the image is initially segmented into two portions, namely, DIR and non-DIR portions, using a graph-based segmentation procedure. The prediction process is implemented using two identical feed-forward neural networks (FF-NNs) at the compression and decompression stages. Gravitational search and particle swarm algorithms are used for training the FF-NNs. Prediction is performed both in a lossless (LLP) and a near-lossless (NLLP) manner for evaluating the performances of the two FF-NN training algorithms. The prediction error sequence, which is the difference between the actual and predicted pixel values, is further compressed using Markov model based arithmetic coding. The proposed method is tested using the CLEF med 2009 database. The experimental results demonstrate that the proposed method is capable of compressing medical images with minimum degradation in image quality. It is found that the gravitational search method achieves higher PSNR values compared to the particle swarm and backpropagation methods.
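    The near-lossless (NLLP) mode can be pictured as uniform quantisation of the prediction errors with a user-chosen bound on the per-pixel reconstruction error. The sketch below captures the general NLLP idea rather than the paper's exact quantiser, and shows how the bound delta = 0 collapses to the lossless (LLP) case.

```python
import numpy as np

def quantize_errors(errors: np.ndarray, delta: int) -> np.ndarray:
    """Uniform quantisation guaranteeing |original - reconstructed| <= delta.
    With delta = 0 the step is 1 and the mode degenerates to lossless."""
    step = 2 * delta + 1
    return np.round(errors / step).astype(np.int32)  # fewer symbols to entropy-code

def reconstruct_errors(symbols: np.ndarray, delta: int) -> np.ndarray:
    return symbols * (2 * delta + 1)

rng = np.random.default_rng(4)
errors = rng.integers(-30, 31, size=(8, 8))
for delta in (0, 2, 4):
    recon = reconstruct_errors(quantize_errors(errors, delta), delta)
    assert np.max(np.abs(errors - recon)) <= delta   # the near-lossless guarantee
    n_symbols = np.unique(quantize_errors(errors, delta)).size
    print(f"delta={delta}: {n_symbols} distinct symbols")
```

    Shrinking the symbol alphabet this way is what lets the downstream Markov-model arithmetic coder spend fewer bits; in a DIR-aware codec one would plausibly use delta = 0 inside the diagnostically important region and a larger delta outside it.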

    An automated grading system for diabetic retinopathy using curvelet transform and hierarchical classification

    In this paper, an automated system for grading the severity level of Diabetic Retinopathy (DR) based on fundus images is presented. Features are extracted using the fast discrete curvelet transform. These features are applied to a hierarchical support vector machine (SVM) classifier to obtain four grading levels, namely, normal, mild, moderate and severe. The grading levels are determined based on the number of anomalies, such as microaneurysms, hard exudates and haemorrhages, present in the fundus image. The performance of the proposed system is evaluated using fundus images from the Messidor database. Experimental results show that the proposed system achieves an accuracy of 86.23%.
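    A two-level hierarchy of this kind can be sketched as one SVM separating normal from DR images and a second SVM grading the DR cases. The snippet below uses scikit-learn with synthetic stand-ins for the curvelet feature vectors; the data, kernels and routing logic are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-ins for curvelet features: grade 0 = normal, 1 = mild,
# 2 = moderate, 3 = severe, with features crudely correlated to grade.
rng = np.random.default_rng(5)
y = rng.integers(0, 4, size=400)
X = rng.normal(size=(400, 20)) + y[:, None]

stage1 = SVC(kernel="rbf").fit(X, y > 0)            # normal vs any DR
stage2 = SVC(kernel="rbf").fit(X[y > 0], y[y > 0])  # severity among DR cases

def predict_grade(features: np.ndarray) -> np.ndarray:
    """Route each sample through the two-level hierarchy."""
    features = np.atleast_2d(features)
    grades = np.zeros(len(features), dtype=int)      # default: normal
    has_dr = stage1.predict(features).astype(bool)
    if has_dr.any():
        grades[has_dr] = stage2.predict(features[has_dr])
    return grades

print("predicted:", predict_grade(X[:10]))
print("actual:   ", y[:10])
```

    Splitting the decision this way lets each SVM solve an easier problem than a single four-way classifier would face, which is the usual motivation for hierarchical classification.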

    Prediction-Based Lossless Image Compression

    In this paper, a lossless image compression technique using prediction errors is proposed. To achieve better compression performance, a novel classifier which makes use of wavelet and Fourier descriptor features is employed. An artificial neural network (ANN) is used as the predictor, and an optimum ANN configuration is determined for each class of images. In the second stage, entropy encoding is performed on the prediction errors, which improves the compression performance further. The prediction process is made lossless by rounding the predicted values to integers at both the compression and decompression stages. The proposed method is tested using three datasets, namely, CLEF med 2009, COREL 1k and standard benchmarking images. It is found that the proposed method yields good compression ratios in all these cases, and for standard images, the compression ratios achieved are higher than those obtained by known algorithms.
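    The gain from coding prediction errors rather than raw pixels can be made concrete by comparing first-order entropies, which lower-bound the bits an ideal entropy coder would spend per symbol. The sketch below uses a trivial previous-pixel predictor on smooth synthetic data; a trained ANN predictor of the kind the paper describes would shrink the errors further.

```python
import numpy as np

def entropy_bits(symbols: np.ndarray) -> float:
    """First-order entropy in bits/symbol: the rate an ideal entropy
    coder (e.g. arithmetic coding) approaches for i.i.d. symbols."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# Smooth synthetic signal: neighbouring samples are strongly correlated,
# like pixels along an image scanline.
rng = np.random.default_rng(6)
signal = np.clip(np.cumsum(rng.integers(-3, 4, size=4096)) + 128, 0, 255)

predicted = np.roll(signal, 1)          # previous-pixel predictor
errors = (signal - predicted)[1:]       # sharply peaked around zero

print(f"raw pixels       : {entropy_bits(signal):.2f} bits/symbol")
print(f"prediction errors: {entropy_bits(errors):.2f} bits/symbol")
```

    On correlated data like this the error entropy sits far below the pixel entropy, which is exactly the headroom the second-stage entropy encoder exploits.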
