141 research outputs found

    Significant medical image compression techniques: a review

    Telemedicine applications allow patients and doctors to communicate with each other through network services. Several medical image compression techniques have been suggested by researchers in past years. This review compares these algorithms and their performance by analysing three factors that influence the choice of compression algorithm: image quality, compression ratio, and compression speed. Previous research has shown the need for effective algorithms that compress medical images without data loss, which is why lossless compression is used for medical records. Lossless compression, however, achieves only a modest compression ratio. A better compression ratio can be obtained by segmenting the image into region of interest (ROI) and non-ROI zones, where the power and time required can be reduced because of the smaller region that must be handled losslessly. Recently, several researchers have attempted to create hybrid compression algorithms that integrate different compression techniques to increase compression efficiency.
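
    The ROI idea above can be made concrete with a small sketch. The following is a minimal illustration, not taken from any of the reviewed papers: the ROI is stored losslessly while the non-ROI background is coarsely quantized before entropy coding; the mask, quantization step, and use of zlib are illustrative assumptions.

```python
# Hypothetical sketch of ROI-based hybrid compression: the ROI is kept
# lossless (zlib on raw pixels), while the non-ROI background is coarsely
# quantized before entropy coding to raise the overall compression ratio.
import zlib
import numpy as np

def compress_with_roi(image: np.ndarray, roi_mask: np.ndarray, step: int = 16):
    """image: 2-D uint8 array; roi_mask: boolean array of the same shape."""
    roi_pixels = image[roi_mask]                           # diagnostically relevant region
    background = image[~roi_mask]
    quantized_bg = (background // step).astype(np.uint8)   # lossy step for non-ROI only
    return (zlib.compress(roi_pixels.tobytes(), 9),        # lossless ROI stream
            zlib.compress(quantized_bg.tobytes(), 9))      # lossy background stream

if __name__ == "__main__":
    img = np.tile(np.arange(256, dtype=np.uint8), (256, 1))  # smooth toy image
    mask = np.zeros_like(img, dtype=bool)
    mask[64:192, 64:192] = True                             # toy ROI covering the centre
    roi_stream, bg_stream = compress_with_roi(img, mask)
    ratio = img.nbytes / (len(roi_stream) + len(bg_stream))
    print(f"compression ratio ~ {ratio:.2f}")
```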

    Optimization of image coding algorithms and architectures using genetic algorithms


    Genetic algorithm and tabu search approaches to quantization for DCT-based image compression

    Today there are several formal and experimental methods for image compression, some of which have grown to be incorporated into the Joint Photographic Experts Group (JPEG) standard. Many compression algorithms, however, are still used only for experimentation, mainly because of various performance issues. Lack of speed while compressing or expanding an image, a poor compression rate, and poor image quality after expansion are among the most common reasons for skepticism about a particular compression algorithm. This paper discusses current methods used for image compression. It also gives a detailed explanation of the discrete cosine transform (DCT) used by JPEG, and the efforts that have recently been made to optimize related algorithms. Some interesting articles regarding possible compression enhancements will be noted, and in association with these methods a new implementation of a JPEG-like image coding algorithm will be outlined. This new technique adapts between one and sixteen quantization tables for a specific image using either a genetic algorithm (GA) or a tabu search (TS) approach. First, a few schemes, including pixel-neighborhood and Kohonen self-organizing map (SOM) algorithms, will be examined to find their effectiveness at classifying blocks of edge-detected image data. Next, the GA and TS algorithms will be tested to determine their effectiveness at finding the optimum quantization table(s) for a whole image. A thorough comparison of the techniques utilized will then be presented.
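
    As a rough illustration of the GA-based search described above (an assumption-laden sketch, not the paper's implementation), the following evolves a single 8x8 quantization table for one DCT block. The fitness function, population size, and mutation rate are arbitrary choices, and the count of non-zero quantized coefficients stands in for the true coded bit rate.

```python
# Toy genetic algorithm over 8x8 quantization tables: fitness trades off the
# reconstruction error of a DCT block against a crude rate proxy.
import numpy as np

rng = np.random.default_rng(0)

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix.
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2 / n)
    m[0, :] = np.sqrt(1 / n)
    return m

D = dct_matrix()

def fitness(table, block, lam=50.0):
    coeffs = D @ (block - 128.0) @ D.T            # forward 2-D DCT
    q = np.round(coeffs / table)                  # quantize
    recon = D.T @ (q * table) @ D + 128.0         # dequantize + inverse DCT
    mse = np.mean((block - recon) ** 2)
    rate_proxy = np.count_nonzero(q)              # stand-in for coded bits
    return mse + lam * rate_proxy / 64.0

def evolve(block, pop_size=30, generations=200):
    pop = rng.integers(1, 100, size=(pop_size, 8, 8)).astype(float)
    for _ in range(generations):
        scores = np.array([fitness(t, block) for t in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]   # truncation selection
        children = parents.copy()
        mutate = rng.random(children.shape) < 0.1            # random table-entry mutation
        children[mutate] = rng.integers(1, 100, size=mutate.sum())
        pop = np.concatenate([parents, children])
    return pop[np.argmin([fitness(t, block) for t in pop])]

if __name__ == "__main__":
    block = rng.integers(0, 256, (8, 8)).astype(float)
    best = evolve(block)
    print("best fitness:", round(fitness(best, block), 2))
```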

    Image compression techniques using vector quantization


    Development of Some Efficient Lossless and Lossy Hybrid Image Compression Schemes

    Digital imaging generates a large amount of data which needs to be compressed, without loss of relevant information, to economize storage space and allow speedy data transfer. Although both storage and transmission-medium capacities have increased continuously over the last two decades, they still do not match present requirements. Many lossless and lossy image compression schemes exist for compressing images in the spatial domain and the transform domain. Employing more than one traditional image compression algorithm results in hybrid image compression techniques. Building on existing schemes, novel hybrid image compression schemes are developed in this doctoral research work to compress images effectively while maintaining quality.
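
    One minimal example of such a hybrid, offered purely as an illustration and not as one of the schemes from the thesis, chains a spatial-domain predictive stage (horizontal pixel deltas) with a traditional entropy coder (zlib). Both stages are lossless, so the image is recovered exactly.

```python
# Hybrid lossless pipeline sketch: predict each pixel from its left neighbour,
# then entropy-code the prediction residuals with zlib.
import zlib
import numpy as np

def hybrid_compress(image: np.ndarray) -> bytes:
    deltas = np.diff(image.astype(np.int16), axis=1, prepend=0)  # residuals vs left neighbour
    return zlib.compress(deltas.tobytes(), 9)

def hybrid_decompress(blob: bytes, shape) -> np.ndarray:
    deltas = np.frombuffer(zlib.decompress(blob), dtype=np.int16).reshape(shape)
    return np.cumsum(deltas, axis=1).astype(np.uint8)            # undo the prediction

if __name__ == "__main__":
    img = np.tile(np.arange(256, dtype=np.uint8), (64, 1))       # smooth toy image
    blob = hybrid_compress(img)
    assert np.array_equal(hybrid_decompress(blob, img.shape), img)
    print("ratio:", round(img.nbytes / len(blob), 2))
```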

    Towards Optimal Copyright Protection Using Neural Networks Based Digital Image Watermarking

    In the field of digital watermarking, digital image watermarking for copyright protection has attracted a lot of attention in the research community. Digital watermarking encompasses various techniques for protecting digital content. Among these techniques, the Discrete Wavelet Transform (DWT) provides higher image imperceptibility and robustness. Over the years, researchers have designed watermarking techniques with robustness in mind, so that the watermark can resist common image processing operations. Furthermore, a good watermarking technique requires a tradeoff between robustness, image quality (imperceptibility), and capacity. In this paper, we present an extensive literature review of existing DWT techniques and of those combined with other techniques such as neural networks. In addition, we discuss the contribution of neural networks to copyright protection. Finally, we identify the research gaps in current watermarking schemes, so that optimal techniques can more easily be developed to make the watermark robust to attacks while maintaining imperceptibility and enhancing copyright protection.
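
    A generic DWT-domain embedding of the kind surveyed above can be sketched as follows (an illustration only, not a specific scheme from the paper). It assumes the PyWavelets package; the embedding strength alpha and the choice of detail subband are arbitrary.

```python
# Additive watermark embedding in a one-level DWT detail subband, with a
# simple non-blind extractor that compares marked and original coefficients.
import numpy as np
import pywt

def embed_watermark(cover: np.ndarray, watermark_bits: np.ndarray, alpha: float = 8.0):
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(float), "haar")   # one-level 2-D DWT
    wm = np.resize(watermark_bits, cH.shape) * 2.0 - 1.0        # map {0,1} -> {-1,+1}
    cH_marked = cH + alpha * wm                                 # additive embedding in a detail subband
    return pywt.idwt2((cA, (cH_marked, cV, cD)), "haar")

def extract_watermark(marked: np.ndarray, original: np.ndarray, shape):
    _, (cH_m, _, _) = pywt.dwt2(marked.astype(float), "haar")
    _, (cH_o, _, _) = pywt.dwt2(original.astype(float), "haar")
    return (cH_m - cH_o > 0).astype(np.uint8)[: shape[0], : shape[1]]

if __name__ == "__main__":
    cover = np.random.randint(0, 256, (128, 128)).astype(float)
    bits = np.random.randint(0, 2, (64, 64))
    marked = embed_watermark(cover, bits)
    recovered = extract_watermark(marked, cover, bits.shape)
    print("bit accuracy:", np.mean(recovered == bits))
```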

    An enhanced method based on intermediate significant bit technique for watermark images

    The Intermediate Significant Bit (ISB) digital watermarking technique is a recently introduced technique for embedding a watermark by replacing original image pixels with new pixels. This is done by keeping the new pixels close to the original ones while protecting the watermark data against possible damage. One of the most popular watermarking methods is the Least Significant Bit (LSB) method. It works in the spatial domain and inserts the watermark into the LSBs of the image. The problem with this method is that it is not resilient to common distortions, and the image may be degraded after embedding a watermark. LSB embedding may replace one, two, or three bits, changing only those specific bits and leaving the other bits of the pixel unchanged. The objective of this thesis is to formulate new algorithms for digital image watermarking with enhanced image quality and robustness by embedding two bits of watermark data into each pixel of the original image based on the ISB technique. Because image quality and robustness are inversely related, a tradeoff between them is analysed to create a balance and to determine the best positions for the two embedded bits. The Dual Intermediate Significant Bit (DISB) technique is proposed to solve the existing LSB problem. Experimental results obtained with this technique are better than those of LSB in terms of Peak Signal-to-Noise Ratio (PSNR) and Normalized Cross-Correlation (NCC). This study also contributes new mathematical equations that describe the changes to the other six bits of a pixel after embedding the two bits.
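
    For reference, the classic LSB baseline that the thesis improves upon can be sketched as below. This is the standard two-bit LSB replacement, not the proposed DISB scheme; the toy image and payload are arbitrary.

```python
# Two-bit LSB embedding: pack two watermark bits into the two least
# significant bits of each pixel, leaving the other six bits untouched.
import numpy as np

def embed_2lsb(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """cover: uint8 image; bits: array of 0/1 values, two per pixel."""
    flat = cover.flatten()
    pairs = bits.reshape(-1, 2)[: flat.size]                 # two bits per pixel
    payload = (pairs[:, 0] << 1) | pairs[:, 1]               # pack into values 0..3
    flat[: payload.size] = (flat[: payload.size] & 0b11111100) | payload
    return flat.reshape(cover.shape)

def extract_2lsb(marked: np.ndarray, n_bits: int) -> np.ndarray:
    vals = marked.flatten()[: n_bits // 2] & 0b11
    return np.column_stack(((vals >> 1) & 1, vals & 1)).ravel()

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    wm = np.random.randint(0, 2, 2 * img.size)
    marked = embed_2lsb(img, wm)
    assert np.array_equal(extract_2lsb(marked, wm.size), wm)
    print("PSNR:", 10 * np.log10(255**2 / np.mean((img.astype(float) - marked) ** 2)))
```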

    A VHDL design for hardware assistance of fractal image compression

    Fractal image compression schemes have several unusual and useful attributes, including resolution independence, high compression ratios, good image quality, and rapid decompression. Despite this, one major difficulty has prevented their widespread adoption: the extremely high computational complexity of compression. Fractal image compression algorithms represent an image as a series of contractive transformations, each of which maps a large domain block to a smaller range block. Given only this set of transformations, it is possible to reconstruct an approximation of the original image by iteratively applying the transformations to an arbitrary image. Compression consists of partitioning the image into range blocks and finding a suitable transformation of a domain block to represent each one. This search for transformations must generally be done using a brute force approach, comparing successive domain blocks until a suitable match is found. Some algorithmic improvements have been found, but none are adequate to reduce the required compression time to something reasonable for many uses. This thesis presents a new ASIC design which performs a large number of the required comparisons in parallel, yielding a substantial speedup over a program on a general-purpose computer system. This ASIC is designed in VHDL, which may be synthesized to many different target architectures. The design has considerable flexibility which makes it applicable to different images and applications. The design is based around a pipeline of units that each compare one range block with a series of domain blocks which are fed through the pipeline. Comparisons are made to minimize the mean square error (MSE) of a transform given a linear mapping of the intensity values. This is, by far, the most common minimization strategy used in the literature. The speedup provided by this design is estimated to be about 1,000 times for 256 × 256 images divided into 8 × 8 blocks over a sequential processor given similar implementation technologies.
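
    The range/domain search that the ASIC parallelises can be outlined in software as follows (an illustrative sketch only; the block sizes, averaging decimation, domain step, and least-squares intensity mapping are assumptions consistent with the description above).

```python
# Brute-force fractal coding step for a single range block: scan candidate
# domain blocks, fit a linear intensity mapping, and keep the lowest MSE.
import numpy as np

def downsample2(block):
    """Average 2x2 pixels so a 16x16 domain block matches an 8x8 range block."""
    return block.reshape(8, 2, 8, 2).mean(axis=(1, 3))

def best_transform(range_block, image, step=16):
    best = (np.inf, None)
    for y in range(0, image.shape[0] - 16 + 1, step):
        for x in range(0, image.shape[1] - 16 + 1, step):
            d = downsample2(image[y:y + 16, x:x + 16])
            # least-squares contrast s and brightness o for r ~ s*d + o
            s, o = np.polyfit(d.ravel(), range_block.ravel(), 1)
            mse = np.mean((s * d + o - range_block) ** 2)
            if mse < best[0]:
                best = (mse, (y, x, s, o))
    return best

if __name__ == "__main__":
    img = np.random.rand(64, 64)
    r = img[8:16, 8:16]                       # one 8x8 range block
    mse, (y, x, s, o) = best_transform(r, img)
    print(f"best domain at ({y},{x}), s={s:.2f}, o={o:.2f}, MSE={mse:.4f}")
```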

    Investigation of Different Video Compression Schemes Using Neural Networks

    Image and video compression has great significance in the communication of motion pictures and still images. The need for compression has resulted in the development of various techniques, including transform coding, vector quantization, and neural networks. In this thesis, neural network based methods are investigated to achieve good compression ratios while maintaining image quality. Parts of this investigation include motion detection and weight retraining. An adaptive technique is employed to improve the video frame quality for a given compression ratio by updating the weights obtained from training; more specifically, weight retraining is performed only when the error exceeds a given threshold value. Image quality is measured objectively using the peak signal-to-noise ratio (PSNR) performance measure. Results show the improved performance of the proposed architecture compared to existing approaches. The proposed method is implemented in MATLAB, and results such as compression ratio versus signal-to-noise ratio are presented.
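
    The retrain-on-threshold idea can be sketched schematically as follows. Here a simple PCA basis stands in for the network weights (an assumption made purely for brevity), and the PSNR threshold, block size, and toy frames are arbitrary.

```python
# Adaptive retraining loop: reuse the current "weights" (a block-wise PCA
# basis) frame after frame, and retrain only when frame PSNR drops below a
# threshold, e.g. at an abrupt scene change.
import numpy as np

def blocks(frame):
    h, w = frame.shape
    return frame.reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2).reshape(-1, 64)

def train_basis(frame, k=8):
    b = blocks(frame)
    _, _, vt = np.linalg.svd(b - b.mean(0), full_matrices=False)
    return vt[:k]                                 # k principal directions ("weights")

def psnr(a, b):
    return 10 * np.log10(1.0 / np.mean((a - b) ** 2))

def encode_sequence(frames, threshold_db=30.0):
    basis = train_basis(frames[0])
    retrains = 0
    for frame in frames:
        b = blocks(frame)
        recon = (b - b.mean(0)) @ basis.T @ basis + b.mean(0)
        if psnr(b, recon) < threshold_db:         # retrain only on large error
            basis = train_basis(frame)
            retrains += 1
    return retrains

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    scene = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))   # smooth toy frame
    frames = [scene + 0.01 * rng.random((64, 64)) for _ in range(5)]
    frames += [rng.random((64, 64)) for _ in range(2)]               # abrupt scene change
    print("retrains:", encode_sequence(frames))
```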