884 research outputs found
A SURVEY ON PARALLEL COMPUTING OF IMAGE COMPRESSION ALGORITHMS JPEG and Fractal Image Compression
This paper presents a short survey of parallel computing approaches for the JPEG and fractal image compression algorithms. Image compression is a type of data compression: it generally involves encoding techniques that use fewer bits than the original representation. Image compression techniques remove redundant and irrelevant information from the image, efficiently reducing the required storage space and speeding up transmission. However, most image compression techniques suffer from problems such as high computational complexity and load. Parallel computing can effectively improve processing speed. JPEG and fractal image compression are two of the most efficient techniques available. The availability of high-performance computing in the form of multicore processors and GPUs can greatly accelerate the JPEG image compression technique. Fractal image compression takes advantage of the natural affine redundancy present in typical images to achieve a high compression ratio. To speed up the compression process, the sequential fractal image compression algorithm must be converted into a parallel fractal image compression algorithm; this translation exploits the inherently parallel nature of the algorithm.
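The block-transform stage of JPEG compression described above is naturally parallel, since each 8x8 block is transformed independently. A minimal sketch of that idea (the function names and worker count are illustrative assumptions, not from the survey):

```python
# Sketch: parallelizing the per-block DCT stage of JPEG-style compression.
# Each 8x8 block is independent, so the blocks can be transformed concurrently.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block via the separable matrix form."""
    N = block.shape[0]
    n = np.arange(N)
    k = n.reshape(-1, 1)
    C = np.cos(np.pi * (2 * n + 1) * k / (2 * N)) * np.sqrt(2.0 / N)
    C[0] /= np.sqrt(2.0)          # scale the DC row for orthonormality
    return C @ block @ C.T        # separable transform: rows then columns

def compress_blocks_parallel(image, workers=4):
    """Split an image (height and width multiples of 8) into 8x8 blocks
    and transform them concurrently (threads suffice here because NumPy
    releases the GIL inside the matrix products)."""
    h, w = image.shape
    blocks = [image[i:i + 8, j:j + 8]
              for i in range(0, h, 8) for j in range(0, w, 8)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(dct2, blocks))
```

A constant block transforms to a single DC coefficient, which is a quick sanity check of the transform.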
A survey of parallel algorithms for fractal image compression
This paper presents a short survey of the key research work that has been undertaken in the application of parallel algorithms for Fractal image compression. The interest in fractal image compression techniques stems from their ability to achieve high compression ratios whilst maintaining a very high quality in the reconstructed image. The main drawback of this compression method is the very high computational cost that is associated with the encoding phase. Consequently, there has been significant interest in exploiting parallel computing architectures in order to speed up this phase, whilst still maintaining the advantageous features of the approach. This paper presents a brief introduction to fractal image compression, including the iterated function system theory upon
which it is based, and then reviews the different techniques that have been, and can be, applied in order to parallelize the compression algorithm.
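The high encoding cost this survey targets comes from the exhaustive range-to-domain search: every range block is compared against every candidate domain block under an affine (contrast/brightness) map. A minimal sketch of that inner search, with hypothetical names (not taken from the survey):

```python
# Sketch of the fractal-encoding inner loop: for one range block, find the
# domain block and affine parameters (s, o) minimizing ||s*d + o - r||^2.
# Repeating this for every range block is the O(ranges * domains) cost that
# motivates parallelization.
import numpy as np

def best_match(range_block, domain_blocks):
    """Return (index, s, o, mse) of the least-squares affine fit s*d + o ≈ r."""
    r = range_block.ravel()
    best = (None, 0.0, 0.0, np.inf)
    for i, d in enumerate(domain_blocks):
        d = d.ravel()
        var = np.var(d)
        # Least-squares contrast (slope) and brightness (offset).
        s = 0.0 if var == 0 else np.cov(d, r, bias=True)[0, 1] / var
        o = r.mean() - s * d.mean()
        mse = np.mean((s * d + o - r) ** 2)
        if mse < best[3]:
            best = (i, s, o, mse)
    return best
```

Because each range block's search is independent, the outer loop over range blocks is the natural unit to distribute across processors.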
The effects of image compression on quantitative measurements of digital panoramic radiographs
Objectives: The aims of this study were to explore how image compression affects density, fractal dimension,
linear and angular measurements on digital panoramic images, and assess inter- and intra-observer repeatability of
these measurements.
Study Design: Sixty-one digital panoramic images in TIFF format (Tagged Image File Format) were compressed
to JPEG (Joint Photographic Experts Group) images. Two observers measured gonial angle, antegonial angle,
mandibular cortical width, coronal pulp width of maxillary and mandibular first molar, tooth length of maxillary
and mandibular first molar on the left side of these images twice. Fractal dimension of the selected regions of interests were calculated and the density of each panoramic radiograph as a whole were also measured on TIFF and
JPEG compressed images. Intra-observer and inter-observer consistency was evaluated with Cronbach's alpha.
The paired samples t-test and the Kolmogorov-Smirnov test were used to evaluate the differences between the measurements on TIFF and JPEG compressed images.
Results: The repeatability of angular measurements had the highest Cronbach's alpha value (0.997). There was a statistically significant difference for both observers in mandibular cortical width (MCW) measurements (1st ob. p: 0.002; 2nd ob. p: 0.003), density (p<0.001) and fractal dimension (p<0.001) between TIFF and JPEG images. There was a statistically significant difference for the first observer in antegonial angle (1st ob. p<0.001) and maxillary molar coronal pulp width (1st ob. p<0.001) between JPEG and TIFF files.
Conclusions: The repeatability of angular measurements is better than that of linear measurements. Mandibular cortical width, fractal dimension and density are affected by compression. Observer-dependent factors might also cause statistically significant differences between the measurements on TIFF and JPEG images.
DCT Implementation on GPU
There has been great progress in the field of graphics processors. Since the speed of conventional CPU cores is no longer rising, designers are turning to multi-core, parallel processors. Because of their strength in parallel processing, GPUs are becoming more and more attractive for many applications. With the increasing demand for GPU computing, there is a great need to develop operating systems that exploit the GPU to full capacity. GPUs offer a very efficient environment for many image processing applications. This thesis explores the processing power of GPUs for digital image compression using the Discrete Cosine Transform.
Distributed video through telecommunication networks using fractal image compression techniques
The research presented in this thesis investigates the use of fractal compression techniques for a real-time video distribution system. The motivation for this work was that the method has some useful properties which satisfy many requirements for video compression. In addition, as a novel technique, the fractal compression method has great potential. In this thesis, we initially develop an understanding of the state of the art in image and video compression and describe the mathematical concepts and basic terminology of the fractal compression algorithm. Several schemes which aim to improve the algorithm for still images are then examined. Amongst these, two novel contributions are described. The first is the partitioning of the image into sections, which resulted in a significant reduction of the compression time. In the second, the use of the median metric as an alternative to the RMS was considered but was not finally adopted, since the RMS proved to be a more efficient measure. The extension of the fractal compression algorithm from still images to image sequences is then examined, and three different schemes to reduce the temporal redundancy of the video compression algorithm are described. The reduction in the execution time of the compression algorithm that can be obtained by the techniques described is significant, although real-time execution has not yet been achieved. Finally, the basic concepts of distributed programming and networks, as basic elements of a video distribution system, are presented, and the hardware and software components of a fractal video distribution system are described. The implementation of the fractal compression algorithm on a TMS320C40 is also considered for speed benefits, and it is found that a relatively large number of processors is needed for real-time execution.
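The two block-distortion metrics the thesis compares can be stated in a few lines; the function names here are illustrative, not from the thesis. The contrast below also shows why the two can disagree: the median absolute error ignores a single large outlier that RMS penalizes heavily.

```python
# Sketch of the two block-distortion metrics compared in the thesis:
# RMS error (the measure ultimately adopted) vs. median absolute error.
import numpy as np

def rms_error(a, b):
    """Root-mean-square error between two blocks."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def median_error(a, b):
    """Median of the absolute pixel differences between two blocks."""
    return float(np.median(np.abs(a - b)))
```

For a block that matches everywhere except one pixel, the median metric reports zero error while RMS does not, which is the kind of behavioral difference such a comparison must weigh.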
Semantic Perceptual Image Compression using Deep Convolution Networks
It has long been considered a significant problem to improve the visual quality of lossy image and video compression. Recent advances in computing power, together with the availability of large training data sets, have increased interest in applying deep convolutional neural networks (CNNs) to image recognition and image processing tasks. Here, we present a powerful CNN tailored to the specific task of semantic image understanding to achieve higher visual quality in lossy compression. A modest increase in complexity is incorporated into the encoder, which allows a standard, off-the-shelf JPEG decoder to be used. While JPEG encoding may be optimized for generic images, the process is ultimately unaware of the specific content of the image to be compressed. Our technique makes JPEG content-aware by designing and training a model to identify multiple semantic regions in a given image. Unlike object detection techniques, our model does not require labeling of object positions and is able to identify objects in a single pass. We present a new CNN architecture directed specifically at image compression: by adding a complete set of features for every class and then taking a threshold over the sum of all feature activations, we generate a map that highlights semantically-salient regions so that they can be encoded at higher quality than background regions. Experiments are presented on the Kodak PhotoCD dataset and the MIT Saliency Benchmark dataset, in which our algorithm achieves higher visual quality for the same compressed size.
Comment: Accepted to the Data Compression Conference; 11 pages, 5 figures.
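The map-generation step the abstract describes — summing per-class feature activations and thresholding the sum — can be sketched in a few lines. This is an illustrative reconstruction under assumed shapes, not the paper's implementation:

```python
# Sketch of the saliency-map step: sum feature activations over all classes,
# then threshold the total to mark semantically-salient regions that would be
# encoded at higher JPEG quality than the background.
import numpy as np

def saliency_map(activations, threshold):
    """activations: array of shape (num_classes, H, W); returns a boolean
    (H, W) map that is True where the summed class response exceeds threshold."""
    total = activations.sum(axis=0)   # combine evidence from every class
    return total > threshold          # salient where combined response is high
```

The binary map would then drive per-region quantization in an otherwise standard JPEG encoder, which is what keeps the bitstream decodable by an off-the-shelf decoder.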