
    JPEG XR scalable coding for remote image browsing applications

    The growing popularity of the Internet has opened the road to multimedia and interactivity, making visual communication increasingly important. In this context, digital images play a leading role in a growing number of applications; consider, for example, how widespread digital cameras and mobile devices such as mobile phones have become in recent years. This creates the need for a flexible system that can handle images from different sources and adapt them to different viewing conditions. The importance of this issue lies in the application scenario: today there are data stores holding large numbers of images saved in JPEG format, while the systems that render digital images are many and differ greatly from one another. The ISO/IEC committee has recently issued a new format, called JPEG XR, designed explicitly for modern digital cameras. The new JPEG XR coding algorithm overcomes various limitations of the original JPEG algorithm and provides a viable alternative to JPEG 2000. This research has primarily focused on the scalability of the new format. Additional scalability levels are fundamental for image browsing applications, because they enable the system to keep working correctly and efficiently even when the number of resources and users increases sharply. Scalability is mostly required when dealing with large image databases on the Web, in order to reduce the amount of transferred data, especially for large images. Interactive browsing also requires the ability to access arbitrary parts of an image. The starting point is a client-server architecture in which the server stores a database of JPEG XR images and analyses requests from a client. Client and server communicate via HTTP using an exchange protocol.
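The abstract does not specify the exchange protocol, so as a minimal sketch of the client side of such an architecture, the helper below maps a requested viewport onto the grid of tile indices the client would have to fetch. The function name, parameters, and the (row, col) addressing scheme are illustrative assumptions, not part of the described system.

```python
def tiles_for_viewport(img_w, img_h, tile_w, tile_h, x, y, w, h):
    """Return (row, col) indices of the tiles covering the viewport
    (x, y, w, h) of an img_w x img_h image cut into tile_w x tile_h tiles.
    Hypothetical helper; the actual request protocol is not specified."""
    col0 = x // tile_w
    col1 = min((x + w - 1) // tile_w, (img_w - 1) // tile_w)
    row0 = y // tile_h
    row1 = min((y + h - 1) // tile_h, (img_h - 1) // tile_h)
    return [(r, c) for r in range(row0, row1 + 1)
                   for c in range(col0, col1 + 1)]
```

A client would then issue one HTTP request per tile (or one request listing all needed tiles), so only the visible portion of a large image is ever transferred.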
To minimize the transferred information, the JPEG XR coded file should use the frequency mode ordering and partition images into optimized tiles. The main goal is to transmit only a subset of the available sub-band coefficients. This allows interactive access to portions of the image, which are downloaded and displayed while minimizing the amount of data transferred and maintaining acceptable image quality.
The proposed architecture naturally prompted a study of transmission errors on unreliable channels, such as wireless ones, and the definition of possible optimizations and variants of the codec to overcome its limitations. Image data compressed with JPEG XR is severely distorted when transmitted over error-prone channels: because of the adaptive coding strategies used by the codec, even a single bit error causes a misalignment of the reading position in the bit-stream, leading to a completely different image at the decoder side. An extension to the JPEG XR algorithm is proposed, consisting of an error recovery process that enables the decoder to realign itself to the correct bit-stream position and to decode most of the image correctly. Several experiments have been performed with different encoder parameters and different error probabilities, with image distortion measured by the PSNR objective metric. The proposed algorithm is simple, adds very little computational overhead, and appears very promising, as confirmed by objective image quality results in experimental tests.
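The PSNR metric used to measure distortion in these experiments is standard; a minimal implementation over flat pixel sequences looks like this (the function name and list-based interface are illustrative, not taken from the thesis):

```python
import math

def psnr(original, decoded, max_val=255):
    """Peak signal-to-noise ratio in dB between two equal-length
    pixel sequences: 10 * log10(max_val^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    if mse == 0:
        return float("inf")  # identical images: infinite PSNR
    return 10 * math.log10(max_val ** 2 / mse)
```

Higher PSNR means the decoded image is closer to the original; a decoder that resynchronizes after a bit error will score far higher than one that misreads the rest of the bit-stream.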

    An intelligent system for the classification and selection of novel and efficient lossless image compression algorithms

    We are currently living in an era revolutionised by the development of smart phones and digital cameras, which most people use in every aspect of their lives. This development brings intense competition between the technology companies building these devices, each trying to enhance its products to meet new market demands. One of the most sought-after features of any smart phone or digital camera is the camera's resolution. Digital imaging and its applications are growing rapidly; as a result, image sizes are increasing, and with them the important challenge of storing these large images and transferring them over networks. As image sizes grow, interest in image compression grows as well, to reduce storage requirements and transfer time. In this study, two new lossless image compression algorithms are proposed. Both focus on decreasing the image size by reducing the image bit-depth, using well-defined methods that reduce the correlation between image intensities. The first proposed lossless image compression algorithm is called Column Subtraction Compression (CSC); it decreases the image size without losing any image information by applying a colour transformation as a pre-processing phase, followed by the proposed Column Subtraction Compression function. The algorithm is specially designed for compressing natural images. The CSC algorithm was evaluated on colour images and compared against benchmark schemes obtained from (Khan et al., 2017). It achieved the best compression among the existing methods, improving the average storage saving of the BBWCA, JPEG 2000 LS, KMTF-BWCA, HEVC and basic BWCA algorithms by 2.5%, 15.6%, 41.6%, 7.8% and 45.07% respectively.
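The abstract names the Column Subtraction Compression function but does not give its definition. A plausible reading, sketched below purely as an assumption, is a reversible decorrelation step that replaces each column with its difference (mod 256) from the previous column, so that smooth natural images yield mostly near-zero values that compress well in a later entropy-coding stage:

```python
def column_subtract(img):
    """Hypothetical forward transform: each column (after the first)
    becomes its difference mod 256 from the previous column."""
    out = [row[:] for row in img]
    for row_in, row_out in zip(img, out):
        for c in range(len(row_in) - 1, 0, -1):
            row_out[c] = (row_in[c] - row_in[c - 1]) % 256
    return out

def column_add(res):
    """Inverse transform: cumulative sums mod 256 restore the pixels,
    making the scheme lossless."""
    out = [row[:] for row in res]
    for row in out:
        for c in range(1, len(row)):
            row[c] = (row[c] + row[c - 1]) % 256
    return out
```

The round trip is exact, which is the defining property of a lossless pre-processing phase; the actual CSC function in the thesis may differ in detail.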
The simple implementation of the CSC algorithm also benefits execution time, making it one of the fastest algorithms: it needed less than 0.5 seconds to compress and decompress the natural images obtained from (Khan et al., 2017). The proposed algorithm needs only 19.36 seconds to compress and decompress all 10 images of the Kodak image set, while BWCA, KMTF-BWCA and BBWCA need 398.5 s, 429.24 s and 475.38 s respectively. Nevertheless, the CSC algorithm achieves a lower compression ratio on low-resolution images, since it was designed for high-resolution ones. To address this issue, the Low-Resolution Column Subtraction Compression (LRCSC) algorithm is proposed to improve CSC's compression ratio on low-resolution images. LRCSC starts with the CSC algorithm as a pre-processing phase, followed by the Huffman algorithm and Run-Length Encoding (RLE) as the final compression phase. LRCSC improved the average storage saving of the CSC algorithm on raster map images, achieving a 13.68% better compression size. The LRCSC algorithm achieves a 96% storage saving on the raster map image set, but does not reach the best results when compared with PNG, GIF, BLiSE and BBWCA, whose storage savings are 97.42%, 98.33%, 98.92% and 98.93% respectively. The LRCSC algorithm improves compression execution time while achieving an acceptable compression ratio. Both proposed algorithms work with any image type, colour or greyscale; they save a substantial amount of storage and dramatically decrease execution time. Finally, to take full advantage of the two newly developed algorithms, a new system was developed that runs both algorithms on the same input image and then selects the appropriate algorithm to be used for the decompression phase.
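Two pieces of the pipeline above can be sketched compactly: the RLE stage used in LRCSC's final phase, and the run-both-and-pick logic of the selection system. Both functions below are illustrative assumptions (the thesis does not specify run caps, pair layout, or the selection criterion, which is assumed here to be smallest output size):

```python
def rle_encode(data):
    """Run-length encode a byte string as (run_length, value) pairs,
    with runs capped at 255 so each length fits in one byte."""
    out = bytearray()
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        out += bytes([j - i, data[i]])
        i = j
    return bytes(out)

def pick_smaller(data, compressors):
    """Run every candidate compressor on the same input and return
    the name of the one producing the smallest output."""
    sizes = {name: len(fn(data)) for name, fn in compressors.items()}
    return min(sizes, key=sizes.get)
```

For example, RLE wins easily on a long run of identical bytes, while raw storage wins on run-free data; a selector like this lets each image be stored under whichever algorithm suits it best.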