
    Near-Lossless Bitonal Image Compression System

    The main purpose of this thesis is to develop an efficient near-lossless bitonal compression algorithm and to implement it on a hardware platform. Current methods for compressing bitonal images include the JBIG and JBIG2 algorithms, but both have disadvantages: they are covered by patents filed by IBM, making them costly to implement commercially, and JBIG provides only lossless compression while JBIG2 provides lossy methods only for document-type images. For these reasons, a new method for introducing loss and controlling it to sustain quality is developed. The lossless bitonal image compression algorithm used in this thesis is the Block Arithmetic Coder for Image Compression (BACIC), which can efficiently compress bitonal images. In this thesis, loss is introduced for cases where better compression efficiency is needed. Introducing loss in bitonal images is especially difficult, however, because each altered pixel undergoes a drastic change, either from white to black or from black to white. Such pixel flipping introduces salt-and-pepper noise, which can be very distracting when viewing an image. Two methods are used in combination to control the visual distortion introduced into the image. The first is to track the error created by flipped pixels and to use this error to decide whether flipping another pixel would push the visual distortion past a predefined threshold. The second is region-of-interest consideration: little or no loss is introduced into the important parts of an image, and higher loss into the less important parts. This yields a good-quality image while increasing compression efficiency. The ability of BACIC to compress grayscale images is also studied, and BACICm, a multiplanar BACIC algorithm, is created. Finally, a hardware implementation of the lossless BACIC algorithm is designed in VHDL targeting a Xilinx FPGA, whose flexibility makes it well suited to this task. The programmed FPGA could be included in a facsimile or printing product to handle compression or decompression internally, giving it an advantage in the marketplace.
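    A minimal sketch of the two loss-control ideas combined (the function name and the unit-cost distortion model are illustrative assumptions, not the thesis's actual measure): candidate flips are charged against a distortion budget, and a weight mask makes flips inside regions of interest more expensive.

```python
import numpy as np

def flip_if_within_budget(image, candidates, weight, threshold):
    """Greedily flip candidate pixels in a bitonal image.

    `weight` charges more for flips inside regions of interest, so
    important areas stay (near-)lossless while less important regions
    absorb more loss. Illustrative only: the thesis's actual
    distortion model and candidate selection are more involved.
    """
    out = image.copy()
    error = 0.0
    for r, c in candidates:
        cost = float(weight[r, c])     # ROI pixels cost more to flip
        if error + cost <= threshold:  # keep distortion under budget
            out[r, c] = 1 - out[r, c]  # white <-> black flip
            error += cost
    return out, error
```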

    EFFICIENT IMAGE COMPRESSION AND DECOMPRESSION ALGORITHMS FOR OCR SYSTEMS

    This paper presents efficient new image compression and decompression methods for document images, intended for use in the pre-processing stage of an OCR system designed for the needs of the “Nikola Tesla Museum” in Belgrade. The proposed compression methods exploit the Run-Length Encoding (RLE) algorithm and an algorithm based on document character contour extraction, while an iterative scanline fill algorithm is used for decompression. The methods are compared with the JBIG2 and JPEG2000 image compression standards, and segmentation accuracy on ground-truth documents is measured to evaluate them. Results show that the proposed methods outperform JBIG2 in time complexity, providing up to 25 times lower processing time at the expense of a worse compression ratio, and outperform the JPEG2000 standard with up to a 4-fold improvement in compression ratio. Finally, the time complexity results show that the presented methods are fast enough for a real-time character segmentation system.
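    The paper's contour-extraction coder is not reproduced here, but the RLE component it builds on can be sketched as follows (a minimal per-row run-length codec for bitonal scanlines; function names are illustrative):

```python
def rle_encode_row(row):
    """Encode one bitonal scanline as (value, run_length) pairs."""
    runs = []
    prev, count = row[0], 1
    for pixel in row[1:]:
        if pixel == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = pixel, 1
    runs.append((prev, count))
    return runs

def rle_decode_row(runs):
    """Invert rle_encode_row back into a scanline."""
    row = []
    for value, count in runs:
        row.extend([value] * count)
    return row

# A 12-pixel scanline compresses to three runs and decodes losslessly.
line = [0] * 5 + [1] * 3 + [0] * 4
assert rle_encode_row(line) == [(0, 5), (1, 3), (0, 4)]
assert rle_decode_row(rle_encode_row(line)) == line
```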

    Lossless Text Image Compression using Two Dimensional Run Length Encoding

    Text images are used in many types of conventional data communication where text is not represented directly by digital character codes such as ASCII but by an image, for instance in facsimile files or scanned documents. We propose 2DRLE, a combination of Run-Length Encoding (RLE) and Huffman coding for two-dimensional binary image compression. First, each row in the image is read sequentially; each run of consecutive identical rows is kept once and its number of occurrences is stored. Second, the same procedure is performed column-wise on the image produced by the first stage, yielding an image without consecutive recurring rows or columns (see the sketch below). The image from this stage is then compressed using Huffman coding. Experiments show that 2DRLE achieves a higher compression ratio than conventional Huffman coding of the image, reaching a compression ratio of more than 8:1 without any distortion.
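    A minimal sketch of the two reduction stages (the final Huffman stage is omitted; names are illustrative): runs of identical consecutive rows are collapsed to one copy plus a count, and the same is then done column-wise on the row-reduced image.

```python
import numpy as np

def collapse_runs(lines):
    """Keep one copy of each run of identical consecutive lines,
    together with the number of occurrences of that run."""
    kept, counts = [lines[0]], [1]
    for line in lines[1:]:
        if np.array_equal(line, kept[-1]):
            counts[-1] += 1
        else:
            kept.append(line)
            counts.append(1)
    return np.array(kept), counts

def two_d_reduce(image):
    """Stage 1: collapse recurring rows; stage 2: collapse recurring
    columns of the row-reduced image (via transpose). In 2DRLE the
    reduced image and the counts would then be Huffman-coded."""
    rows_reduced, row_counts = collapse_runs(list(image))
    cols_reduced, col_counts = collapse_runs(list(rows_reduced.T))
    return cols_reduced.T, row_counts, col_counts
```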

    Selection of bilevel image compression methods for reduction of communication energy in wireless vision sensor networks

    A Wireless Visual Sensor Network (WVSN) is an emerging platform that combines an image sensor, an on-board computation unit, a communication component, and an energy source. Compared to a traditional wireless sensor network, which operates on one-dimensional data such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. WVSNs are normally deployed in areas where installing wired solutions is not feasible, and because the application is wireless, the energy budget is limited to batteries. Given this limited energy, the processing at Visual Sensor Nodes (VSNs) and the communication from VSN to server should consume as little energy as possible. Transmitting raw images wirelessly consumes considerable energy and requires high communication bandwidth; data compression reduces the data efficiently and is therefore effective in reducing communication cost in a WVSN. In this paper, we compare the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is to determine which compression algorithms can efficiently compress bi-level images with a computational complexity suitable for the computational platforms used in WVSNs. These results can serve as a road map for selecting compression methods under different sets of constraints in a WVSN.

    Multum in parvo: Toward a generic compression method for binary images.

    Data compression is an active field of research, as the requirements to efficiently store and retrieve data at minimum time and cost persist to date. Lossless or lossy compression of bi-level data, such as binary images, is of equally crucial importance. In this work, we explore a generic, application-independent method for lossless binary image compression. The first component of the proposed algorithm is a predetermined fixed-size codebook comprising 8 x 8-bit blocks of binary images along with corresponding codes of shorter length. The two variations of the codebook, Huffman codes and Arithmetic codes, have yielded considerable compression ratios for various binary images. To attain higher compression, we introduce a second component, row-column reduction coding, which removes additional redundancy. The proposed method is tested on two major areas involving bi-level data. The first consists of binary images; empirical results suggest that our algorithm outperforms the standard JBIG2 by at least 5% on average. The second involves images consisting of a predetermined number of discrete colors, such as digital maps and graphs; by separating such images into binary layers, we employed our algorithm and attained efficient compression down to 0.035 bits per pixel. The original print copy of this thesis may be available here: http://wizard.unbc.ca/record=b173649
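    A minimal sketch of the codebook component under stated assumptions (the actual codebook contents, code lengths, and the row-column reduction coding are not reproduced): each 8 x 8 block is packed into a 64-bit key and looked up in a precomputed table of variable-length codes, with an escape-plus-literal fallback for blocks outside the codebook.

```python
import numpy as np

def block_key(block):
    """Pack an 8x8 binary block into a 64-bit integer key."""
    key = 0
    for bit in np.asarray(block).ravel():
        key = (key << 1) | int(bit)
    return key

def encode_blocks(image, codebook):
    """Emit one variable-length code per 8x8 block when the block is
    in the codebook, else an escape bit followed by the 64 raw bits.
    Assumes image dimensions are multiples of 8 and that `codebook`
    maps block keys to prefix-free bit strings (a hypothetical format
    standing in for the thesis's Huffman or Arithmetic codes)."""
    out = []
    h, w = image.shape
    for r in range(0, h, 8):
        for c in range(0, w, 8):
            key = block_key(image[r:r + 8, c:c + 8])
            if key in codebook:
                out.append("0" + codebook[key])     # codebook hit
            else:
                out.append("1" + format(key, "064b"))  # escape + literal
    return "".join(out)
```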

    CONTENT BASED INFORMATION RETRIEVAL FOR DIGITAL LIBRARY USING DOCUMENT IMAGE

    In recent years, the spread of mobile devices has created an emerging need to improve the user experience of digital library search, with applications such as education, location search, and product retrieval. Existing systems simply compare the query image to the database images and retrieve those that match, so search accuracy and response time remain challenging issues in mobile document search; much previous work on search engines retrieves documents from the database without analyzing the image itself. The proposed method performs information retrieval for image-based queries automatically within a mobile document information retrieval framework, using FP-growth to find frequent patterns in the retrieved documents and optimize the result.
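    The abstract gives no implementation details; as a purely hypothetical illustration of the FP-growth step, frequent term patterns could be mined from the retrieved documents with mlxtend's fpgrowth (the document contents below are invented):

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

# Each retrieved document is treated as a transaction of terms.
docs = [
    ["tesla", "patent", "motor"],
    ["tesla", "motor", "coil"],
    ["tesla", "patent", "coil"],
]

# One-hot encode the transactions into a boolean DataFrame.
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(docs).transform(docs), columns=te.columns_)

# Mine term patterns occurring in at least two of the three documents.
patterns = fpgrowth(onehot, min_support=2 / 3, use_colnames=True)
print(patterns)
```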

    Challenges and solutions in H.265/HEVC for integrating consumer electronics in professional video systems
