
    A lossless compression scheme for Bayer color filter array images

    Centre for Multimedia Signal Processing, Department of Electronic and Information Engineering, 2007-2008. Refereed journal publication (Version of Record).

    A JPEG-Like Algorithm for Compression of Single-Sensor Camera Image

    This paper presents a JPEG-like coder for compression of single-sensor camera images that use a Bayer Color Filter Array (CFA). The originality of the method is a joint compression/demosaicking scheme in the DCT domain. The captured CFA raw data is first separated into four distinct components and then converted to YCbCr, after which a JPEG compression scheme is applied. At the decoding stage, the bitstream is decompressed only as far as the DCT coefficients, which are then used for the interpolation stage. The results obtained are better than those of conventional JPEG in terms of CPSNR, DeltaE2000 and SSIM, and the resulting JPEG-like scheme is also less complex.
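
    To make the first two steps above concrete, the sketch below (not the authors' code) splits an RGGB Bayer frame into its four quarter-resolution planes and applies a BT.601-style YCbCr conversion; averaging the two green planes is one common simplification and may differ from the transform actually used in the paper.

```python
import numpy as np

def split_bayer(cfa):
    """Split a Bayer CFA frame (RGGB layout assumed) into four quarter-size planes."""
    r  = cfa[0::2, 0::2].astype(np.float64)
    g1 = cfa[0::2, 1::2].astype(np.float64)
    g2 = cfa[1::2, 0::2].astype(np.float64)
    b  = cfa[1::2, 1::2].astype(np.float64)
    return r, g1, g2, b

def to_ycbcr(r, g1, g2, b):
    """Convert the four planes to YCbCr (BT.601 weights), averaging the two greens;
    each resulting plane would then be DCT-coded as in JPEG."""
    g = 0.5 * (g1 + g2)
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

cfa = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in raw frame
y, cb, cr = to_ycbcr(*split_bayer(cfa))
```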

    A low complexity image compression algorithm for Bayer color filter array

    Digital images in their raw form require an excessive amount of storage capacity. Image compression reduces the cost of storing and transmitting image data: the compression algorithm reduces the file size so that it requires less storage or transmission bandwidth. This work presents a new color transformation and compression algorithm for Bayer color filter array (CFA) images. In a full color image, each pixel contains R, G, and B components, whereas a CFA image carries only a single channel at each pixel position, so demosaicking is required to construct a full color image. For each pixel, demosaicking reconstructs the two missing color values from neighbouring pixels; after demosaicking, each pixel contains R, G, and B information and a full color image is obtained. Conventional CFA compression takes place after demosaicking. However, the Bayer CFA image can also be compressed before demosaicking, an approach called the compression-first (or direct compression) method, which the algorithm proposed in this research follows. The compression-first method applies the compression algorithm directly to the CFA data and shifts demosaicking to the other end of the transmission and storage chain; its advantage is that it needs only one third of the per-pixel transmission bandwidth of conventional compression. Direct compression of CFA data, however, must cope with spatial redundancy, artifacts, and false high frequencies, so the process requires a color transformation whose components are less correlated than those of the Bayer RGB color space. This work analyzes the correlation coefficients, standard deviations, entropies, and intensity ranges of the Bayer RGB color components, and the analysis yields two efficient color transformations with respect to these features. The proposed color components show lower correlation coefficients than the Bayer RGB color components, and the color transformations reduce both the spatial and spectral redundancies of the Bayer CFA image. After color transformation, the components are independently encoded using differential pulse-code modulation (DPCM) in raster-order fashion. The DPCM residual error is mapped to a non-negative integer for the adaptive Golomb-Rice code, and the compression algorithm uses both adaptive Golomb-Rice and unary coding to generate the bit stream. Extensive simulation analysis is performed on both simulated and real CFA datasets and is extended to wireless capsule endoscopy (WCE) images; the algorithm is also evaluated on a simulated WCE CFA dataset. The results show that the proposed algorithm requires fewer bits per pixel than conventional CFA compression and outperforms recent CFA compression algorithms on both real and simulated CFA datasets.
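
    A minimal sketch of the encoding chain described above, under simplifying assumptions: a previous-sample DPCM predictor in raster order, the usual zigzag mapping of signed residuals to non-negative integers, and a fixed-parameter Golomb-Rice code. The proposed algorithm adapts the Rice parameter and adds a unary mode, which are not reproduced here.

```python
def dpcm_residuals(plane):
    """Raster-order DPCM: predict each sample from the previously coded sample."""
    residuals, prev = [], 0
    for row in plane:
        for x in row:
            residuals.append(int(x) - prev)
            prev = int(x)
    return residuals

def map_to_nonnegative(e):
    """Map a signed residual to a non-negative integer: 0,-1,1,-2,2,... -> 0,1,2,3,4,..."""
    return 2 * e if e >= 0 else -2 * e - 1

def golomb_rice(n, k):
    """Golomb-Rice code of non-negative n with parameter k:
    unary-coded quotient, then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

plane = [[52, 55, 61], [59, 79, 61]]   # toy component after color transformation
bits = "".join(golomb_rice(map_to_nonnegative(e), k=2)
               for e in dpcm_residuals(plane))
```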

    A high performance lossless Bayer image compression scheme

    2007-2008. Refereed conference paper (Version of Record).

    Efficient Encoding of Wireless Capsule Endoscopy Images Using Direct Compression of Colour Filter Array Images

    Since its invention in 2001, wireless capsule endoscopy (WCE) has played an important role in the endoscopic examination of the gastrointestinal tract. During this period, WCE has undergone tremendous technological advances, making it the first-line modality for small-bowel diseases ranging from bleeding to cancer. Current research efforts are focused on evolving WCE to include functionality such as drug delivery, biopsy, and active locomotion. For the integration of these functionalities into WCE, two critical prerequisites are image quality enhancement and power consumption reduction. An efficient image compression solution is required to retain the highest image quality while reducing the transmission power. The issue is made more challenging by the fact that image sensors in WCE capture images in Bayer colour filter array (CFA) format, so standard compression engines provide inferior compression performance. The focus of this thesis is to design an optimized image compression pipeline to encode capsule endoscopic (CE) images efficiently in CFA format. To this end, this thesis proposes two image compression schemes. First, a lossless image compression algorithm is proposed, consisting of an optimum reversible colour transformation, a low-complexity prediction model, a corner-clipping mechanism and a single-context adaptive Golomb-Rice entropy encoder. The derivation of the colour transformation that provides the best performance for a given prediction model is treated as an optimization problem. The low-complexity prediction model works in raster-order fashion and requires no buffer memory. The colour transformation yields lower inter-colour correlation and allows efficient independent encoding of the colour components. The second compression scheme is a lossy algorithm with an integer discrete cosine transform at its core. Using statistics obtained from a large dataset of CE images, an optimum colour transformation is derived using principal component analysis (PCA). The transformed coefficients are quantized using an optimized quantization table designed to discard medically irrelevant information. A fast demosaicking algorithm is developed to reconstruct the colour image from the lossy CFA image in the decoder. Extensive experiments and comparisons with state-of-the-art lossless image compression methods establish the proposed methods as simple and efficient image compression algorithms. The lossless algorithm can transmit the image without loss within the available bandwidth, while the performance evaluation of the lossy algorithm indicates that it can deliver high-quality images at low transmission power and low computational cost.
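
    As an illustration of the PCA step mentioned for the lossy scheme, the sketch below derives a decorrelating colour transform from co-located CFA samples. The training data, the RGGB plane ordering, and the function names are assumptions for illustration; the thesis additionally constrains its transform (e.g. for an integer implementation), which is not shown here.

```python
import numpy as np

def pca_color_transform(samples):
    """Derive a decorrelating linear transform from CFA colour samples.

    samples: (N, 4) array of co-located [R, G1, G2, B] values gathered from a
    training set of CFA images. Returns (mean, T) such that T @ (x - mean)
    yields components ordered by decreasing variance."""
    mean = samples.mean(axis=0)
    cov = np.cov(samples - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]        # principal components first
    T = eigvecs[:, order].T
    return mean, T

# toy training data standing in for statistics from a CE image dataset
samples = np.random.randint(0, 256, size=(10000, 4)).astype(np.float64)
mean, T = pca_color_transform(samples)
decorrelated = (samples - mean) @ T.T
```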

    Lossless compression of color filter array mosaic images with visualization via JPEG 2000

    Digital cameras have become ubiquitous for amateur and professional applications. The raw images captured by digital sensors typically take the form of color filter array (CFA) mosaic images, which must be "developed" (via digital signal processing) before they can be viewed. Photographers and scientists often repeat the "development" process with different parameters to obtain images suitable for different purposes. Since the development process is generally not invertible, it is commonly desirable to store the raw (undeveloped) mosaic images indefinitely. Uncompressed mosaic image files can be more than 30 times larger than developed images stored in JPEG format, so data compression is of interest. Several compression methods for mosaic images have been proposed in the literature, but they all require a custom decompressor followed by development-specific software to generate a displayable image. In this paper, a novel compression pipeline that removes these requirements is proposed. Specifically, mosaic images can be losslessly recovered from the resulting compressed files, and, more significantly, images can be directly viewed (decompressed and developed) using only a JPEG 2000 compliant image viewer. Experiments reveal that the proposed pipeline attains excellent visual quality while providing compression performance competitive with that of state-of-the-art compression algorithms for mosaic images.
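
    For reference, the baseline against which such pipelines are measured can be sketched as handing the raw mosaic directly to a stock JPEG 2000 codec. The snippet below does this losslessly with Pillow (assuming a build with OpenJPEG support and, for simplicity, an 8-bit mosaic); it does not reproduce the paper's pipeline, which additionally makes the compressed file render as a developed image in any compliant viewer.

```python
import numpy as np
from PIL import Image

# stand-in 8-bit Bayer mosaic; real raw data is usually 10-16 bit
mosaic = np.random.randint(0, 256, (1024, 1536), dtype=np.uint8)

# lossless JPEG 2000 (reversible wavelet) of the undeveloped mosaic
Image.fromarray(mosaic, mode="L").save("mosaic.jp2", irreversible=False)

# bit-exact recovery of the mosaic from the .jp2 file
restored = np.asarray(Image.open("mosaic.jp2"))
assert np.array_equal(restored, mosaic)
```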

    Design, implementation and optimization of the image compression system for the on-board computer of the Eye-Sat nanosatellite project

    Eye-Sat is a nanosatellite project led by CNES (Centre National d'Etudes Spatiales) and developed mainly by students from several engineering schools across France. The purpose of this small telescope is not only to demonstrate various technological devices, but also to acquire photographs of the Milky Way in the colour and infrared bands and to study the intensity and polarization of the zodiacal light. The mission requirements call for the development of a lossless compression algorithm for the Colour Filter Array (CFA, Bayer) and infrared images acquired by the satellite. As a member of the Consultative Committee for Space Data Systems, CNES has selected the CCSDS-123.0-B standard as the base algorithm to meet the mission requirements. Modifications and improvements, adapted to the target images, will be added to this algorithm in order to improve its compression performance and complexity. The implementation and optimization of the algorithm will be carried out on the Xilinx Zynq® All Programmable SoC platform, which includes an FPGA and a dual-core ARM® Cortex™-A9 processor with a NEON™ DSP/FPU engine.

    Recording, compression and representation of dense light fields

    The concept of light fields allows image-based capture of scenes, providing, on a recorded dataset, many of the features available in computer graphics, such as simulation of different viewpoints or changes to core camera parameters, including depth of field. Because the recorded dimensionality increases from two for a regular image to four for a light field, previous works mainly concentrate on small or undersampled light field recordings. This thesis is concerned with the recording of a dense light field dataset, including the estimation of suitable sampling parameters as well as the implementation of the required capture, storage and processing methods. Towards this goal, the influence of an optical system on the possibly band-unlimited light field signal is examined, deriving the required sampling rates from the bandlimiting effects of the camera and optics. To increase storage capacity and bandwidth, a very fast image compression method is introduced, providing compression an order of magnitude faster than previous methods and reducing the I/O bottleneck for light field processing. A fiducial marker system is provided for the calibration of the recorded dataset, which provides a higher number of reference points than previous methods, improving camera pose estimation. In conclusion, this work demonstrates the feasibility of densely sampling a large light field and provides a dataset which may be used for evaluation or as a reference for light field processing tasks such as interpolation, rendering and sampling.