275 research outputs found

    Combined Industry, Space and Earth Science Data Compression Workshop

    The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held on April 4, 1996 in Snowbird, Utah, in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest is data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference is given to data compression research that takes into account the scientist's data requirements and the constraints imposed by the data collection, transmission, distribution and archival systems.

    PATTERN-BASED ERROR RECOVERY OF LOW RESOLUTION SUBBANDS IN JPEG2000

    Digital image transmission is widely used in consumer products, such as digital cameras and cellular phones, where low bit-rate coding is required. In any low bit-rate encoder, such as the JPEG2000 standard, data truncation (during the encoding process) and data loss (during transmission) result in lost bit-planes, which are normally replaced by zeros. This paper proposes a new algorithm that recovers the lost or truncated lower bit-planes of coefficients in the LL subband of a wavelet transform in a JPEG2000 stream, using the data available in the higher bit-planes of the same coefficient and its eight neighbors. Simulation results indicate that the proposed algorithm achieves a 5.40-8.77 dB improvement over the zero-filling data recovery method.
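    As a rough illustration of this idea, the Python sketch below re-estimates zero-filled low bit-planes of LL coefficients from the surviving high bit-planes of each coefficient's 8-neighborhood. The function name and the neighbor-mean prediction rule are assumptions made for illustration; the paper's actual pattern-based predictor is not reproduced here.

```python
import numpy as np

def recover_lost_bitplanes(ll, n_lost):
    """Hypothetical sketch: re-estimate the n_lost least-significant
    bit-planes of LL-subband coefficients that were zero-filled,
    predicting each coefficient's missing low bits from the mean of
    its 8 neighbours' surviving high bit-planes instead of zeros."""
    step = 1 << n_lost                                  # range spanned by the lost planes
    kept = (ll.astype(np.int64) >> n_lost) << n_lost    # surviving high bit-planes
    # local mean of the surviving information over the 8-neighbourhood
    pad = np.pad(kept, 1, mode='edge')
    h, w = kept.shape
    neigh = sum(pad[i:i + h, j:j + w]
                for i in range(3) for j in range(3)) - kept
    neigh = neigh / 8.0
    # predicted low bits: clamp the neighbour-based estimate into [0, step)
    low = np.clip(neigh - kept, 0, step - 1)
    return kept + low.astype(np.int64)
```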

    Evaluate multiple description coding as an image processing method for transferring information in error-prone networks with low transmission rate, related to quality, bit rate and file size

    Master's thesis in information and communication technology, 2005 - Høgskolen i Agder (Agder University College), Grimstad. Over the past several years, image transmission and other multimedia services have become increasingly popular. The problem is that many of these services run over so-called unreliable transmission systems, for example wireless or mobile systems, where there is a greater probability of packet loss, bit errors and/or interference than in wired transmission systems. In today's transmission systems, retransmission is a commonly used technique for correcting lost or erroneous data packets. However, this technique has several shortcomings, since retransmissions often generate a considerable amount of extra network traffic, which in turn reduces the bandwidth available to each user and, at worst, may cause network congestion. The time spent on retransmissions may also be a problem in several applications. This thesis evaluates MD (multiple description) coding as an alternative solution to the above-mentioned problems. MD coding is a technique for achieving robust communication over unreliable channels such as a lossy packet network. A source is split into two or more equally important descriptions in such a way that different reconstruction qualities are obtained from different subsets of the descriptions. In MD coding, statistical redundancy is added to a data signal so that data packets which are lost or corrupted during transmission can be estimated from all, or some, of the successfully received packets. The evaluation is carried out by developing and implementing several SD (single description) and MD coding algorithms, with the main objective of showing that MD coding algorithms can considerably reduce the need for retransmission in unreliable transmission systems. Since the available bandwidth in such systems is often limited, compression of image data is also an important part of this thesis. The developed coding algorithms and compression procedures are implemented in a test application, which simulates image network transmissions with user-defined parameters. Finally, the efficiency of each developed MD coding algorithm is demonstrated by comparing it with the SD coding systems, the so-called baseline systems. Qualitative and quantitative tests in this thesis project show that the MD coding systems outperform the SD coding systems when packet loss occurs during image transmission, while matching the SD coding systems when all image data are successfully transmitted. This is achieved by adding a certain amount of redundancy to the MD data signals.
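    To make the description-splitting idea concrete, here is a minimal Python sketch of a two-description scheme that assigns even image rows to one description and odd rows to the other, reconstructing losslessly when both arrive and by row repetition when one is lost. The polyphase split and the function names are illustrative assumptions, not the algorithms developed in the thesis.

```python
import numpy as np

def md_split(img):
    """Hypothetical two-description split: even rows form one
    description, odd rows the other (a simple polyphase split)."""
    return img[0::2, :].copy(), img[1::2, :].copy()

def md_reconstruct(d_even, d_odd):
    """Rebuild the image from whichever descriptions arrived.
    With both, reconstruction is lossless; with only one, the
    missing rows are estimated by repeating the received rows."""
    if d_even is not None and d_odd is not None:
        h = d_even.shape[0] + d_odd.shape[0]
        out = np.empty((h, d_even.shape[1]), dtype=d_even.dtype)
        out[0::2], out[1::2] = d_even, d_odd
        return out
    d = d_even if d_even is not None else d_odd
    return np.repeat(d, 2, axis=0)          # crude estimate of the lost rows
```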

    Graph Spectral Image Processing

    The recent advent of graph signal processing (GSP) has spurred intensive study of signals that live naturally on irregular data kernels described by graphs (e.g., social networks, wireless sensor networks). Though a digital image contains pixels that reside on a regularly sampled 2D grid, if one can design an appropriate underlying graph connecting pixels with weights that reflect the image structure, then one can interpret the image (or image patch) as a signal on a graph and apply GSP tools to process and analyze the signal in the graph spectral domain. This article overviews recent graph spectral techniques in GSP specifically for image/video processing. The topics covered include image compression, image restoration, image filtering and image segmentation.
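    A minimal Python sketch of this workflow, assuming a 4-connected pixel graph with intensity-similarity edge weights and Tikhonov regularisation as the graph-spectral low-pass step; both are standard GSP choices, not specifics taken from the article, and the parameters sigma and lam are illustrative.

```python
import numpy as np

def graph_bilateral_laplacian(patch, sigma=0.1):
    """Build a 4-connected pixel graph whose edge weights reflect
    intensity similarity, and return its combinatorial Laplacian
    L = D - W (dense, so only suitable for small patches)."""
    h, w = patch.shape
    n = h * w
    W = np.zeros((n, n))
    for y in range(h):
        for x in range(w):
            i = y * w + x
            for dy, dx in ((0, 1), (1, 0)):      # right and down neighbours
                yy, xx = y + dy, x + dx
                if yy < h and xx < w:
                    j = yy * w + xx
                    wij = np.exp(-(patch[y, x] - patch[yy, xx]) ** 2 / sigma ** 2)
                    W[i, j] = W[j, i] = wij
    return np.diag(W.sum(axis=1)) - W

def graph_smooth(patch, lam=2.0):
    """Graph-spectral low-pass filtering via Tikhonov regularisation:
    solve (I + lam*L) x = y, which attenuates high graph frequencies
    while respecting image structure encoded in the edge weights."""
    L = graph_bilateral_laplacian(patch)
    y = patch.reshape(-1)
    x = np.linalg.solve(np.eye(len(y)) + lam * L, y)
    return x.reshape(patch.shape)
```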

    Digital watermarking: applicability for developing trust in medical imaging workflows, state of the art review

    Medical images can be intentionally or unintentionally manipulated both within the secure medical system environment and outside it, as images are viewed, extracted and transmitted. Many organisations have invested heavily in Picture Archiving and Communication Systems (PACS), which are intended to facilitate data security. However, it is common for images and records to be extracted from these systems for a wide range of accepted practices, such as an external second opinion, transmission to another care provider, or a patient data request. Establishing trust within medical imaging workflows has therefore become essential. Digital watermarking has been recognised as a promising approach for ensuring the authenticity and integrity of medical images. Authenticity refers to the ability to identify the origin of the information and prove that the data relates to the right patient. Integrity means the capacity to ensure that the information has not been altered without authorisation. This paper presents a survey of medical image watermarking and provides a clear overview for interested researchers by analysing the robustness and limitations of existing approaches. This includes studying the security levels of medical images within the PACS system, clarifying the requirements of medical image watermarking, and defining the purposes of watermarking approaches when applied to medical images.
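    As a toy illustration of integrity watermarking (an assumed fragile-watermark construction, not a method from the surveyed literature), the Python sketch below embeds a hash of the image's seven most-significant bit-planes into the least-significant bit-plane, so any later modification of the protected planes is detectable.

```python
import hashlib
import numpy as np

def embed_integrity_watermark(img):
    """Fragile-watermark sketch: hash the 7 most-significant
    bit-planes of a uint8 image and embed the digest, tiled,
    into the least-significant bit-plane."""
    msb = img & 0xFE                                     # content the hash protects
    digest = hashlib.sha256(msb.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    mark = np.resize(bits, img.size).reshape(img.shape)  # tile digest bits
    return msb | mark.astype(img.dtype)

def verify_integrity_watermark(img):
    """Recompute the digest from the MSB planes and compare it with
    the bits stored in the LSB plane; tampering with the protected
    planes (or the mark) makes the comparison fail."""
    msb = img & 0xFE
    digest = hashlib.sha256(msb.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    expected = np.resize(bits, img.size).reshape(img.shape)
    return bool(np.array_equal(img & 1, expected.astype(img.dtype)))
```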

    On the design of fast and efficient wavelet image coders with reduced memory usage

    Image compression is of great importance in multimedia systems and applications because it drastically reduces bandwidth requirements for transmission and memory requirements for storage. Although earlier standards for image compression were based on the Discrete Cosine Transform (DCT), a more recently developed mathematical technique, the Discrete Wavelet Transform (DWT), has been found to be more efficient for image coding. Despite improvements in compression efficiency, wavelet image coders significantly increase memory usage and complexity when compared with DCT-based coders. A major reason for the high memory requirements is that the usual algorithm to compute the wavelet transform requires the entire image to be in memory. Although some proposals reduce the memory usage, they present problems that hinder their implementation. In addition, some wavelet image coders, like SPIHT (which has become a benchmark for wavelet coding), always need to hold the entire image in memory. Regarding the complexity of the coders, SPIHT can be considered quite complex because it performs bit-plane coding with multiple image scans. The wavelet-based JPEG 2000 standard is more complex still, because it improves coding efficiency through time-consuming methods, such as an iterative optimization algorithm based on the Lagrange multiplier method and high-order context modeling. In this thesis, we aim to reduce memory usage and complexity in wavelet-based image coding while preserving compression efficiency. To this end, a run-length encoder and a tree-based wavelet encoder are proposed. In addition, a new algorithm to efficiently compute the wavelet transform is presented. This algorithm achieves low memory consumption using line-by-line processing, and it employs recursion to automatically determine the order in which the wavelet transform is computed, solving some synchronization problems that have not been tackled by previous proposals. The proposed encode…
    Oliver Gil, JS. (2006). On the design of fast and efficient wavelet image coders with reduced memory usage [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1826
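    For intuition about line-by-line wavelet processing, here is a Python sketch of one LeGall 5/3 lifting level applied to a single image row, the kind of building block a streaming, low-memory DWT can apply row by row. The function name, the symmetric boundary handling and the even-length assumption are illustrative, not the thesis's algorithm.

```python
import numpy as np

def lifting_53_forward(line):
    """One level of the integer LeGall 5/3 wavelet on a single line
    via lifting. Assumes an even-length input; returns the
    approximation (low-pass) and detail (high-pass) halves."""
    x = line.astype(np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # predict step: detail[n] = odd[n] - floor((even[n] + even[n+1]) / 2)
    right = np.append(even[1:], even[-1])            # symmetric extension
    detail = odd - ((even + right) >> 1)
    # update step: approx[n] = even[n] + floor((detail[n-1] + detail[n] + 2) / 4)
    left = np.insert(detail[:-1], 0, detail[0])      # symmetric extension
    approx = even + ((left + detail + 2) >> 2)
    return approx, detail
```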

    Sparse image approximation with application to flexible image coding

    Natural images are often modeled through piecewise-smooth regions. Region edges, which correspond to the contours of objects, become, in this model, the main information of the signal. Contours have the property of being smooth functions along the direction of the edge, with irregularities in the perpendicular direction. Modeling edges with the minimum possible number of terms is of key importance for numerous applications, such as image coding, segmentation or denoising. Standard separable bases fail to provide sparse enough representations of contours, because such bases do not capture the regularity of edges. In order to detect this regularity, a new method is needed, based on (possibly redundant) sets of basis functions able to capture the geometry of images. This thesis presents, in a first stage, a study of the features that basis functions should have in order to provide sparse representations of a piecewise-smooth image. This study emphasizes the need for edge-adapted basis functions, capable of accurately capturing the local orientation and anisotropic scaling of image structures. The need for different anisotropy degrees and orientations in the basis function set leads to the use of redundant dictionaries. However, redundant dictionaries have the inconvenience that sparse image decompositions are no longer unique, and of all the possible decompositions of a signal in a redundant dictionary, only the sparsest is needed. Several algorithms exist for finding sparse decompositions over redundant dictionaries, but most of them do not always guarantee that the optimal approximation has been recovered. To cope with this problem, a mathematical study of the properties of sparse approximations is performed, from which a test to check whether a given sparse approximation is the sparsest is provided. The second part of this thesis presents a novel image approximation scheme, based on the use of a redundant dictionary, that yields a good approximation of an image with a number of terms much smaller than the dimension of the signal. This approximation scheme is based on a dictionary formed by a combination of anisotropically refined and rotated wavelet-like mother functions and Gaussians. An efficient Full Search Matching Pursuit algorithm is designed to perform the image decomposition in such a dictionary. Finally, a geometric image coding scheme based on the image approximated over the anisotropic and rotated dictionary of basis functions is designed, and the coding performance of this dictionary is studied. Coefficient quantization turns out to be of crucial importance in the design of a Matching Pursuit based coding scheme. Thus, a quantization scheme for the MP coefficients has been designed, based on the theoretical energy upper bound of the MP algorithm and empirical observations of the coefficient distribution and evolution. Thanks to this quantization, our image coder provides low to medium bit-rate image approximations, while allowing on-the-fly resolution switching and several other affine image transformations to be performed directly in the transformed domain.
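    A generic Matching Pursuit loop over a redundant dictionary (columns as unit-norm atoms) looks like the Python sketch below; it is a plain MP baseline for reference, not the thesis's Full Search MP over the anisotropic and rotated dictionary.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_terms):
    """Plain Matching Pursuit: at each step pick the dictionary atom
    most correlated with the current residual, record its coefficient,
    and subtract its contribution. 'dictionary' is an (n, K) matrix
    whose K columns are unit-norm atoms; 'signal' has length n."""
    residual = signal.astype(float).copy()
    atoms, coeffs = [], []
    for _ in range(n_terms):
        corr = dictionary.T @ residual           # inner products with all atoms
        k = int(np.argmax(np.abs(corr)))         # best-matching atom
        atoms.append(k)
        coeffs.append(corr[k])
        residual -= corr[k] * dictionary[:, k]   # peel off its contribution
    return atoms, coeffs, residual
```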

    Signal processing for high-definition television

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1995. Includes bibliographical references (p. 60-62). By Peter Monta.