    Combined Industry, Space and Earth Science Data Compression Workshop

    The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held April 4, 1996, in Snowbird, Utah, in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions seek to explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest is data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference is given to data compression research that takes into account the scientist's data requirements, and the constraints imposed by the data collection, transmission, distribution and archival systems.

    Fast Random Access to Wavelet Compressed Volumetric Data Using Hashing

    We present a new approach to lossy storage of the coefficients of wavelet transformed data. While it is common to store the coefficients of largest magnitude (and let all other coefficients be zero), we allow a slightly different set of coefficients to be stored. This brings into play a recently proposed hashing technique that allows space-efficient storage and very efficient retrieval of coefficients. Our approach is applied to compression of volumetric data sets. For the "Visible Man" volume we obtain up to 80% improvement in compression ratio over previously suggested schemes. Further, the time for accessing a random voxel is quite competitive.
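
    As a rough illustration of the general idea (not the paper's specific scheme), the sketch below wavelet-transforms a volume with PyWavelets, keeps only the largest-magnitude coefficients, and stores them sparsely keyed by position so individual coefficients can be fetched in constant time. The plain Python dict stands in for the paper's space-efficient hash table, the relaxed coefficient selection it exploits is not reproduced, and reconstructing an actual voxel would additionally combine the stored coefficients whose supports cover it.

        # Hedged sketch: top-magnitude wavelet coefficients kept in a hash table.
        import numpy as np
        import pywt  # PyWavelets

        def compress(volume, wavelet="haar", level=2, keep=10_000):
            coeffs = pywt.wavedecn(volume, wavelet, level=level)
            arr, slices = pywt.coeffs_to_array(coeffs)          # flatten the coefficient tree
            flat = arr.ravel()
            top = np.argpartition(np.abs(flat), -keep)[-keep:]  # indices of the largest |c|
            table = {int(i): float(flat[i]) for i in top}       # sparse "hash table" of coefficients
            return table, arr.shape, slices

        def lookup(table, index):
            # Constant-time random access to one stored coefficient (0.0 if it was dropped).
            return table.get(index, 0.0)

        volume = np.random.rand(64, 64, 64).astype(np.float32)
        table, shape, slices = compress(volume)
        print(len(table), lookup(table, next(iter(table))))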

    Graph Spectral Image Processing

    The recent advent of graph signal processing (GSP) has spurred intensive studies of signals that live naturally on irregular data kernels described by graphs (e.g., social networks, wireless sensor networks). Though a digital image contains pixels that reside on a regularly sampled 2D grid, if one can design an appropriate underlying graph connecting pixels with weights that reflect the image structure, then one can interpret the image (or image patch) as a signal on a graph and apply GSP tools for processing and analysis of the signal in the graph spectral domain. In this article, we overview recent graph spectral techniques in GSP specifically for image/video processing. The topics covered include image compression, image restoration, image filtering and image segmentation.
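
    The short sketch below illustrates the basic GSP pipeline surveyed in the article (it is not any particular method from it): build a 4-connected grid graph over an image patch with weights reflecting intensity similarity, form the combinatorial Laplacian, and low-pass filter the patch in the graph spectral (Laplacian eigenvector) domain. The patch size, weight kernel and cut-off are illustrative.

        # Hedged sketch: graph spectral low-pass filtering of an image patch.
        import numpy as np

        def grid_graph_laplacian(patch, sigma=0.1):
            h, w = patch.shape
            n = h * w
            W = np.zeros((n, n))
            for y in range(h):
                for x in range(w):
                    i = y * w + x
                    for dy, dx in ((0, 1), (1, 0)):            # 4-connectivity: right and down
                        yy, xx = y + dy, x + dx
                        if yy < h and xx < w:
                            j = yy * w + xx
                            wgt = np.exp(-(patch[y, x] - patch[yy, xx]) ** 2 / (2 * sigma ** 2))
                            W[i, j] = W[j, i] = wgt
            D = np.diag(W.sum(axis=1))
            return D - W                                        # combinatorial graph Laplacian

        patch = np.random.rand(8, 8)
        L = grid_graph_laplacian(patch)
        lam, U = np.linalg.eigh(L)                              # graph Fourier basis
        s_hat = U.T @ patch.ravel()                             # graph Fourier transform of the patch
        s_hat[lam > np.median(lam)] = 0.0                       # crude low-pass filter
        smoothed = (U @ s_hat).reshape(patch.shape)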

    Rotation Invariant on Harris Interest Points for Exposing Image Region Duplication Forgery

    Nowadays, image forgery has become common because only image-editing software and a digital camera are required to counterfeit an image. Various fraud detection systems have been developed in accordance with the requirements of numerous applications and to address different types of image forgery. However, image fraud detection is a complicated process, given that it is necessary to identify the image processing tools used to counterfeit an image. Here, we describe recent developments in image fraud detection. Conventional techniques for detecting duplication forgeries have difficulty in detecting postprocessing falsification, such as grading and Joint Photographic Experts Group (JPEG) compression. This study proposes an algorithm that detects image falsification on the basis of Hessian features.
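
    The sketch below shows the general keypoint-matching approach to region duplication (copy-move) detection. The paper builds on Harris interest points and Hessian features with a rotation-invariant descriptor; here OpenCV's SIFT (also rotation invariant) stands in, and the ratio and distance thresholds, as well as the input file name, are illustrative assumptions.

        # Hedged sketch: keypoint self-matching for copy-move forgery detection.
        import cv2
        import numpy as np

        def find_duplicated_regions(gray, ratio=0.5, min_dist=40):
            sift = cv2.SIFT_create()
            kps, desc = sift.detectAndCompute(gray, None)
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            matches = matcher.knnMatch(desc, desc, k=3)         # k=1 is each point matched to itself
            pairs = []
            for m in matches:
                if len(m) < 3:
                    continue
                _, best, second = m                             # skip the trivial self-match
                if best.distance < ratio * second.distance:     # Lowe-style ratio test
                    p1 = np.array(kps[best.queryIdx].pt)
                    p2 = np.array(kps[best.trainIdx].pt)
                    if np.linalg.norm(p1 - p2) > min_dist:      # ignore near-coincident keypoints
                        pairs.append((tuple(p1), tuple(p2)))
            return pairs                                        # candidate duplicated point pairs

        img = cv2.imread("suspect.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
        print(find_duplicated_regions(img))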

    Multi-model SAR image despeckling

    A multi-model despeckling approach for SAR images is presented. The chi-squared test is used to segment the image into homogeneous and heterogeneous regions. Then, the heterogeneous regions are separated into subregions, each of which consists of points with the same edge orientation. Homogeneous regions and the separated subregions are despeckled according to their characteristics. Experimental results are reported.
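
    A simplified sketch of the first stage described above follows: classify local windows as homogeneous or heterogeneous and smooth only the homogeneous ones. The abstract does not spell out the exact chi-squared test, so the sketch uses a common approximation in which, for an L-look intensity image, L * sum((I_i/mean - 1)^2) over an n-pixel window is roughly chi-squared with n-1 degrees of freedom under homogeneity. The subsequent split of heterogeneous regions by edge orientation is not reproduced, and the window size, number of looks and significance level are illustrative.

        # Hedged sketch: chi-squared-style homogeneity test, then mean-filter
        # homogeneous pixels only (heterogeneous pixels are left untouched).
        import numpy as np
        from scipy.stats import chi2
        from scipy.ndimage import uniform_filter

        def despeckle(intensity, looks=4, win=7, alpha=0.01):
            out = intensity.copy()
            half = win // 2
            n = win * win
            thresh = chi2.ppf(1.0 - alpha, df=n - 1)
            local_mean = uniform_filter(intensity, size=win)
            for y in range(half, intensity.shape[0] - half):
                for x in range(half, intensity.shape[1] - half):
                    w = intensity[y - half:y + half + 1, x - half:x + half + 1]
                    stat = looks * np.sum((w / w.mean() - 1.0) ** 2)
                    if stat < thresh:                    # consistent with pure speckle
                        out[y, x] = local_mean[y, x]
            return out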

    Feature extraction using MPEG-CDVS and Deep Learning with application to robotic navigation and image classification

    The main contributions of this thesis are the evaluation of the MPEG Compact Descriptor for Visual Search in the context of indoor robotic navigation and the introduction of a new method for training Convolutional Neural Networks with applications to object classification. The choice of image descriptor in a visual navigation system is not straightforward. Visual descriptors must be distinctive enough to allow for correct localisation while still offering low matching complexity and a short descriptor size for real-time applications. MPEG Compact Descriptor for Visual Search (CDVS) is a low-complexity image descriptor that offers several levels of compromise between descriptor distinctiveness and size. In this work, we describe how these trade-offs can be used for efficient loop detection in a typical indoor environment. We first describe a probabilistic approach to loop detection based on the standard’s suggested similarity metric. We then evaluate the performance of the CDVS compression modes in terms of matching speed, feature extraction, and storage requirements, and compare them with the state-of-the-art SIFT descriptor for five different types of indoor floors. In the second part of this thesis, we focus on the new paradigm in machine learning and computer vision called Deep Learning. Under this paradigm, visual features are no longer extracted using fine-grained, highly engineered feature extractors, but rather using a Convolutional Neural Network (CNN) that extracts hierarchical features learned directly from data, at the cost of long training periods. In this context, we propose a method for speeding up the training of CNNs by exploiting the spatial scaling property of convolutions. This is done by first training a CNN with smaller kernel resolutions for a few epochs, then rescaling its kernels to the target’s original dimensions and continuing training at full resolution. We show that the overall training time of a target CNN architecture can be reduced by exploiting the spatial scaling property of convolutions during the early stages of learning. Moreover, by rescaling the kernels at different epochs, we identify a trade-off between total training time and maximum obtainable accuracy. Finally, we propose a method for choosing when to rescale kernels and evaluate our approach on recent architectures, showing savings in training times of nearly 20% while test set accuracy is preserved.
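
    The sketch below illustrates the kernel-rescaling idea in PyTorch: train a convolution layer with small kernels for a few epochs, then resize its learned weights to the target kernel size and continue training at full resolution. The layer sizes and the use of bilinear interpolation are illustrative assumptions, not the thesis's exact procedure.

        # Hedged sketch: rescale trained convolution kernels to a larger size.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def rescale_conv(small: nn.Conv2d, target_k: int) -> nn.Conv2d:
            # Copy a conv layer, resizing its kernels to target_k x target_k.
            big = nn.Conv2d(small.in_channels, small.out_channels,
                            kernel_size=target_k, padding=target_k // 2,
                            bias=small.bias is not None)
            with torch.no_grad():
                # Weights are (out_ch, in_ch, kH, kW); interpolate the spatial dims.
                big.weight.copy_(F.interpolate(small.weight, size=(target_k, target_k),
                                               mode="bilinear", align_corners=False))
                if small.bias is not None:
                    big.bias.copy_(small.bias)
            return big

        # Phase 1: train briefly with cheap 3x3 kernels (training loop omitted).
        conv_small = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        # Phase 2: rescale to the target 5x5 kernels and continue training.
        conv_full = rescale_conv(conv_small, target_k=5)
        print(conv_full.weight.shape)                           # torch.Size([16, 3, 5, 5])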

    Watermarking digital image and video data. A state-of-the-art overview


    The 1993 Space and Earth Science Data Compression Workshop

    The Earth Observing System Data and Information System (EOSDIS) is described in terms of its data volume, data rate, and data distribution requirements. Opportunities for data compression in EOSDIS are discussed.

    The quest for "diagnostically lossless" medical image compression using objective image quality measures

    Given the explosive growth of digital image data being generated, medical communities worldwide have recognized the need for increasingly efficient methods of storage, display and transmission of medical images. For this reason, lossy image compression is inevitable. Furthermore, it is absolutely essential to be able to determine the degree to which a medical image can be compressed before its “diagnostic quality” is compromised. This work aims to achieve “diagnostically lossless compression”, i.e., compression with no loss in visual quality or diagnostic accuracy. Recent research by Koff et al. has shown that at higher compression levels lossy JPEG is more effective than JPEG2000 in some cases of brain and abdominal CT images. We have investigated the effects of the sharp skull edges in CT neuro images on JPEG and JPEG2000 lossy compression, and we provide an explanation of why JPEG performs better than JPEG2000 for certain types of CT images. Another aspect of this study is primarily concerned with improved methods of assessing the diagnostic quality of compressed medical images. In this study, we have compared the performances of the structural similarity (SSIM) index, mean squared error (MSE), compression ratio and JPEG quality factor, based on data collected in a subjective experiment involving radiologists. Receiver operating characteristic (ROC) curve and Kolmogorov-Smirnov analyses indicate that compression ratio is not always a good indicator of visual quality; moreover, SSIM demonstrates the best performance. We have also shown that a weighted Youden index can provide SSIM and MSE thresholds for acceptable compression. Finally, we have proposed two approaches to modifying L2-based approximations so that they conform to Weber’s model of perception. We show that the imposition of a condition of perceptual invariance in greyscale space according to Weber’s model leads to the unique (unnormalized) measure with density function ρ(t) = 1/t. This result implies that the logarithmic L1 distance is the most natural “Weberized” image metric. We provide numerical implementations of the intensity-weighted approximation methods for natural and medical images.
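
    The sketch below computes the quality measures discussed above on a toy image pair: MSE, SSIM (via scikit-image), and the logarithmic L1 distance implied by Weber’s model with density ρ(t) = 1/t, i.e. a per-pixel distance |log x − log y|. The toy images and the epsilon guard against log(0) are illustrative.

        # Hedged sketch: MSE, SSIM and the "Weberized" logarithmic L1 distance.
        import numpy as np
        from skimage.metrics import structural_similarity as ssim

        def mse(x, y):
            return float(np.mean((x - y) ** 2))

        def log_l1(x, y, eps=1e-6):
            # Weberized metric: mean absolute difference of log intensities.
            return float(np.mean(np.abs(np.log(x + eps) - np.log(y + eps))))

        rng = np.random.default_rng(0)
        original = rng.random((128, 128))
        compressed = np.clip(original + 0.02 * rng.standard_normal(original.shape), 0, 1)

        print("MSE   :", mse(original, compressed))
        print("SSIM  :", ssim(original, compressed, data_range=1.0))
        print("log-L1:", log_l1(original, compressed))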