
    Characterizing the effect of spatial translations on JPEG-compressed images

The JPEG Still Image Data Compression Standard is one of the most pervasive digital image compression schemes in use today. JPEG is especially suited to digitized photograph archiving, and has lately grown popular in industry as a standard for handling images on networks and the Internet. The emergence of new applications and image file formats (FlashPix) that use JPEG compression is allowing developers and software users to create applications that retrieve, manipulate, and store images in databases located on the Internet. In most cases, these applications deal with JPEG images, or with a format that uses JPEG as its compression scheme, as is the case with the FlashPix format. Some of these new applications allow clients (web users) to enhance and manipulate downloaded images and then store them again in these image repository databases. Unfortunately, JPEG compression and decompression come at a cost: blocking artifacts always result when JPEG is used to compress images, and the artifacts become worse at lower quality levels of compression, or higher compression ratios. These artifacts are usually acceptable if a little care is taken in choosing the quality factor; however, additional degradation occurs whenever an image is translated, or shifted by a few pixels, in a horizontal or vertical direction. Translation can happen very easily if the image is cropped or otherwise moved. When such an image is recompressed, the additional error can cause substantial artifacts that would be absent if the image had not been moved. If the JPEG scheme could be modified to recognize and compensate for the crop or translation, the artifacts due to that translation would be eliminated. The proposed research will attempt to characterize the error that results from such translations.
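As a rough illustration of the effect described above, the following is a minimal sketch (not from the paper) of how the extra error from a shift-and-recompress cycle might be measured with Pillow and NumPy; the input file name, quality setting, and shift amount are illustrative assumptions.

```python
# Minimal sketch: measure the extra error introduced when a JPEG image is
# shifted by a few pixels and then recompressed. File name, quality level,
# and shift amount are illustrative assumptions, not values from the paper.
import io
import numpy as np
from PIL import Image

def jpeg_roundtrip(img, quality=75):
    """Compress and decompress an image with baseline JPEG."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("L")

def mse(a, b):
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.mean((a - b) ** 2))

original = Image.open("photo.png").convert("L")      # hypothetical source image
once = jpeg_roundtrip(original)

# Recompressing without any shift stays aligned to the 8x8 block grid.
twice_aligned = jpeg_roundtrip(once)

# Shifting by 3 pixels breaks the block alignment before recompression.
shift = 3
shifted = once.crop((shift, 0, once.width, once.height))
twice_shifted = jpeg_roundtrip(shifted)

print("MSE, aligned recompression:", mse(once, twice_aligned))
print("MSE, shifted recompression:", mse(shifted, twice_shifted))
```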

    Wavelet compression techniques for computer network measurements

The wavelet transform is a relatively recent signal analysis tool that has already been used successfully in image, video, and speech compression applications. This paper looks at the wavelet transform as a method of compressing computer network measurements produced by high-speed networks. Such networks produce a large amount of information over a long period of time, requiring compression for archiving. An important aspect of the compression is to maintain quality in the important features of the signals. In this paper, two known wavelet coefficient threshold selection techniques are examined and applied separately, along with an efficient method for storing the wavelet coefficients. Experimental results are obtained to compare the behaviour of the two threshold selection schemes on delay and data rate signals, using the mean square error (MSE), the peak signal-to-noise ratio (PSNR), and the file size of the compressed output.
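As a loose sketch of the general approach (not the paper's specific schemes), the following uses PyWavelets to threshold the detail coefficients of a 1-D measurement trace with the common universal-threshold rule and reports MSE and PSNR; the input file, wavelet, and decomposition level are assumptions.

```python
# Sketch: compress a 1-D network measurement signal by thresholding small
# wavelet coefficients, then evaluate the reconstruction quality.
import numpy as np
import pywt

signal = np.loadtxt("delay_trace.txt")          # hypothetical delay signal

coeffs = pywt.wavedec(signal, "db4", level=4)   # multilevel DWT (assumed settings)

# Universal (VisuShrink-style) threshold -- one common selection rule,
# not necessarily either of the two schemes examined in the paper.
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))

# Zero out small detail coefficients; the surviving sparse set is what
# would be stored in the compressed output.
coeffs_t = [coeffs[0]] + [pywt.threshold(c, thresh, mode="hard") for c in coeffs[1:]]

recon = pywt.waverec(coeffs_t, "db4")[: len(signal)]

mse = float(np.mean((signal - recon) ** 2))
peak = float(np.max(np.abs(signal)))
psnr = 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")
print(f"MSE = {mse:.4g}, PSNR = {psnr:.2f} dB")
```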

    Digital forensics formats: seeking a digital preservation storage format for web archiving

In this paper we discuss archival storage formats from the point of view of digital curation and preservation. Taking established approaches to data management as our jumping-off point, we selected seven format attributes which are core to the long-term accessibility of digital materials. These we have labeled core preservation attributes. These attributes are then used as evaluation criteria to compare file formats belonging to five common categories: formats for archiving selected content (e.g. tar, WARC), disk image formats that capture data for recovery or installation (partimage, dd raw image), these two types combined with a selected compression algorithm (e.g. tar+gzip), formats that combine packing and compression (e.g. 7-zip), and forensic file formats for data analysis in criminal investigations (e.g. AFF, the Advanced Forensic Format). We present a general discussion of the file format landscape in terms of the attributes we discuss, and make a direct comparison between the three most promising archival formats: tar, WARC, and AFF. We conclude by suggesting the next steps to take the research forward and to validate the observations we have made.
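For concreteness, here is a minimal sketch of the "packing plus compression" category mentioned above (tar combined with gzip), using Python's standard tarfile module; the archive and directory names are purely illustrative.

```python
# Sketch: combine a packing format (tar) with a compression algorithm (gzip),
# i.e. the tar+gzip category discussed above. Names are illustrative.
import tarfile

# Pack a harvested web-content directory into a gzip-compressed tarball.
with tarfile.open("crawl-2024-01.tar.gz", "w:gz") as archive:
    archive.add("harvested_content/", recursive=True)

# Later, list what the archive contains without extracting it.
with tarfile.open("crawl-2024-01.tar.gz", "r:gz") as archive:
    for member in archive.getmembers():
        print(member.name, member.size)
```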

    VXA: A Virtual Architecture for Durable Compressed Archives

Data compression algorithms change frequently, and obsolete decoders do not always run on new hardware and operating systems, threatening the long-term usability of content archived using those algorithms. Re-encoding content into new formats is cumbersome, and highly undesirable when lossy compression is involved. Processor architectures, in contrast, have remained comparatively stable over recent decades. VXA, an archival storage system designed around this observation, archives executable decoders along with the encoded content it stores. VXA decoders run in a specialized virtual machine that implements an OS-independent execution environment based on the standard x86 architecture. The VXA virtual machine strictly limits access to host system services, making decoders safe to run even if an archive contains malicious code. VXA's adoption of a "native" processor architecture instead of type-safe language technology allows reuse of existing "hand-optimized" decoders in C and assembly language, and permits decoders access to performance-enhancing architecture features such as vector processing instructions. The performance cost of VXA's virtualization is typically less than 15% compared with the same decoders running natively. The storage cost of archived decoders, typically 30-130 KB each, can be amortized across many archived files sharing the same compression method.
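A back-of-the-envelope sketch of that amortization claim follows; only the 30-130 KB decoder range comes from the abstract, while the average file size and file counts are assumptions.

```python
# Sketch: amortized storage overhead of bundling a decoder with an archive.
decoder_size_kb = 130          # upper end of the reported 30-130 KB range
avg_file_size_kb = 500         # hypothetical average archived file size

for n_files in (1, 10, 100, 10_000):
    overhead = decoder_size_kb / (n_files * avg_file_size_kb)
    print(f"{n_files:>6} files sharing one decoder -> overhead {overhead:.3%}")
```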

    Statistical lossless compression of space imagery and general data in a reconfigurable architecture


    Study of on-board compression of earth resources data

The current literature on image bandwidth compression was surveyed, and those methods relevant to the compression of multispectral imagery were selected. Typical satellite multispectral data were then analyzed statistically, and the results used to select a smaller set of candidate bandwidth compression techniques particularly relevant to earth resources data. These were compared using both theoretical analysis and simulation, under various criteria of optimality such as mean square error (MSE), signal-to-noise ratio, classification accuracy, and computational complexity. By concatenating some of the most promising techniques, three multispectral data compression systems were synthesized which appear well suited to current and future NASA earth resources applications. The performance of these three recommended systems was then examined in detail against all of the above criteria. Finally, merits and deficiencies are summarized and a number of recommendations for future NASA activities in data compression are proposed.
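Two of the optimality criteria named above, MSE and signal-to-noise ratio, can be computed per band as sketched below; the band arrays and file names are placeholders, not data from the study.

```python
# Sketch: evaluate a candidate compression system on a multispectral scene
# using MSE and SNR (two of the criteria mentioned above).
import numpy as np

def mse(original, reconstructed):
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def snr_db(original, reconstructed):
    signal_power = float(np.mean(original.astype(np.float64) ** 2))
    return 10.0 * np.log10(signal_power / mse(original, reconstructed))

bands = np.load("scene_original.npy")      # placeholder, shape (n_bands, H, W)
decoded = np.load("scene_decoded.npy")     # output of a candidate system

for i, (b, d) in enumerate(zip(bands, decoded)):
    print(f"band {i}: MSE = {mse(b, d):.2f}, SNR = {snr_db(b, d):.2f} dB")
```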

    Audiovisual preservation strategies, data models and value-chains

This is a report on preservation strategies, models and value-chains for digital file-based audiovisual content. The report includes: (a) current and emerging value-chains and business models for audiovisual preservation; (b) a comparison of preservation strategies for audiovisual content, including their strengths and weaknesses; and (c) a review of current preservation metadata models and the requirements for extending them to support audiovisual files.

    PDF/A standard for long term archiving

PDF/A is defined by ISO 19005-1 as a file format based on the PDF format. The standard provides a mechanism for representing electronic documents in a way that preserves their visual appearance over time, independent of the tools and systems used for creating or storing the files.

    Digital archives: essential elements in the workflow for endangered languages documentation


    A process for the accurate reconstruction of pre-filtered and compressed digital aerial images

The study of compression and decompression methods is crucial for the storage and/or transmission of the large volumes of image data required for archiving aerial photographs, satellite images and digital ortho-photos. Hence, the proposed work aims to increase the compression ratio (CR) of digital images in general. While the emphasis is on aerial images, the same principle may find application in other types of raster-based images. The process described here involves the application of pre-defined low-pass filters (i.e. kernels) prior to applying standard image compression encoders. Low-pass filters have the effect of increasing the dependence between neighbouring pixels, which can be exploited to improve the CR. However, for this pre-filtering process to be considered a compression instrument, it must allow the original image to be accurately restored from its filtered counterpart. The restoration process developed in this study is based on the theory of least squares and assumes knowledge of the filtered image and of the low-pass filter applied to the original image. The process is a variant of a previously described super-resolution algorithm; its application and adaptation to the filtering and restoration of images, in this case (but not exclusively) aerial imagery, over a number of scales and filter dimensions is the extension detailed here. An example of the proposed process is detailed in the ensuing sections. The example is also indicative of the degree of accuracy that can be attained upon applying this process to gray-scale images of different entropies, coded in either lossy or lossless mode.
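One way to realize the filter-then-restore idea is a Fourier-domain least-squares (Tikhonov-regularized) deconvolution with a known kernel, sketched below; this is an illustrative stand-in, not necessarily the exact algorithm developed in the study, and the kernel, regularization weight, and test image are assumptions.

```python
# Sketch: apply a known low-pass kernel before compression, then restore the
# image by regularized least-squares deconvolution in the Fourier domain.
import numpy as np

def pad_kernel(kernel, shape):
    """Zero-pad and centre the kernel so its FFT matches the image FFT."""
    padded = np.zeros(shape)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    # Roll so the kernel centre sits at (0, 0), avoiding a spatial shift.
    return np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))

def lowpass_filter(image, kernel):
    """Circular convolution with a known kernel (the pre-filtering step)."""
    H = np.fft.rfft2(pad_kernel(kernel, image.shape))
    return np.fft.irfft2(np.fft.rfft2(image) * H, s=image.shape)

def restore(filtered, kernel, lam=1e-3):
    """Least-squares restoration: argmin_x ||h*x - y||^2 + lam*||x||^2."""
    H = np.fft.rfft2(pad_kernel(kernel, filtered.shape))
    Y = np.fft.rfft2(filtered)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.fft.irfft2(X, s=filtered.shape)

kernel = np.ones((3, 3)) / 9.0                       # simple 3x3 box low-pass filter
image = np.random.default_rng(0).random((256, 256))  # stand-in for an aerial image

blurred = lowpass_filter(image, kernel)              # this would be fed to the encoder
restored = restore(blurred, kernel)

print("restoration MSE:", float(np.mean((image - restored) ** 2)))
```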