
    Joint universal lossy coding and identification of stationary mixing sources with general alphabets

    We consider the problem of joint universal variable-rate lossy coding and identification for parametric classes of stationary $\beta$-mixing sources with general (Polish) alphabets. Compression performance is measured in terms of Lagrangians, while identification performance is measured by the variational distance between the true source and the estimated source. Provided that the sources are mixing at a sufficiently fast rate and satisfy certain smoothness and Vapnik-Chervonenkis learnability conditions, it is shown that, for bounded metric distortions, there exist universal schemes for joint lossy compression and identification whose Lagrangian redundancies converge to zero as $\sqrt{V_n \log n / n}$ as the block length $n$ tends to infinity, where $V_n$ is the Vapnik-Chervonenkis dimension of a certain class of decision regions defined by the $n$-dimensional marginal distributions of the sources; furthermore, for each $n$, the decoder can identify the $n$-dimensional marginal of the active source up to a ball of radius $O(\sqrt{V_n \log n / n})$ in variational distance, eventually with probability one. The results are supplemented by several examples of parametric sources satisfying the regularity conditions.
    Comment: 16 pages, 1 figure; accepted to IEEE Transactions on Information Theory
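    For orientation, a hedged restatement of the performance criterion: assuming the standard per-letter Lagrangian formulation of variable-rate lossy coding (the exact normalization, the distortion $d$, the length function $\ell$, and the multiplier $\lambda$ below are assumptions, not taken from the abstract), the compression result says that

    \[
      L_\lambda(C_n, P) = \mathbb{E}_P\!\left[\tfrac{1}{n}\, d\bigl(X^n, \widehat{X}^n\bigr) + \tfrac{\lambda}{n}\, \ell\bigl(C_n(X^n)\bigr)\right],
      \qquad
      L_\lambda(C_n, P) - \inf_{C} L_\lambda(C, P) = O\!\left(\sqrt{\tfrac{V_n \log n}{n}}\right),
    \]

    uniformly over the parametric class, with the same $\sqrt{V_n \log n / n}$ radius governing how tightly the decoder pins down the $n$-dimensional marginal in variational distance.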

    Statistical mechanics of lossy data compression using a non-monotonic perceptron

    The performance of a lossy data compression scheme for uniformly biased Boolean messages is investigated via methods of statistical mechanics. Inspired by a formal similarity to the storage capacity problem in neural-network research, we utilize a perceptron whose transfer function is appropriately designed to compress and decode the messages. Employing the replica method, we analytically show that our scheme achieves the optimal performance known in the framework of lossy compression in most cases as the code length tends to infinity. The validity of the obtained results is confirmed numerically.
    Comment: 9 pages, 5 figures, Physical Review
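    A minimal sketch of the decoding side of such a scheme, assuming the commonly cited non-monotonic transfer function f_k(u) = +1 for |u| <= k and -1 otherwise; the band parameter k, the sign convention, and all names below are illustrative assumptions rather than the paper's exact construction.

        import numpy as np

        def nonmonotonic_transfer(u, k):
            # Non-monotonic output: +1 inside the band |u| <= k, -1 outside (assumed form).
            return np.where(np.abs(u) <= k, 1, -1)

        def decode(codeword, random_vectors, k):
            # Each reproduced bit is the perceptron output for one fixed random vector.
            n = codeword.size
            fields = random_vectors @ codeword / np.sqrt(n)
            return nonmonotonic_transfer(fields, k)

        rng = np.random.default_rng(0)
        n, m = 50, 100                      # codeword length n < message length m, i.e. rate n/m
        x = rng.standard_normal((m, n))     # fixed random vectors shared by encoder and decoder
        s = rng.choice([-1, 1], size=n)     # a candidate codeword; the encoder searches over these
        reproduced = decode(s, x, k=0.8)

    The encoder would search for the codeword s whose reproduction is closest to the biased Boolean message; the replica analysis in the paper concerns the distortion achievable by that search in the large-system limit.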

    Efficient LDPC Codes over GF(q) for Lossy Data Compression

    In this paper we consider the lossy compression of a binary symmetric source. We present a scheme that provides a low-complexity lossy compressor with near-optimal empirical performance. The proposed scheme is based on b-reduced ultra-sparse LDPC codes over GF(q). Encoding is performed by the Reinforced Belief Propagation algorithm, a variant of Belief Propagation. The computational complexity at the encoder is O(⟨d⟩·n·q·log q), where ⟨d⟩ is the average degree of the check nodes. For our code ensemble, decoding can be performed iteratively by following the inverse steps of the leaf removal algorithm; for a sparse parity-check matrix the number of operations needed is O(n).
    Comment: 5 pages, 3 figures
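    To make the encoding step more concrete, here is a minimal binary (GF(2)) sketch of the reinforcement idea behind Reinforced Belief Propagation: ordinary BP message passing in which each variable's prior is gradually nudged toward its own current marginal, so the iteration settles on a single assignment. The GF(q) message representation, the reinforcement schedule, and the parity-check construction used in the paper are not reproduced here; gamma, iters, and all names are illustrative assumptions.

        import numpy as np

        def reinforced_bp_gf2(H, llr_channel, gamma=0.05, iters=50):
            # H: (m, n) binary parity-check matrix; llr_channel: length-n prior LLRs.
            m, n = H.shape
            edges = np.argwhere(H == 1)          # (check, variable) index pairs
            msg_cv = np.zeros(len(edges))        # check-to-variable messages (LLRs)
            msg_vc = np.zeros(len(edges))        # variable-to-check messages (LLRs)
            prior = llr_channel.astype(float).copy()
            for _ in range(iters):
                # Check-node update (tanh rule over the other incoming edges).
                for e, (c, v) in enumerate(edges):
                    others = [i for i, (ci, vi) in enumerate(edges) if ci == c and vi != v]
                    prod = np.prod(np.tanh(msg_vc[others] / 2.0))
                    msg_cv[e] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
                # Per-variable aggregate of incoming check messages.
                agg = np.zeros(n)
                for e, (c, v) in enumerate(edges):
                    agg[v] += msg_cv[e]
                # Reinforcement: push each prior toward the current marginal.
                prior += gamma * (prior + agg)
                # Variable-node update with the reinforced prior.
                for e, (c, v) in enumerate(edges):
                    msg_vc[e] = prior[v] + agg[v] - msg_cv[e]
            return ((prior + agg) < 0).astype(int)   # hard decisions

    The q·log q factor in the quoted encoder complexity comes from carrying out the check-node convolutions over GF(q) with transform methods, which this binary toy version does not need.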

    JPEG2000 Image Compression on Solar EUV Images

    For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, for example, the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission with a limited telemetry rate, which makes efficient onboard data compression a necessity for achieving the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (Joint Photographic Experts Group 2000) image compression at different bit rates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disk and an off-limb coronal-loop oscillation time series observed by AIA/SDO.
    Comment: 25 pages, published in Solar Physics
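    As an illustration of the two quality metrics compared in the paper, a short sketch using scikit-image to compute PSNR and mean SSIM between an original frame and a degraded copy; the synthetic arrays, the data_range choice, and the name compression_quality are assumptions for demonstration only, not the paper's actual pipeline (which works on real AIA frames and a JPEG2000 codec).

        import numpy as np
        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        def compression_quality(original, compressed):
            # Returns (PSNR in dB, mean SSIM) for two images on the same intensity scale.
            data_range = float(original.max() - original.min())
            psnr = peak_signal_noise_ratio(original, compressed, data_range=data_range)
            mssim = structural_similarity(original, compressed, data_range=data_range)
            return psnr, mssim

        # Synthetic stand-in for an AIA frame and a lossily compressed copy.
        rng = np.random.default_rng(1)
        img = rng.random((512, 512))
        degraded = np.clip(img + 0.01 * rng.standard_normal(img.shape), 0.0, 1.0)
        print(compression_quality(img, degraded))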