4 research outputs found

    Results on the optimal memory-assisted universal compression performance for mixture sources

    Abstract—In this paper, we consider the compression of a sequence from a mixture of K parametric sources. Each parametric source is represented by a d-dimensional parameter vector drawn from Jeffreys' prior. The output of the mixture source is a sequence of length n whose parameter is chosen uniformly at random from one of the K source parameter vectors. We are interested in the scenario in which the encoder and the decoder share common side information of T sequences generated independently by the mixture source, which we refer to as the memory-assisted universal compression problem. We derive the minimum average redundancy of the memory-assisted universal compression of a new random sequence from the mixture source and prove that when, for some ε > 0, the side information provided by the previous sequences results in a significant improvement over universal compression without side information, which is a function of n, T, and d. On the other hand, as K grows, the impact of the side information becomes negligible. Specifically, when, for some ε > 0, optimal memory-assisted universal compression almost surely offers negligible improvement over universal compression without side information.
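    The mixture-source model in the abstract can be sketched in a few lines. This is an illustrative toy (not from the paper) for the binary case d = 1, where Jeffreys' prior for a Bernoulli parameter is Beta(1/2, 1/2); the names K, n, T, thetas follow the abstract's notation.

    ```python
    import random

    # Toy binary mixture source: K Bernoulli parameters drawn from
    # Jeffreys' prior, which for a Bernoulli source is Beta(1/2, 1/2).
    K, n, T = 4, 32, 5
    random.seed(0)

    # The K source parameter vectors (here scalars, since d = 1).
    thetas = [random.betavariate(0.5, 0.5) for _ in range(K)]

    def draw_sequence(length):
        """Pick one of the K parameters uniformly at random and
        emit a Bernoulli sequence of the given length."""
        theta = random.choice(thetas)
        return [1 if random.random() < theta else 0 for _ in range(length)]

    # Side information: T sequences shared by encoder and decoder.
    memory = [draw_sequence(n) for _ in range(T)]
    # The new length-n sequence to be compressed with that memory.
    x = draw_sequence(n)
    ```

    A memory-assisted code would estimate the K parameters from the shared sequences in `memory` before encoding `x`, which is the source of the redundancy savings the paper quantifies.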


    Network compression via network memory: fundamental performance limits

    The amount of information that is churned out daily around the world is staggering, and hence, future technological advancements are contingent upon the development of scalable acquisition, inference, and communication mechanisms for this massive data. This Ph.D. dissertation draws upon mathematical tools from information theory and statistics to understand the fundamental performance limits of universal compression of this massive data at the packet level, just above layer 3 of the network, when the intermediate network nodes are enabled with the capability of memorizing the previous traffic. Universality of compression imposes an inevitable redundancy (overhead) on the compression performance of universal codes, which is due to the learning of the unknown source statistics. In this work, the previous asymptotic results about the redundancy of universal compression are generalized to the finite-length regime, which is applicable to small network packets. Further, network compression via memory is proposed as a solution for the compression of relatively small network packets whenever the network nodes (i.e., the encoder and the decoder) are equipped with memory and have access to massive amounts of previous communication. In a nutshell, network compression via memory learns the patterns and statistics of the packet payloads and uses them for compression and reduction of the traffic. At the cost of increased computational overhead in the network nodes, network compression via memory significantly reduces the transmission cost in the network. This leads to a large performance improvement, as the cost of transmitting one bit is by far greater than the cost of processing it.
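    The memory idea described above can be illustrated with zlib's preset-dictionary feature: both encoder and decoder preload the same previously seen traffic, so a new packet can reference it. This is a minimal sketch, not the dissertation's scheme; the `memory` and `packet` byte strings are made-up stand-ins for shared prior traffic and a new payload.

    ```python
    import zlib

    # Hypothetical shared memory: prior traffic known to both encoder
    # and decoder, used here as a zlib preset dictionary.
    memory = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 20
    packet = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\nAccept: text/html\r\n"

    # Baseline: universal compression of the packet with no side information.
    plain = zlib.compress(packet)

    # Memory-assisted: the encoder seeds zlib with the shared dictionary.
    co = zlib.compressobj(zdict=memory)
    assisted = co.compress(packet) + co.flush()

    # The decoder must preload the same dictionary to decompress.
    do = zlib.decompressobj(zdict=memory)
    restored = do.decompress(assisted) + do.flush()
    assert restored == packet
    ```

    Because most of the packet already appears in the shared memory, the assisted stream is shorter than the stand-alone one, mirroring the redundancy reduction that memorizing previous traffic buys.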