6 research outputs found

    A Strong Data Processing Inequality for Thinning Poisson Processes and Some Applications

    This paper derives a simple strong data processing inequality (DPI) for Poisson processes: after a Poisson process is passed through p-thinning (in which each arrival is retained with probability p and erased otherwise, independently of the other points), the mutual information between the Poisson process and any other random variable is reduced to no more than p times its original value. This strong DPI is applied to prove tight converse bounds in several problems: a hypothesis test with communication constraints, a mutual information game, and a CEO problem.
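The thinning DPI can be checked numerically in a minimal special case (a sketch with arbitrary parameters, not the paper's proof): take the interval count X ~ Poisson(λ), whose p-thinning given X = n is Binomial(n, p), and choose the correlated variable U = X itself, so the bound specializes to I(X; X_p) ≤ p·H(X).

```python
import math

def pois_pmf(k, lam):
    """Poisson pmf via log-space to avoid overflow for large k."""
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k) if k <= n else 0.0

def entropy(pmf):
    """Entropy in nats of a discrete distribution given as a list."""
    return -sum(q * math.log(q) for q in pmf if q > 0)

lam, p, KMAX = 4.0, 0.3, 80  # arbitrary illustrative parameters

px = [pois_pmf(k, lam) for k in range(KMAX)]
H_X = entropy(px)

# Thinned count Y | X=n ~ Binomial(n, p); marginally Y ~ Poisson(p*lam).
py = [sum(px[n] * binom_pmf(k, n, p) for n in range(KMAX)) for k in range(KMAX)]
H_Y = entropy(py)
H_Y_given_X = sum(px[n] * entropy([binom_pmf(k, n, p) for k in range(n + 1)])
                  for n in range(KMAX))
I_XY = H_Y - H_Y_given_X

assert I_XY <= p * H_X  # strong DPI: thinning contracts information by factor p
```

With these parameters the inequality holds with a comfortable margin; the assertion is exactly the theorem's statement restricted to counts on a fixed interval.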

    Address-Event Variable-Length Compression for Time-Encoded Data

    Time-encoded signals, such as social network update logs and spiking traces in neuromorphic processors, are defined by multiple traces carrying information in the timing of events, or spikes. When time-encoded data is processed at a site remote from the location where it is produced, the occurrence of events needs to be encoded and transmitted in a timely fashion. The standard Address-Event Representation (AER) protocol for neuromorphic chips encodes the indices of the "spiking" traces in the payload of a packet produced at the same time the events are recorded, hence implicitly encoding the events' timing in the timing of the packet. This paper investigates the potential bandwidth saving that can be obtained by carrying out variable-length compression of the packets' payloads. Compression leverages both intra-trace and inter-trace correlations over time that are typical in applications such as social networks and neuromorphic computing. The approach is based on discrete-time Hawkes processes and entropy coding with conditional codebooks. Results from an experiment based on a real-world retweet dataset are also provided.
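The gain from conditional codebooks can be illustrated with a toy stand-in for the paper's model (a first-order Markov trace rather than a full discrete-time Hawkes process; all probabilities here are arbitrary): simulate a self-exciting binary trace and compare the marginal entropy per symbol with the entropy conditioned on the previous symbol.

```python
import math
import random

random.seed(0)

def bernoulli_entropy(q):
    """Entropy in bits of a Bernoulli(q) symbol."""
    if q in (0.0, 1.0):
        return 0.0
    return -(q * math.log2(q) + (1 - q) * math.log2(1 - q))

# Self-exciting trace: spiking probability is raised right after a spike.
T, p_base, p_excited = 200_000, 0.05, 0.5
x, prev = [], 0
for _ in range(T):
    s = 1 if random.random() < (p_excited if prev else p_base) else 0
    x.append(s)
    prev = s

# Marginal entropy vs. entropy conditioned on the previous symbol.
p1 = sum(x) / T
H_marginal = bernoulli_entropy(p1)

n_prev1 = sum(x[:-1])                                  # symbols preceded by a 1
n_11 = sum(a & b for a, b in zip(x, x[1:]))            # 1 -> 1 transitions
p1_given1 = n_11 / n_prev1
p1_given0 = (sum(x[1:]) - n_11) / (T - 1 - n_prev1)
H_cond = (n_prev1 / (T - 1)) * bernoulli_entropy(p1_given1) \
       + (1 - n_prev1 / (T - 1)) * bernoulli_entropy(p1_given0)

assert H_cond < H_marginal  # conditional codebooks need fewer bits per symbol
```

The gap between the two entropies is the per-symbol saving an entropy coder with a history-conditioned codebook can realize over a memoryless one.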

    Covering Point Patterns

    A source generates a point pattern consisting of a finite number of points in an interval. Based on a binary description of the point pattern, a reconstructor must produce a covering set that is guaranteed to contain the pattern. We study the optimal tradeoff (as the length of the interval tends to infinity) between the description length and the least average Lebesgue measure of the covering set. The tradeoff is established for point patterns that are generated by homogeneous and inhomogeneous Poisson processes. The homogeneous Poisson process is shown to be the most difficult to describe among all point patterns. We also study a Wyner-Ziv version of this problem, where some of the points in the pattern are revealed to the reconstructor but not to the encoder. We show that this scenario is as good as when they are revealed to both encoder and reconstructor. A connection between this problem and the queueing distortion is established via feedforward. Finally, we establish the aforementioned tradeoff when the covering set is allowed to miss some of the points in the pattern at a certain cost.
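A simple achievable covering scheme (a sketch, not the optimal tradeoff from the paper; the rate λ and bin widths are arbitrary) makes the tension concrete: partition the interval into bins of width Δ, entropy-code one occupancy flag per bin, and declare the occupied bins as the covering set. Finer bins shrink the covering measure but increase the description rate.

```python
import math

def bin_entropy_bits(q):
    """Entropy in bits of a Bernoulli(q) occupancy flag."""
    if q in (0.0, 1.0):
        return 0.0
    return -(q * math.log2(q) + (1 - q) * math.log2(1 - q))

lam = 1.0  # Poisson rate: expected points per unit length (arbitrary choice)

tradeoff = []
for delta in (0.5, 0.2, 0.1, 0.05, 0.02):  # candidate bin widths
    q = 1.0 - math.exp(-lam * delta)    # P(a bin of width delta holds a point)
    covered = q                         # expected fraction of the line covered
    rate = bin_entropy_bits(q) / delta  # bits per unit length to flag the bins
    tradeoff.append((delta, covered, rate))

# Shrinking the bins reduces the covering measure but raises the bit rate.
assert all(a[1] > b[1] and a[2] < b[2] for a, b in zip(tradeoff, tradeoff[1:]))
```

This scheme is only an upper bound on the optimal description length; the paper characterizes the exact asymptotic tradeoff, for which the homogeneous Poisson pattern is the worst case.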

    2011 IEEE International Symposium on Information Theory Proceedings Covering Point Patterns

    Abstract—A source generates a “point pattern” consisting of a finite number of points in an interval. Based on a binary description of the point pattern, a reconstructor must produce a “covering set” that is guaranteed to contain the pattern. We study the optimal trade-off (as the length of the interval tends to infinity) between the description length and the least average Lebesgue measure of the covering set. The trade-off is established for point patterns that are generated by a Poisson process. Such point patterns are shown to be the most difficult to describe. We also study a Wyner-Ziv version of this problem, where some of the points in the pattern are known to the reconstructor but not to the encoder. We show that this scenario is as good as when they are known to both encoder and reconstructor.