    Fundamental limits of distributed tracking

    Consider the following communication scenario. An n-dimensional source with memory is observed by K isolated encoders via parallel channels, who causally compress their observations and transmit them to the decoder via noiseless rate-constrained links. At each time instant, the decoder receives K new codewords from the observers, combines them with the past received codewords, and produces a minimum-distortion estimate of the latest block of n source symbols. This scenario extends the classical one-shot CEO problem to multiple rounds of communication with communicators maintaining memory of the past. We prove a coding theorem showing that the minimum asymptotically (as n → ∞) achievable sum rate required to achieve a target distortion is equal to the directed mutual information from the observers to the decoder, minimized subject to the distortion constraint and the separate encoding constraint. For the Gauss-Markov source observed via K parallel AWGN channels, we solve that minimal directed mutual information problem, thereby establishing the minimum asymptotically achievable sum rate. Finally, we explicitly bound the rate loss due to the lack of communication among the observers; that bound is attained with equality in the case of identical observation channels. The general coding theorem is proved via a new nonasymptotic bound that uses stochastic likelihood coders and whose asymptotic analysis yields an extension of the Berger-Tung inner bound to the causal setting. The analysis of the Gaussian case is facilitated by reversing the channels of the observers.
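
    A rough illustrative sketch of the main characterization, written in this summary's own notation rather than the paper's (X^n the source block, Y_k^n the k-th observer's channel output, \hat{X}^n the decoder's estimate, D the target distortion):

        R_{\mathrm{sum}}(D) \;=\; \lim_{n \to \infty} \frac{1}{n} \, \min \, I\!\left(Y_1^n, \ldots, Y_K^n \to \hat{X}^n\right),

    where the minimum is taken over causal encoders that each see only their own observation sequence (the separate encoding constraint) and over decoders whose estimates satisfy the distortion constraint \mathbb{E}\,[d(X^n, \hat{X}^n)] \le D.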

    Coherent Asset Allocation and Diversification in the Presence of Stress Events

    We propose a method to integrate frequentist and subjective probabilities in order to obtain a coherent asset allocation in the presence of stress events. Our working assumption is that in normal market conditions asset returns are sufficiently regular for frequentist statistical techniques to identify their joint distribution, once the outliers have been removed from the data set. We also argue, however, that the exceptional events facing the portfolio manager at any point in time are specific to each individual crisis, and that past regularities cannot be relied upon. We therefore deal with exceptional returns by eliciting subjective probabilities, and by employing Bayesian-net technology to ensure logical consistency. The portfolio allocation is then obtained by utility maximization over the combined (normal plus exceptional) distribution of returns. We show the procedure in detail in a stylized case.

    Keywords: stress tests, asset allocation, Bayesian networks
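
    The following is a minimal, self-contained sketch of the allocation step described above: returns are drawn from a mixture of a frequentist "normal-regime" distribution and a subjectively specified "stress" distribution, and expected utility is maximized over the portfolio weights. All numbers, the multivariate-normal regime models, the stress probability, and the CRRA utility are illustrative assumptions, not the authors' calibration (which elicits the stress probabilities via a Bayesian net).

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n_assets = 3

        # Normal-regime return distribution, as if fitted to cleaned historical data
        # (all numbers are illustrative assumptions).
        mu_normal = np.array([0.06, 0.04, 0.02])
        cov_normal = np.diag([0.04, 0.02, 0.01])

        # Stress-regime return distribution, as if elicited subjectively (assumed numbers).
        mu_stress = np.array([-0.30, -0.15, 0.01])
        cov_stress = np.diag([0.09, 0.05, 0.01])

        p_stress = 0.05  # subjective probability of a stress event (assumed)

        # Monte Carlo sample from the combined (normal plus exceptional) distribution.
        n_samples = 50_000
        is_stress = rng.random(n_samples) < p_stress
        normal_draws = rng.multivariate_normal(mu_normal, cov_normal, n_samples)
        stress_draws = rng.multivariate_normal(mu_stress, cov_stress, n_samples)
        returns = np.where(is_stress[:, None], stress_draws, normal_draws)

        def neg_expected_utility(w, gamma=4.0):
            # Negative expected CRRA utility of terminal wealth 1 + w.r (gamma assumed).
            wealth = np.clip(1.0 + returns @ w, 1e-6, None)
            util = (wealth ** (1.0 - gamma) - 1.0) / (1.0 - gamma)
            return -util.mean()

        # Long-only, fully invested portfolio.
        cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
        bounds = [(0.0, 1.0)] * n_assets
        w0 = np.full(n_assets, 1.0 / n_assets)
        res = minimize(neg_expected_utility, w0, bounds=bounds, constraints=cons)
        print("optimal weights:", np.round(res.x, 3))

    Relative to an optimization over the normal-regime distribution alone, adding the stress component typically tilts the weights toward the assets expected to hold up in the elicited crisis scenario.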

    Rate-Exponent Region for a Class of Distributed Hypothesis Testing Against Conditional Independence Problems

    We study a class of K-encoder hypothesis testing against conditional independence problems. Under the criterion that stipulates minimization of the Type II error subject to a (constant) upper bound ε on the Type I error, we characterize the set of achievable encoding rates and exponents for both discrete memoryless (DM) and memoryless vector Gaussian settings. For the DM setting, we provide a converse proof and show that it is achieved using the Quantize-Bin-Test scheme of Rahman and Wagner. For the memoryless vector Gaussian setting, we develop a tight outer bound by means of a technique that relies on the de Bruijn identity and the properties of Fisher information. In particular, the result shows that for memoryless vector Gaussian sources the rate-exponent region is exhausted using the Quantize-Bin-Test scheme with Gaussian test channels, and that there is no loss in performance caused by restricting the sensors' encoders not to employ time sharing. Furthermore, we also study a variant of the problem in which the source, not necessarily Gaussian, has finite differential entropy and the sensors' observation noises under the null hypothesis are Gaussian. For this model, our main result is an upper bound on the exponent-rate function. The bound is shown to mirror a corresponding explicit lower bound, except that the lower bound involves the source power (variance) whereas the upper bound has the source entropy power. Part of the utility of the established bound is for investigating asymptotic exponents and rates, and the losses incurred by distributed detection, as a function of the number of sensors.

    Comment: Submitted for publication to the IEEE Transactions on Information Theory. arXiv admin note: substantial text overlap with arXiv:1904.03028, arXiv:1811.0393
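
    A rough sketch of the performance criterion, in this summary's own notation rather than the paper's: with encoding rates (R_1, \ldots, R_K) and the Type I error constrained to at most \epsilon, the quantity of interest is the Type II error exponent

        E(R_1, \ldots, R_K, \epsilon) \;=\; \liminf_{n \to \infty} \; -\frac{1}{n} \log \beta_n(R_1, \ldots, R_K, \epsilon),

    where \beta_n is the smallest Type II error probability attainable at block length n. The entropy power appearing in the upper bound mentioned above is the standard quantity N(X) = \frac{1}{2\pi e} e^{2 h(X)}, which equals the variance when X is Gaussian and is strictly smaller otherwise.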