9 research outputs found

    Zero-Delay Rate Distortion via Filtering for Vector-Valued Gaussian Sources

    We deal with zero-delay source coding of a vector-valued Gauss-Markov source subject to a mean-squared error (MSE) fidelity criterion, characterized by the operational zero-delay vector-valued Gaussian rate distortion function (RDF). We address this problem by considering the nonanticipative RDF (NRDF), which is a lower bound to the causal optimal performance theoretically attainable (OPTA) function and to the operational zero-delay RDF. We recall the realization that corresponds to the optimal "test channel" of the Gaussian NRDF when considering a vector Gauss-Markov source subject to an MSE distortion over a finite time horizon. We then introduce sufficient conditions for the existence of a solution to this problem over the infinite time horizon. For the asymptotic regime, we use the asymptotic characterization of the Gaussian NRDF to provide a new equivalent realization scheme with feedback, characterized by a resource allocation (reverse-waterfilling) problem across the dimensions of the vector source. We leverage the new realization to derive a predictive coding scheme via lattice quantization with subtractive dither and joint memoryless entropy coding. This coding scheme yields an upper bound to the operational zero-delay vector-valued Gaussian RDF. With scalar quantization, for r active dimensions of the vector Gauss-Markov source, the gap between the obtained lower bound and the theoretical upper bound is at most 0.254r + 1 bits/vector. We further show that with vector quantization and infinite-dimensional Gauss-Markov sources, this gap becomes negligible, i.e., the Gaussian NRDF approximates the operational zero-delay Gaussian RDF. We also extend our results to vector-valued Gaussian sources with any finite memory under mild conditions. Our theoretical framework is demonstrated with illustrative numerical experiments. Comment: 32 pages, 9 figures, published in IEEE Journal of Selected Topics in Signal Processing.
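
    The resource-allocation step in the asymptotic realization is a reverse-waterfilling problem across the dimensions of the vector source. The following is a minimal Python sketch of the classical static Gaussian reverse-waterfilling allocation it resembles; the per-dimension variances and the distortion budget are illustrative, and this is not the paper's exact NRDF characterization (which additionally involves the source dynamics).

        import numpy as np

        def reverse_waterfilling(sigma2, D, tol=1e-10):
            """Allocate per-dimension MSE d_i = min(theta, sigma2_i) with sum(d_i) = D.

            Dimensions whose variance exceeds the water level theta are "active"
            and receive positive rate 0.5*log2(sigma2_i / d_i) bits."""
            sigma2 = np.asarray(sigma2, dtype=float)
            lo, hi = 0.0, max(float(D), float(sigma2.max()))
            while hi - lo > tol:
                theta = 0.5 * (lo + hi)
                if np.minimum(theta, sigma2).sum() > D:
                    hi = theta
                else:
                    lo = theta
            d = np.minimum(0.5 * (lo + hi), sigma2)
            rates = 0.5 * np.log2(np.maximum(sigma2 / d, 1.0))
            return d, rates

        # Illustrative 4-dimensional source: the weakest dimension falls below the
        # water level and is allocated zero rate.
        d, rates = reverse_waterfilling([4.0, 2.0, 0.5, 0.1], D=1.5)
        print(d, rates, rates.sum())

    The 0.254r + 1 bits/vector gap quoted above is consistent with the familiar space-filling loss of a dithered scalar quantizer, 0.5*log2(2*pi*e/12) ≈ 0.254 bits per active dimension, plus roughly one bit for the memoryless entropy coder.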

    Optimal Estimation via Nonanticipative Rate Distortion Function and Applications to Time-Varying Gauss-Markov Processes

    In this paper, we develop finite-time horizon causal filters using nonanticipative rate distortion theory. We apply the developed theory to design optimal filters for time-varying multidimensional Gauss-Markov processes, subject to a mean square error fidelity constraint. We show that such filters are equivalent to the design of an optimal {encoder, channel, decoder}, which ensures that the error satisfies a fidelity constraint. Moreover, we derive a universal lower bound on the mean square error of any estimator of time-varying multidimensional Gauss-Markov processes in terms of conditional mutual information. Unlike classical Kalman filters, the filter developed here is characterized by a reverse-waterfilling algorithm, which ensures that the fidelity constraint is satisfied. The theoretical results are demonstrated via illustrative examples. Comment: 35 pages, 6 figures, submitted for publication in SIAM Journal on Control and Optimization (SICON).
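
    As a concrete illustration of the {encoder, channel, decoder} view of the filter, the sketch below simulates a scalar time-varying Gauss-Markov process and applies, at each step, the standard forward Gaussian test channel to the innovation so that the per-step MSE meets a fidelity target d. This is only a scalar toy version under assumed parameters (a_t, q, d), not the multidimensional construction of the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        T = 2000
        a = 0.9 + 0.05 * np.sin(np.linspace(0.0, 3.0, T))   # illustrative time-varying gains
        q = 1.0                                              # variance of the driving noise w_t
        d = 0.5                                              # per-step MSE fidelity target

        x, xhat = 0.0, 0.0      # true state and the decoder/filter estimate
        err_var = 0.0           # E[(x_t - xhat_t)^2] after the previous step (x_0 = 0 is known)
        rates, errs = [], []

        for t in range(T):
            x = a[t] * x + rng.normal(scale=np.sqrt(q))      # x_{t+1} = a_t x_t + w_t
            lam = a[t] ** 2 * err_var + q                    # innovation variance before coding
            dt = min(d, lam)                                  # cannot beat the prediction itself
            innov = x - a[t] * xhat
            # Forward Gaussian test channel on the innovation: achieves MSE = dt and
            # mutual information 0.5*log2(lam/dt) when the innovation is N(0, lam).
            scale = 1.0 - dt / lam
            y = scale * innov + rng.normal(scale=np.sqrt(dt * scale))
            xhat = a[t] * xhat + y
            err_var = dt
            rates.append(0.5 * np.log2(lam / dt))
            errs.append((x - xhat) ** 2)

        print("empirical MSE:", np.mean(errs), " target d:", d)
        print("average rate (bits/step):", np.mean(rates))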

    Bounds on the Sum-Rate of MIMO Causal Source Coding Systems with Memory under Spatio-Temporal Distortion Constraints

    In this paper, we derive lower and upper bounds on the optimal performance theoretically attainable (OPTA) of a two-user multi-input multi-output (MIMO) causal encoding and causal decoding problem. Each user's source model is described by a multidimensional Markov source driven by an additive i.i.d. noise process, subject to three classes of spatio-temporal distortion constraints. To characterize the lower bounds, we use state augmentation techniques and a data processing theorem, which recovers a variant of the rate distortion function known in the literature as nonanticipatory ϵ-entropy, sequential RDF, or nonanticipative RDF. We derive lower bound characterizations for a system driven by an i.i.d. Gaussian noise process, which we solve using a semidefinite programming (SDP) algorithm for all three classes of distortion constraints. We obtain closed-form solutions when the system's noise is possibly non-Gaussian for both users and when only one of the users is described by a source model driven by a Gaussian noise process. To obtain the upper bounds, we use the best linear forward test-channel realization, which corresponds to the optimal test-channel realization when the system is driven by a Gaussian noise process, and apply a sequential causal DPCM-based scheme with a feedback loop followed by a scaled entropy-coded dithered quantization (ECDQ) scheme, leading to upper bounds with certain performance guarantees. We then use the linear forward test channel as a benchmark to obtain upper bounds on the OPTA when the system is driven by an additive i.i.d. non-Gaussian noise process. We support our framework with various simulation studies.
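
    The achievability side builds on entropy-coded dithered quantization. Below is a minimal sketch of the subtractive-dither property that a scaled ECDQ scheme relies on: the reconstruction error is uniform and statistically independent of the input, regardless of the source distribution. The step size and source are illustrative; the DPCM feedback loop and scaling of the paper are omitted.

        import numpy as np

        rng = np.random.default_rng(1)

        def dithered_quantize(x, step, dither):
            """Uniform quantization with subtractive dither (shared by encoder and decoder)."""
            q = step * np.round((x + dither) / step)   # lattice point the encoder entropy-codes
            return q - dither                           # decoder subtracts the dither

        step = 0.5
        x = rng.normal(size=100_000)                                   # illustrative source
        dither = rng.uniform(-step / 2, step / 2, size=x.shape)
        err = dithered_quantize(x, step, dither) - x

        print("error variance:", err.var(), " (step^2/12 =", step**2 / 12, ")")
        print("correlation with the input:", np.corrcoef(err, x)[0, 1])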

    Rate-Cost Tradeoffs in Control

    Consider a control problem with a communication channel connecting the observer of a linear stochastic system to the controller. The goal of the controller is to minimize a quadratic cost function in the state variables and control signal, known as the linear quadratic regulator (LQR). We study the fundamental tradeoff between the communication rate r bits/sec and the expected cost b. We obtain a lower bound on a certain rate-cost function, which quantifies the minimum directed mutual information between the channel input and output that is compatible with a target LQR cost. The rate-cost function has operational significance in multiple scenarios of interest: among others, it allows us to lower-bound the minimum communication rate for fixed- and variable-length quantization, and for control over noisy channels. We derive an explicit lower bound to the rate-cost function, which applies to vector, non-Gaussian, and partially observed systems, thereby extending and generalizing an earlier explicit expression for the scalar Gaussian system due to Tatikonda et al. [2]. The bound applies as long as the differential entropy of the system noise is not −∞. It can be closely approached by a simple lattice quantization scheme that only quantizes the innovation, that is, the difference between the controller's belief about the current state and the true state. Via a separation principle between control and communication, similar results hold for causal lossy compression of additive noise Markov sources. Apart from standard dynamic programming arguments, our technical approach leverages the Shannon lower bound, develops new estimates for data compression with coding memory, and uses recent results on high-resolution variable-length vector quantization to prove that the new converse bounds are tight.
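
    A toy version of the achievability scheme described above, for an illustrative unstable scalar plant: only the innovation (the gap between the controller's belief and the true state) is quantized, here with a plain uniform quantizer rather than the paper's dithered lattice. All parameter values are assumptions made for the sketch.

        import numpy as np

        rng = np.random.default_rng(2)

        a, step, T = 2.0, 0.5, 50_000      # unstable plant gain, quantizer step, horizon
        x, xhat, cost = 0.0, 0.0, 0.0      # xhat is the controller's belief about the state

        for _ in range(T):
            innov = x - xhat                           # only the innovation is encoded
            xhat += step * np.round(innov / step)      # uniform ("lattice") quantizer
            u = -a * xhat                              # certainty-equivalent control
            cost += x * x                              # state cost (simplest LQR special case)
            x = a * x + u + rng.normal()               # plant: x_{t+1} = a x_t + u_t + w_t
            xhat = a * xhat + u                        # belief follows the same dynamics

        print("average cost E[x^2]:", cost / T)
        print("rate needed for any bounded cost: log2|a| =", np.log2(abs(a)), "bit/step")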

    Rate-cost tradeoffs in control

    Consider a distributed control problem with a communication channel connecting the observer of a linear stochastic system to the controller. The goal of the controller is to minimize a quadratic cost function; the most basic special case of this cost function is the mean-square deviation of the system state from the desired state. We study the fundamental tradeoff between the communication rate r bits/sec and the limsup of the expected cost b, and show a lower bound on the rate necessary to attain b. The bound applies as long as the system noise has a probability density function. If the target cost b is not too large, the bound can be closely approached by a simple lattice quantization scheme that only quantizes the innovation, that is, the difference between the controller's belief about the current state and the true state.
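
    To see the tradeoff numerically, one can sweep the quantizer step in the same toy innovation-quantization loop and record the empirical entropy of the transmitted index (a proxy for the rate r) against the achieved average cost b; coarser steps lower the rate and raise the cost. Again a sketch under assumed parameters, not the paper's scheme.

        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(3)

        def run(step, a=2.0, T=50_000):
            """Innovation-quantization control loop; returns (empirical rate, average cost)."""
            x, xhat, cost, counts = 0.0, 0.0, 0.0, Counter()
            for _ in range(T):
                idx = int(np.round((x - xhat) / step))   # index sent over the channel
                counts[idx] += 1
                xhat += step * idx
                u = -a * xhat
                cost += x * x
                x = a * x + u + rng.normal()
                xhat = a * xhat + u
            p = np.array(list(counts.values())) / T
            return -(p * np.log2(p)).sum(), cost / T     # bits/step, average cost

        for step in (2.0, 1.0, 0.5, 0.25):
            r, b = run(step)
            print(f"step={step:4.2f}  rate ~ {r:4.2f} bits/step  cost ~ {b:5.3f}")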

    Asymptotic Reverse-Waterfilling Characterization of Nonanticipative Rate Distortion Function of Vector-Valued Gauss-Markov Processes with MSE Distortion

    We analyze the asymptotic nonanticipative rate distortion function (NRDF) of vector-valued Gauss-Markov processes subject to a mean-squared error (MSE) distortion function. We derive a parametric characterization in terms of a reverse-waterfilling algorithm, which requires the solution of a matrix Riccati algebraic equation (RAE). Further, we develop an algorithm reminiscent of the classical reverse-waterfilling algorithm that provides an upper bound to the optimal solution of the reverse-waterfilling optimization problem and, in certain cases, operates at the NRDF. Moreover, using the reverse-waterfilling characterization, we derive the analytical solution of the NRDF for a simple two-dimensional parallel Gauss-Markov process. The efficacy of our proposed algorithm is demonstrated via an example.
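
    A minimal sketch of the reverse-waterfilling idea for a parallel (diagonal) Gauss-Markov process, where the matrix Riccati algebraic equation decouples into scalar fixed points lam_i = a_i^2 * min(theta, lam_i) + q_i. Following the abstract, the classical-style allocation below only upper-bounds the optimal solution; the parameters of the two-dimensional example are illustrative.

        import numpy as np

        def steady_state_var(a2, q, theta, iters=1000):
            """Fixed point of lam = a^2 * min(theta, lam) + q (scalar Riccati algebraic equation)."""
            lam = q
            for _ in range(iters):
                lam = a2 * min(theta, lam) + q
            return lam

        def nrdf_upper_bound(a, q, D, tol=1e-10):
            """Classical-style reverse-waterfilling over the components of a parallel
            Gauss-Markov source; any feasible allocation upper-bounds the asymptotic NRDF."""
            a, q = np.asarray(a, float), np.asarray(q, float)
            lo, hi = 0.0, float(D)
            while hi - lo > tol:
                theta = 0.5 * (lo + hi)
                lam = np.array([steady_state_var(ai**2, qi, theta) for ai, qi in zip(a, q)])
                if np.minimum(theta, lam).sum() > D:
                    hi = theta
                else:
                    lo = theta
            theta = 0.5 * (lo + hi)
            lam = np.array([steady_state_var(ai**2, qi, theta) for ai, qi in zip(a, q)])
            d = np.minimum(theta, lam)
            rate = 0.5 * np.log2(np.maximum(lam / d, 1.0)).sum()
            return rate, d

        # Two-dimensional parallel Gauss-Markov example with one stable and one unstable mode.
        rate, d = nrdf_upper_bound(a=[0.9, 1.2], q=[1.0, 1.0], D=1.0)
        print("rate upper bound (bits/sample):", rate, " per-component distortion:", d)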