
    Matrix-Monotonic Optimization for MIMO Systems

    For MIMO systems, the deployment of multiple antennas at both the transmitter and the receiver means that the design variables, e.g., precoders, equalizers, and training sequences, are usually matrices. It is well known that matrix operations are usually more complicated than their vector counterparts. To overcome the high complexity resulting from matrix variables, in this paper we investigate a class of elegant multi-objective optimization problems, namely matrix-monotonic optimization problems (MMOPs). In our work, various representative MIMO optimization problems are unified into a framework of matrix-monotonic optimization, which includes linear transceiver design, nonlinear transceiver design, training sequence design, radar waveform optimization, and the corresponding robust designs as special cases. Exploiting this framework, the optimal structures of the considered matrix variables are derived first. Based on these optimal structures, the matrix-variate optimization problems can be greatly simplified into problems with only vector variables. In particular, the dimension of the new vector variable equals the minimum of the number of rows and columns of the original matrix variable. Finally, we extend our work to more general cases with multiple matrix variables.
    Comment: 37 pages, 5 figures, IEEE Transactions on Signal Processing, final version
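    To make the "matrix problem collapses to a vector problem" claim concrete, here is a minimal sketch (not the paper's code; variable names and the example channel are assumptions) of the classical SVD-based structure for linear precoder design: the optimal precoder aligns with the channel's right singular vectors, so the remaining optimization is a power allocation over min(rows, columns) scalar subchannels, solved here by water-filling.

```python
# Minimal sketch: SVD-based precoder structure reduces a matrix
# optimization to a vector power-allocation problem (assumed example).
import numpy as np

def waterfill(gains, total_power):
    """Water-filling power allocation over subchannel eigen-gains."""
    gains = np.sort(gains)[::-1]
    for k in range(len(gains), 0, -1):
        # Candidate water level using the k strongest subchannels.
        mu = (total_power + np.sum(1.0 / gains[:k])) / k
        p = mu - 1.0 / gains[:k]
        if p[-1] > 0:  # all k allocations positive -> valid solution
            powers = np.zeros(len(gains))
            powers[:k] = p
            return powers
    return np.zeros(len(gains))

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 6)) + 1j * rng.normal(size=(4, 6))  # 4x6 channel

# Optimal structure: F = V_H @ diag(sqrt(p)); the effective design
# variable is the power vector p of dimension min(4, 6) = 4.
U, s, Vh = np.linalg.svd(H, full_matrices=False)
p = waterfill(s**2, total_power=10.0)
F = Vh.conj().T @ np.diag(np.sqrt(p))
rate = np.sum(np.log2(1.0 + p * s**2))  # unit-noise capacity
print(f"powers: {np.round(p, 3)}, rate: {rate:.3f} bits")
```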

    Bounds on inference

    Lower bounds on the average probability of error of estimating a hidden variable X given an observation of a correlated random variable Y, and Fano's inequality in particular, play a central role in information theory. In this paper, we present a lower bound on the average estimation error based on the marginal distribution of X and the principal inertias of the joint distribution matrix of X and Y. Furthermore, we discuss an information measure based on the sum of the largest principal inertias, called k-correlation, which generalizes maximal correlation. We show that k-correlation satisfies the data processing inequality and is convex in the conditional distribution of Y given X. Finally, we investigate how to answer a fundamental question in inference and privacy: given an observation Y, can we estimate a function f(X) of the hidden random variable X with an average error below a certain threshold? We provide a general method for answering this question using an approach based on rate-distortion theory.
    Comment: Allerton 2013 with extended proof, 10 pages
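    For readers unfamiliar with principal inertias: they are the squared singular values (excluding the trivial one) of the normalized joint distribution matrix, as in correspondence analysis. The sketch below (an illustration under that standard definition, not code from the paper; the example channel is an assumption) computes them and the k-correlation as the sum of the k largest.

```python
# Sketch: principal inertias of a joint pmf matrix P_XY and the
# k-correlation, via SVD of D_X^{-1/2} P D_Y^{-1/2} (assumed example).
import numpy as np

def principal_inertias(P):
    """Squared singular values of the normalized joint pmf, trivial one dropped."""
    px = P.sum(axis=1)  # marginal of X
    py = P.sum(axis=0)  # marginal of Y
    Q = P / np.sqrt(np.outer(px, py))
    s = np.linalg.svd(Q, compute_uv=False)
    return s[1:] ** 2   # s[0] == 1 is the trivial singular value

def k_correlation(P, k):
    """Sum of the k largest principal inertias."""
    lam = np.sort(principal_inertias(P))[::-1]
    return lam[:k].sum()

# Example: a symmetric binary channel with uniform input.
P = np.array([[0.4, 0.1],
              [0.1, 0.4]])
lam = principal_inertias(P)
print("principal inertias:", lam)             # largest = maximal correlation^2
print("1-correlation:", k_correlation(P, 1))  # 0.36 = 0.6^2 here
```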