
    On Ergodic Secrecy Capacity for Gaussian MISO Wiretap Channels

    A Gaussian multiple-input single-output (MISO) wiretap channel model is considered, in which a transmitter is equipped with multiple antennas, while a legitimate receiver and an eavesdropper are each equipped with a single antenna. We study the problem of finding the optimal input covariance that achieves ergodic secrecy capacity subject to a power constraint when only statistical information about the eavesdropper channel is available at the transmitter. This is a non-convex optimization problem that is in general difficult to solve. Existing results address the case in which the eavesdropper and/or legitimate channels have independent and identically distributed Gaussian entries with zero mean and unit variance, i.e., the channels have trivial covariances. This paper addresses the general case where the eavesdropper and legitimate channels have nontrivial covariances. A set of equations describing the optimal input covariance matrix is proposed, along with an algorithm to obtain the solution. Based on this framework, we show that when full information on the legitimate channel is available to the transmitter, the optimal input covariance always has rank one. We also show that when only statistical information on the legitimate channel is available to the transmitter, the legitimate channel has some general non-trivial covariance, and the eavesdropper channel has trivial covariance, the optimal input covariance has the same eigenvectors as the legitimate channel covariance. Numerical results are presented to illustrate the algorithm.

    Comment: 27 pages, 10 figures
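As a minimal numerical sketch of the setting above (not the paper's algorithm): for a fixed input covariance Q, the ergodic secrecy rate can be estimated by Monte Carlo over the eavesdropper channel, and the rank-one result can be illustrated by comparing a beamforming covariance aligned with the known legitimate channel against an isotropic one. All variable names, the noise level, and the specific covariances below are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def ergodic_secrecy_rate(Q, h, Rg, n_samples=20000, sigma2=1.0):
    """Monte Carlo estimate of log2(1 + h^H Q h / s2) - E_g[log2(1 + g^H Q g / s2)],
    with eavesdropper channel g ~ CN(0, Rg). Illustrative only."""
    nt = h.size
    # sample complex Gaussian eavesdropper channels with covariance Rg
    L = np.linalg.cholesky(Rg)
    g = (rng.standard_normal((n_samples, nt))
         + 1j * rng.standard_normal((n_samples, nt))) / np.sqrt(2)
    g = g @ L.conj().T
    rate_leg = np.log2(1 + np.real(h.conj() @ Q @ h) / sigma2)
    gQg = np.real(np.einsum('ij,jk,ik->i', g.conj(), Q, g))  # g_i^H Q g_i per sample
    rate_eve = np.mean(np.log2(1 + gQg / sigma2))
    return max(rate_leg - rate_eve, 0.0)

nt, P = 4, 10.0
h = rng.standard_normal(nt) + 1j * rng.standard_normal(nt)  # known legitimate channel
Rg = 0.5 * np.eye(nt)                                       # trivial eavesdropper covariance

# rank-one input covariance: beamform along the legitimate channel
v = h / np.linalg.norm(h)
Q_rank1 = P * np.outer(v, v.conj())
# isotropic baseline: spread the same power over all antennas
Q_iso = (P / nt) * np.eye(nt)

print(ergodic_secrecy_rate(Q_rank1, h, Rg), ergodic_secrecy_rate(Q_iso, h, Rg))
```

With full legitimate CSI, the rank-one (beamforming) covariance yields a noticeably higher estimated secrecy rate than isotropic power allocation, consistent with the rank-one result stated above.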

    Three essays on financial economics


    On Cooperative Beamforming Based on Second-Order Statistics of Channel State Information

    Cooperative beamforming in relay networks is considered, in which a source transmits to its destination with the help of a set of cooperating nodes. The source first transmits locally. The cooperating nodes that receive the source signal retransmit a weighted version of it in an amplify-and-forward (AF) fashion. Assuming knowledge of the second-order statistics of the channel state information, beamforming weights are determined so that the signal-to-noise ratio (SNR) at the destination is maximized subject to two different power constraints, i.e., a total (source and relay) power constraint, and individual relay power constraints. For the former constraint, the original problem is transformed into a problem of one variable, which can be solved via Newton's method. For the latter constraint, the original problem is transformed into a homogeneous quadratically constrained quadratic programming (QCQP) problem. In this case, it is shown that when the number of relays does not exceed three, the global solution can always be constructed via semidefinite programming (SDP) relaxation and the matrix rank-one decomposition technique. For the cases in which the SDP relaxation does not generate a rank-one solution, two methods are proposed to solve the problem: the first is based on the coordinate descent method, and the second transforms the QCQP problem into an infinity-norm maximization problem, in which a smooth finite-norm approximation can lead to the solution via the augmented Lagrangian method.

    Comment: 30 pages, 9 figures
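The total-power case can be illustrated with a simplified model (the symbols `a`, `D`, `sigma2` and the diagonal forwarded-noise structure are assumptions for the sketch, not the paper's exact formulation): maximize SNR(w) = |w^H a|^2 / (w^H D w + sigma2) over ||w||^2 <= P. Since the SNR increases with the scale of w, the optimum lies on the power boundary, where the problem becomes a generalized Rayleigh quotient with a rank-one numerator and admits a closed form.

```python
import numpy as np

rng = np.random.default_rng(1)

def snr(w, a, D, sigma2):
    """Destination SNR under a simplified AF relay model:
    signal power |w^H a|^2, weight-dependent forwarded noise w^H D w,
    plus destination receiver noise sigma2."""
    return np.abs(w.conj() @ a) ** 2 / (np.real(w.conj() @ D @ w) + sigma2)

def best_weights_total_power(a, D, sigma2, P):
    """On the power boundary ||w||^2 = P the objective is a generalized
    Rayleigh quotient with a rank-one numerator, so the maximizer is
    w* proportional to (D + (sigma2/P) I)^{-1} a, scaled to full power."""
    B = D + (sigma2 / P) * np.eye(len(a))
    w = np.linalg.solve(B, a)
    return np.sqrt(P) * w / np.linalg.norm(w)

n, P, sigma2 = 3, 5.0, 1.0
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # effective relay channel vector
D = np.diag(rng.uniform(0.1, 1.0, n))                     # forwarded-noise covariance (diagonal)

w_opt = best_weights_total_power(a, D, sigma2, P)
# sanity check against a random weight vector using the same power budget
w_rand = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w_rand = np.sqrt(P) * w_rand / np.linalg.norm(w_rand)
print(snr(w_opt, a, D, sigma2), snr(w_rand, a, D, sigma2))
```

The closed-form weights dominate any other feasible choice on the power sphere, which is the kind of structure the one-variable reduction described above exploits.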

    Explicit Solution of Worst-Case Secrecy Rate for MISO Wiretap Channels with Spherical Uncertainty

    A multiple-input single-output (MISO) wiretap channel model is considered that includes a multi-antenna transmitter, a single-antenna legitimate receiver and a single-antenna eavesdropper. For the scenario in which both the legitimate and eavesdropper channels are subject to spherical uncertainty, the problem of finding the optimal input covariance that maximizes the worst-case secrecy rate under a power constraint is considered, and an explicit expression for the maximum worst-case secrecy rate is provided.

    Comment: 1 figure
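To illustrate the worst-case notion (this is not the paper's explicit solution, and all names and parameter values are assumptions): for a fixed unit-norm beamformer w, spherical uncertainty of radius eps around a nominal channel shrinks the worst-case legitimate gain to max(|h^H w| - eps, 0)^2 and inflates the worst-case eavesdropper gain to (|g^H w| + eps)^2.

```python
import numpy as np

def worst_case_secrecy_rate(w, h, g, eps_h, eps_g, P=1.0, sigma2=1.0):
    """Worst-case secrecy rate of a FIXED unit-norm beamformer w when the
    legitimate channel lies in a ball of radius eps_h around h and the
    eavesdropper channel in a ball of radius eps_g around g.
    For spherical uncertainty, |(h+d)^H w| over ||d|| <= eps_h is minimized
    at max(|h^H w| - eps_h, 0), and the eavesdropper counterpart is
    maximized at |g^H w| + eps_g."""
    gain_leg = max(np.abs(h.conj() @ w) - eps_h, 0.0) ** 2
    gain_eve = (np.abs(g.conj() @ w) + eps_g) ** 2
    rate = np.log2(1 + P * gain_leg / sigma2) - np.log2(1 + P * gain_eve / sigma2)
    return max(rate, 0.0)

rng = np.random.default_rng(2)
nt = 4
h = rng.standard_normal(nt) + 1j * rng.standard_normal(nt)          # nominal legitimate channel
g = 0.3 * (rng.standard_normal(nt) + 1j * rng.standard_normal(nt))  # nominal (weak) eavesdropper
w = h / np.linalg.norm(h)  # beamform toward the nominal legitimate channel

print(worst_case_secrecy_rate(w, h, g, eps_h=0.1, eps_g=0.1, P=10.0))
```

As expected, shrinking the uncertainty radii to zero recovers the nominal secrecy rate, which upper-bounds the worst-case value for any positive radius.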

    Learning Under Implicit Bias and Data Bias

    Modern machine learning tasks often involve the training of over-parameterized models and the challenge of addressing data bias. However, despite recent advances, there remains a significant knowledge gap in these areas. This thesis aims to push the boundaries of our understanding by exploring the implicit bias of neural network training and proposing strategies for mitigating data bias in matrix completion.

    In the first result, we study the implicit regularization of gradient descent on a diagonal linear neural network of general depth N under a realistic setting of noise and correlated designs. We characterize the impact of depth and early stopping and show that, for a general depth parameter N, gradient descent with early stopping achieves minimax-optimal sparse recovery with sufficiently small initialization and step size. In particular, we show that increasing depth enlarges the scale of the working initialization and the early-stopping window, so that this implicit sparse-regularization effect is more likely to take place.

    Continuing our exploration of implicit bias, our second main result introduces a novel neural reparametrization known as the “diagonally grouped linear neural network”. This reparametrization exhibits a fascinating property wherein gradient descent, operating on the squared regression loss without explicit regularization, biases towards solutions with a group-sparsity structure. In contrast to many existing works on understanding implicit regularization, we prove that our training trajectory cannot be simulated by mirror descent. Compared to existing bounds for implicit sparse regularization using diagonal linear networks, our analysis with the new reparametrization shows improved sample complexity in the general noise setting.

    In our third result, we propose a pseudolikelihood approach for matrix completion with informative missingness. We focus on a flexible and generally applicable missingness mechanism, which contains both ignorable and nonignorable missingness as special cases. We show that the regularized pairwise pseudolikelihood estimator can recover the low-rank matrix up to a constant shift and scaling while effectively mitigating the impact of data bias.
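The implicit sparse regularization described in the first result can be sketched numerically. The thesis analyzes the general depth-N parametrization w = u^N - v^N; the sketch below uses depth 2 for simplicity, and the problem sizes, initialization scale, and step size are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def diag_net_gd(X, y, alpha=0.01, lr=0.02, max_iter=4000, tol=1e-8):
    """Gradient descent on the depth-2 'diagonal' parametrization
    w = u*u - v*v of a linear model, squared loss, NO explicit penalty.
    Small initialization alpha plus early stopping acts as an implicit
    sparsity-inducing regularizer (the general depth-N story uses
    w = u^N - v^N; depth 2 is shown here for simplicity)."""
    m, d = X.shape
    u = np.full(d, alpha)
    v = np.full(d, alpha)
    for _ in range(max_iter):
        w = u * u - v * v
        r = X @ w - y
        if np.mean(r ** 2) < tol:   # early stopping once the data are fit
            break
        g = X.T @ r / m             # gradient with respect to w
        u -= lr * 2 * u * g         # chain rule through u*u
        v += lr * 2 * v * g         # chain rule through v*v
    return u * u - v * v

# under-determined sparse regression: more unknowns than equations
d, m = 40, 25
w_star = np.zeros(d)
w_star[[3, 17]] = [1.0, -1.5]       # 2-sparse ground truth
X = rng.standard_normal((m, d))
y = X @ w_star                      # noiseless, for illustration

w_hat = diag_net_gd(X, y)
print(np.linalg.norm(w_hat - w_star) / np.linalg.norm(w_star))
```

Even though the linear system is under-determined and no penalty is ever applied, the small-initialization trajectory concentrates its largest entries on the true support, which is the implicit-bias phenomenon the first result quantifies.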