
    Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines

    This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitly storing the natural gradient metric L. This metric is shown to be the expected second derivative of the log-partition function (under the model distribution), or equivalently, the variance of the vector of partial derivatives of the energy function. We evaluate our method on the task of joint-training a 3-layer Deep Boltzmann Machine and show that MFNG does indeed have faster per-epoch convergence compared to Stochastic Maximum Likelihood with centering, though wall-clock performance is currently not competitive.
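    The matrix-free idea in the abstract can be sketched as follows. Since the metric L is the covariance of the energy gradients under the model distribution, a product L·v can be estimated from model samples without ever forming L, and the direction L⁻¹∇ can then be obtained with a few iterations of conjugate gradient. This is a minimal sketch under assumptions, not the authors' implementation; the helper names and the damping term are hypothetical.

```python
import numpy as np

def metric_vector_product(grad_samples, v):
    """Estimate L @ v, where L = Cov(dE/dtheta) under the model
    distribution, from a batch of energy gradients evaluated at
    model samples -- without ever forming the p x p matrix L.
    grad_samples: array of shape (n_samples, n_params)."""
    centered = grad_samples - grad_samples.mean(axis=0)
    # (1/n) * C^T (C v) equals the sample covariance times v,
    # using only O(n * p) memory.
    return centered.T @ (centered @ v) / grad_samples.shape[0]

def truncated_cg(matvec, b, max_iter=20, tol=1e-6, damping=1e-4):
    """Truncated conjugate-gradient solve of (L + damping*I) x = b,
    touching L only through matrix-vector products."""
    x = np.zeros_like(b)
    r = b - (matvec(x) + damping * x)
    p, rs = r.copy(), r @ r
    for _ in range(max_iter):
        Ap = matvec(p) + damping * p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Natural gradient step: solve L x = grad for x, using only matvecs.
# natural_grad = truncated_cg(
#     lambda v: metric_vector_product(grad_samples, v), stochastic_grad)
```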

    Beamforming and Rate Allocation in MISO Cognitive Radio Networks

    We consider decentralized multi-antenna cognitive radio networks where secondary (cognitive) users are granted simultaneous spectrum access along with license-holding (primary) users. We treat the problem of distributed beamforming and rate allocation for the secondary users such that the minimum weighted secondary rate is maximized. This optimization is subject to (1) a limited weighted sum-power budget for the secondary users and (2) guaranteed protection for the primary users, in the sense that the interference level imposed on each primary receiver does not exceed a specified level. Based on the decoding method deployed by the secondary receivers, we consider three scenarios for solving this problem. In the first scenario, each secondary receiver decodes only its designated transmitter while suppressing the rest as Gaussian interferers (single-user decoding). In the second, each secondary receiver employs the maximum likelihood decoder (MLD) to jointly decode all secondary transmissions, and in the third, each secondary receiver uses the unconstrained group decoder (UGD). By deploying the UGD, each secondary user is allowed to decode any arbitrary subset of users (which contains its designated user) after suppressing or canceling the remaining users. Comment: 32 pages, 6 figures
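    As a concrete reading of the single-user-decoding scenario: each secondary rate is log2(1 + SINR), with the other secondary transmitters entering the SINR denominator as Gaussian interference, and a candidate set of beamformers must satisfy both the weighted sum-power budget and the per-primary interference caps. This is a minimal sketch of the rate expression and feasibility checks under assumed array shapes, not the paper's optimization procedure; all names below are hypothetical.

```python
import numpy as np

def secondary_rates_sud(H, W, noise_var):
    """Achievable secondary rates under single-user decoding.
    H: complex array of shape (K, K, n_t); H[j, k] is the channel
       from secondary transmitter j to secondary receiver k.
    W: complex array of shape (K, n_t); W[k] is the beamformer
       of secondary transmitter k."""
    K = W.shape[0]
    rates = np.empty(K)
    for k in range(K):
        signal = abs(H[k, k].conj() @ W[k]) ** 2
        # Cross-links from the other secondary users are treated
        # as Gaussian interference and added to the noise floor.
        interference = sum(abs(H[j, k].conj() @ W[j]) ** 2
                           for j in range(K) if j != k)
        rates[k] = np.log2(1.0 + signal / (noise_var + interference))
    return rates

def feasible(W, alpha, P_total, G, I_max):
    """Check the two constraints from the abstract: a weighted
    sum-power budget and a per-primary-receiver interference cap.
    G: complex array of shape (K, M, n_t); G[k, m] is the channel
       from secondary transmitter k to primary receiver m."""
    K, M = G.shape[0], G.shape[1]
    power_ok = sum(alpha[k] * np.linalg.norm(W[k]) ** 2
                   for k in range(K)) <= P_total
    interference_ok = all(
        sum(abs(G[k, m].conj() @ W[k]) ** 2 for k in range(K)) <= I_max[m]
        for m in range(M))
    return power_ok and interference_ok
```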

    A Parametric Simplex Algorithm for Linear Vector Optimization Problems

    In this paper, a parametric simplex algorithm for solving linear vector optimization problems (LVOPs) is presented. This algorithm can be seen as a variant of the multi-objective simplex (Evans-Steuer) algorithm [12]. Unlike that algorithm, the proposed one works in the parameter space and does not aim to find the set of all efficient solutions. Instead, it finds a solution in the sense of Loehne [16], that is, a subset of efficient solutions that suffices to generate the whole frontier. In that sense, it can also be seen as a generalization of the parametric self-dual simplex algorithm, which was originally designed for solving single-objective linear optimization problems and was modified by Ruszczynski and Vanderbei [21] to solve two-objective bounded LVOPs with the positive orthant as the ordering cone. The algorithm proposed here works for any dimension, for any solid pointed polyhedral ordering cone C, and for bounded as well as unbounded problems. Numerical results are provided to compare the proposed algorithm with an objective-space-based LVOP algorithm (the Benson algorithm in [13]), which also provides a solution in the sense of [16], and with the Evans-Steuer algorithm [12]. The results show that for non-degenerate problems the proposed algorithm outperforms the Benson algorithm and is on par with the Evans-Steuer algorithm. For highly degenerate problems, Benson's algorithm [13] outperforms the simplex-type algorithms; however, for these problems the parametric simplex algorithm is computationally much more efficient than the Evans-Steuer algorithm. Comment: 27 pages, 4 figures, 5 tables
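    To illustrate the parameter-space viewpoint, consider a bicriteria LP ordered by the positive orthant: each weight vector (lam, 1 - lam) defines a single-objective LP whose optimum is an efficient solution, and sweeping the weight traces points that generate the frontier. The toy below is a plain weighted-sum scalarization sweep on a grid, solved with scipy's linprog; it is not the paper's parametric simplex algorithm, which pivots through the parameter space exactly rather than sampling it.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_sum_sweep(C, A_ub, b_ub, n_weights=21):
    """Toy parameter sweep for a bicriteria LP
        min_x C @ x   s.t.  A_ub @ x <= b_ub,  x >= 0,
    ordered by the positive orthant. Each interior weight vector
    (lam, 1 - lam) yields an efficient solution; the sweep traces
    points that generate the frontier."""
    solutions = []
    for lam in np.linspace(0.0, 1.0, n_weights):
        w = np.array([lam, 1.0 - lam])
        res = linprog(w @ C, A_ub=A_ub, b_ub=b_ub, method="highs")
        if res.success:
            solutions.append((lam, res.x, C @ res.x))
    return solutions

# Tiny example: two objectives over the polytope {x >= 0, x1 + x2 >= 1}.
C = np.array([[1.0, 0.0],      # objective 1: minimize x1
              [0.0, 1.0]])     # objective 2: minimize x2
A_ub = np.array([[-1.0, -1.0]])  # encodes x1 + x2 >= 1
b_ub = np.array([-1.0])
for lam, x, y in weighted_sum_sweep(C, A_ub, b_ub, n_weights=5):
    print(f"lam={lam:.2f}  x={np.round(x, 3)}  objectives={np.round(y, 3)}")
```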