    Barrier Frank-Wolfe for Marginal Inference

    We introduce a globally convergent algorithm for optimizing the tree-reweighted (TRW) variational objective over the marginal polytope. The algorithm is based on the conditional gradient method (Frank-Wolfe) and moves pseudomarginals within the marginal polytope through repeated maximum a posteriori (MAP) calls. This modular structure lets us leverage black-box MAP solvers (both exact and approximate) for variational inference, and it obtains more accurate results than tree-reweighted algorithms that optimize over the local consistency relaxation. Theoretically, we bound the sub-optimality of the proposed algorithm even though the TRW objective has unbounded gradients at the boundary of the marginal polytope. Empirically, we demonstrate the improved quality of results obtained by tightening the relaxation over the marginal polytope as well as the spanning tree polytope on synthetic and real-world instances.
    Comment: 25 pages, 12 figures. To appear in Neural Information Processing Systems (NIPS) 2015. Corrected reference and cleaned up bibliography.
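
    To make the abstract's description concrete, here is a minimal sketch of a conditional-gradient (Frank-Wolfe) loop in which a black-box MAP solver plays the role of the linear minimization oracle. The names grad_trw and map_oracle are hypothetical placeholders, and the sketch shows only the vanilla Frank-Wolfe skeleton; the paper's barrier variant, which keeps iterates away from the polytope boundary where the TRW gradient blows up, is not implemented here.

        import numpy as np

        def frank_wolfe_marginal(mu0, grad_trw, map_oracle, num_iters=100):
            # Vanilla conditional-gradient loop over the marginal polytope.
            # mu0        : initial pseudomarginals strictly inside the polytope
            # grad_trw   : hypothetical callable returning the TRW gradient at mu
            # map_oracle : black-box MAP solver; given linear costs g, returns
            #              the polytope vertex v minimizing <g, v>
            mu = np.asarray(mu0, dtype=float).copy()
            for t in range(num_iters):
                g = grad_trw(mu)
                v = map_oracle(g)                  # linear minimization = one MAP call
                step = 2.0 / (t + 2.0)             # classic Frank-Wolfe step size
                mu = (1.0 - step) * mu + step * v  # convex combination stays feasible
            return mu

    Because each iterate is a convex combination of the previous iterate and a polytope vertex, feasibility is maintained without any projection step, which is what makes the black-box MAP oracle sufficient.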

    Variance Reduced Stochastic Gradient Descent with Neighbors

    Stochastic Gradient Descent (SGD) is a workhorse in machine learning, yet its slow convergence can be a computational bottleneck. Variance reduction techniques such as SAG, SVRG and SAGA have been proposed to overcome this weakness, achieving linear convergence. However, these methods are based either on computing full gradients at pivot points or on keeping per-data-point corrections in memory; speed-ups relative to SGD may therefore require a minimum number of epochs to materialize. This paper investigates algorithms that exploit neighborhood structure in the training data to share and re-use information about past stochastic gradients across data points, which offers advantages in the transient optimization phase. As a by-product, we provide a unified convergence analysis for a family of variance reduction algorithms, which we call memorization algorithms. We provide experimental results supporting our theory.
    Comment: Appears in Advances in Neural Information Processing Systems 28 (NIPS 2015). 13 pages.
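
    A minimal sketch of a "memorization" update in the SAGA style, illustrating the per-data-point gradient corrections the abstract refers to. Here grad_i is a hypothetical callable for the gradient of the i-th loss term; the paper's neighborhood refinement, which would additionally share each stored gradient with nearby data points, is omitted.

        import numpy as np

        def saga(w0, grad_i, n, gamma=0.01, num_iters=1000, seed=0):
            # Keep one stored gradient ("correction") per data point.
            # grad_i(w, i): hypothetical callable, gradient of loss term i at w
            rng = np.random.default_rng(seed)
            w = np.asarray(w0, dtype=float).copy()
            memory = np.stack([grad_i(w, i) for i in range(n)])  # per-point memory
            avg = memory.mean(axis=0)
            for _ in range(num_iters):
                i = rng.integers(n)
                g = grad_i(w, i)
                # Unbiased, variance-reduced estimate of the full gradient.
                w = w - gamma * (g - memory[i] + avg)
                avg += (g - memory[i]) / n   # maintain the running average cheaply
                memory[i] = g
            return w

    The memory table is what makes the update variance-reduced: as the iterates converge, the stored corrections approach the true gradients and the noise of the estimate vanishes, enabling linear convergence with a constant step size.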

    Information, information processing and gravity

    I discuss fundamental limits placed on information and information processing by gravity. Such limits arise because both information and its processing require energy, while gravitational collapse (the formation of a horizon or black hole) restricts the amount of energy allowed in a finite region. Specifically, I use a criterion for gravitational collapse called the hoop conjecture. Once the hoop conjecture is assumed, a number of results follow directly: the existence of a fundamental uncertainty in spatial distance of order the Planck length, bounds on the information (entropy) in a finite region, and a bound on the rate of information processing in a finite region. In the final section I discuss some cosmological issues related to the total amount of information in the universe, and note that almost all detailed aspects of the late universe are determined by the randomness of quantum outcomes. This paper is based on a talk presented at a 2007 Bellairs Research Institute (McGill University) workshop on black holes and quantum information.
    Comment: 7 pages, 5 figures, RevTeX.
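
    For reference, these are the standard textbook forms of the quantities the abstract invokes: the hoop-conjecture collapse criterion (with Schwarzschild radius R_s), the Planck length, the holographic entropy bound on a region of boundary area A, and the Margolus-Levitin bound on operations per second at energy E. They are given here in LaTeX with k_B = 1; the paper's own conventions may differ by order-one factors.

        % Standard forms (k_B = 1); constants may differ from the paper's by O(1).
        \[
          \text{hoop criterion:}\quad C \;\lesssim\; 2\pi R_s \;=\; \frac{4\pi G M}{c^2}
        \]
        \[
          \ell_P \;=\; \sqrt{\frac{\hbar G}{c^3}}, \qquad
          S \;\le\; \frac{A}{4\,\ell_P^2}, \qquad
          \text{ops/sec} \;\le\; \frac{2E}{\pi\hbar}
        \]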