
    On starting and stopping criteria for nested primal-dual iterations

    The importance of an adequate inner-loop starting point (as opposed to a sufficient inner-loop stopping rule) is discussed in the context of a numerical optimization algorithm consisting of nested primal-dual proximal-gradient iterations. Although the number of inner iterations is fixed in advance, convergence of the whole algorithm is still guaranteed by virtue of a warm-start strategy for the inner loop, showing that inner-loop "starting rules" can be just as effective as "stopping rules" for guaranteeing convergence. The algorithm itself applies to the numerical solution of convex optimization problems defined by the sum of a differentiable term and two possibly non-differentiable terms. One of the latter terms must take the form of a proximable function composed with a linear map, while the differentiable term must have an accessible gradient. The algorithm reduces to the classical proximal-gradient algorithm in certain special cases, and it also generalizes other existing algorithms. In addition, under some strong-convexity conditions, we show a linear rate of convergence. (Comment: 18 pages, no figures)
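    To make the structure concrete, here is a minimal Python sketch, not the paper's algorithm: the update rules, step sizes, and the Chambolle-Pock-style inner iteration are illustrative assumptions. It shows the three ingredients the abstract describes: a forward step on the smooth term, a fixed number of inner primal-dual iterations, and a warm start that carries the primal and dual variables across outer iterations.

    ```python
    import numpy as np

    def nested_primal_dual(grad_f, prox_g, prox_hstar, L, x0,
                           alpha=0.5, sigma=0.5, tau=0.5,
                           n_outer=200, n_inner=5):
        """Sketch of nested primal-dual proximal-gradient iterations for
            min_x  f(x) + g(x) + h(L x)
        with f differentiable, g and h proximable, and L linear."""
        x = np.asarray(x0, dtype=float).copy()
        y = np.zeros(L.shape[0])          # dual variable, kept across outer steps
        for _ in range(n_outer):
            v = x - alpha * grad_f(x)     # forward step on the smooth term f
            z, z_bar = x.copy(), x.copy() # warm-started inner primal iterate
            for _ in range(n_inner):      # fixed count: a starting rule,
                                          # not a stopping rule
                y = prox_hstar(y + sigma * (L @ z_bar), sigma)
                u = z - tau * (L.T @ y)
                # prox of g plus the coupling term (1/(2*alpha))*||z - v||^2
                w = (alpha * u + tau * v) / (alpha + tau)
                z_new = prox_g(w, tau * alpha / (tau + alpha))
                z_bar = 2.0 * z_new - z
                z = z_new
            x = z                         # the next inner loop starts here
        return x
    ```

    For instance, with h = ||.||_1 the dual step prox_hstar reduces to entrywise clipping of its first argument to [-1, 1].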

    Volume rendering with multidimensional peak finding

    Peak finding provides more accurate classification for direct volume rendering by sampling directly at local maxima in a transfer function, allowing for better reproduction of high-frequency features. However, the 1D peak finding technique does not extend to higher-dimensional classification. In this work, we develop a new method for peak finding with multidimensional transfer functions, which looks for peaks along the image of the ray. We use piecewise approximations to dynamically sample in transfer-function space between world-space samples. As with unidimensional peak finding, this approach is useful for specifying transfer functions with greater precision and for accurately rendering noisy volume data at lower sampling rates. Multidimensional peak finding produces comparable image quality with order-of-magnitude better performance, and it can reproduce features omitted entirely by standard classification. With no precomputation or storage requirements, it is an attractive alternative to preintegration for multidimensional transfer functions.
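    As an illustration of the idea, not the paper's implementation, the sketch below follows a piecewise-linear approximation of the ray's image in a two-attribute transfer-function space between two consecutive world-space samples and returns the segment parameters where opacity attains a local maximum; tf_alpha, u0, and u1 are assumed names for the opacity lookup and the per-sample attribute vectors.

    ```python
    import numpy as np

    def peaks_along_segment(tf_alpha, u0, u1, n_sub=16):
        # Sample the transfer function along the straight segment from u0 to
        # u1 in transfer-function space (a piecewise-linear stand-in for the
        # true image of the ray), then report the interior parameters t where
        # the opacity is a local maximum, so the renderer can place
        # classification samples exactly at those peaks.
        ts = np.linspace(0.0, 1.0, n_sub)
        alphas = np.array([tf_alpha(u0 + t * (u1 - u0)) for t in ts])
        peaks = []
        for i in range(1, n_sub - 1):
            if alphas[i] > alphas[i - 1] and alphas[i] >= alphas[i + 1]:
                peaks.append(ts[i])
        return peaks
    ```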

    Bethe Projections for Non-Local Inference

    Many inference problems in structured prediction are naturally solved by augmenting a tractable dependency structure with complex, non-local auxiliary objectives. This includes the mean-field family of variational inference algorithms, soft- or hard-constrained inference using Lagrangian relaxation or linear programming, collective graphical models, and forms of semi-supervised learning such as posterior regularization. We present a method to discriminatively learn broad families of inference objectives, capturing powerful non-local statistics of the latent variables, while maintaining tractable and provably fast inference using non-Euclidean projected gradient descent with a distance-generating function given by the Bethe entropy. We demonstrate the performance and flexibility of our method by (1) extracting structured citations from research papers by learning soft global constraints, (2) achieving state-of-the-art results on a widely used handwriting recognition task using a novel learned non-convex inference procedure, and (3) providing a fast and highly scalable algorithm for the challenging problem of inference in a collective graphical model applied to bird migration. (Comment: minor bug fix to appendix; appeared in UAI 201)
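    The projection machinery can be illustrated with a much simpler distance-generating function than the Bethe entropy: the sketch below runs mirror descent on the probability simplex with the negative-entropy DGF (exponentiated-gradient updates), the same non-Euclidean projected-gradient mechanic that the paper applies over the marginal polytope.

    ```python
    import numpy as np

    def entropic_mirror_descent(grad, x0, step=0.1, n_iter=100):
        # Non-Euclidean projected gradient descent on the probability simplex
        # with the negative-entropy distance-generating function.  The mirror
        # step happens in the dual (log) space; renormalizing is the Bregman
        # projection back onto the simplex.
        x = np.asarray(x0, dtype=float)
        x = x / x.sum()
        for _ in range(n_iter):
            x = x * np.exp(-step * grad(x))   # exponentiated-gradient step
            x = x / x.sum()                   # Bregman projection
        return x
    ```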

    Stochastic mirror descent dynamics and their convergence in monotone variational inequalities

    We examine a class of stochastic mirror descent dynamics in the context of monotone variational inequalities (including Nash equilibrium and saddle-point problems). The dynamics under study are formulated as a stochastic differential equation driven by a (single-valued) monotone operator and perturbed by a Brownian motion. The system's controllable parameters are two variable weight sequences that respectively pre- and post-multiply the driver of the process. By carefully tuning these parameters, we obtain global convergence in the ergodic sense, and we estimate the average rate of convergence of the process. We also establish a large deviations principle showing that individual trajectories exhibit exponential concentration around this average. (Comment: 23 pages; updated proofs in Section 3 and Section)
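    One way to build intuition for such dynamics is to discretize them. The sketch below is an Euler-Maruyama simulation under strong simplifying assumptions: a softmax mirror map onto the simplex, and the paper's two weight sequences collapsed into a single schedule eta. It returns the time-averaged trajectory whose ergodic convergence the paper analyzes.

    ```python
    import numpy as np

    def simulate_smd(V, y0, eta, noise=0.1, dt=1e-3, n_steps=10_000, rng=None):
        # Euler-Maruyama discretization of
        #     dY_t = -eta(t) V(X_t) dt + noise dW_t,   X_t = softmax(Y_t),
        # where V is a single-valued monotone operator.  Returns the
        # time-averaged primal trajectory (the ergodic average).
        rng = rng or np.random.default_rng(0)
        y = np.asarray(y0, dtype=float).copy()
        avg, t = np.zeros_like(y), 0.0
        for _ in range(n_steps):
            x = np.exp(y - y.max())
            x = x / x.sum()                            # mirror map onto the simplex
            dw = rng.normal(size=y.shape) * np.sqrt(dt)
            y = y - eta(t) * V(x) * dt + noise * dw    # Euler-Maruyama step
            avg = avg + x * dt
            t += dt
        return avg / t
    ```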

    Operations on Signed Distance Functions

    We present a theoretical overview of signed distance functions and analyze how this representation changes when applying an offset transformation. First, we analyze the properties of signed distance functions and the sets they describe. Second, we introduce our main theorem regarding the distance to an offset set in strictly normed Banach spaces (X, || · ||). An offset set of D ⊆ X is the set of points at a fixed distance from D. We show when such a set can be represented by f(x) − c = 0, where c ≠ 0 denotes the radius of the offset. Finally, we apply these results to gain a deeper insight into offsetting surfaces defined by signed distance functions.
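    As a minimal concrete example, restricted to the Euclidean special case rather than the general strictly normed Banach setting of the theorem, the sketch below offsets an exact sphere SDF: the c-offset surface is the zero set of f(x) − c.

    ```python
    import numpy as np

    def sphere_sdf(p, center, radius):
        # Exact signed distance from point p to a sphere.
        return np.linalg.norm(p - center) - radius

    def offset(sdf, c):
        # Offset transformation: the zero set of sdf(p) - c is the set of
        # points at distance |c| from the original surface (outside for
        # c > 0, inside for c < 0).  For an exact Euclidean SDF the result
        # is again an exact SDF of the offset surface.
        return lambda p: sdf(p) - c
    ```

    For example, offset(lambda p: sphere_sdf(p, np.zeros(3), 1.0), 0.2) is the signed distance function of a sphere of radius 1.2.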