
    Stochastic optimization by message passing

    Most optimization problems in applied sciences realistically involve uncertainty in the parameters defining the cost function, of which only statistical information is known beforehand. In a recent work we introduced a message-passing algorithm, based on the cavity method of statistical physics, to solve the two-stage matching problem with independently distributed stochastic parameters. In this paper we provide an in-depth explanation of the general method and its caveats, show the details of the derivation and the resulting algorithm for the matching problem, and apply it to a stochastic version of the independent set problem, a computationally hard and relevant problem in communication networks. We compare the results with some greedy algorithms and briefly discuss the extension to more complicated stochastic multi-stage problems. Comment: 31 pages, 8 figures
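
    As a rough illustration of the message-passing machinery (not the paper's two-stage cavity algorithm), the sketch below runs standard min-sum belief propagation for the plain min-cost matching problem on expected edge costs estimated from sampled scenarios; all names and parameter values are illustrative.

```python
import numpy as np

def _min_excluding(v):
    """Entry j of each row gets the minimum of that row over k != j."""
    order = np.argsort(v, axis=1)
    rows = np.arange(v.shape[0])
    out = np.repeat(v[rows, order[:, 0]][:, None], v.shape[1], axis=1)
    out[rows, order[:, 0]] = v[rows, order[:, 1]]  # at the argmin, use the runner-up
    return out

def bp_matching(cost, iters=200, damping=0.5):
    """Min-sum message passing for min-cost bipartite matching.

    m_rc[i, j]: message row i -> column j; m_cr[i, j]: column j -> row i.
    Update: m_{i->j} = c_ij - min_{k != j} (c_ik - m_{k->i}), and
    symmetrically for columns; damping helps convergence.
    """
    n = cost.shape[0]
    m_rc = np.zeros((n, n))
    m_cr = np.zeros((n, n))
    for _ in range(iters):
        new_rc = cost - _min_excluding(cost - m_cr)
        new_cr = (cost.T - _min_excluding(cost.T - m_rc.T)).T
        m_rc = damping * m_rc + (1 - damping) * new_rc
        m_cr = damping * m_cr + (1 - damping) * new_cr
    # each row picks the column with the best "cavity" field
    return np.argmin(cost - m_cr, axis=1)

# Toy two-stage flavour: average the stochastic edge costs over sampled scenarios.
rng = np.random.default_rng(0)
scenarios = rng.exponential(size=(64, 10, 10))   # sampled stochastic costs
print(bp_matching(scenarios.mean(axis=0)))
```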

    Asynchronous Stochastic Variational Inference

    Stochastic variational inference (SVI) employs stochastic optimization to scale up Bayesian computation to massive data. Since SVI is at its core a stochastic gradient-based algorithm, horizontal parallelism can be harnessed to allow larger-scale inference. We propose a lock-free parallel implementation of SVI that distributes computations over multiple slaves in an asynchronous style. We show that our implementation leads to linear speed-up while guaranteeing an asymptotic ergodic convergence rate of O(1/√T), provided the number of slaves is bounded by √T (where T is the total number of iterations). The implementation is done in a high-performance computing (HPC) environment using the message passing interface (MPI) for Python (MPI4py). An extensive empirical evaluation shows that our parallel SVI is lossless, performing comparably to its serial counterpart with linear speed-up.
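
    The master/worker pattern described above can be sketched with MPI4py roughly as follows. This is a schematic of asynchronous, lock-free updates with a Robbins-Monro step size, not the authors' implementation; `natural_gradient` and `sample_minibatch` are hypothetical, model-specific hooks with toy stand-in bodies.

```python
# Schematic asynchronous SVI; run with e.g.  mpiexec -n 5 python async_svi.py
# Rank 0 (master) owns the global variational parameter and applies possibly
# stale stochastic natural gradients as they arrive; the other ranks (workers)
# compute gradients on minibatches.
import numpy as np
from mpi4py import MPI

D, T, tau, kappa = 100, 10_000, 1.0, 0.6   # dim, iterations, delay, decay

def sample_minibatch(rng=np.random.default_rng()):
    """Hypothetical data loader; noise stands in for real data here."""
    return rng.normal(size=(32, D))

def natural_gradient(lam, batch):
    """Hypothetical model hook.  For conjugate-exponential-family models this
    is lam_hat - lam, with lam_hat the intermediate global parameter implied
    by the minibatch; the formula below is a toy stand-in."""
    lam_hat = 1.0 + np.abs(batch).sum(axis=0)
    return lam_hat - lam

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:                              # master: lock-free updates
    lam = np.ones(D)                       # global variational parameter
    status = MPI.Status()
    for t in range(1, T + 1):
        grad = comm.recv(source=MPI.ANY_SOURCE, tag=1, status=status)
        lam += (t + tau) ** -kappa * grad  # Robbins-Monro step size
        comm.send(lam, dest=status.Get_source(), tag=2)
    for _ in range(comm.Get_size() - 1):   # drain in-flight gradients, stop workers
        comm.recv(source=MPI.ANY_SOURCE, tag=1, status=status)
        comm.send(None, dest=status.Get_source(), tag=2)
else:                                      # worker: gradient loop
    comm.send(np.zeros(D), dest=0, tag=1)  # join with a null gradient
    while True:
        lam = comm.recv(source=0, tag=2)
        if lam is None:                    # poison pill from the master
            break
        comm.send(natural_gradient(lam, sample_minibatch()), dest=0, tag=1)
```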

    Containing epidemic outbreaks by message-passing techniques

    The problem of targeted network immunization can be defined as that of finding a subset of nodes in a network to immunize or vaccinate so as to minimize a trade-off between the cost of vaccination and the final (stationary) expected infection under a given epidemic model. Although computing the expected infection is a hard computational problem, simple and efficient mean-field approximations have been put forward in the literature in recent years. The optimization problem can be recast as a constrained one in which the constraints enforce local mean-field equations describing the average stationary state of the epidemic process. For a wide class of epidemic models, including the susceptible-infected-removed and the susceptible-infected-susceptible models, we define a message-passing approach to network immunization that allows us to study the statistical properties of epidemic outbreaks in the presence of immunized nodes, as well as to find (nearly) optimal immunization sets for a given choice of parameters and costs. The algorithm scales linearly with the size of the graph and can be made efficient even on large networks. We compare its performance with topology-based heuristics, greedy methods, and simulated annealing.
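
    To make the local mean-field constraint concrete, here is a small sketch for the SIS case: a quenched mean-field fixed-point iteration for the stationary infection under an immunization mask, plus a greedy baseline of the kind the abstract compares against. This is an illustration under simplified assumptions, not the paper's message-passing optimizer, and all names and parameter values are illustrative.

```python
import numpy as np

def sis_stationary(A, beta, mu, immune, iters=500, tol=1e-8):
    """Quenched mean-field stationary SIS infection probabilities.

    Iterates p_i = beta * s_i / (mu + beta * s_i) with s_i = sum_j A_ij p_j,
    clamping p_i = 0 on immunized nodes.  This is the local mean-field
    constraint in its simplest SIS form.
    """
    p = np.full(A.shape[0], 0.5)
    p[immune] = 0.0
    for _ in range(iters):
        s = A @ p
        new = beta * s / (mu + beta * s)
        new[immune] = 0.0
        if np.max(np.abs(new - p)) < tol:
            return new
        p = new
    return p

def greedy_immunize(A, beta, mu, budget):
    """Greedy baseline: repeatedly immunize the node whose removal most
    reduces the total stationary mean-field infection."""
    n = A.shape[0]
    immune = np.zeros(n, dtype=bool)
    for _ in range(budget):
        base = sis_stationary(A, beta, mu, immune).sum()
        gains = np.full(n, -np.inf)
        for i in np.flatnonzero(~immune):
            trial = immune.copy()
            trial[i] = True
            gains[i] = base - sis_stationary(A, beta, mu, trial).sum()
        immune[np.argmax(gains)] = True
    return np.flatnonzero(immune)

# Example on a random undirected graph; parameters are illustrative.
n = 50
rng = np.random.default_rng(3)
A = (rng.random((n, n)) < 0.08).astype(float)
A = np.triu(A, 1); A = A + A.T                   # symmetric, no self-loops
print(greedy_immunize(A, beta=0.3, mu=1.0, budget=5))
```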

    Stochastic Optimization of Service Provision with Selfish Users

    We develop a computationally efficient technique to solve a fairly general distributed service provision problem with selfish users and imperfect information. In particular, in a context in which the service capacity of the existing infrastructure can be partially adapted to the user load by activating only some of the service units, we aim at finding the configuration of active service units that achieves the best trade-off between maintenance (e.g., energy) costs for the provider and user satisfaction. The core of our technique is the implementation of a belief-propagation (BP) algorithm to evaluate the cost of configurations. Numerical results confirm the effectiveness of our approach. Comment: paper presented at the NETSTAT Workshop, Budapest, June 201
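
    Reading the activation problem in a facility-location style, with per-unit maintenance costs `c`, a user-to-unit dissatisfaction matrix `d`, and selfish users patronizing their cheapest active unit, a simulated-annealing baseline for the trade-off might look as follows; the cost model is an assumption, and the paper's BP algorithm is a different, more efficient technique.

```python
import numpy as np

def total_cost(active, c, d, penalty=1e6):
    """Maintenance cost of the active units plus the load selfish users
    induce: each user patronizes the cheapest active unit for them."""
    if not active.any():
        return penalty
    return c[active].sum() + d[:, active].min(axis=1).sum()

def anneal(c, d, steps=20_000, t0=1.0, t1=1e-3, seed=0):
    """Simulated-annealing baseline over activation vectors: flip one
    unit at a time, accept with the Metropolis rule."""
    rng = np.random.default_rng(seed)
    m = len(c)
    x = rng.random(m) < 0.5                 # random initial activation
    e = total_cost(x, c, d)
    best_x, best_e = x.copy(), e
    for s in range(steps):
        t = t0 * (t1 / t0) ** (s / steps)   # geometric cooling schedule
        i = rng.integers(m)
        x[i] = ~x[i]                        # propose flipping one unit
        e_new = total_cost(x, c, d)
        if e_new <= e or rng.random() < np.exp((e - e_new) / t):
            e = e_new
            if e < best_e:
                best_x, best_e = x.copy(), e
        else:
            x[i] = ~x[i]                    # reject: undo the flip
    return best_x, best_e

# Illustrative instance: 20 service units, 200 users.
rng = np.random.default_rng(1)
c = rng.uniform(1, 3, size=20)              # maintenance cost per unit
d = rng.uniform(0, 5, size=(200, 20))       # user-to-unit dissatisfaction
x, e = anneal(c, d)
print(x.sum(), "units active, cost", e)
```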

    Structured Learning via Logistic Regression

    A successful approach to structured learning is to write the learning objective as a joint function of linear parameters and inference messages, and to iterate between updates to each. This paper observes that if the inference problem is "smoothed" through the addition of entropy terms, then, for fixed messages, the learning objective reduces to a traditional (non-structured) logistic regression problem with respect to the parameters. In these logistic regression problems, each training example has a bias term determined by the current set of messages. Based on this insight, the structured energy function can be extended from linear factors to any function class for which an "oracle" exists to minimize a logistic loss. Comment: Advances in Neural Information Processing Systems 201
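
    The reduction is easy to state in code: hold the messages fixed, fold them into a per-example bias, and minimize an ordinary logistic loss over the linear parameters. The sketch below shows that inner step for a binary toy case; the names and the ±1 label convention are assumptions, not the paper's notation.

```python
import numpy as np

def fit_biased_logistic(X, y, b, lr=0.5, epochs=1000):
    """Inner learning step: logistic regression where example n carries a
    fixed additive bias b[n], standing in for the current messages:

        loss(w) = sum_n log(1 + exp(-y_n * (w @ x_n + b_n))),  y_n in {-1, +1}

    With the messages (hence b) held fixed, this is plain convex logistic
    regression, which is the reduction the abstract describes.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        margins = y * (X @ w + b)
        # grad of log(1 + exp(-m)) w.r.t. w is  -sigmoid(-m) * y * x
        coeff = -y / (1.0 + np.exp(margins))
        w -= lr * (X * coeff[:, None]).mean(axis=0)
    return w

# In the full scheme one would alternate: update the messages by smoothed
# inference (not shown), recompute the biases b, then re-fit w as below.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 5))
y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 1.0]) + 0.1 * rng.normal(size=500))
b = rng.normal(scale=0.1, size=500)       # stand-in for message-derived biases
w = fit_biased_logistic(X, y, b)
print(w)
```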