
    Herding as a Learning System with Edge-of-Chaos Dynamics

    Herding defines a deterministic dynamical system at the edge of chaos. It generates a sequence of model states and parameters by alternating parameter perturbations with state maximizations, where the sequence of states can be interpreted as "samples" from an associated MRF model. Herding differs from maximum likelihood estimation in that the sequence of parameters does not converge to a fixed point, and it differs from an MCMC posterior sampling approach in that the sequence of states is generated deterministically. Herding may be interpreted as a "perturb and map" method in which the parameter perturbations are generated by a deterministic nonlinear dynamical system rather than drawn randomly from a Gumbel distribution. This chapter studies the distinct statistical characteristics of the herding algorithm and shows that the fast convergence rate of the controlled moments may be attributed to edge-of-chaos dynamics. The herding algorithm can also be generalized to models with latent variables and to a discriminative learning setting; the perceptron cycling theorem ensures that the fast moment matching property is preserved in this more general framework.
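    The core update is simple enough to illustrate in a few lines. Below is a minimal sketch of herding for a toy, fully visible model, assuming a feature map phi(s) = s over binary state vectors and brute-force state maximization; the target moments phi_bar stand in for data statistics, and all names are illustrative rather than taken from the chapter.

    ```python
    import numpy as np

    # Minimal herding sketch (toy setup, not the chapter's general algorithm):
    # feature map phi(s) = s over binary states, so matching moments means
    # matching per-coordinate means. Herding alternates
    #   s_{t+1} = argmax_s <w_t, phi(s)>        (deterministic state maximization)
    #   w_{t+1} = w_t + phi_bar - phi(s_{t+1})  (parameter perturbation)
    # where phi_bar are the target (data) moments.

    rng = np.random.default_rng(0)
    d = 4
    # Enumerate all 2^d binary states for brute-force maximization.
    states = np.array([[(i >> k) & 1 for k in range(d)]
                       for i in range(2 ** d)], dtype=float)

    phi_bar = rng.uniform(0.2, 0.8, size=d)  # target moments (assumed given)
    w = rng.normal(size=d)                   # initial parameters

    samples = []
    for t in range(10_000):
        s = states[np.argmax(states @ w)]    # maximize <w, phi(s)> exactly
        w += phi_bar - s                     # push parameters toward target moments
        samples.append(s)

    # The empirical moments of the herded "samples" track phi_bar at rate O(1/T).
    print(np.abs(np.mean(samples, axis=0) - phi_bar).max())
    ```

    Because the parameter sequence never settles into a fixed point, the empirical moments of the generated states track phi_bar with an O(1/T) error, which is the fast moment matching property the abstract refers to.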

    Bayesian Structure Learning for Markov Random Fields with a Spike and Slab Prior

    In recent years a number of methods have been developed for automatically learning the (sparse) connectivity structure of Markov Random Fields. These methods are mostly based on L1-regularized optimization, which has a number of disadvantages, such as the inability to assess model uncertainty and the expensive cross-validation needed to find the optimal regularization parameter. Moreover, the model's predictive performance may degrade dramatically with a suboptimal value of the regularization parameter (a value that is sometimes deliberately chosen to induce sparseness). We propose a fully Bayesian approach based on a "spike and slab" prior (similar to L0 regularization) that does not suffer from these shortcomings. We develop an approximate MCMC method combining Langevin dynamics and reversible jump MCMC to conduct inference in this model. Experiments show that the proposed model learns a good combination of the structure and parameter values without the need for separate hyper-parameter tuning. Moreover, the model's predictive performance is much more robust than that of L1-based methods with hyper-parameter settings that induce highly sparse model structures. Comment: Accepted in the Conference on Uncertainty in Artificial Intelligence (UAI), 201
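    For intuition, here is a minimal sketch of what a "spike and slab" prior over MRF edge weights looks like. The hyper-parameter names (pi, slab_std) are illustrative assumptions, not the paper's notation: each candidate edge carries a Bernoulli indicator selecting either an exact zero (the spike) or a Gaussian draw (the slab), which is what makes the prior behave like L0 regularization.

    ```python
    import numpy as np

    # Minimal sketch of a spike-and-slab prior over MRF edge weights.
    # Hyper-parameter names (pi, slab_std) are illustrative assumptions.
    # Each candidate edge (i, j) gets a Bernoulli(pi) indicator: with
    # probability pi the weight comes from the Gaussian "slab", otherwise
    # it is exactly zero (the "spike"), mimicking L0-style sparsity.

    rng = np.random.default_rng(1)

    def sample_spike_and_slab(n_nodes, pi=0.2, slab_std=1.0):
        """Draw one symmetric, sparse edge-weight matrix from the prior."""
        W = np.zeros((n_nodes, n_nodes))
        for i in range(n_nodes):
            for j in range(i + 1, n_nodes):
                if rng.random() < pi:              # edge is "on": use the slab
                    W[i, j] = W[j, i] = rng.normal(0.0, slab_std)
        return W

    W = sample_spike_and_slab(n_nodes=6)
    print(int((W != 0).sum() // 2), "active edges out of", 6 * 5 // 2)
    ```

    Inference in the paper then has to move between such sparsity patterns (hence reversible jump MCMC) while updating the nonzero weights (hence Langevin dynamics).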

    A Liouville theorem for the fractional Ginzburg-Landau equation

    In this paper, we are concerned with a Liouville-type result for the nonlinear integral equation \begin{equation*} u(x)=\int_{\mathbb{R}^{n}}\frac{u(y)\left(1-|u(y)|^{2}\right)}{|x-y|^{n-\alpha}}\,dy, \end{equation*} where $u: \mathbb{R}^{n} \to \mathbb{R}^{k}$ with $k \geq 1$ and $1<\alpha<n/2$. We prove that $u \in L^{2}(\mathbb{R}^{n}) \Rightarrow u \equiv 0$ on $\mathbb{R}^{n}$, provided that $u$ is a bounded and differentiable solution. Comment: 7 pages

    Importance Weighting Approach in Kernel Bayes' Rule

    We study a nonparametric approach to Bayesian computation via feature means, where the expectation of prior features is updated to yield expected posterior features, based on regression from kernel or neural net features of the observations. All quantities involved in the Bayesian update are learned from observed data, making the method entirely model-free. The resulting algorithm is a novel instance of a kernel Bayes' rule (KBR). Our approach is based on importance weighting, which gives it superior numerical stability over the existing approach to KBR, which requires operator inversion. We show the convergence of the estimator using a novel consistency analysis of the importance weighting estimator in the infinity norm. We evaluate our KBR on challenging synthetic benchmarks, including a filtering problem with a state-space model involving high-dimensional image observations. The proposed method yields uniformly better empirical performance than the existing KBR, and performance competitive with other existing methods.
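    As a rough illustration of the importance weighting idea, the sketch below estimates a posterior mean by weighted kernel ridge regression on a toy Gaussian model; the model, weights, kernel bandwidth, and regularization constant are all assumptions for illustration, not the paper's exact estimator.

    ```python
    import numpy as np

    # Toy illustration of importance weighting for a kernel-based Bayes update.
    # Training pairs (x_i, y_i) are drawn from a sampling distribution p(x) p(y|x);
    # to target a different prior pi(x), each pair is reweighted by
    # w_i = pi(x_i) / p(x_i), and E_pi[x | y*] is estimated by weighted kernel
    # ridge regression from features of y to x -- a single regularized linear
    # solve, with no operator inversion.

    def rbf_gram(A, B, scale=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * scale ** 2))

    rng = np.random.default_rng(2)
    n = 500
    x = rng.normal(0.0, 1.0, size=(n, 1))      # sampling distribution p(x) = N(0, 1)
    y = x + 0.3 * rng.normal(size=(n, 1))      # likelihood p(y | x) = N(x, 0.3^2)

    # Importance weights for a shifted prior pi(x) = N(0.5, 1).
    w = np.exp(-0.5 * ((x - 0.5) ** 2 - x ** 2)).ravel()

    # Weighted kernel ridge regression: alpha solves (W K + n * lam * I) alpha = W x.
    K = rbf_gram(y, y)
    lam = 1e-3
    alpha = np.linalg.solve(np.diag(w) @ K + n * lam * np.eye(n), w[:, None] * x)

    y_star = np.array([[0.8]])
    # Analytical posterior mean under pi is (0.5 + 0.8 / 0.09) / (1 + 1 / 0.09) ~ 0.78.
    print((rbf_gram(y_star, y) @ alpha).item())
    ```

    The importance weights here are nonnegative by construction, which is one intuition for the improved numerical stability the abstract claims over inverting an estimated operator.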

    Strategic Outsourcing under Economies of Scale

    Economies of scale in upstream production can lead both disintegrated downstream firms and their vertically integrated rival to outsource intermediate goods offshore, even if offshore production has a moderate cost disadvantage compared to in-house production by the vertically integrated firm.