
    Learning in Markov Random Fields with Contrastive Free Energies

    Learning Markov random field (MRF) models is notoriously hard due to the presence of a global normalization factor. In this paper we present a new framework for learning MRF models based on the contrastive free energy (CF) objective function. In this scheme the parameters are updated in an attempt to match the average statistics of the data distribution and a distribution which is (partially or approximately) "relaxed" to the equilibrium distribution. We show that the maximum likelihood, mean field, contrastive divergence and pseudo-likelihood objectives can all be understood in this paradigm. Moreover, we propose and study a new learning algorithm: the "k-step Kikuchi/Bethe approximation". This algorithm is then tested on a conditional random field model with "skip-chain" edges to model long-range interactions in text data. It is demonstrated that, with no loss in accuracy, the average training time is brought down from 19 hours (BP-based learning) to 83 minutes, an order of magnitude improvement.
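
    A minimal sketch of the shared update rule the abstract describes, assuming a small binary pairwise MRF and k sweeps of Gibbs sampling as the "partial relaxation" (i.e. CD-k, one member of the family the paper unifies). The model size, learning rate, and data below are illustrative, not from the paper.

    # Contrastive update: move parameters along the difference between the data's
    # average sufficient statistics and those of a k-step relaxed distribution.
    import numpy as np

    rng = np.random.default_rng(0)

    def gibbs_sweep(x, W, b):
        """One full sweep of Gibbs sampling over all units of x in {0,1}^n."""
        for i in range(x.shape[1]):
            logit = x @ W[:, i] + b[i] - x[:, i] * W[i, i]  # exclude self-coupling
            p = 1.0 / (1.0 + np.exp(-logit))
            x[:, i] = (rng.random(x.shape[0]) < p).astype(float)
        return x

    def contrastive_update(data, W, b, k=1, lr=0.05):
        """CD-k step: relax a copy of the data for k Gibbs sweeps, then match
        pairwise and unary statistics between the two distributions."""
        relaxed = data.copy()
        for _ in range(k):
            relaxed = gibbs_sweep(relaxed, W, b)
        n = data.shape[0]
        dW = (data.T @ data - relaxed.T @ relaxed) / n
        np.fill_diagonal(dW, 0.0)  # no self-couplings
        db = data.mean(axis=0) - relaxed.mean(axis=0)
        return W + lr * dW, b + lr * db

    # Toy run: 200 samples over 6 binary units.
    data = (rng.random((200, 6)) < 0.5).astype(float)
    W, b = np.zeros((6, 6)), np.zeros(6)
    for _ in range(100):
        W, b = contrastive_update(data, W, b, k=3)

    Swapping the k Gibbs sweeps for a few iterations of mean-field or Bethe/Kikuchi updates gives the other members of the family the abstract lists.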

    Constant Approximation for $k$-Median and $k$-Means with Outliers via Iterative Rounding

    In this paper, we present a new iterative rounding framework for many clustering problems. Using this, we obtain an $(\alpha_1 + \epsilon \leq 7.081 + \epsilon)$-approximation algorithm for $k$-median with outliers, greatly improving upon the large implicit constant approximation ratio of Chen [Chen, SODA 2008]. For $k$-means with outliers, we give an $(\alpha_2 + \epsilon \leq 53.002 + \epsilon)$-approximation, which is the first $O(1)$-approximation for this problem. The iterative algorithm framework is very versatile; we show how it can be used to give $\alpha_1$- and $(\alpha_1 + \epsilon)$-approximation algorithms for the matroid and knapsack median problems respectively, improving upon the previous best approximation ratios of 8 [Swamy, ACM Trans. Algorithms] and 17.46 [Byrka et al., ESA 2015]. The natural LP relaxation for the $k$-median/$k$-means with outliers problem has an unbounded integrality gap. In spite of this negative result, our iterative rounding framework shows that we can round an LP solution to an almost-integral solution of small cost, in which we have at most two fractionally open facilities. Thus, the LP integrality gap arises due to the gap between almost-integral and fully-integral solutions. Then, using a pre-processing procedure, we show how to convert an almost-integral solution to a fully-integral solution, losing only a constant factor in the approximation ratio. By further using a sparsification technique, the additive factor loss incurred by the conversion can be reduced to any $\epsilon > 0$.
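
    The natural LP relaxation referred to above can be written down concretely. Below is a minimal sketch, assuming SciPy's linprog on a small random Euclidean instance; the variable layout (facility-opening variables y_i followed by connection variables x_ij) and all instance sizes are illustrative. The paper's contribution begins after this solve: iteratively rounding the fractional solution to one with at most two fractionally open facilities, and then to an integral one.

    # Natural LP relaxation for k-median with m outliers:
    #   minimize sum_{i,j} d(i,j) x_ij
    #   s.t.  sum_i x_ij <= 1            (each client connected at most once)
    #         sum_{i,j} x_ij >= n - m    (at most m clients left as outliers)
    #         x_ij <= y_i                (connect only to open facilities)
    #         sum_i y_i <= k             (open at most k facilities)
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    F, C, k, m = 5, 12, 2, 2                  # facilities, clients, budget, outliers
    pts_f, pts_c = rng.random((F, 2)), rng.random((C, 2))
    d = np.linalg.norm(pts_f[:, None, :] - pts_c[None, :, :], axis=2)  # d[i, j]

    # Variable vector: [y_0..y_{F-1}, x_00, ..., x_{F-1,C-1}] (x in row-major order).
    nv = F + F * C
    def xi(i, j): return F + i * C + j

    c = np.concatenate([np.zeros(F), d.ravel()])  # cost only on connections
    A, b = [], []

    for j in range(C):                            # sum_i x_ij <= 1
        row = np.zeros(nv)
        for i in range(F): row[xi(i, j)] = 1.0
        A.append(row); b.append(1.0)

    row = np.zeros(nv); row[F:] = -1.0            # -sum x_ij <= -(n - m)
    A.append(row); b.append(-(C - m))

    for i in range(F):                            # x_ij - y_i <= 0
        for j in range(C):
            row = np.zeros(nv); row[xi(i, j)] = 1.0; row[i] = -1.0
            A.append(row); b.append(0.0)

    row = np.zeros(nv); row[:F] = 1.0             # sum_i y_i <= k
    A.append(row); b.append(float(k))

    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=(0.0, 1.0), method="highs")
    y = res.x[:F]
    print("LP cost:", res.fun)
    print("fractionally open facilities:", np.flatnonzero((y > 1e-6) & (y < 1 - 1e-6)))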

    Mark K. Somogye, Plaintiff, v. Toledo Clinic, Inc., Defendant.


    My shelfie... Michelle K. Ryan


    A General Theory of Equivariant CNNs on Homogeneous Spaces

    We present a general theory of group equivariant convolutional neural networks (G-CNNs) on homogeneous spaces such as Euclidean space and the sphere. Feature maps in these networks represent fields on a homogeneous base space, and layers are equivariant maps between spaces of fields. The theory enables a systematic classification of all existing G-CNNs in terms of their symmetry group, base space, and field type. We also consider a fundamental question: what is the most general kind of equivariant linear map between feature spaces (fields) of given types? Following Mackey, we show that such maps correspond one-to-one with convolutions using equivariant kernels, and we characterize the space of such kernels.
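
    A minimal sketch of the simplest layer this theory covers: the lifting correlation from images on the plane to feature maps on the rototranslation group p4 (translations plus 90-degree rotations), with a check that rotating the input rotates each output plane and cyclically shifts the rotation axis, i.e. the output field transforms under the regular representation of C4. The array sizes and the NumPy/SciPy implementation are illustrative assumptions, not the paper's general construction.

    import numpy as np
    from scipy.signal import correlate2d

    def lift_p4(image, kernel):
        """Correlate the image with all four 90-degree rotations of the kernel.
        Output has shape (4, H, W): a field on p4, indexed by rotation element."""
        return np.stack([
            correlate2d(image, np.rot90(kernel, r), mode="same", boundary="fill")
            for r in range(4)
        ])

    rng = np.random.default_rng(0)
    img = rng.random((8, 8))
    ker = rng.random((3, 3))

    # Equivariance check: lift(rot(img))[r] == rot(lift(img)[r - 1 mod 4]).
    out = lift_p4(img, ker)
    out_rot = lift_p4(np.rot90(img), ker)
    expected = np.stack([np.rot90(out[(r - 1) % 4]) for r in range(4)])
    print("equivariant:", np.allclose(out_rot, expected))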