Block Belief Propagation for Parameter Learning in Markov Random Fields
Traditional learning methods for training Markov random fields require doing
inference over all variables to compute the likelihood gradient. The iteration
complexity for those methods therefore scales with the size of the graphical
models. In this paper, we propose \emph{block belief propagation learning}
(BBPL), which uses block-coordinate updates of approximate marginals to compute
approximate gradients, removing the need to run inference on the entire
graphical model. Thus, the iteration complexity of BBPL does not scale with the
size of the graphs. Despite these approximations, we prove that the method
converges to the same solution as that obtained with full inference per
iteration, and we empirically demonstrate its scalability improvements over
standard training methods.
Comment: Accepted to AAAI 201
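The abstract's core idea, block-coordinate message updates standing in for full inference when forming the likelihood gradient, can be sketched on a toy model. Below is a minimal illustration on an Ising ring with learnable node fields; the ring topology, fixed coupling `J`, and helper names (`update_block`, `beliefs`) are assumptions for exposition, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pairwise MRF: Ising variables on a ring, fixed coupling J,
# learnable per-node fields theta (all names here are illustrative).
n = 12
J = 0.3
nbrs = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

emp_mean = rng.uniform(-0.5, 0.5, size=n)   # stand-in for data statistics
theta = np.zeros(n)                          # parameters to learn
# One scalar message per directed edge (log-magnetization form).
msg = {(i, j): 0.0 for i in range(n) for j in nbrs[i]}

def update_block(block):
    """Refresh only the messages leaving nodes in `block` (partial inference)."""
    for i in block:
        for j in nbrs[i]:
            cavity = theta[i] + sum(msg[(k, i)] for k in nbrs[i] if k != j)
            msg[(i, j)] = np.arctanh(np.tanh(J) * np.tanh(cavity))

def beliefs():
    """Approximate marginal means from the current (possibly stale) messages."""
    return np.tanh(theta + np.array(
        [sum(msg[(k, i)] for k in nbrs[i]) for i in range(n)]))

lr, block_size = 0.5, 3
for t in range(400):
    block = rng.choice(n, size=block_size, replace=False)
    update_block(block)              # inference touches only one block
    grad = emp_mean - beliefs()      # approximate likelihood gradient
    theta += lr * grad               # gradient ascent on the log-likelihood

print(np.round(emp_mean - beliefs(), 3))   # residuals shrink toward zero
```

Only messages leaving the sampled block are refreshed each iteration, so the per-iteration cost depends on the block size rather than on `n`, which is the scaling behavior the abstract describes.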
High-Dimensional Private Empirical Risk Minimization by Greedy Coordinate Descent
In this paper, we study differentially private empirical risk minimization
(DP-ERM). It has been shown that the worst-case utility of DP-ERM degrades
polynomially as the dimension increases. This is a major obstacle to privately
learning large machine learning models. In high dimensions, it is common for
some of a model's parameters to carry more information than others. To exploit this,
we propose a differentially private greedy coordinate descent (DP-GCD)
algorithm. At each iteration, DP-GCD privately performs a coordinate-wise
gradient step along the gradient's (approximately) largest entry. We show
theoretically that DP-GCD can achieve a logarithmic dependence on the dimension
for a wide range of problems by naturally exploiting their structural
properties (such as quasi-sparse solutions). We illustrate this behavior
numerically on both synthetic and real datasets.
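Read as pseudocode, the described iteration is a report-noisy-max coordinate selection followed by a noisy step on that single coordinate. The sketch below illustrates this on a quasi-sparse least-squares problem; the objective, Laplace noise scales (`sel_noise`, `step_noise`), and step size are illustrative placeholders rather than the calibrated choices a real privacy analysis would require.

```python
import numpy as np

rng = np.random.default_rng(1)

# Quasi-sparse least-squares problem: only 3 of 50 coordinates matter.
X = rng.normal(size=(200, 50))
w_true = np.zeros(50)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.normal(size=200)

lam = 0.1
def grad(w):
    # Gradient of the ridge-regularized least-squares objective.
    return X.T @ (X @ w - y) / len(y) + lam * w

w = np.zeros(50)
lr = 0.5
sel_noise, step_noise = 0.05, 0.05   # placeholder scales, not calibrated

for t in range(100):
    g = grad(w)
    # Report-noisy-max: privately pick the (approximately) largest entry.
    j = np.argmax(np.abs(g) + rng.laplace(scale=sel_noise, size=g.shape))
    # Noisy gradient step on that single coordinate only.
    w[j] -= lr * (g[j] + rng.laplace(scale=step_noise))

print("largest learned coordinates:", np.argsort(-np.abs(w))[:3])
```

Because the noisy selection concentrates updates on the few informative coordinates, the ambient dimension enters mainly through the selection step, which matches the intuition behind the logarithmic dimension dependence claimed above.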