L0-ARM: Network Sparsification via Stochastic Binary Optimization
We consider network sparsification as an L0-norm regularized binary
optimization problem, where each unit of a neural network (e.g., a weight,
neuron, or channel) is equipped with a stochastic binary gate, whose
parameters are jointly optimized with the original network parameters. The
Augment-Reinforce-Merge (ARM), a recently proposed unbiased gradient estimator,
is investigated for this binary optimization problem. Compared to the hard
concrete gradient estimator of Louizos et al., ARM demonstrates superior
performance in pruning network architectures while retaining accuracies on par
with the baseline methods. Similar to the hard concrete estimator, ARM
also enables conditional computation during model training but with improved
effectiveness due to the exact binary stochasticity. Thanks to the flexibility
of ARM, many smooth or non-smooth parametric functions, such as scaled sigmoid
or hard sigmoid, can be used to parameterize this binary optimization problem
without sacrificing the unbiasedness of the ARM estimator; the hard concrete
estimator, in contrast, has to rely on the hard sigmoid function to achieve
conditional computation and thus accelerated training. Extensive experiments on
multiple public datasets demonstrate state-of-the-art pruning rates while
largely preserving the accuracies of the baseline methods. The resulting
algorithm, L0-ARM, sparsifies Wide-ResNet models on CIFAR-10 and CIFAR-100,
while the hard concrete estimator cannot. The code is publicly available at
https://github.com/leo-yangli/l0-arm.
Comment: Published as a conference paper at ECML 2019
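To make the gating scheme concrete, here is a minimal sketch of the single-sample ARM gradient for a vector of stochastic binary gates. It is a generic illustration under stated assumptions, not code from the paper's repository: arm_gradient, the toy objective, and the L0 penalty weight 0.1 are all hypothetical.

```python
import numpy as np

def sigmoid(phi):
    return 1.0 / (1.0 + np.exp(-phi))

def arm_gradient(f, phi, rng):
    """Single-sample ARM estimate of d/dphi E_{z ~ Bernoulli(sigmoid(phi))}[f(z)].

    One shared uniform draw u yields two antithetic binary configurations;
    their objective difference, scaled by (u - 1/2), is an unbiased gradient.
    """
    u = rng.uniform(size=phi.shape)
    z_true = (u > sigmoid(-phi)).astype(float)  # 1[u > sigma(-phi)]
    z_anti = (u < sigmoid(phi)).astype(float)   # 1[u < sigma(phi)]
    return (f(z_true) - f(z_anti)) * (u - 0.5)

# Hypothetical toy use: drive gates toward a sparse target under an L0 penalty.
rng = np.random.default_rng(0)
target = np.array([1., 1., 0., 0., 0., 0., 0., 0.])  # "useful" units (made up)
objective = lambda z: np.sum((z - target) ** 2) + 0.1 * z.sum()  # loss + L0 term
phi = np.zeros(8)                                    # gate logits, p(z=1) = 0.5
for _ in range(2000):
    phi -= 0.1 * arm_gradient(objective, phi, rng)   # SGD on the gate logits
# Gates whose logits go strongly negative are effectively pruned.
```

Because both arms reuse the same uniform draw, the estimator stays unbiased while typically having much lower variance than plain REINFORCE.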
Joint state-parameter estimation of a nonlinear stochastic energy balance model from sparse noisy data
While nonlinear stochastic partial differential equations arise naturally in
spatiotemporal modeling, inference for such systems often faces two major
challenges: sparse noisy data and ill-posedness of the inverse problem of
parameter estimation. To overcome the challenges, we introduce a strongly
regularized posterior by normalizing the likelihood and by imposing physical
constraints through priors of the parameters and states. We investigate joint
parameter-state estimation by the regularized posterior in a physically
motivated nonlinear stochastic energy balance model (SEBM) for paleoclimate
reconstruction. The high-dimensional posterior is sampled by a particle Gibbs
sampler that combines MCMC with an optimal particle filter exploiting the
structure of the SEBM. In tests using either Gaussian or uniform priors based
on the physical range of parameters, the regularized posteriors overcome the
ill-posedness and lead to samples within physical ranges, quantifying the
uncertainty in the estimates. Due to the ill-posedness and the regularization,
the posterior of the parameters exhibits relatively large uncertainty;
consequently, the maximum of the posterior, which is the minimizer in a
variational approach, can vary widely. In contrast, the posterior of the states
generally concentrates near the truth, substantially filtering out observation
noise and reducing uncertainty in the unconstrained SEBM.
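As a rough sketch of how such a sampler is organized (assuming a toy scalar model x_t = a*x_{t-1} + noise with Gaussian observations, not the SEBM itself), the particle Gibbs loop below alternates a conditional bootstrap particle filter for the states with a Metropolis step for the parameter under a uniform prior on a physically motivated range. cond_pf and particle_gibbs are hypothetical names, and the paper's optimal filter is replaced by a plain bootstrap filter.

```python
import numpy as np

def cond_pf(y, a, s_x, s_y, x_ref, n_part, rng):
    """Conditional bootstrap particle filter: particle 0 is pinned to the
    reference trajectory x_ref, which makes the kernel leave p(x | y, a)
    invariant (the particle Gibbs state update)."""
    T = len(y)
    X = np.zeros((T, n_part))
    anc = np.zeros((T, n_part), dtype=int)       # ancestor indices
    X[0] = rng.normal(0.0, s_x, n_part)
    X[0, 0] = x_ref[0]
    logw = -0.5 * ((y[0] - X[0]) / s_y) ** 2
    for t in range(1, T):
        w = np.exp(logw - logw.max()); w /= w.sum()
        anc[t] = rng.choice(n_part, n_part, p=w)  # multinomial resampling
        X[t] = a * X[t - 1, anc[t]] + rng.normal(0.0, s_x, n_part)
        X[t, 0], anc[t, 0] = x_ref[t], 0          # keep the reference alive
        logw = -0.5 * ((y[t] - X[t]) / s_y) ** 2
    w = np.exp(logw - logw.max()); w /= w.sum()
    k = rng.choice(n_part, p=w)                   # draw the output trajectory
    traj = np.empty(T)
    traj[-1] = X[-1, k]
    for t in range(T - 1, 0, -1):
        k = anc[t, k]
        traj[t - 1] = X[t - 1, k]
    return traj

def particle_gibbs(y, n_iter=500, n_part=200, s_x=1.0, s_y=0.5, seed=0):
    """Alternate the conditional filter for the states with a Metropolis step
    for the parameter a under a uniform prior on (-1, 1)."""
    rng = np.random.default_rng(seed)
    a, x = 0.0, np.zeros(len(y))
    samples = []
    for _ in range(n_iter):
        x = cond_pf(y, a, s_x, s_y, x, n_part, rng)
        loglik = lambda av: -0.5 * np.sum((x[1:] - av * x[:-1]) ** 2) / s_x ** 2
        a_prop = a + 0.1 * rng.normal()
        if abs(a_prop) < 1.0 and np.log(rng.uniform()) < loglik(a_prop) - loglik(a):
            a = a_prop
        samples.append((a, x.copy()))
    return samples

# Usage sketch on simulated data.
rng = np.random.default_rng(1)
x_true = np.zeros(100)
for t in range(1, 100):
    x_true[t] = 0.8 * x_true[t - 1] + rng.normal()
y = x_true + 0.5 * rng.normal(size=100)
draws = particle_gibbs(y)
```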
Bayesian Structure Learning for Markov Random Fields with a Spike and Slab Prior
In recent years a number of methods have been developed for automatically
learning the (sparse) connectivity structure of Markov Random Fields. These
methods are mostly based on L1-regularized optimization, which has several
disadvantages, such as the inability to assess model uncertainty and the need
for expensive cross-validation to find the optimal regularization parameter.
Moreover, the model's predictive performance may degrade dramatically with a
suboptimal value of the regularization parameter (even though such a value is
sometimes desirable to induce sparseness). We propose a fully Bayesian approach
based on a "spike and slab"
prior (similar to L0 regularization) that does not suffer from these
shortcomings. We develop an approximate MCMC method combining Langevin dynamics
and reversible jump MCMC to conduct inference in this model. Experiments show
that the proposed model learns a good combination of the structure and
parameter values without the need for separate hyper-parameter tuning.
Moreover, the model's predictive performance is much more robust than that of
L1-based methods under hyper-parameter settings that induce highly sparse model
structures.
Comment: Accepted at the Conference on Uncertainty in Artificial Intelligence
(UAI), 2012
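For intuition about the spike-and-slab prior and the reversible-jump component, the sketch below toggles edges of a tiny Ising MRF whose partition function is enumerated exactly; this sidesteps the approximate-likelihood and Langevin machinery the paper actually needs for realistic graphs. ising_loglik, rj_step, and the placeholder data are hypothetical.

```python
import itertools
import numpy as np

def ising_loglik(data, W):
    """Exact log-likelihood of +/-1 observations under an Ising MRF; the
    partition function is enumerated, so this only works for a few nodes."""
    n = W.shape[0]
    states = np.array(list(itertools.product([-1, 1], repeat=n)))
    log_z = np.logaddexp.reduce(0.5 * np.einsum('si,ij,sj->s', states, W, states))
    return 0.5 * np.einsum('si,ij,sj->', data, W, data) - len(data) * log_z

def rj_step(data, W, G, pi=0.2, slab_sd=1.0, rng=None):
    """One reversible-jump move: toggle a uniformly chosen edge. Proposing the
    new weight from the slab prior N(0, slab_sd^2) makes the proposal and prior
    densities cancel, so the acceptance ratio reduces to the likelihood ratio
    times the prior odds of the edge indicator."""
    if rng is None:
        rng = np.random.default_rng()
    n = W.shape[0]
    i, j = rng.choice(n, size=2, replace=False)
    W_new, G_new = W.copy(), G.copy()
    if G[i, j]:                                  # death: remove the edge
        W_new[i, j] = W_new[j, i] = 0.0
        G_new[i, j] = G_new[j, i] = 0
        log_odds = np.log(1 - pi) - np.log(pi)
    else:                                        # birth: add the edge
        w = rng.normal(0.0, slab_sd)
        W_new[i, j] = W_new[j, i] = w
        G_new[i, j] = G_new[j, i] = 1
        log_odds = np.log(pi) - np.log(1 - pi)
    if np.log(rng.uniform()) < ising_loglik(data, W_new) - ising_loglik(data, W) + log_odds:
        return W_new, G_new
    return W, G

# Usage sketch: 4 spins, placeholder data, start from the empty graph.
rng = np.random.default_rng(1)
data = rng.choice([-1, 1], size=(200, 4))
W = np.zeros((4, 4)); G = np.zeros((4, 4), dtype=int)
for _ in range(1000):
    W, G = rj_step(data, W, G, rng=rng)
```

Between structure moves, the full method would also refresh the active weights (the paper uses Langevin dynamics); a random-walk Metropolis update on the nonzero entries of W would be the simplest stand-in here.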