K2-ABC: Approximate Bayesian Computation with Kernel Embeddings
Complicated generative models often result in a situation where computing the
likelihood of observed data is intractable, while simulating from the
conditional density given a parameter value is relatively easy. Approximate
Bayesian Computation (ABC) is a paradigm that enables simulation-based
posterior inference in such cases by measuring the similarity between simulated
and observed data in terms of a chosen set of summary statistics. However,
there is no general rule to construct sufficient summary statistics for complex
models. Insufficient summary statistics will "leak" information, which leads to
ABC algorithms yielding samples from an incorrect (partial) posterior. In this
paper, we propose a fully nonparametric ABC paradigm which circumvents the need
for manually selecting summary statistics. Our approach, K2-ABC, uses maximum
mean discrepancy (MMD) as a dissimilarity measure between the distributions
over observed and simulated data. MMD is easily estimated as the squared
difference between their empirical kernel embeddings. Experiments on a
simulated scenario and a real-world biological problem illustrate the
effectiveness of the proposed algorithm.
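A minimal sketch of the core computation, assuming a Gaussian RBF kernel on
1-D samples and a soft exponential weighting of prior draws in the spirit of
the paper; the function names, bandwidth, and epsilon values are illustrative,
not the paper's exact algorithm:

import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    # Gaussian RBF kernel matrix between 1-D sample arrays x and y
    d2 = (x[:, None] - y[None, :]) ** 2
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2(x, y, bandwidth=1.0):
    # Biased estimator of squared MMD: the squared distance between
    # the empirical kernel mean embeddings of x and y
    return (rbf_kernel(x, x, bandwidth).mean()
            + rbf_kernel(y, y, bandwidth).mean()
            - 2.0 * rbf_kernel(x, y, bandwidth).mean())

def k2_abc(y_obs, simulate, prior_sample, n_particles=1000, epsilon=0.05):
    # Soft ABC: each prior draw is weighted by exp(-MMD^2 / epsilon),
    # so no summary statistics and no hard accept/reject threshold
    thetas = [prior_sample() for _ in range(n_particles)]
    w = np.array([np.exp(-mmd2(y_obs, simulate(t)) / epsilon)
                  for t in thetas])
    return np.array(thetas), w / w.sum()

Posterior expectations are then estimated as weighted averages over the
particles; in practice the kernel bandwidth is often set by a heuristic such
as the median pairwise distance, and epsilon is a tuning parameter.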
BayesNAS: A Bayesian Approach for Neural Architecture Search
One-Shot Neural Architecture Search (NAS) is a promising method to
significantly reduce search time without any separate training. It can be
treated as a Network Compression problem on the architecture parameters from an
over-parameterized network. However, there are two issues associated with most
one-shot NAS methods. First, dependencies between a node and its predecessors
and successors are often disregarded, which results in improper treatment of
zero operations. Second, pruning architecture parameters based on their
magnitude is questionable. In this paper, we employ the classic Bayesian
learning approach to alleviate these two issues by modeling architecture
parameters using hierarchical automatic relevance determination (HARD) priors.
Unlike other NAS methods, we train the over-parameterized network for only one
epoch, then update the architecture. Impressively, this enables us to find the
architecture on CIFAR-10 within only 0.2 GPU days using a single GPU.
Competitive performance can also be achieved by transferring to ImageNet. As a
byproduct, our approach can be applied directly to compress convolutional
neural networks by enforcing structural sparsity, which achieves extremely
sparse networks without accuracy deterioration.
Comment: International Conference on Machine Learning 2019
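To illustrate the relevance-determination mechanism the paper builds on, here
is the textbook automatic relevance determination (sparse Bayesian learning)
procedure for a linear model, with one precision hyperparameter per weight and
MacKay-style fixed-point updates. This is only the classical ARD idea, not the
paper's hierarchical prior over dependent architecture parameters, and all
identifiers are ours:

import numpy as np

def ard_prune(Phi, y, n_iters=50, alpha_max=1e6, noise_var=0.01):
    # ARD on y ~ N(Phi @ w, noise_var) with prior w_i ~ N(0, 1/alpha_i).
    # Weights whose precision alpha_i diverges carry no evidence and
    # are pruned, rather than being pruned by raw magnitude.
    n, d = Phi.shape
    alpha = np.ones(d)
    for _ in range(n_iters):
        # Posterior over weights given the current precisions
        Sigma = np.linalg.inv(np.diag(alpha) + Phi.T @ Phi / noise_var)
        mu = Sigma @ Phi.T @ y / noise_var
        # MacKay fixed-point update of the per-weight precisions
        gamma = 1.0 - alpha * np.diag(Sigma)
        alpha = np.minimum(gamma / (mu ** 2 + 1e-12), alpha_max)
    keep = alpha < alpha_max  # surviving (relevant) parameters
    return mu, keep

The returned boolean mask plays the role of the architecture decision: an
operation whose parameter precision has diverged is treated as irrelevant and
removed, which is a principled alternative to magnitude-based pruning.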
Approximate Bayesian Computation by Subset Simulation
A new Approximate Bayesian Computation (ABC) algorithm for Bayesian updating
of model parameters is proposed in this paper, which combines the ABC
principles with the technique of Subset Simulation for efficient rare-event
simulation, first developed by S.K. Au and J.L. Beck [1]. It has been named
ABC-SubSim. The idea is to choose the nested decreasing sequence of regions in
Subset Simulation as the regions that correspond to increasingly closer
approximations of the actual data vector in observation space. The efficiency
of the algorithm is demonstrated in two examples that illustrate some of the
challenges faced in real-world applications of ABC. We show that the proposed
algorithm outperforms other recent sequential ABC algorithms in terms of
computational efficiency while achieving the same, or better, measure of
accuracy in the posterior distribution. We also show that ABC-SubSim readily
provides an estimate of the evidence (marginal likelihood) for posterior model
class assessment, as a by-product.
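A compact sketch of the scheme under simplifying assumptions: a Gaussian
random-walk proposal, a fixed number of levels, and no stopping rule tied to a
final tolerance. The modified Metropolis details of the actual algorithm are
omitted, and names such as abc_subsim, distance, and p0 are ours:

import numpy as np

def abc_subsim(y_obs, simulate, prior_sample, prior_logpdf, distance,
               n_samples=1000, p0=0.2, n_levels=5, proposal_std=0.5):
    # Subset Simulation where each nested region is
    # {theta : distance(simulate(theta), y_obs) <= tol},
    # with tol set adaptively as the p0-quantile of current distances.
    n_seeds = int(p0 * n_samples)
    n_chain = n_samples // n_seeds  # chain length per seed

    thetas = np.array([prior_sample() for _ in range(n_samples)])
    dists = np.array([distance(simulate(t), y_obs) for t in thetas])

    for _ in range(n_levels):
        # Keep the best p0 fraction as seeds; their worst distance
        # defines the next, tighter tolerance level
        order = np.argsort(dists)
        tol = dists[order[n_seeds - 1]]
        seeds, seed_d = thetas[order[:n_seeds]], dists[order[:n_seeds]]

        new_t, new_d = [], []
        for t, d in zip(seeds, seed_d):
            for _ in range(n_chain):
                cand = t + proposal_std * np.random.randn(*np.shape(t))
                # Metropolis step on the prior; only prior-accepted
                # candidates are simulated, and a move is kept only if
                # the simulated data stays inside the current region
                log_a = prior_logpdf(cand) - prior_logpdf(t)
                if np.log(np.random.rand()) < log_a:
                    d_cand = distance(simulate(cand), y_obs)
                    if d_cand <= tol:
                        t, d = cand, d_cand
                new_t.append(t)
                new_d.append(d)
        thetas, dists = np.array(new_t), np.array(new_d)
    return thetas, dists

Because each level reuses the best fraction of the previous level's samples
as MCMC seeds, the tolerance shrinks adaptively and far fewer model
simulations are wasted than in plain rejection ABC.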