Consistent Second-Order Conic Integer Programming for Learning Bayesian Networks
Bayesian Networks (BNs) represent conditional probability relations among a
set of random variables (nodes) in the form of a directed acyclic graph (DAG),
and have found diverse applications in knowledge discovery. We study the
problem of learning the sparse DAG structure of a BN from continuous
observational data. The central problem can be modeled as a mixed-integer
program with an objective function composed of a convex quadratic loss function
and a regularization penalty subject to linear constraints. The optimal
solution to this mathematical program is known to have desirable statistical
properties under certain conditions. However, state-of-the-art optimization
solvers cannot obtain provably optimal solutions to the existing
mathematical formulations for medium-sized problems within reasonable
computational times. To address this difficulty, we tackle the problem from
both computational and statistical perspectives. On the one hand, we propose a
concrete early stopping criterion to terminate the branch-and-bound process in
order to obtain a near-optimal solution to the mixed-integer program, and
establish the consistency of this approximate solution. On the other hand, we
improve the existing formulations by replacing the linear big-$M$ constraints
that represent the relationship between the continuous and binary indicator
variables with second-order conic constraints. Our numerical results
demonstrate the effectiveness of the proposed approaches.
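As context for the big-$M$ versus conic contrast, here is a hedged sketch of the two ways an arc weight can be tied to its binary indicator; the notation ($\beta_{jk}$ for the weight of arc $k \to j$, $g_{jk}$ for its indicator) is illustrative and may differ from the paper's. The linear big-$M$ coupling is
\[
-M\, g_{jk} \;\le\; \beta_{jk} \;\le\; M\, g_{jk}, \qquad g_{jk} \in \{0,1\},
\]
while a standard conic alternative (the perspective reformulation for a convex quadratic loss; whether it matches the paper's exact construction cannot be determined from the abstract) introduces an auxiliary $s_{jk} \ge 0$ that carries the quadratic term of the objective and imposes the rotated second-order cone constraint
\[
\beta_{jk}^2 \;\le\; s_{jk}\, g_{jk}.
\]
For any fractional relaxation value $g_{jk} \in (0,1)$ this forces $s_{jk} \ge \beta_{jk}^2 / g_{jk} > \beta_{jk}^2$, so the continuous relaxation is tighter than with the big-$M$ bounds alone.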
Bayesian network learning with cutting planes
The problem of learning the structure of Bayesian networks from complete
discrete data with a limit on parent set size is considered. Learning is cast
explicitly as an optimisation problem where the goal is to find a BN structure
which maximises log marginal likelihood (BDe score). Integer programming,
specifically the SCIP framework, is used to solve this optimisation problem.
Acyclicity constraints are added to the integer program (IP) during solving in
the form of cutting planes. Finding good cutting planes is the key to the
success of the approach; the search for such cutting planes is effected using a
sub-IP. Results show that this is a particularly fast method for exact BN
learning.
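For readers unfamiliar with these cuts: the acyclicity cutting planes in this line of work are cluster constraints (introduced by Jaakkola et al. and used in Cussens's IP approach), expressed over binary family variables $x_{v \leftarrow P}$ that equal 1 when node $v$ is assigned parent set $P$. In the usual notation (the paper's exact symbols may differ):
\[
\sum_{v \in C} \;\; \sum_{P \,:\, P \cap C = \emptyset} x_{v \leftarrow P} \;\ge\; 1
\qquad \text{for every cluster } C \subseteq V,\ |C| \ge 2,
\]
i.e., in every cluster of nodes, at least one node must take all of its parents from outside the cluster. Any directed cycle would violate this constraint for the cluster formed by its nodes, so adding violated cluster cuts during solving drives the IP toward acyclic solutions; the sub-IP mentioned above searches for such violated clusters.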
Neural Architecture Search using Deep Neural Networks and Monte Carlo Tree Search
Neural Architecture Search (NAS) has shown great success in automating the
design of neural networks, but the prohibitive amount of computations behind
current NAS methods requires further investigations in improving the sample
efficiency and the network evaluation cost to get better results in a shorter
time. In this paper, we present a novel scalable Monte Carlo Tree Search (MCTS)
based NAS agent, named AlphaX, to tackle these two aspects. AlphaX improves the
search efficiency by adaptively balancing exploration and exploitation at
the state level, and by using a Meta-Deep Neural Network (DNN) to predict network
accuracies, biasing the search toward promising regions. To amortize the
network evaluation cost, AlphaX accelerates MCTS rollouts with a distributed
design and reduces the number of epochs in evaluating a network by transfer
learning, which is guided with the tree structure in MCTS. In 12 GPU days and
1000 samples, AlphaX found an architecture that reaches 97.84% top-1 accuracy
on CIFAR-10 and 75.5% top-1 accuracy on ImageNet, exceeding SOTA NAS methods
in both accuracy and sample efficiency. We also evaluate
AlphaX on NASBench-101, a large-scale NAS dataset; AlphaX is 3x and 2.8x more
sample-efficient than Random Search and Regularized Evolution in finding the
global optimum. Finally, we show that the searched architecture improves a variety
of vision applications, from Neural Style Transfer to Image Captioning and
Object Detection.
Comment: To appear in the Thirty-Fourth AAAI Conference on Artificial
Intelligence (AAAI-2020).
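For readers unfamiliar with how MCTS balances exploration and exploitation at the state level, the generic UCT selection rule that agents like AlphaX build on can be sketched in a few lines of Python. This is a minimal illustrative sketch, not AlphaX's implementation; the dictionary layout, the names, and the exploration constant are all assumptions.

```python
import math

def uct_select(children, exploration_c=1.4):
    """Pick the child maximizing UCB1: mean value (exploitation) plus
    an exploration bonus that shrinks as a child accumulates visits."""
    total_visits = sum(c["visits"] for c in children)

    def score(c):
        if c["visits"] == 0:
            return float("inf")  # always expand unvisited children first
        mean_value = c["value_sum"] / c["visits"]
        bonus = exploration_c * math.sqrt(math.log(total_visits) / c["visits"])
        return mean_value + bonus

    return max(children, key=score)

# Example: three candidate architecture states with made-up statistics.
children = [
    {"name": "conv3x3", "visits": 10, "value_sum": 9.1},
    {"name": "conv5x5", "visits": 3,  "value_sum": 2.4},
    {"name": "maxpool", "visits": 0,  "value_sum": 0.0},
]
print(uct_select(children)["name"])  # -> "maxpool" (unvisited wins)
```

In AlphaX, per the abstract, the value estimates at each state are additionally biased by the Meta-DNN's accuracy predictions, which is what steers the search toward promising regions of the architecture space.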