Tiered Pruning for Efficient Differentiable Inference-Aware Neural Architecture Search
We propose three novel pruning techniques to improve the cost and results of
inference-aware Differentiable Neural Architecture Search (DNAS). First, we
introduce Prunode, a stochastic bi-path building block for DNAS, which can
search over inner hidden dimensions with O(1) memory and compute complexity.
Second, we present
an algorithm for pruning blocks within a stochastic layer of the SuperNet
during the search. Third, we describe a novel technique for pruning unnecessary
stochastic layers during the search. The optimized models resulting from the
search are called PruNet and establish a new state-of-the-art Pareto frontier
for NVIDIA V100 in terms of inference latency for ImageNet Top-1 image
classification accuracy. PruNet as a backbone also outperforms GPUNet and
EfficientNet on the COCO object detection task on inference latency relative to
mean Average Precision (mAP).
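
A minimal PyTorch sketch may help make the first technique concrete. Everything here (the class name BiPathBlock, the weight-slicing scheme, the sigmoid mixing parameter) is an illustrative assumption rather than the paper's implementation; what it demonstrates is that only two candidate inner widths are ever materialized, so memory and compute stay constant no matter how many widths the search ranges over.

# Hedged sketch of a stochastic bi-path block for inner-width search.
# BiPathBlock, the slicing scheme, and the sigmoid gate are assumptions,
# not the published Prunode implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiPathBlock(nn.Module):
    def __init__(self, channels, max_hidden, lo, hi):
        super().__init__()
        self.conv_in = nn.Conv2d(channels, max_hidden, 1)   # widest 1x1 conv; sliced per path
        self.conv_out = nn.Conv2d(max_hidden, channels, 1)
        self.lo, self.hi = lo, hi                           # the two live candidate widths
        self.alpha = nn.Parameter(torch.zeros(1))           # architecture parameter

    def _path(self, x, width):
        # Run the block at one candidate inner width by slicing shared weights.
        h = F.conv2d(x, self.conv_in.weight[:width], self.conv_in.bias[:width])
        return F.conv2d(F.relu(h), self.conv_out.weight[:, :width], self.conv_out.bias)

    def forward(self, x):
        p = torch.sigmoid(self.alpha)                       # soft choice between the two paths
        return p * self._path(x, self.hi) + (1 - p) * self._path(x, self.lo)

During a search, the pair (lo, hi) would be narrowed toward whichever path the architecture parameter favors, so the width search never holds more than two paths in memory at once.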
Constrained Optimization via Exact Augmented Lagrangian and Randomized Iterative Sketching
We consider solving equality-constrained nonlinear, nonconvex optimization
problems. This class of problems appears widely in a variety of applications in
machine learning and engineering, ranging from constrained deep neural
networks, to optimal control, to PDE-constrained optimization. We develop an
adaptive inexact Newton method for this problem class. In each iteration, we
solve the Lagrangian Newton system inexactly via a randomized iterative
sketching solver, and select a suitable stepsize by performing line search on
an exact augmented Lagrangian merit function. The randomized solvers have
advantages over deterministic linear system solvers by significantly reducing
per-iteration flops complexity and storage cost, when equipped with suitable
sketching matrices. Our method adaptively controls the accuracy of the
randomized solver and the penalty parameters of the exact augmented Lagrangian,
to ensure that the inexact Newton direction is a descent direction of the exact
augmented Lagrangian. This allows us to establish global almost-sure
convergence. We also show that a unit stepsize is admissible locally, so that
our method exhibits local linear convergence. Furthermore, we prove that the
linear convergence can be strengthened to superlinear convergence if we
gradually sharpen the adaptive accuracy condition on the randomized solver. We
demonstrate the superior performance of our method on benchmark nonlinear
problems in the CUTEst test set, constrained logistic regression with data from
LIBSVM, and a PDE-constrained problem.
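
As a rough structural illustration, the NumPy sketch below performs one iteration of such a method: the Lagrangian Newton (KKT) system is solved inexactly with randomized Kaczmarz, a simple row-sketching solver, and the stepsize comes from backtracking line search on an augmented Lagrangian merit function. The fixed penalty mu, the sufficient-decrease test, and the use of a plain (rather than exact) augmented Lagrangian are simplifying assumptions; the paper additionally adapts the solver accuracy and the penalty parameters, which this sketch omits.

# Illustrative sketch only, not the paper's algorithm.
import numpy as np

def kaczmarz(A, b, iters=500):
    # Randomized Kaczmarz: repeatedly project onto one sampled row equation.
    # This is sketch-and-project with single-row sketching matrices.
    x = np.zeros(A.shape[1])
    probs = (A**2).sum(axis=1) / (A**2).sum()           # sample rows by squared norm
    for _ in range(iters):
        i = np.random.choice(A.shape[0], p=probs)
        x += (b[i] - A[i] @ x) / (A[i] @ A[i]) * A[i]
    return x

def newton_step(x, lam, f, grad_f, c, jac_c, hess_lag, mu=10.0):
    # One inexact Newton iteration for min f(x) s.t. c(x) = 0.
    J, m = jac_c(x), c(x).size
    K = np.block([[hess_lag(x, lam), J.T],
                  [J, np.zeros((m, m))]])               # Lagrangian Newton (KKT) system
    rhs = -np.concatenate([grad_f(x) + J.T @ lam, c(x)])
    d = kaczmarz(K, rhs)                                # inexact solve via sketching
    dx, dlam = d[:x.size], d[x.size:]

    def merit(z, l):                                    # augmented Lagrangian merit function
        return f(z) + l @ c(z) + 0.5 * mu * c(z) @ c(z)

    alpha = 1.0                                         # backtracking line search on the merit
    while merit(x + alpha * dx, lam + alpha * dlam) > merit(x, lam) - 1e-4 * alpha * (d @ d):
        alpha *= 0.5
        if alpha < 1e-8:
            break
    return x + alpha * dx, lam + alpha * dlam

Each Kaczmarz step touches a single row of K, so per-iteration flops scale with the row size rather than with a full factorization of the system, which is the storage and flops advantage the abstract attributes to suitable sketching matrices.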