Personalizing gesture recognition using hierarchical Bayesian neural networks
Building robust classifiers trained on data susceptible to group- or subject-specific variations is a challenging pattern recognition problem. We develop hierarchical Bayesian neural networks to capture subject-specific variations and share statistical strength across subjects. Leveraging recent work on learning Bayesian neural networks, we build fast, scalable algorithms for inferring the posterior distribution over all network weights in the hierarchy. We also develop methods for adapting our model to new subjects when a small amount of subject-specific personalization data is available. Finally, we investigate active learning algorithms for interactively labeling personalization data in resource-constrained scenarios. Focusing on the problem of gesture recognition, where inter-subject variations are commonplace, we demonstrate the effectiveness of our proposed techniques. We test our framework on three widely used gesture recognition datasets, achieving personalization performance competitive with the state of the art.
http://openaccess.thecvf.com/content_cvpr_2017/html/Joshi_Personalizing_Gesture_Recognition_CVPR_2017_paper.html
Published version
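The abstract does not give the model's exact form, but the core idea of a hierarchical prior that shares statistical strength across subjects can be sketched as follows. All names, dimensions, and the subject-level spread `subject_sigma` are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hierarchical prior over per-subject weight vectors:
# a shared global mean is drawn once, and each subject's weights are
# drawn around it, so subjects share statistical strength (partial pooling).
n_subjects, dim = 5, 3
global_mu = rng.normal(0.0, 1.0, size=dim)   # shared across all subjects
subject_sigma = 0.1                          # subject-level spread (assumed)

subject_weights = np.stack([
    rng.normal(global_mu, subject_sigma) for _ in range(n_subjects)
])

# Each subject's weights stay close to the shared mean; adapting to a new
# subject amounts to drawing (or inferring) one more row of this matrix.
deviations = np.linalg.norm(subject_weights - global_mu, axis=1)
print(subject_weights.shape)  # (5, 3)
```

In the full model the per-subject weights would be inferred from that subject's personalization data rather than sampled, with the shared mean regularizing subjects that have little data.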
Concrete Dropout
Dropout is used as a practical tool to obtain uncertainty estimates in large vision models and reinforcement learning (RL) tasks. But to obtain well-calibrated uncertainty estimates, a grid search over the dropout probabilities is necessary: a prohibitive operation with large models, and an impossible one in RL. We propose a new dropout variant which gives improved performance and better-calibrated uncertainties. Relying on recent developments in Bayesian deep learning, we use a continuous relaxation of dropout's discrete masks. Together with a principled optimisation objective, this allows for automatic tuning of the dropout probability in large models, and as a result faster experimentation cycles. In RL this allows the agent to adapt its uncertainty dynamically as more data is observed. We analyse the proposed variant extensively on a range of tasks and give insights into common practice in the field, where larger dropout probabilities are often used in deeper model layers.
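The continuous relaxation mentioned in the abstract can be sketched with the Concrete (Gumbel-softmax style) trick: a Bernoulli drop mask is replaced by a sigmoid of noisy logits, so the mask stays differentiable in the drop probability p. The `temperature` and `eps` values below are assumed for illustration.

```python
import numpy as np

def concrete_dropout_mask(p, shape, temperature=0.1, eps=1e-7, rng=None):
    """Continuous relaxation of a Bernoulli dropout mask (a sketch).

    p is the drop probability. The returned soft mask is near 0 where a
    unit would be dropped and near 1 where it would be kept, but remains
    differentiable in p, so p can be tuned by gradient descent instead of
    grid search.
    """
    rng = rng or np.random.default_rng()
    u = rng.uniform(eps, 1.0 - eps, size=shape)
    drop_logit = (np.log(p + eps) - np.log(1.0 - p + eps)
                  + np.log(u) - np.log(1.0 - u))
    z = 1.0 / (1.0 + np.exp(-drop_logit / temperature))  # soft "drop" indicator
    return 1.0 - z                                       # soft "keep" mask

mask = concrete_dropout_mask(p=0.3, shape=(100_000,),
                             rng=np.random.default_rng(0))
print(mask.mean())  # close to the keep rate 1 - p = 0.7 at low temperature
```

As the temperature goes to zero the soft mask approaches the usual hard Bernoulli mask; at finite temperature the mean keep rate still matches 1 - p while gradients with respect to p remain well-defined.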
Depth uncertainty in neural networks
Existing methods for estimating uncertainty in deep learning tend to require multiple forward passes, making them unsuitable for applications where computational resources are limited. To solve this, we perform probabilistic reasoning over the depth of neural networks. Different depths correspond to subnetworks which share weights and whose predictions are combined via marginalisation, yielding model uncertainty. By exploiting the sequential structure of feed-forward networks, we are able to both evaluate our training objective and make predictions with a single forward pass. We validate our approach on real-world regression and image classification tasks. Our approach provides uncertainty calibration, robustness to dataset shift, and accuracies competitive with more computationally expensive baselines.
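The single-pass marginalisation over depth can be sketched as follows: one forward pass through a stack of layers, reading out a prediction after every depth with a shared output head, then averaging the per-depth predictions under a distribution over depths. All shapes, the random weights, and the uniform depth distribution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# One forward pass; predictions collected at every depth.
n_layers, d_in, d_hidden, n_classes = 4, 8, 16, 3
x = rng.normal(size=(1, d_in))

W_in = rng.normal(size=(d_in, d_hidden))
W_hidden = [rng.normal(size=(d_hidden, d_hidden)) for _ in range(n_layers - 1)]
W_out = rng.normal(size=(d_hidden, n_classes))   # output head shared by all depths

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

h = np.tanh(x @ W_in)
per_depth = [softmax(h @ W_out)]        # prediction of the depth-1 subnetwork
for W in W_hidden:                      # deeper subnetworks reuse earlier layers
    h = np.tanh(h @ W)
    per_depth.append(softmax(h @ W_out))

# Marginalise over depth; disagreement between depths signals model uncertainty.
depth_probs = np.full(n_layers, 1.0 / n_layers)  # assumed uniform depth belief
marginal = sum(p * pred for p, pred in zip(depth_probs, per_depth))
print(marginal.shape)  # (1, 3)
```

In the method described by the abstract the distribution over depths would be learned rather than fixed to uniform, but the cost structure is the same: every per-depth prediction falls out of the one sequential pass.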
Specifying Weight Priors in Bayesian Deep Neural Networks with Empirical Bayes
Stochastic variational inference for Bayesian deep neural networks (DNNs) requires specifying priors and approximate posterior distributions over the network weights. Specifying meaningful weight priors is a challenging problem, particularly when scaling variational inference to deeper architectures with high-dimensional weight spaces. We propose the MOdel Priors with Empirical Bayes using DNN (MOPED) method to choose informed weight priors in Bayesian neural networks. We formulate a two-stage hierarchical model: we first find maximum likelihood estimates of the weights with a deterministic DNN, and then set the weight priors using an empirical Bayes approach before inferring the posterior with variational inference. We empirically evaluate the proposed approach on real-world tasks, including image classification, video activity recognition, and audio classification, with neural network architectures of varying complexity. We also evaluate our proposed approach on a diabetic retinopathy diagnosis task and benchmark against state-of-the-art Bayesian deep learning techniques. We demonstrate that the MOPED method enables scalable variational inference and provides reliable uncertainty quantification.
Comment: To be published at the AAAI 2020 conference
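The two-stage recipe can be sketched in the spirit of the abstract: take maximum-likelihood weights from a deterministic fit, then centre a Gaussian prior on them with a scale tied to each weight's magnitude. The hyperparameter `delta` and the variance floor are assumed for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (stand-in): weights a deterministic DNN would produce after
# maximum likelihood training.
w_mle = rng.normal(size=(10,))

# Stage 2: empirical-Bayes prior centred on the MLE weights, with a
# per-weight scale proportional to the weight's magnitude.
delta = 0.1                                  # assumed prior-width hyperparameter
prior_mean = w_mle
prior_std = delta * np.abs(w_mle) + 1e-6     # small floor avoids zero variance

# The variational posterior would be initialised at this prior and refined
# by stochastic variational inference; here we just draw one sample from it.
posterior_sample = rng.normal(prior_mean, prior_std)
print(np.abs(posterior_sample - w_mle).max())  # samples stay near the MLE fit
```

Because the prior is anchored at a solution the deterministic network has already found, variational inference starts in a good region of the high-dimensional weight space, which is what makes the approach scale to deeper architectures.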