
    Bayesian Transfer Learning for the Prediction of Self-reported Well-being Scores

    Predicting the severity and onset of depressive symptoms is of great importance. User-specific models outperform a general model but require significant amounts of training data from each individual, which is often impractical to obtain. Even when it is, there is a significant lag between the beginning of the data-collection phase and the point at which the system is fully trained and able to start making useful predictions. In this study, we propose a transfer-learning Bayesian modelling method, based on a Markov Chain Monte Carlo (MCMC) sampler and Bayesian model averaging, for the challenge of building user-specific predictive models that can forecast self-reported well-being scores from limited, sparse training data. Evaluation on real-world data collected within the NEVERMIND project showed better predictive performance for the transfer-learning model than for conventional learning with no transfer.
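The Bayesian model averaging step described above can be sketched numerically: predictions from competing models (here, a hypothetical sparse user-specific model and a population model) are combined with weights proportional to each model's approximate marginal likelihood. All names and numbers below are illustrative assumptions, not the NEVERMIND method itself.

```python
import numpy as np

def bma_predict(preds, log_evidences):
    """Bayesian model averaging (sketch).

    preds: (n_models, n_points) predictions from each model.
    log_evidences: (n_models,) approximate log marginal likelihoods.
    """
    w = np.exp(log_evidences - np.max(log_evidences))  # stabilised exponentiation
    w /= w.sum()                                       # posterior model weights
    return w @ preds                                   # evidence-weighted prediction

user_model_pred = np.array([3.0, 3.2, 3.1])   # model trained on sparse user data
population_pred = np.array([2.5, 2.6, 2.7])   # model trained on other users
log_ev = np.array([-4.0, -6.0])               # user model fits this user better
combined = bma_predict(np.vstack([user_model_pred, population_pred]), log_ev)
# combined lies between the two predictions, pulled towards the user model
```

With a log-evidence gap of 2 nats, the user-specific model receives roughly 88% of the weight, so the averaged forecast stays close to it while still borrowing some stability from the population model.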

    Dropout Distillation for Efficiently Estimating Model Confidence

    We propose an efficient way to output better-calibrated uncertainty scores from neural networks. The Distilled Dropout Network (DDN) makes standard (non-Bayesian) neural networks more introspective by adding a new training loss that prevents them from being overconfident. Our method is more efficient than Bayesian neural networks or model ensembles, which, despite providing more reliable uncertainty scores, are more cumbersome to train and slower to test. We evaluate DDN on image classification with the CIFAR-10 dataset and show that our calibration results are competitive even against 100 Monte Carlo samples from a dropout network, while also increasing classification accuracy. We further apply our calibration approach within the state-of-the-art Faster R-CNN object detection framework and show, using the COCO dataset, that DDN helps train better-calibrated object detectors.
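A distillation-style calibration loss of the kind the abstract describes can be sketched as follows: the usual cross-entropy task loss is augmented with a KL term pulling a deterministic network towards the averaged predictive distribution of several MC-dropout passes (the "teacher"), which discourages overconfident outputs. The function names, the weighting `alpha`, and the toy numbers are assumptions for illustration, not the paper's exact loss.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ddn_style_loss(student_logits, teacher_probs, label, alpha=0.5):
    """Cross-entropy on the true label plus a KL distillation penalty."""
    p = softmax(student_logits)
    ce = -np.log(p[label] + 1e-12)                                  # task loss
    kl = np.sum(teacher_probs * np.log((teacher_probs + 1e-12) / (p + 1e-12)))
    return ce + alpha * kl

# Teacher: mean of a few stochastic dropout forward passes (toy values).
mc_passes = np.array([[0.7, 0.20, 0.10],
                      [0.5, 0.30, 0.20],
                      [0.6, 0.25, 0.15]])
teacher = mc_passes.mean(axis=0)
loss = ddn_style_loss(np.array([2.0, 0.5, 0.1]), teacher, label=0)
```

Once trained with such a loss, only the single deterministic network is needed at test time, which is where the efficiency gain over repeated MC-dropout sampling comes from.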

    Digging Deeper into Egocentric Gaze Prediction

    This paper digs deeper into factors that influence egocentric gaze. Instead of training deep models for this purpose in a blind manner, we propose to inspect factors that contribute to gaze guidance during daily tasks. Bottom-up saliency and optical flow are assessed against strong spatial prior baselines. Task-specific cues such as the vanishing point, manipulation point, and hand regions are analyzed as representatives of top-down information. We also look into the contribution of these factors by investigating a simple recurrent neural model for egocentric gaze prediction. First, deep features are extracted for all input video frames. Then, a gated recurrent unit is employed to integrate information over time and to predict the next fixation. We also propose an integrated model that combines the recurrent model with several top-down and bottom-up cues. Extensive experiments over multiple datasets reveal that (1) spatial biases are strong in egocentric videos, (2) bottom-up saliency models perform poorly in predicting gaze and underperform spatial biases, (3) deep features perform better than traditional features, (4) as opposed to hand regions, the manipulation point is a strongly influential cue for gaze prediction, (5) combining the proposed recurrent model with bottom-up cues, vanishing points and, in particular, the manipulation point results in the best gaze prediction accuracy over egocentric videos, (6) knowledge transfer works best for cases where the tasks or sequences are similar, and (7) task and activity recognition can benefit from gaze prediction. Our findings suggest that (1) there should be more emphasis on hand-object interaction and (2) the egocentric vision community should consider larger datasets including diverse stimuli and more subjects.
    Comment: presented at WACV 201
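The recurrent pipeline described above (per-frame deep features integrated by a GRU, then mapped to the next fixation) can be sketched with a minimal, untrained GRU cell. Dimensions, weight initialisation, and the linear readout head are assumptions for the sketch, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Minimal GRU cell with random (untrained) weights.
D, H = 8, 4                                # feature dim, hidden dim (assumed)
Wz, Wr, Wh = (rng.standard_normal((H, D + H)) * 0.1 for _ in range(3))
W_out = rng.standard_normal((2, H)) * 0.1  # hidden state -> (x, y) fixation

def gru_step(h, x):
    xh = np.concatenate([x, h])
    z = sigmoid(Wz @ xh)                   # update gate
    r = sigmoid(Wr @ xh)                   # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h]))
    return (1 - z) * h + z * h_tilde       # convex mix of old and candidate state

h = np.zeros(H)
for frame_feat in rng.standard_normal((5, D)):  # 5 frames of "deep features"
    h = gru_step(h, frame_feat)                 # integrate information over time
next_fixation = W_out @ h                       # predicted next gaze point
```

In the integrated model the abstract mentions, the top-down and bottom-up cue maps would be fused with this recurrent prediction rather than replaced by it.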

    Well-being Forecasting using a Parametric Transfer-Learning method based on the Fisher Divergence and Hamiltonian Monte Carlo

    INTRODUCTION: Traditional personalised modelling typically requires sufficient personal data for training. This is a challenge in healthcare contexts, e.g. when using smartphones to predict well-being. OBJECTIVE: A method to produce incremental patient-specific models and forecasts even in the early stages of data collection, when the data are sporadic and limited. METHODS: We propose a parametric transfer-learning method based on the Fisher divergence, where information from other patients is injected as a prior term into a Hamiltonian Monte Carlo framework. We test our method on the NEVERMIND dataset of self-reported well-being scores. RESULTS: Out of 54 scenarios representing varying training/forecasting lengths and competing methods, our method achieved the overall best performance in 50 (92.6%) and demonstrated a significant median difference in 45 (83.3%). CONCLUSION: The method performs favourably overall, particularly when long-term forecasts are required given short-term data.
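The transfer mechanism described in METHODS can be caricatured in a few lines: a population-informed Gaussian prior (standing in here for the Fisher-divergence-matched prior term) is added to the new patient's sparse likelihood, and the resulting log-posterior is explored with a basic HMC leapfrog trajectory. All data, hyperparameters, and the Gaussian simplification are assumptions for illustration; the accept/reject step is omitted for brevity.

```python
import numpy as np

mu0, tau0 = 3.0, 1.0            # prior centre/scale distilled from other patients
y = np.array([3.4, 3.1])        # the new patient's sparse well-being scores
sigma = 0.5                     # assumed observation noise

def grad_log_post(theta):
    # d/dtheta [ log N(y | theta, sigma^2) + log N(theta | mu0, tau0^2) ]
    return np.sum(y - theta) / sigma**2 + (mu0 - theta) / tau0**2

def hmc_step(theta, eps=0.05, n_leapfrog=20, rng=np.random.default_rng(1)):
    """One HMC proposal via leapfrog integration (Metropolis test omitted)."""
    p = rng.standard_normal()                  # resample momentum
    p += 0.5 * eps * grad_log_post(theta)      # initial half step
    for _ in range(n_leapfrog - 1):
        theta = theta + eps * p                # full position step
        p += eps * grad_log_post(theta)        # full momentum step
    theta = theta + eps * p
    p += 0.5 * eps * grad_log_post(theta)      # final half step
    return theta

theta = hmc_step(0.0)  # moves from the initial value towards the posterior mass
```

Because both terms here are Gaussian, the exact posterior mean is available in closed form (about 3.22 for these numbers), which makes this toy setup convenient for sanity-checking a sampler before using a non-conjugate transfer prior.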