23,903 research outputs found

    Comparing Expectations and Outcomes: Application to UK Data

    The validity of the rational expectations hypothesis is explored using 12 years of direct individual expectations data derived from the BHPS. Using micro data rules out spurious rejections caused by micro-heterogeneity, and the 12-year BHPS micro panel alleviates the averaging-out problem that affects comparatively short micro panels. In short, this paper tests whether individual expectations are unbiased and efficient over a comparatively long horizon. Expectation errors are found to be biased and inefficient. Furthermore, the hypothesis that expectation errors are random is investigated by testing for systematic components in the errors: micro-heterogeneity exists across different types of respondents, and the factors that significantly affect individuals' expectations are identified.
    Keywords: rational expectations; systematic heterogeneity; forecast errors; rational expectations hypothesis; subjective
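    As a rough illustration of the kind of test described above (a minimal sketch with simulated data; the variable names `expected` and `realized` are hypothetical, not the paper's actual BHPS specification), unbiasedness can be checked by regressing expectation errors on a constant, and efficiency by regressing them on information available when the expectation was formed, such as the reported expectation itself:

```python
# Minimal sketch of unbiasedness/efficiency tests on expectation errors.
# Illustrative only: data are simulated, not BHPS data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
realized = rng.normal(size=n)                                # realized outcomes
expected = realized + 0.3 + rng.normal(scale=0.5, size=n)    # reported expectations (biased upward)
error = realized - expected                                  # expectation (forecast) error

# Unbiasedness: regress the error on a constant; H0: intercept = 0.
unbiased = sm.OLS(error, np.ones((n, 1))).fit()
print("intercept:", unbiased.params[0], "p-value:", unbiased.pvalues[0])

# Efficiency: the error should be orthogonal to the information set,
# e.g. the expectation itself; H0: slope coefficient = 0.
X = sm.add_constant(expected)
efficient = sm.OLS(error, X).fit()
print(efficient.summary())
```

    Rejecting either null would indicate biased or inefficient expectations, as the paper reports.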

    Borrowing Treasures from the Wealthy: Deep Transfer Learning through Selective Joint Fine-tuning

    Deep neural networks require a large amount of labeled training data during supervised learning. However, collecting and labeling so much data may be infeasible in many cases. In this paper, we introduce a source-target selective joint fine-tuning scheme for improving the performance of deep learning tasks with insufficient training data. In this scheme, a target learning task with insufficient training data is carried out simultaneously with a source learning task that has abundant training data. However, the source learning task does not use all of its existing training data. Our core idea is to identify and use a subset of training images from the original source learning task whose low-level characteristics are similar to those of the target learning task, and to jointly fine-tune shared convolutional layers for both tasks. Specifically, we compute descriptors from linear or nonlinear filter bank responses on training images from both tasks, and use these descriptors to search for a desired subset of training samples for the source learning task. Experiments demonstrate that our selective joint fine-tuning scheme achieves state-of-the-art performance on multiple visual classification tasks with insufficient training data for deep learning, including Caltech 256, MIT Indoor 67, Oxford Flowers 102, and Stanford Dogs 120. In comparison to fine-tuning without a source domain, the proposed method can improve classification accuracy by 2% to 10% using a single model.
    Comment: To appear in the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017).
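    A minimal sketch of the select-then-jointly-fine-tune idea described in this abstract (assumptions: PyTorch/torchvision, an ImageNet-pretrained backbone whose early convolutional stage stands in for the filter bank, and hypothetical helper names such as `select_source_subset`; this is not the authors' implementation):

```python
# Illustrative sketch of selective joint fine-tuning (hypothetical, not the paper's code).
import torch
import torch.nn as nn
import torchvision.models as models

def low_level_descriptor(images, filter_bank):
    """Pool an early conv filter bank's responses into one descriptor per image."""
    with torch.no_grad():
        responses = filter_bank(images)              # (N, K, H', W')
        return responses.mean(dim=(2, 3))            # (N, K)

def select_source_subset(source_desc, target_desc, k=50):
    """Keep, for every target image, its k nearest source images in descriptor space."""
    dists = torch.cdist(target_desc, source_desc)    # (N_target, N_source)
    nearest = dists.topk(k, largest=False).indices   # indices of nearby source images
    return torch.unique(nearest.flatten())           # subset of source indices to train on

class JointModel(nn.Module):
    """Shared convolutional trunk with separate source/target classifier heads."""
    def __init__(self, trunk, feat_dim, n_source_classes, n_target_classes):
        super().__init__()
        self.trunk = trunk
        self.source_head = nn.Linear(feat_dim, n_source_classes)
        self.target_head = nn.Linear(feat_dim, n_target_classes)

    def forward(self, x, task):
        feats = self.trunk(x).flatten(1)
        return self.source_head(feats) if task == "source" else self.target_head(feats)

# The early conv stage of a pretrained ResNet serves as a simple nonlinear filter bank.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
filter_bank = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu)
```

    During training, mini-batches drawn from the selected source subset and from the target set would pass through the shared trunk, with the two cross-entropy losses summed before back-propagation, so the scarce target data benefits from gradients contributed by visually similar source images.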