
    “Transfer Talk” in Talk about Writing in Progress: Two Propositions about Transfer of Learning

    This article tracks the emergence of the concept of “transfer talk”—a concept distinct from transfer of learning—and teases out the implications of transfer talk for theories of transfer of learning. The concept of transfer talk was developed through a systematic examination of 30 writing center transcripts and is defined as “the talk through which individuals make visible their prior learning (in this case, about writing) or try to access the prior learning of someone else.” In addition to including a taxonomy of transfer talk and an analysis of which types occur most often in this set of conferences, this article advances two propositions about the nature of transfer of learning: (1) transfer of learning may have an important social, even collaborative, component and (2) although meta-awareness about writing has long been recognized as valuable for transfer of learning, more automatized knowledge may play an important role as well.

    Constrained Deep Transfer Feature Learning and its Applications

    Feature learning with deep models has achieved impressive results for both data representation and classification across various vision tasks. Deep feature learning, however, typically requires a large amount of training data, which may not be feasible in some application domains. Transfer learning can alleviate this problem by transferring data from a data-rich source domain to a data-scarce target domain. Existing transfer learning methods typically perform one-shot transfer and often ignore the specific properties that the transferred data must satisfy. To address these issues, we introduce a constrained deep transfer feature learning method that performs transfer learning and feature learning simultaneously: transfer learning is carried out iteratively in a progressively improving feature space, which better narrows the gap between the source and target domains and makes the transfer of data from source to target more effective. Furthermore, we propose to exploit target domain knowledge and incorporate it as a constraint during transfer learning, ensuring that the transferred data satisfies certain properties of the target domain. To demonstrate the effectiveness of the proposed constrained deep transfer feature learning method, we apply it to thermal feature learning for eye detection by transferring from the visible domain. We also apply the proposed method to cross-view facial expression recognition as a second application. The experimental results demonstrate the effectiveness of the proposed method for both applications. Comment: International Conference on Computer Vision and Pattern Recognition, 201
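
    To make the iterative, constrained transfer idea in this abstract concrete, the following Python sketch mimics it on synthetic data. It is a loose analogy under stated assumptions, not the authors' method: PCA stands in for the deep feature extractor, Gaussian blobs stand in for the visible and thermal domains, and a simple distance-to-target rule stands in for the target-domain constraint.

    # Minimal sketch of iterative, constrained transfer feature learning (illustrative
    # assumptions: synthetic data, PCA as the feature learner, distance-to-target as
    # the target-domain constraint).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Data-rich source domain and data-scarce, shifted target domain.
    X_src = rng.normal(0.0, 1.0, size=(2000, 20)); y_src = (X_src[:, 0] > 0).astype(int)
    X_tgt = rng.normal(0.5, 1.2, size=(60, 20));   y_tgt = (X_tgt[:, 0] > 0.5).astype(int)
    X_tr, y_tr, X_te, y_te = X_tgt[:40], y_tgt[:40], X_tgt[40:], y_tgt[40:]

    selected = np.ones(len(X_src), dtype=bool)      # start from all source samples
    for it in range(5):                             # progressively improve the feature space
        feat = PCA(n_components=5).fit(np.vstack([X_tr, X_src[selected]]))
        z_src = feat.transform(X_src)
        # Constraint: only transfer source samples that lie close to the target
        # distribution in the current feature space.
        dist = np.linalg.norm(z_src - feat.transform(X_tr).mean(axis=0), axis=1)
        selected = dist < np.quantile(dist, 0.5)    # re-select the closest half

    # Train on the scarce target data augmented with the constrained, transferred data.
    X_train = np.vstack([X_tr, X_src[selected]])
    y_train = np.concatenate([y_tr, y_src[selected]])
    clf = LogisticRegression(max_iter=1000).fit(feat.transform(X_train), y_train)
    print("held-out target accuracy:", clf.score(feat.transform(X_te), y_te))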

    Effectiveness of learning transfer in National Dual Training System (NDTS)

    Learning transfer is the ultimate goal of any training programme. The new Malaysian skills training is based on the dual learning principle, in which trainees alternate between theoretical classes at a skills training institute and on-the-job training at the worksite. This paradigm of skills training is known as the National Dual Training System (NDTS). The main problem is that employers have complained that the competencies of graduates of skills training in Malaysia are of poor quality, which has been attributed to poor transfer of learning from the training setting to the workplace. In addition, few studies have examined the effectiveness of learning transfer, because there is no accepted way to measure it. The aim of this study was to ascertain whether effective learning transfer occurred in the automotive mechatronics NDTS programme; the research focus was the Mechatronics Automotive course. A longitudinal study was employed as the research methodology, with trainees and coaches from the NDTS Mechatronics Automotive course as participants. The study used a self-administered questionnaire, semi-structured interviews, focus groups and a case study. Measuring the 16 factors of the Learning Transfer System Inventory (LTSI), plus 3 factors derived from the literature review and an expert group discussion, enabled the researcher to determine the relationships among the learning transfer factors. It was found that the NDTS training programme appears to have facilitated positive transfer and near transfer from the training situation to the workplace environment. The multiple regression results suggest that the predictive variables explained 43.9% of the variance in the effectiveness of learning transfer, with a significant effect. The most influential dimensions of the effectiveness of learning transfer in NDTS were course content, training delivery and working tasks. The results also indicated that the effectiveness of learning transfer in NDTS had occurred, with an overall framework accuracy of 79.2%. On this basis, an effectiveness of learning transfer framework for NDTS in Malaysia is propose
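
    As a rough illustration of the regression step described above (regressing a transfer-effectiveness outcome on LTSI-style factor scores and reading off the explained variance), here is a minimal Python sketch. The factor names, the synthetic responses and the use of scikit-learn are illustrative assumptions, not the study's data or instruments.

    # Minimal sketch: explained variance (R^2) from a multiple regression of a
    # learning-transfer outcome on LTSI-style factor scores. Synthetic data and
    # factor names are illustrative assumptions, not the study's data.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    factors = ["course_content", "training_delivery", "working_tasks",
               "supervisor_support", "motivation_to_transfer"]
    X = rng.normal(size=(200, len(factors)))           # standardised factor scores per trainee
    true_w = np.array([0.6, 0.5, 0.4, 0.1, 0.1])       # make the first three most influential
    y = X @ true_w + rng.normal(scale=1.0, size=200)   # observed transfer effectiveness

    model = LinearRegression().fit(X, y)
    r2 = model.score(X, y)                             # proportion of variance explained
    print(f"R^2 = {r2:.3f}")
    for name, coef in zip(factors, model.coef_):
        print(f"{name}: {coef:+.2f}")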

    Learning to select data for transfer learning with Bayesian Optimization

    Domain similarity measures can be used to gauge adaptability and select suitable data for transfer learning, but existing approaches define ad hoc measures that are deemed suitable for their respective tasks. Inspired by work on curriculum learning, we propose to learn data selection measures using Bayesian Optimization and evaluate them across models, domains and tasks. Our learned measures significantly outperform existing domain similarity measures on three tasks: sentiment analysis, part-of-speech tagging, and parsing. We show the importance of complementing similarity with diversity, and that learned measures are, to some degree, transferable across models, domains, and even tasks. Comment: EMNLP 2017. Code available at: https://github.com/sebastianruder/learn-to-select-dat
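
    A minimal sketch of the underlying idea, learning the weights of a data-selection measure with Bayesian Optimization so that the selected source data minimises error on a target development set, is given below. It uses synthetic data, two hand-crafted selection features (a similarity term and a diversity term) and scikit-optimize's gp_minimize; these choices are assumptions for illustration and are not taken from the paper or its released code.

    # Minimal sketch of learning a data-selection measure with Bayesian Optimization
    # (synthetic data and hand-crafted selection features; an analogy to the paper's
    # setup, not its code).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from skopt import gp_minimize

    rng = np.random.default_rng(0)
    X_src = rng.normal(0.0, 1.0, size=(1000, 10)); y_src = (X_src[:, 0] > 0).astype(int)
    X_dev = rng.normal(0.4, 1.0, size=(200, 10));  y_dev = (X_dev[:, 0] > 0.4).astype(int)

    # Two per-example selection features: similarity to the target dev set and a
    # simple diversity term (distance to the source mean).
    sim = -np.linalg.norm(X_src - X_dev.mean(axis=0), axis=1)
    div = np.linalg.norm(X_src - X_src.mean(axis=0), axis=1)

    def objective(w):
        score = w[0] * sim + w[1] * div              # learned linear data-selection measure
        keep = score >= np.quantile(score, 0.5)      # select the top half of source examples
        clf = LogisticRegression(max_iter=1000).fit(X_src[keep], y_src[keep])
        return 1.0 - clf.score(X_dev, y_dev)         # dev error to minimise

    res = gp_minimize(objective, dimensions=[(-1.0, 1.0), (-1.0, 1.0)],
                      n_calls=20, random_state=0)
    print("learned weights:", res.x, "dev error:", res.fun)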

    Bayesian Discovery of Multiple Bayesian Networks via Transfer Learning

    Bayesian network structure learning algorithms with limited data are being used in domains such as systems biology and neuroscience to gain insight into the underlying processes that produce observed data. Learning reliable networks from limited data is difficult, so transfer learning can improve the robustness of learned networks by leveraging data from related tasks. Existing transfer learning algorithms for Bayesian network structure learning give a single maximum a posteriori estimate of network models. Yet many other models may be equally likely, and so a more informative result is provided by Bayesian structure discovery. Bayesian structure discovery algorithms estimate posterior probabilities of structural features, such as edges. We present transfer learning for Bayesian structure discovery, which allows us to explore the shared and unique structural features among related tasks. Efficient computation requires that our transfer learning objective factors into local calculations, which we prove holds for a broad class of transfer biases. Theoretically, we show the efficiency of our approach. Empirically, we show that compared to single-task learning, transfer learning is better able to positively identify true edges. We apply the method to whole-brain neuroimaging data. Comment: 10 page
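
    The mechanism described, a transfer bias that factors into local per-family score terms followed by model averaging to obtain posterior probabilities of edges, can be illustrated in miniature by exhaustively enumerating DAGs over three variables. The Python sketch below is a toy analogy under stated assumptions (linear-Gaussian BIC family scores, a per-edge bias toward a source-task structure, synthetic data); it is not the authors' algorithm and does not scale beyond tiny graphs.

    # Toy sketch: posterior edge probabilities via model averaging over all 3-node DAGs,
    # with a per-edge transfer bias toward a related source task. Illustrative only.
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 200, 3
    # Target-task data with edges 0 -> 1 and 1 -> 2; the source task is assumed to share 0 -> 1.
    X0 = rng.normal(size=n); X1 = 0.8 * X0 + rng.normal(size=n); X2 = 0.8 * X1 + rng.normal(size=n)
    target = np.column_stack([X0, X1, X2])
    source_edges = {(0, 1)}                    # edges believed present in the source task

    edges = [(i, j) for i in range(d) for j in range(d) if i != j]

    def is_dag(edge_set):
        # Kahn-style check: repeatedly remove nodes with no incoming edges.
        nodes, es = set(range(d)), set(edge_set)
        while nodes:
            free = [v for v in nodes if not any(j == v for (_, j) in es)]
            if not free:
                return False
            nodes -= set(free)
            es = {(i, j) for (i, j) in es if i in nodes and j in nodes}
        return True

    def family_score(j, parents, X):
        # Gaussian BIC score for node j given its parents (linear-Gaussian model).
        y = X[:, j]
        A = np.column_stack([X[:, list(parents)], np.ones(n)]) if parents else np.ones((n, 1))
        resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
        sigma2 = resid @ resid / n
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        return loglik - 0.5 * (A.shape[1] + 1) * np.log(n)

    lam = 2.0                                  # strength of the transfer bias
    dags, log_post = [], []
    for r in range(len(edges) + 1):
        for combo in itertools.combinations(edges, r):
            if not is_dag(combo):
                continue
            score = sum(family_score(j, [i for (i, jj) in combo if jj == j], target)
                        for j in range(d))
            # The transfer bias factors over edges, adding one local term per shared edge.
            score += lam * len(set(combo) & source_edges)
            dags.append(set(combo)); log_post.append(score)

    log_post = np.array(log_post)
    post = np.exp(log_post - log_post.max()); post /= post.sum()
    for e in edges:
        p = sum(pr for g, pr in zip(dags, post) if e in g)
        print(f"P(edge {e[0]} -> {e[1]} | data, source) = {p:.2f}")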