
    Constrained Deep Transfer Feature Learning and its Applications

    Feature learning with deep models has achieved impressive results for both data representation and classification in various vision tasks. Deep feature learning, however, typically requires a large amount of training data, which may not be available in some application domains. Transfer learning can alleviate this problem by transferring data from a data-rich source domain to a data-scarce target domain. Existing transfer learning methods typically perform one-shot transfer and often ignore the specific properties that the transferred data must satisfy. To address these issues, we introduce a constrained deep transfer feature learning method that performs transfer learning and feature learning simultaneously: transfer is carried out iteratively in a progressively improving feature space, which narrows the gap between the source and target domains and makes the transfer of data from source to target more effective. Furthermore, we propose to exploit target domain knowledge by incorporating it as a constraint during transfer learning, ensuring that the transferred data satisfies certain properties of the target domain. To demonstrate the effectiveness of the proposed method, we apply it to thermal feature learning for eye detection by transferring from the visible domain, and, as a second application, to cross-view facial expression recognition. The experimental results demonstrate the effectiveness of the proposed method for both applications.
    Comment: International Conference on Computer Vision and Pattern Recognition, 201
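    The abstract describes the core loop only at a high level: alternate between transferring source data in the current feature space (subject to a target-domain constraint) and relearning the features on the pooled data. Below is a minimal, hypothetical NumPy sketch of that loop, not the authors' code: truncated SVD stands in for the deep feature extractor, moment matching stands in for the transfer step, and clipping to the observed target range stands in for the target-domain prior constraint.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_features(X, k=8):
    # Truncated SVD as a lightweight stand-in for deep feature learning.
    _, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
    return Vt[:k].T                        # columns span the feature space

def transfer_with_constraint(Fs, Ft):
    # Transfer step: match source feature statistics to the target domain
    # (moment matching), then enforce a target-domain prior by clipping to
    # the range actually observed in the target data.
    Fs_t = (Fs - Fs.mean(0)) / (Fs.std(0) + 1e-8) * Ft.std(0) + Ft.mean(0)
    return np.clip(Fs_t, Ft.min(0), Ft.max(0))

Xs = rng.normal(size=(500, 32))            # data-rich source domain
Xt = rng.normal(0.5, 1.2, size=(40, 32))   # data-scarce target domain

W = fit_features(Xt)                       # initial features from target alone
for it in range(5):
    Fs, Ft = Xs @ W, Xt @ W                # encode both domains
    Fs_t = transfer_with_constraint(Fs, Ft)
    gap = np.linalg.norm(Fs_t.mean(0) - Ft.mean(0))
    print(f"iteration {it}: residual domain gap {gap:.4f}")
    # Relearn the feature space on the pooled data: transferred source
    # samples (decoded back to input space) plus the scarce target data.
    X_pool = np.vstack([Fs_t @ W.T, Xt])
    W = fit_features(X_pool)
```

    Each pass refits the feature space on the pooled data, so the next transfer happens in a representation already shaped by both domains; this is the "progressively improving feature space" idea, under the simplifying assumptions above.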

    Transferring and creating technological knowledge in interfirm R&D relationships: The initiation and evolution of interfirm learning.

    In this study, we examine the initiation and evolution of interfirm learning in interfirm R&D relationships. Based on in-depth case studies, we suggest that the process of learning in interfirm R&D relationships involves three distinct challenges: (1) initiating technological knowledge transfer, (2) continuing technological knowledge transfer, and (3) moving towards the joint creation of new technological knowledge. Our findings identify the conditions needed to initiate knowledge transfer: the presence of legal knowledge transfer clauses, overlapping skills and equipment, fragile trust, and organizational similarity. Continued knowledge exchange implies complementary modes of collaborating, characterized by sharing technologies oriented towards different applications. Joint knowledge creation implies convergence at the level of applications, which only becomes feasible once prior knowledge exchange has generated resilient levels of trust. These observations point to the relevance of conceiving and organizing interfirm R&D relationships in a time-phased, differentiated manner.
    Keywords: Applications; Case studies; Convergence; Exchange; Interfirm learning; Interfirm R&D; Knowledge; Knowledge creation; Knowledge transfer; Learning; Processes; R&D; Similarity; Studies; Technology; Trust

    Integrating dynamic stopping, transfer learning and language models in an adaptive zero-training ERP speller

    Objective. Most BCIs have to undergo a calibration session in which data are recorded to train decoders with machine learning. Only recently have zero-training methods become a subject of study. This work proposes a probabilistic framework for BCI applications that exploit event-related potentials (ERPs). For the example of a visual P300 speller, we show how the framework harvests the structure of the decoding task through (a) transfer learning, (b) unsupervised adaptation, (c) a language model, and (d) dynamic stopping. Approach. A simulation study compares the proposed probabilistic zero-training framework (using transfer learning and task structure) to a state-of-the-art supervised model on n = 22 subjects. The individual influences of the components (a)–(d) are investigated. Main results. Without any need for a calibration session, the probabilistic zero-training framework with inter-subject transfer learning shows excellent performance, competitive with a state-of-the-art supervised method that uses calibration. Its decoding quality is carried mainly by the effect of transfer learning in combination with continuous unsupervised adaptation. Significance. A high-performing zero-training BCI is within reach for one of the most popular BCI paradigms: ERP spelling. Recording calibration data for a supervised BCI requires valuable time that is lost for spelling; the time spent on calibration would allow a novel user to spell 29 symbols with our unsupervised approach. The framework could be of use for various clinical and non-clinical ERP applications of BCI.
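    As a hedged illustration of how components (a), (c), and (d) can fit together, here is a toy decoding loop: a transferred (zero-training) classifier is reduced to a standardized per-flash score, a stand-in bigram language model sets the prior over symbols, and Bayesian evidence accumulation with a confidence threshold implements dynamic stopping. Unsupervised adaptation (b) is omitted for brevity; it would re-estimate the score distributions from unlabeled flashes. All names and numbers below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
SYMBOLS = list("ABCDEF")                   # toy 6-symbol speller matrix
N = len(SYMBOLS)

def lm_prior(prev):
    # Stand-in bigram language model: mildly favor the alphabetic successor.
    p = np.ones(N)
    if prev is not None:
        p[(SYMBOLS.index(prev) + 1) % N] += 2.0
    return p / p.sum()

def spell_one(true_idx, prev=None, conf=0.95, max_flashes=80):
    """Bayesian evidence accumulation over symbols with dynamic stopping."""
    log_post = np.log(lm_prior(prev))      # (c) language-model prior
    for t in range(1, max_flashes + 1):
        flashed = rng.integers(N)          # which symbol is highlighted
        # (a) transferred zero-training classifier, reduced to a standardized
        # score: ~N(1,1) when the attended symbol flashes, ~N(0,1) otherwise.
        s = rng.normal(1.0 if flashed == true_idx else 0.0)
        for c in range(N):                 # Gaussian log-likelihood update
            mu = 1.0 if c == flashed else 0.0
            log_post[c] += -0.5 * (s - mu) ** 2
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        if post.max() >= conf:             # (d) dynamic stopping
            return SYMBOLS[int(post.argmax())], t
    return SYMBOLS[int(post.argmax())], max_flashes

sym, flashes = spell_one(true_idx=2)
print(f"decoded '{sym}' after {flashes} flashes")
```

    The dynamic-stopping rule is what converts decoding confidence into spelling speed: easy decisions terminate after a few flashes, hard ones accumulate more evidence, which is consistent with the time savings the abstract reports.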