Constrained Deep Transfer Feature Learning and its Applications
Feature learning with deep models has achieved impressive results for both
data representation and classification across various vision tasks. Deep
feature learning, however, typically requires a large amount of training data,
which may not be available in some application domains. Transfer learning can
alleviate this problem by transferring data from a data-rich source domain to
a data-scarce target domain. Existing transfer learning methods, however,
typically perform one-shot transfer and often ignore the specific properties
that the transferred data must satisfy. To address these issues, we introduce
a constrained deep transfer feature learning method that performs transfer
learning and feature learning simultaneously: transfer is carried out
iteratively in a progressively improving feature space, which better narrows
the gap between the source and target domains and makes the transfer of data
from the source domain to the target domain more effective.
Furthermore, we propose to exploit target-domain knowledge by incorporating
it as a constraint during transfer learning, ensuring that the transferred
data satisfies certain properties of the target domain. To
demonstrate the effectiveness of the proposed constrained deep transfer feature
learning method, we apply it to thermal feature learning for eye detection by
transferring from the visible domain. We also apply the proposed method to
cross-view facial expression recognition as a second application. The
experimental results demonstrate the effectiveness of the proposed method for
both applications.
Comment: International Conference on Computer Vision and Pattern Recognition,
201
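The iterative scheme the abstract describes, transferring source data in a progressively re-learned feature space while constraining the transferred samples to target-domain properties, can be sketched in miniature. The sketch below is a hypothetical illustration, not the paper's method: it uses a linear (PCA-style) feature map as a stand-in for deep features, toy Gaussian data in place of visible/thermal images, and a simple range-clipping constraint as a proxy for the target-domain prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two domains (hypothetical data, not the paper's).
source = rng.normal(loc=2.0, scale=1.0, size=(200, 8))  # data-rich source domain
target = rng.normal(loc=0.0, scale=0.5, size=(20, 8))   # data-scarce target domain

def learn_features(data, dim=4):
    """Learn a linear feature space (PCA-style stand-in for deep features)."""
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:dim].T  # projection matrix, 8 -> dim

src = source.copy()
gaps = []
for it in range(3):
    # 1. (Re-)learn the feature space from all currently available data,
    #    so the space progressively improves as the transfer improves.
    W = learn_features(np.vstack([src, target]))
    s_feat, t_feat = src @ W, target @ W
    gaps.append(float(np.linalg.norm(s_feat.mean(0) - t_feat.mean(0))))
    # 2. Transfer: shift source samples toward the target in feature space.
    s_feat = s_feat + (t_feat.mean(0) - s_feat.mean(0))
    # 3. Constraint: keep transferred samples inside the target's observed
    #    feature range (a crude proxy for target-domain prior knowledge).
    s_feat = np.clip(s_feat, t_feat.min(0), t_feat.max(0))
    # 4. Back-project so the next iteration re-learns features from the
    #    transferred data.
    src = s_feat @ W.T
    print(f"iteration {it}: source-target feature gap = {gaps[-1]:.3f}")
```

The point of the loop is that the domain gap measured in the current feature space shrinks across iterations, so later transfers happen in a space where source and target are already better aligned.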
Constrained Design of Deep Iris Networks
Despite the promise of recent deep neural networks for iris recognition,
there are vital properties of the classic IrisCode that current deep iris
networks have been largely unable to achieve: compactness of the model and a
small number of computing operations (FLOPs). This paper re-models the iris
network design process as a constrained optimization problem that takes model
size and computation into account as learning criteria. On one hand, this
allows us to fully automate the network design process, searching for the
best iris network subject to the computation and model-compactness
constraints. On the other hand, it allows us to investigate the optimality of
the classic IrisCode and of recent iris networks. It also allows us to learn
an optimal iris network that demonstrates state-of-the-art performance with
lower computation and memory requirements.
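The selection step of such a constrained design process can be illustrated with a small sketch: given candidate architectures with measured size, FLOPs, and accuracy, pick the most accurate one that satisfies both budgets. All names and numbers below are invented for illustration; the paper's actual search space, budgets, and accuracies are not reproduced here.

```python
# Hypothetical candidates: (name, parameter count, FLOPs, validation accuracy).
candidates = [
    ("net-a", 4.2e6, 1.1e9, 0.962),
    ("net-b", 0.3e6, 0.9e8, 0.981),
    ("net-c", 0.1e6, 0.4e8, 0.954),
    ("net-d", 1.5e6, 4.0e8, 0.978),
]

MAX_PARAMS = 1.0e6  # model-compactness budget
MAX_FLOPS = 2.0e8   # computation budget

def best_constrained(cands, max_params, max_flops):
    """Return the highest-accuracy network satisfying both budgets."""
    feasible = [c for c in cands if c[1] <= max_params and c[2] <= max_flops]
    if not feasible:
        raise ValueError("no architecture satisfies the constraints")
    return max(feasible, key=lambda c: c[3])

winner = best_constrained(candidates, MAX_PARAMS, MAX_FLOPS)
print(winner[0])  # prints "net-b": within both budgets and most accurate there
```

Treating size and FLOPs as hard feasibility constraints, rather than folding them into a single weighted score, is what lets the search report an optimum relative to an explicit compactness/computation budget.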