Constrained Deep Transfer Feature Learning and its Applications
Feature learning with deep models has achieved impressive results for both
data representation and classification across various vision tasks. Deep
feature learning, however, typically requires a large amount of training data,
which may not be available in some application domains. Transfer learning can
alleviate this problem by transferring data from a data-rich source domain to
a data-scarce target domain. Existing transfer learning methods typically
perform one-shot transfer and often ignore the specific properties that the
transferred data must satisfy. To address these issues, we introduce a
constrained deep transfer feature learning method that performs transfer
learning and feature learning simultaneously: transfer is carried out
iteratively in a progressively improving feature space, which better narrows
the gap between the source and target domains for effective transfer of data
from one to the other. Furthermore, we propose to exploit target domain
knowledge and incorporate such prior knowledge as a constraint during transfer
learning, ensuring that the transferred data satisfies certain properties of
the target domain. To demonstrate the effectiveness of the proposed
constrained deep transfer feature learning method, we apply it to thermal
feature learning for eye detection by transferring from the visible domain. We
also apply the proposed method to cross-view facial expression recognition as
a second application. The experimental results demonstrate the effectiveness
of the proposed method for both applications.

Comment: International Conference on Computer Vision and Pattern Recognition,
201
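The alternating transfer-and-refit loop sketched in the abstract can be caricatured numerically. Everything below is an illustrative assumption, not the authors' method: the toy data, the stand-in "feature learning" (plain standardization in place of a deep network), and the constraint step (clipping to the target domain's observed range as a crude proxy for target-domain prior knowledge).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a data-rich "source" and a data-scarce "target" domain.
source = rng.normal(loc=0.0, scale=1.0, size=(500, 8))
target = rng.normal(loc=2.0, scale=0.5, size=(30, 8))

def fit_feature_space(X):
    """Stand-in for deep feature learning: a simple standardization map."""
    return X.mean(axis=0), X.std(axis=0) + 1e-8

def transfer(source, target, n_iters=5):
    """Iteratively move source data toward the target domain while the
    feature space is refit each round (a loose analogue of performing
    transfer learning in a progressively improving feature space)."""
    moved = source.copy()
    for _ in range(n_iters):
        # Re-learn the feature space from the current pool of data.
        mu, sigma = fit_feature_space(np.vstack([moved, target]))
        # Transfer step: shift source statistics toward the target's.
        t_mu, t_sigma = target.mean(axis=0), target.std(axis=0) + 1e-8
        moved = (moved - mu) / sigma * t_sigma + t_mu
        # Constraint step: keep transferred data inside the target's
        # observed range, enforcing a (toy) target-domain property.
        moved = np.clip(moved, target.min(axis=0), target.max(axis=0))
    return moved

moved = transfer(source, target)
gap_before = np.linalg.norm(source.mean(0) - target.mean(0))
gap_after = np.linalg.norm(moved.mean(0) - target.mean(0))
print(gap_before > gap_after)  # the domain gap shrinks across iterations
```

The point of the sketch is only the control flow: each iteration re-learns the feature space from the current data pool, transfers in that space, then projects onto the constraint set.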
Reviewing the current state of machine learning for artificial intelligence with regards to the use of contextual information
This paper considers the current state of Machine Learning for Artificial Intelligence, more specifically for applications such as Speech Recognition, Game Playing and Image Processing. The artificial world tends to make limited use of context in comparison to what currently happens in human life, and it would benefit from improvements in this area. Additionally, the process of transferring knowledge between application domains is another important area where artificial systems can improve. Using context and transferability would have several potential benefits, such as a better ability to function in multiple problem domains, improved understanding of human interaction, and a stronger grasp of current and potential future situations. While these abilities are all quite natural to us humans, it is particularly challenging to integrate them into artificial systems, as will be shown within this review. The limitations of our current systems with regard to these topics, and the improvements achievable if they were addressed, will also be covered. It is expected that by utilising transferability and/or context, many algorithms in the artificial intelligence field will be able to expand their functionality considerably and should provide more general-purpose learning algorithms.
Finding Quantum Critical Points with Neural-Network Quantum States
Finding the precise location of quantum critical points is of particular
importance to characterise quantum many-body systems at zero temperature.
However, quantum many-body systems are notoriously hard to study because the
dimension of their Hilbert space increases exponentially with their size.
Recently, machine learning tools known as neural-network quantum states have
been shown to effectively and efficiently simulate quantum many-body systems.
We present an approach to finding the quantum critical points of the quantum
Ising model using neural-network quantum states, analytically constructed
innate restricted Boltzmann machines, transfer learning and unsupervised
learning. We validate the approach and evaluate its efficiency and
effectiveness in comparison with other traditional approaches.

Comment: 19 pages, 12 figures, extended version of an accepted paper at the
24th European Conference on Artificial Intelligence (ECAI 2020)
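As a rough illustration of the restricted-Boltzmann-machine ansatz behind neural-network quantum states, the sketch below evaluates an unnormalized RBM wavefunction amplitude on a tiny spin chain. The sizes and parameters are random placeholders chosen for the example; in the paper the RBMs would be constructed analytically and refined by learning.

```python
import numpy as np

rng = np.random.default_rng(1)
n_spins, n_hidden = 4, 8  # illustrative sizes, not from the paper

# Placeholder RBM parameters (visible biases, hidden biases, couplings).
a = rng.normal(scale=0.1, size=n_spins)
b = rng.normal(scale=0.1, size=n_hidden)
W = rng.normal(scale=0.1, size=(n_spins, n_hidden))

def rbm_amplitude(s):
    """Unnormalized RBM wavefunction amplitude for a spin configuration
    s in {-1, +1}^n:  psi(s) = exp(a.s) * prod_j 2 cosh(b_j + s.W_j)."""
    theta = b + s @ W
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

# For a chain this small we can enumerate every configuration exactly,
# which is precisely what becomes impossible as the Hilbert space grows.
configs = np.array([[1 if (k >> i) & 1 else -1 for i in range(n_spins)]
                    for k in range(2 ** n_spins)])
amps = np.array([rbm_amplitude(s) for s in configs])
probs = amps ** 2 / np.sum(amps ** 2)
print(round(probs.sum(), 6))  # → 1.0
```

The exponential growth of `configs` with `n_spins` is the bottleneck the abstract refers to; the RBM ansatz sidesteps it by representing the amplitudes with a polynomial number of parameters and sampling instead of enumerating.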