Learning perceptual schemas to avoid the utility problem
This paper describes principles for representing and organising planning knowledge in a machine learning architecture. One of the difficulties with learning about tasks that require planning is the utility problem: as the learner acquires more knowledge, utilising that knowledge becomes so complex that it overwhelms the mechanisms of the original task. This problem does not, however, occur with human learners: on the contrary, the more knowledgeable the learner, the greater the efficiency and accuracy in locating a solution. The reason lies in the types of knowledge the human learner acquires and in how that knowledge is organised. We describe the basic representations that underlie the superior abilities of human experts, and present algorithms for using equivalent representations in a machine learning architecture.
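The retrieval-cost issue behind the utility problem can be illustrated with a toy sketch: with a flat list of learned rules, matching cost grows linearly with the amount of knowledge, whereas indexing rules by a perceptual key (loosely, a schema) keeps retrieval roughly constant. The rule format and key below are hypothetical, not the paper's representation:

```python
# Illustrative sketch of the utility problem (not the paper's architecture):
# 10,000 learned rules, each a (condition, action) pair.
rules = [(("state", i), f"action-{i}") for i in range(10_000)]

def flat_match(state):
    """Naive matcher: scans every rule; cost grows with knowledge."""
    comparisons = 0
    for cond, act in rules:
        comparisons += 1
        if cond == state:
            return act, comparisons
    return None, comparisons

# Schema-like alternative: index rules by a perceptual key up front.
indexed = {cond: act for cond, act in rules}

action, cost = flat_match(("state", 9_999))
print(cost)                        # linear in the number of rules: 10000
print(indexed[("state", 9_999)])   # constant-time lookup: action-9999
```

The point of the contrast is that more knowledge makes the flat matcher strictly slower, while the indexed learner gets faster access the better its organisation.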
Brain-mediated Transfer Learning of Convolutional Neural Networks
The human brain can effectively learn a new task from a small number of samples, which indicates that the brain can transfer its prior knowledge to solve tasks in different domains. This function is analogous to transfer learning (TL) in the field of machine learning. TL uses a well-trained feature space from a specific task domain to improve performance on new tasks with insufficient training data. TL with rich feature representations, such as features of convolutional neural networks (CNNs), shows high generalization ability across different task domains. However, such TL is still insufficient to make machine learning attain generalization ability comparable to that of the human brain. To examine whether the internal representation of the brain could be used to achieve more efficient TL, we introduce a method for TL mediated by human brains. Our method transforms feature representations of audiovisual inputs in CNNs into those in activation patterns of individual brains, via an association learned in advance from measured brain responses. Then, to estimate labels reflecting the human cognition and behavior induced by the audiovisual inputs, the transformed representations are used for TL. We demonstrate that our brain-mediated TL (BTL) shows higher performance in label estimation than standard TL. In addition, we illustrate that the estimations mediated by different brains vary from brain to brain, and that this variability reflects individual variability in perception. Thus, our BTL provides a framework to improve the generalization ability of machine-learning feature representations and to enable machine learning to estimate human-like cognition and behavior, including individual variability.
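The pipeline described above can be sketched in miniature. The numpy sketch below assumes the CNN-feature-to-brain-response association is approximated by ridge regression; the synthetic data, dimensions, and nearest-centroid classifier are illustrative stand-ins, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired data: CNN feature vectors and measured brain responses
# (e.g. voxel activations) for the same stimuli.
n_train, d_feat, d_voxel = 200, 64, 128
X_cnn = rng.normal(size=(n_train, d_feat))
W_true = rng.normal(size=(d_feat, d_voxel))          # unknown "brain" mapping
Y_brain = X_cnn @ W_true + 0.1 * rng.normal(size=(n_train, d_voxel))

# Step 1: learn the CNN-feature -> brain-response association
# with closed-form ridge regression.
lam = 1.0
W = np.linalg.solve(X_cnn.T @ X_cnn + lam * np.eye(d_feat), X_cnn.T @ Y_brain)

# Step 2: transform new CNN features into the brain's representational space.
X_new = rng.normal(size=(50, d_feat))
Z_new = X_new @ W                                    # brain-mediated features

# Step 3: use the transformed representation for a downstream labelling task
# (here: nearest-centroid classification of two synthetic classes).
labels = (X_new[:, 0] > 0).astype(int)
centroids = np.stack([Z_new[labels == k].mean(axis=0) for k in (0, 1)])
pred = np.argmin(((Z_new[:, None, :] - centroids[None, :, :]) ** 2).sum(-1),
                 axis=1)
accuracy = (pred == labels).mean()
print(accuracy)
```

The per-individual character of BTL would enter at step 1: a separate mapping W is fitted for each brain, so the transformed space, and hence the downstream estimates, differ from person to person.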
Human and Machine Representations of Knowledge
Four existing knowledge representations for the computation of similar functions in a chess endgame were implemented on the same computer in the same language. They are compared with respect to efficiency in their time-space requirements.
Three of these programs were then paraphrased into English, and all four were studied for their feasibility as 'open book' advice texts for the human beginner in chess. A formally verified set of rules was also tested for its suitability as an advice text. The possible effectiveness of these advice texts in 'closed book' form is considered.
The above experiments comprise a case study of a phenomenon known as the "human window". This phenomenon motivated an analysis of four documented instances of mismatch between human and machine representations. These are:
I Three Mile Island,
II Air Traffic Control,
III NORAD Military Computer System,
IV The Hoogoven Royal Dutch Steel automation failure.
Unsupervised word embeddings capture latent knowledge from materials science literature.
The overwhelming majority of scientific knowledge is published as text, which is difficult to analyse by either traditional statistical analysis or modern machine learning methods. By contrast, the main source of machine-interpretable data for the materials research community has come from structured property databases [1, 2], which encompass only a small fraction of the knowledge present in the research literature. Beyond property values, publications contain valuable knowledge regarding the connections and relationships between data items as interpreted by the authors. To improve the identification and use of this knowledge, several studies have focused on the retrieval of information from scientific literature using supervised natural language processing [3-10], which requires large hand-labelled datasets for training. Here we show that materials science knowledge present in the published literature can be efficiently encoded as information-dense word embeddings [11-13] (vector representations of words) without human labelling or supervision. Without any explicit insertion of chemical knowledge, these embeddings capture complex materials science concepts such as the underlying structure of the periodic table and structure-property relationships in materials. Furthermore, we demonstrate that an unsupervised method can recommend materials for functional applications several years before their discovery. This suggests that latent knowledge regarding future discoveries is to a large extent embedded in past publications. Our findings highlight the possibility of extracting knowledge and relationships from the massive body of scientific literature in a collective manner, and point towards a generalized approach to the mining of scientific literature.
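The core idea, that materials sharing textual contexts end up close together in embedding space, can be sketched with a toy corpus. The sketch below uses a count-based PPMI + SVD embedding rather than the Word2vec pipeline of the paper, and the four "abstracts" are invented for illustration:

```python
import numpy as np
from itertools import combinations

# Toy stand-in for a corpus of materials-science abstracts.
corpus = [
    "LiFePO4 is a promising cathode material for battery applications",
    "LiCoO2 is a widely used cathode material in battery research",
    "SiO2 is an insulator used as a substrate material",
    "battery cathode materials include LiFePO4 and LiCoO2",
]
tokens = [doc.lower().split() for doc in corpus]
vocab = sorted({w for doc in tokens for w in doc})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric within-sentence co-occurrence counts.
C = np.zeros((len(vocab), len(vocab)))
for doc in tokens:
    for a, b in combinations(doc, 2):
        C[idx[a], idx[b]] += 1
        C[idx[b], idx[a]] += 1

# Positive pointwise mutual information, then truncated SVD -> dense vectors.
total, row, col = C.sum(), C.sum(1, keepdims=True), C.sum(0, keepdims=True)
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log(C * total / (row * col))
ppmi = np.maximum(pmi, 0)
ppmi[~np.isfinite(ppmi)] = 0
U, S, _ = np.linalg.svd(ppmi)
vecs = U[:, :8] * S[:8]

def cosine(a, b):
    va, vb = vecs[idx[a]], vecs[idx[b]]
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9)

# The two cathode materials share contexts, so they sit closer together
# than either does to the insulator.
print(cosine("lifepo4", "licoo2") > cosine("lifepo4", "sio2"))
```

In the paper the same kind of neighbourhood structure, learned from millions of real abstracts, is what lets the embedding rank candidate materials against application keywords such as "thermoelectric".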
Relation learning in a neurocomputational architecture supports cross-domain transfer
Humans readily generalize, applying prior knowledge to novel situations and stimuli. Advances in machine learning have begun to approximate and even surpass human performance, but these systems struggle to generalize what they have learned to untrained situations. We present a model based on well-established neurocomputational principles that demonstrates human-level generalisation. This model is trained to play one video game (Breakout) and performs one-shot generalisation to a new game (Pong) with different characteristics. The model generalizes because it learns structured representations that are functionally symbolic (viz., a role-filler binding calculus) from unstructured training data. It does so without feedback, and without requiring that structured representations are specified a priori. Specifically, the model uses neural co-activation to discover which characteristics of the input are invariant and to learn relational predicates, and oscillatory regularities in network firing to bind predicates to arguments. To our knowledge, this is the first demonstration of human-like generalisation in a machine system that does not assume structured representations to begin with.
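Role-filler binding itself can be illustrated independently of the paper's oscillatory mechanism. The sketch below uses circular convolution (holographic reduced representations), a standard binding scheme that is not the authors' method; the relation, roles, and fillers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 512  # high dimensionality keeps random vectors nearly orthogonal

def vec():
    """Random holographic vector with expected unit norm."""
    return rng.normal(scale=1 / np.sqrt(d), size=d)

def bind(role, filler):
    """Circular convolution binds a role to a filler (via FFT)."""
    return np.real(np.fft.ifft(np.fft.fft(role) * np.fft.fft(filler)))

def unbind(scene, role):
    """Circular correlation recovers a noisy copy of the role's filler."""
    return np.real(np.fft.ifft(np.fft.fft(scene) * np.conj(np.fft.fft(role))))

# Relational predicate above(x, y), with one role vector per argument slot.
role_above, role_below = vec(), vec()
ball, paddle = vec(), vec()

# Structured representation above(ball, paddle), built by superposition.
scene = bind(role_above, ball) + bind(role_below, paddle)

# Unbinding gives a noisy filler; a nearest-neighbour clean-up over a small
# lexicon identifies which object occupied the "above" slot.
lexicon = {"ball": ball, "paddle": paddle}
noisy = unbind(scene, role_above)
best = max(lexicon, key=lambda w: lexicon[w] @ noisy)
print(best)  # -> ball
```

Because the predicate and its arguments are held in separable components, the same relation vector can be re-bound to new fillers, which is the sense in which role-filler structure supports transfer, for instance from the paddle-and-ball dynamics of Breakout to those of Pong.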